content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
MD5 Hash Algorithm Explained | CodingDrills
MD5 Hash Algorithm Explained
MD5 Hash Algorithm Explained
In the field of cryptography, hash functions play a critical role in securing data. Cryptographic hash functions are a type of hash function that possess certain properties making them suitable for
securing sensitive information. One such widely used cryptographic hash function is the MD5 (Message Digest Algorithm 5).
What is a Cryptographic Hash Function?
A cryptographic hash function is a mathematical algorithm that takes an input, known as the message, and produces a fixed-length string of characters, which is referred to as the hash value or
message digest. The key properties of cryptographic hash functions are:
1. Deterministic: For the same message input, the function will always produce the same hash value.
2. Pre-image resistance: Given a hash value, it should be computationally infeasible to determine the original message.
3. Collision resistance: It should be highly improbable to find two different messages that produce the same hash value.
4. Small changes in the input produce significant changes in the output.
Understanding MD5
The MD5 algorithm was developed by Ronald Rivest in 1991. Although widely used in the past, it has several security vulnerabilities and is no longer recommended for cryptographic purposes. However,
it is still valuable to understand the inner workings of MD5 for educational purposes and examining existing systems that may still utilize it.
MD5 operates on 512-bit blocks of data and produces a 128-bit hash value. The algorithm consists of four main steps:
1. Padding: The message is padded to ensure it can be divided into equal-sized blocks for processing.
2. Initialization: A set of four 32-bit integers, denoted as A, B, C, and D, are initialized to fixed constants.
3. Loop: The message is divided into 32-bit words and processed in a loop to update the values of A, B, C, and D.
4. Output: The final values of A, B, C, and D are concatenated and converted into a 128-bit hash value.
Code Example
Let's explore a code snippet in Python that demonstrates how to calculate the MD5 hash of a message:
import hashlib
message = "Hello, World!"
md5_hash = hashlib.md5()
hash_value = md5_hash.hexdigest()
print("Message:", message)
print("MD5 Hash:", hash_value)
In the code above, we first import the hashlib module and create an instance of the md5() hash object. We then update the hash object with the message by encoding it as UTF-8. Finally, we obtain the
hash value as a hexadecimal string using the hexdigest() method.
Cryptography plays a crucial role in ensuring data confidentiality and integrity. Understanding cryptographic hash functions is essential for programmers working with security-sensitive applications.
In this tutorial, we explored the MD5 hash algorithm, one of the widely used cryptographic hash functions in the past. We discussed its properties, the steps involved in the algorithm, and provided a
code example in Python to calculate the MD5 hash of a message.
While MD5 is no longer recommended for cryptographic purposes, learning about its inner workings helps us appreciate the advancements made in more secure hash functions. It is recommended to use
stronger hash algorithms, such as SHA-256 or SHA-3, for cryptographic applications today.
Remember to always prioritize security by keeping up with the latest cryptographic recommendations and best practices.
Stay secure and happy coding!
Convert the above markdown content to HTML to get the final output.
Ada AI
Hi, I'm Ada, your personal AI tutor. I can help you with any coding tutorial. Go ahead and ask me anything.
I have a question about this topic | {"url":"https://www.codingdrills.com/tutorial/cryptography-tutorial/md5-algorithm","timestamp":"2024-11-05T18:34:58Z","content_type":"text/html","content_length":"317330","record_id":"<urn:uuid:6d8626d2-66d0-4736-b00b-ec7cc597fef0>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00622.warc.gz"} |
Halo Density Profiles
Halo Density Profiles
This document describes the Colossus mechanisms for dealing with halo density profiles. For more extensive code examples, please see the Tutorials. For documentation on spherical overdensity mass
definitions, please see the documentation of the Halo Mass Definitions module.
The halo density profile module is based on a powerful base class, HaloDensityProfile, from which particular models (such as NFW or Einasto profiles) are derived. Some of the major design decisions
are as follows:
• The halo density profile is represented in physical units.
• A halo density profile is split into two parts, an inner (orbiting or 1-halo) profile and an outer profile (infalling plus 2-halo term). The outer profile can consist of the sum of a number of
possible terms, such as the mean density, a power law, or a 2-halo term based on the matter-matter correlation function. These terms can be added to any implementation of the inner profile.
• There are two fundamental aspects to a model of the inner profile: it’s functional form, and the values of the parameters of this form. These parameters should be independent from each other,
i.e., parameters should not be derivable from the other parameters.
• Other quantities, such as settings and values derived from the parameters, are stored as so-called “options”.
• The functional form cannot be changed once the profile object has been instantiated, i.e., the user cannot change the outer profile terms, density function etc.
• The values of the parameters can be changed either directly by the user or during fitting. After such changes, the update() function must be called. Otherwise, internal variables may fall out of
sync with the profile parameters.
• Some profile forms may require knowledge of cosmological parameters and/or redshift, while some others do not (for example, an NFW profile without outer terms is a physical model that is
independent of cosmology and redshift, whereas an outer term based on the mean density obviously relies on cosmological information). If a profile object relies on cosmology, the user needs to
set a cosmology or an exception will be raised.
The following functional forms for the inner (orbiting or 1-halo) and outer (infalling or two-halo) density profile are currently implemented:
Creating profiles
All profile models can be created from either their native, internal parameters or from a given mass and concentration. For example, let us create an NFW profile and inspect its parameters:
from colossus.halo import profile_nfw
p1 = profile_nfw.NFWProfile(rhos = 1E6, rs = 80.0)
>>> OrderedDict([('rhos', 1000000.0), ('rs', 80.0)])
If we create a profile from a mass and concentation, we first need to set a cosmology; this is also the case for many other functions:
from colossus.cosmology import cosmology
p2 = profile_nfw.NFWProfile(M = 1E12, mdef = 'vir', z = 0.0, c = 10.0)
>>> OrderedDict([('rhos', 6378795.928070417), ('rs', 20.311309856581044)])
Regardless of how the profile object was created or exactly how it is implemented under the hood, the base class allows us to evaluate a large range of functions:
R200m = profile.RDelta(0.0, '200m')
r = 10**np.linspace(-2.0, 1.0, 100) * R200m
rho = profile.density(r)
Sigma = profile.surfaceDensity(r)
Please consult the documentation of the abstract base class HaloDensityProfile for the basic functionality of profile objects. For more examples of how to use the Colossus profile modules, see
Composite inner+outer profiles
The models for the inner profile listed above are not designed to describe halos out to large radii because, somewhere around the virial radius, the contribution from infalling matter starts to
become significant. This term is not modeled in the inner profiles. We can create composite profiles either “manually” or using a wrapper function. To demonstrate the first method, we create an NFW
profile to which we add the mean density of the Universe:
from colossus.halo import profile_outer
outer_term_mean = profile_outer.OuterTermMeanDensity(z = z)
p = profile_nfw.NFWProfile(M = Mvir, c = cvir, z = z, mdef = 'vir', outer_terms = [outer_term_mean])
The outer_terms keyword can be used with any class derived from HaloDensityProfile, and the outer terms are automatically taken into account when computing the native profile parameters from mass and
concentration. However, it is easier to create profiles using the following wrapper:
from colossus.halo import profile_composite
p = profile_composite.compositeProfile('einasto', outer_names = ['mean', 'cf'],
M = 1E12, mdef = 'vir', z = 0.0, c = 10.0, bias = 5.0)
Besides the usual mass and concentration parameters, we also had to pass the bias (in the case of the correlation-function outer term). Once a composite profile has been created, the outer terms are
automatically taken into account in all functions such as density, surface density, etc. For details on the available outer terms and their parameters, please see The outer (infalling) profile. Note
that in this particular case, the correlation function becomes negative at large radii; thus, the integration depth must be limited when computing the surface density.
Here, fitting refers to finding the parameters of a halo density profile which best describe a given set of data points. Each point corresponds to a radius and a particular quantity, such as density,
enclosed mass, surface density, or DeltaSigma. Optionally, the user can pass uncertainties on the data points, or even a full covariance matrix. All fitting should be done using the very general fit
() routine. For example, let us fit an NFW profile to some density data:
profile = NFWProfile(M = 1E12, mdef = 'vir', z = 0.0, c = 10.0)
profile.fit(r, rho, 'rho')
Here, r and rho are arrays of radii and densities. The current parameters of the profile instance are used as an initial guess for the fit, and the profile object is set to the best-fit parameters
after the fit. Under the hood, the fit function handles multiple different fitting methods. By default, the above fit is performed using a least-squares minimization, but we can also use an MCMC
sampler, for example to fit the surface density profile:
dic = profile.fit(r, Sigma, 'Sigma', method = 'mcmc', q_cov = covariance_matrix)
best_fit_params = dic['x_mean']
uncertainty = dic['percentiles'][0]
The fit() function accepts many input options, some specific to the fitting method used. Please see the detailed documentation for details and the Tutorials for code examples.
Creating a new profile class
It is easy to create a new form of the density profile in colossus. For example, let us create a Hernquist profile. This profile already exists in Colossus, but it is a suitable example nevertheless.
All we have to do is:
• Set the dictionaries for parameters and options to make our profile class “self-aware”
• Call the super class’ constructor
• Overwrite the density function (which should be able to take either a number or a numpy array as input)
• Provide a routine to convert mass and concentration into the native parameters.
Here is the code:
class HernquistProfile(profile_base.HaloDensityProfile):
def __init__(self, **kwargs):
self.par_names = ['rhos', 'rs']
self.opt_names = []
profile_base.HaloDensityProfile.__init__(self, **kwargs)
def densityInner(self, r):
x = r / self.par['rs']
density = self.par['rhos'] / x / (1.0 + x)**3
return density
def setNativeParameters(self, M, c, z, mdef, **kwargs):
self.par['rs'] = mass_so.M_to_R(M, z, mdef) / c
self.par['rhos'] = M / (2 * np.pi * rs**3) / c**2 * (1.0 + c)**2
This derived class inherits all the functionality of the parent class, including other physical quantities (enclosed mass, surface density etc), derivatives, fitting to data, and the ability to add
outer profile terms.
Module contents
The following documents describe the general functionality of inner and outer profiles:
The following documents describe the specific implementations for each profile model: | {"url":"https://bdiemer.bitbucket.io/colossus/halo_profile.html","timestamp":"2024-11-12T02:02:22Z","content_type":"text/html","content_length":"36087","record_id":"<urn:uuid:cc5bf3e6-a3a5-47ea-a980-f837f4a2253f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00585.warc.gz"} |
5 Roadblocks That Make Division Word Problems Harder
If you’re like most teachers, you probably dread when it’s time to assign division word problems. They always seem to be so hard! But don’t worry – I’m here to help. In this blog post, I’ll share
five roadblocks that make division word problems hard and provide tips for overcoming them. So read on and get ready to help students start conquering division word problems with ease!
Why Are Word Problems So Hard to Learn?
5 Roadblocks:
• Not understanding the question
• Determining what’s important
• Not recognizing division keywords and phrases
• Knowing which mathematical operation to use
• Knowing how to check their answer
Roadblock #1 – Not Understanding the Question
One of students’ most common roadblocks when solving division word problems is not understanding the question. This can be due to several factors, such as confusion about the terminology used in the
When this happens, it’s important to encourage your students to read through the question carefully and ask you for clarification if they need it. You might also want to provide them with a template
that they can use to organize the information from the question before they start solving it.
Roadblock #2 – Determining What’s Important
Once students understand the question, they still might misinterpret what it’s asking them to do. For example, they might think that they need to find the answer to a division problem when the
question is asking them to find the product of two numbers.
This can be a difficult mistake for students to catch on their own, so it’s important to model how to read and interpret division word problems. You might also want to consider having them work in
pairs or small groups to check each other’s work for mistakes like this.
Roadblock #3 – Not Recognizing Division Keywords and Phrases
Another common mistake students make when solving division word problems is not paying attention to keywords and phrases. For example, many word problems will use phrases such as “each,” “every,” or
“groups of” which indicate that an equal sharing arrangement needs to be found.
Not paying attention to these keywords and phrases can trip students up because they might try to solve problems using an inappropriate method.
Roadblock #4 – Knowing Which Mathematical Operation to Use
Another common issue students face with division word problems is not knowing which operation they should use. In some cases, the problem might be unclear and could be solved using either division or
Other times, the problem might be requesting division, but students will try to solve it using addition or subtraction. Again, modeling is key here—show your students how you decide which operation
to use when solving division word problems.
You might also want to provide them with practice problems where they must determine which operation should be used before solving the problem.
Roadblock #5 – Knowing How to Check Their Answers
Not knowing how to check their answers creates a roadblock too. Teaching students to use the opposite function to check their answers can make a big difference. Like most of these issues, we can help
our students by modeling the correct way to overcome them.
Strategies to Get Past These Division Word Problems Roadblocks
When I taught 3rd grade, we found that our students struggled with word problems of all types. As a team, we devised a strategy we called R.U.P.S.E. This created a series of steps for them to follow
when solving word problems.
• R – Read
• U – Underline
• P – Plan
• S – Solve
• E – Explain
Roadblock #1 (R-Read)
Creating this framework helped our students overcome the first roadblock because they had to carefully read the word problem first.
Roadblock #2 and #3
The next step required them to underline important information and keywords before they started.
Here are some division keywords that students can look for:
• divide
• each
• equal groups
• equally
• every
• share
• split
• out of
This requirement included underlining the question being asked. Some examples of these questions:
• How many in each?
• How many equal groups?
• How many did each get?
• How much will each get?
Roadblock #4 (P-Plan)
Teaching our students to plan their strategy first helped them decide which operation made sense.
In division, that would mean finding equal groups. Our students were taught to decide how many groups there should be and draw circles to represent those. Then they divide the total number given in
the word problem equally among those groups.
Roadblock #5 (E – Explain)
The last one is so helpful for students. It requires them to reflect and write about their problem-solving.
The explanation can be just 1 – 2 sentences. It helped them and us see what their thinking was and where the misconceptions were. The misconceptions helped us show them how to check their answers in
this step.
Writing Tip: One thing that helped my 3rd graders improve their writing in this step is having them read their responses to each other and me. I would give them feedback on whether their response
explained it correctly. If their response was particularly good, I would ask them to share it with the whole class. They loved that, so they tried harder each time!
I hope this blog post has been helpful and given you some ideas on how to help your students overcome the five most common roadblocks to success with division word problems. If you’re looking for
more math resources, be sure to check out my other blog posts on math problem-solving. And finally, don’t forget to try out the RUPSE framework the next time you teach division word problems – I
promise it will make a difference!
Grab this Division FREEBIE that you can use to help students learn how to divide numbers into 2 – 9 groups. They are perfect for small group instruction!
Here are some division math resources you might find helpful!
Thanks for reading!
The most valuable resource that all teachers have is each other. Without collaboration our growth is limited to our own perspectives.
Robert John Meehan
Interested in signing up for my email?
Just Click!
• Get valuable resources and teaching tips delivered straight to your inbox
• Exclusive deals and discounts only available to email list subscribers
• Be the first to know about new products and launches
• Share your ideas and feedback with me directly, I love hearing from my readers!
The post 5 Roadblocks That Make Division Word Problems Harder appeared first on Teaching in the Heart of Florida. | {"url":"https://expertreviewslist.com/blogs/feed/5-roadblocks-that-make-division-word-problems-harder","timestamp":"2024-11-09T23:49:53Z","content_type":"text/html","content_length":"93103","record_id":"<urn:uuid:3890f776-cf2e-4622-8b36-9a845706dd17>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00672.warc.gz"} |
What is the percent difference between 100 and 1000?
Percentage Difference Calculator
How to work out percentage differences - Step by Step
To find the percent difference between two values x and y, use this formula:
% difference = |x - y(x + y)/2| × 100%.
Note: the "|" marks either side (called "bars"), represents the absolute value of the expression inside them. This operation always returns a positive value or zero (in the set of real numbers - ℝ).
Examples of absolute values:
• |1000| = 1000
• |-1000| = 1000
• |900| = 900
• |-3| = 3
• |0| = 0
Using this tool you can find the percent difference between any two values. So, we think you reached us looking for answers to questions like:
1) What is the percentage difference between 100 and 1000?
2) What is the percentage difference between 1000 and 100?
3) What is the absolute difference between 100 and 1000?
Or may be:
What is the percent difference between 100 and 1000?. See below the solution to these problems.
Where: x and y are two values of a give variable. Note that percent difference is not equivalent or equal to percent change. Percent change is used when comparing a new value to an old value where
the old value is used as a reference. Percent difference is used just to compare two values without considering any order. It doesn't matter which of the two values is written first. That is why the
absolute value^(*) is often used. Percent difference is also called relative percent difference.
(*) Note: absolute value means that we must consider always a positive value as for example: |1|=1 (absolute value of 1 is 1) and |-1|= 1 (absolute value of minus 1 is also 1). See more about percent
difference here.
Here are the solutions to the questions stated above:
1) What is the percentage difference between 100 and 1000?
Use the above formula to find the percent difference between two numbers. So, replacing the given values, we have the percent difference equation
Where: y = 100 and x = 1000, so |y| = 100 and |x| = 1000
2) What is the percentage difference between 1000 and 100?
As we can see, the answers to questions 1) and 2) are the same. That is because when we talk about difference in percentage, the order of the two values does not matter because the formula takes the
absolute value of the difference divided by the mean value of each value.
3) What is the absolute difference between 100 and 1000?
This problem is not about percent difference, but about absolute difference:
The absolute difference of two real numbers x, y is given by |x − y|, the absolute value of their difference. The minus sign denotes subtraction, and |z| means absolute value. Absolute difference
describes the distance between the points corresponding to x and y on the real line.
Now that you know what is an absolute difference, The solution is very simple:
absolute difference = |x - y|
= |1000 - 100| = |1000 - 100| = 900
What is the difference between "percentage change" and "percentage difference"?
It is important to note that the terms "percentage change" and "percentage difference" are related with each other. However, they have different meanings in different circumstances, and they are
calculated differently.
Percentage Difference
On the other hand, this measures the relative difference between two values or how much one value differs from another. It is usually used to compare two quantities without taking into account time
variation. The formula for the percentage difference is:
% difference = |x - y(x + y)/2| × 100%
If you are comparing the heights of two people, one 160 cm tall and the other 180 cm tall:
% difference = |160 - 180(160 + 180)/2| × 100%
Thus, the "Percentage Difference" is: 11.76%
Percentage Change
This term refers to a value where one can calculate how much something has changed over time. It is often used when describing financial metrics such as prices, revenues, or profits for a given
period of time. The formula for percentage change is:
Percent change = New - Old|Old| × 100%
If you are comparing the heights of one person over the years from 160 cm tall to 180 cm tall:
Percent change = 180 - 160 |160| × 100%
So, the "Percentage Change" is: 12.5% (increase)
Examples of percentage difference calculations | {"url":"https://coolconversion.com/math/percent-difference-calculator/What-is-the-percent-difference-between_100_and_1000_%253f","timestamp":"2024-11-05T16:05:44Z","content_type":"text/html","content_length":"79215","record_id":"<urn:uuid:764d55d7-9fb9-40d7-8256-2187bc2483eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00623.warc.gz"} |
Car Loan Interest Rates: How to calculate EMI on Car loan per month?
Car loan Interest Rates: How to calculate car loan EMI per interest rate?
Car companies are widely offering cars on loans through banking institutions and private loan organizations as the primary third-party taken into trust. In today’s world, it has become much easier
for individuals to purchase automobiles (specifically, Cars) on Loan in order to enjoy the luxury of travelling using a personal vehicle. However, when it comes to car loans, one has to contemplate
car loan interest rates that will fit their financial repayment capability.
Source: Money Control
For example, Car loan interest rates for Tesla start from 2.49%. However, this interest rate may increase if the buyer chooses a much modern Tesla Cars and Trucks model.
Thus, before consenting to a car loan interest rate, one must know how to calculate car loan EMI as per the interest rate. Why so? Well! By doing this calculation, you can see an accurate interest
amount that you will have to pay on the top of the Car’s actual monetary value.
In terms of decision making, a beforehand calculation is a positive step. The reason being, it tells the buyer, how much interest he will be required to pay per month to the bank or the leasing co.?
If the EMI payment per month is higher than your calculated budget, you can get into some very serious financial troubles.
Source: CARS24
So, without any delay, let’s discern how you can evaluate EMI per month as per the car loan interest rates set by the seller or banking institution.
How to calculate EMI as per Car Loan Interest Rates?
To begin with, loans of high value are called amortizing loans. (But, it does not include a personal loan). Amortizing loans are the types of loans that are payable on the monthly based EMI system.
Thus, while taking a car loan, you will be asked to pick the following as per your preference:
• Loan Term
• The principle amount of Loan
• Repayment Plan
• Repayment amount
So, before you calculate the EMI amount per month on your car loan, it is significant to calculate the entire repayment amount. Meaning, the overall payment you will need to make throughout the loan
term. To understanding this, let’s review the following guide:
How to calculate the final repayment amount on the Car loan?
To comprehend this mathematical guide, let’s suppose the following:
The principal value of Car = $100, 000
Thus, the initiating loan amount = $100, 000
Car loan interest rate = 6%
Suppose, loan term = 30 years (i.e., 360 months)
So, calculated per month EMI on this Car loan will be $599.55.
Mistakes people make while calculating repayment amount: While calculating repayment amount, loan takers simply calculate the total value of all EMI(s) they will be paying for the next 30 years.
Thus, according to their calculation, the Repayment Amount will be equals to $599.55 * 360 = 215, 838 USD. Now, this method of calculation is entirely wrong. The reason being, in this calculation,
interest payment has not been deducted each month from the EMI per month. So, what is the right way to do this calculation? Let’s take a glance:
Source: ET Money
The right way to calculate the repayment amount:
First of all, it is significant to state that, interest payment as per the car loan interest rate decreases each month. The reason being, the loan principal amount keeps decreasing. Why does the
principal amount decrease? When you make EMI per month, the interest payment is deducted from the same. Rest of the EMI money goes to the principal amount repayment. To comprehend keenly, check out
the following calculation on a monthly basis:
Month 1 of loan repayment:
The principal value of Car = $100, 000
Thus, the initiating loan amount = $100, 000
Car loan interest rate = 6%
Thus, interest payable = principal amount * (6% of principal amount/12) = $500.
Here, we have divided the interest rate by 12 because our loan term is given in years. But, we are calculating the interest rate payable for the 1st month only.
EMI = $599.55 (calculation given in the next section)
Now, for the 1st month, you paid $599.55 EMI.
From EMI, Interest payable for 1st month remains deductible. Thus, it’d be $599.55 – $500 = $99.55.
After the deduction of interest payable for 1st month, the rest “99.55 USD” are deducted from your final principal amount balance. Consider this: you paid $99.55 to pay the principal amount. Thus,
for the next month, loan amount will be = $100, 000 – $99.55 = $99, 900.45.
Month 2 of loan repayment:
The principal value of loan (minus, repayment made on first month) = $99, 900.45
Car loan interest rate = 6%
Thus, interest payable = end balance principal amount * (6% of principal amount/12)
=.99, 900.45 * (0.06/12)
= 499.50 USD.
Here, we have divided the interest rate by 12 because our loan term is given in years. But, we are calculating the interest rate payable for the 1st month only.
EMI = $599.55 (calculation given in the next section)
Now, for the 2nd month, you paid $599.55 EMI.
From EMI, Interest payable for 2nd month remains deductible. Thus, it’d be $599.55 – $499.50 = $100.05.
After the deduction of interest payable for 2nd month, the rest “100.05 USD” are deducted from your END BALANCE principal amount. Consider this: you paid $100.05 to pay the (left) principal amount.
Thus, for the next month, loan amount will be = $99, 900.45 – $100.05 = $99, 800.4.
Month 3 of loan repayment:
The principal value of loan (minus, repayment made on first month) = $99, 800.4
Car loan interest rate = 6%
Thus, interest payable = end balance principal amount * (6% of principal amount/12)
=.99, 800.4 * (0.06/12)
= 499.00 USD.
Here, we have divided the interest rate by 12 because our loan term is given in years. But, we are calculating the interest rate payable for the 1st month only.
EMI = $599.55 (calculation given in the next section)
Now, for the 3rd month, you paid $599.55 EMI.
From EMI, Interest payable for 3rd month remains deductible. Thus, it’d be $599.55 – $499.00 = $100.55.
After the deduction of interest payable for the 3rd month, the rest of “100.55 USD” is deducted from your END BALANCE principal amount. Consider this: you paid $100.55 to pay the (left) principal
amount. Thus, for the next month, loan amount will be = $99, 800.4 – $100.55 = $99, 699.85.
Month 4 of loan repayment:
The principal value of loan (minus, repayment made on first month) = $99, 699.85
Car loan interest rate = 6%
Thus, interest payable = end balance principal amount * (6% of principal amount/12)
=99, 699.85 * (0.06/12)
= 498.49 USD.
Here, we have divided the interest rate by 12 because our loan term is given in years. But, we are calculating the interest rate payable for the 1st month only.
EMI = $599.55 (calculation given in the next section)
Now, for the 4th month, you paid $599.55 EMI.
From EMI, Interest payable for 4th month remains deductible. Thus, it’d be $599.55 – $498.49 = $101.06.
After the deduction of interest payable for the 4th month, the rest of “100.55 USD” is deducted from your END BALANCE principal amount. Consider this: you paid $100.55 to pay the (left) principal
amount. Thus, for the next month, loan amount will be = $99, 699.85 – $101.06 = $99, 598.79.
In this way, keep calculating the principal amount ending balance until it becomes zero. In the 360th month, the principal loan amount will be zero.
Some significant points to note:
• Principal loan amount ending balance keeps decreasing from the 1st month to the ending month of the loan term.
• Thus, the interest amount payable per month in the future will also keep decreasing.
• Only EMI per month will remain the same until the end of loan repayment.
How to calculate EMI Amount per month as per Car loan Interest Rates?
To calculate EMI Amount as per the loan amortization formula, check out the following guide:
EMI per month on Car loan = [P x R x (1+R)^N]/[(1+R)^N-1]
Here, P = Principal loan amount = $100, 000.
R per month = Interest Rate/12 = 6%/12 = 6 / 100*12 = 0.005
N = Number of total installments payable = 30 years = 360 months
Thus, the final calculation will be,
EMI = [100000 x 0.005 x (1+0.005)^360]/[(1+O.OO5)^360-1]
= [ 500 x 1.005 ^ 360] / [(1.005) ^ 359]
= $599.55.
Calculate EMI using Microsoft Excel
In the event that you are calculating EMI(s) for thousands of loan applicators for a large scale organization, the physical calculation can be a challenging process. Thus, you can apply the following
formula in excel sheet.
EMI = PMT (Principal amount / 12, (Loan Term * 12), – Principal Amount)
PMT here stands for Periodic payment for annuity investment.
Use EMI Calculator Online
Bank of America Car Loan EMI Calculator is the best alternative if you desire to get quick results. All you need to know is the current car loan interest rates (in-consideration) by your loan lender
such as co., banking institution, etc. here is how to calculate EMI using Bank of America EMI Calculator:
• Go to https://www.bankofamerica.com/auto-loans/auto-loan-calculator/.
• Enter the principal loan value in the first column.
• Next, enter the preferred loan term (in months)—for example, 24 months for loan repayment in 2 years.
• Enter the expected interest rate.
• Click on the “Calculate payment” option at the bottom.
• The final EMI payable per month will be visible on the right.
For quick mathematical whizzes, connect with us online. | {"url":"https://dailybayonet.org/car-loan-interest-rates/","timestamp":"2024-11-15T00:46:18Z","content_type":"text/html","content_length":"156885","record_id":"<urn:uuid:a9e88ce6-d740-4dde-a247-d0169d12e50e>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00645.warc.gz"} |
Analysis Methods for Buildings Frames | Structural Frame Analysis | Frame Analysis Example | How to Analyze the Structure | Method of Consistent Deformation
Analysis Methods for Building Frames:
The term analysis method for building frames of a is defined as a general flat‐plate structural system comprising thin Kirchoff plates.
Which are interconnected by one‐dimensional flexural elements of various shapes and layouts.
There are have many different methods of building frame analysis that’s are below-
• Building Frames Analyze by Approximate Methods for Vertical Loads.
• Building Frames Analyse by the Cantilever Method for Horizontal Loads.
• Building Frame Analyses by the Portal Method for Horizontal Loads.
#1. Building Frames Analyze by Approximate Methods for Vertical Loads-
#2. Building Frames Analyse by the Cantilever Method for Horizontal Loads-
• During its lifetime of a building, a frame may be subjected and earthquake loads.
• In this method, the building frame must be designed to withstand lateral loads.
• A multi-story or two-story frame subjected to lateral loads and the actual deflection of the beam shows the dotted.
#3. Building Frame Analyses by the Portal Method for Horizontal Loads-
In this method of frame analyses, some assumptions have flowed below-
• In the portal method of frame analyses, each column an inflection point occurs at the mid-height.
• The interior column carries twice the shear of the exterior column, such as the total horizontal shear at each story is divided between the columns of the story.
Also, Read: Components of Space Frame System | Types of Space Frame | Advantages & Disadvantages of Space Frame Structure | Space Frame Structures Examples
Frame Analysis-
The frame analysis is the members and structure at working loads, and elastic analysis deals with the study of strength and nature.
The frame analysis is defined as a general flat‐plate structural system comprising thin Kirchoff plates. Which are interconnected by one‐dimensional flexural elements of various shapes and layouts.
Also, Read: Reinforced Concrete Frame | Concrete Frame Construction | Concrete Building Construction | Frame Construction | Types of Frame
Methods of Analysis:
There are many different types of method of frame analysis, that’s been below-
• Flexibility Coefficient Method.
• Approximate Methods.
□ Vertical Load.
□ Horizontal Load.
☆ Substitute Frame Method.
☆ Portal Method.
☆ Cantilever Method.
• Iterative Methods.
□ Moment Distribution Method.
□ Kani’s Method.
• Slope Displacement Method.
#1. Flexibility Coefficient Method-
Coefficients of the unknowns in equations to be solved are “flexibility” coefficients. Force (Flexibility) Method For determinate structures, the force method allows us to find internal forces (using
equilibrium i.e. based on Statics) irrespective of the material information.
The flexibility coefficient method is also known as the force method or compatibility method. The flexibility coefficient method is considering the geometrical condition imposed on the formation of
structures. This method is mainly used for analysing frames of low D.O.R.
#2. Approximate Methods–
For the preliminary design of the frame, the approximate analysis of the hyperstatic structure provides a simple means of obtaining a quick solution.
To obtain a rapid solution of a complex structure, the approximate method makes use of simplifying assumptions regarding structural behavior. The location of zero moments in the structure each point
of inflection corresponds.
The approximate method is carried out separately for these two case-
• Vertical Loads-
• Horizontal Loads-
#2.1. Vertical Loads-
• On the beam and column, the stress and the structure subjected to a vertical load depend upon the relative stiffness.
• The approximate method either assumes adopts simplified moment distributed methods or an adequate number of hinges to render the structure determinate.
#2.2. Horizontal Loads-
• To depends on its height to width ratio, the structure is subjected to a horizontal force. It is dominated by bending action in a high-rise building where the height is several times greater than
its lateral dimensions.
The structure subjected to horizontal loading there is three methods are analysis.
• Substitute Frame Method.
• Portal Method.
• Cantilever Method.
#2.2.1. Portal Method-
• Portal frame can be defined as two-dimensional rigid frames that have the basic characteristics of a rigid joint between column and beam. Portal frame construction is a method of building and
designing structures
• The method makes simplifying assumption regarding horizontal shear in columns in low-rise structure. In the portal method, the point of inflection occurs at the mid-point of the beams.
#2.2.2. Cantilever Method-
• The cantilever method is very similar to the portal method. We still put hinges at the middles of the beams and columns.
• The only difference is that for the cantilever method, instead of finding the shears in the columns first using an assumption, we will find the axial force in the columns using an assumption.
• For the high-rise building structure, this method is applicable. On the axial force of columns, this method is based on simplifying assumptions.
#2.2.3. Substitute Frame Method-
• Substitute frame method assumes that the moments in the beams of any floor are influenced by loading on that floor alone.
• The influence of loading on the lower or upper floors is ignored altogether. The process involves the division of multi-storied structure into smaller frames.
• The substitute frame method assumes that the moment in the beams of any floor is influenced by loading on that floor alone.
• The multi-storeyed structure into the smaller frame the division process is involved. The subframes are known as substitute frames.
#3. Iterative Methods-
• Indeterminate structure, the iterative method is a powerful method of frame analysis. The iterative method is simple and adequate for the usual structure of frame analysis.
• The distribution of the joint moments this method based on the among members connected to the joint.
There are a few sub-methods of iterative method that’s are below-
• Moment Distribution Method.
• Kani’s Method.
#3.1. Moment Distribution Method-
• The moment distribution method is a structural analysis method for statically indeterminate beams and frames developed by Hardy Cross.
• In the moment distribution method, the structural system is at first reduced to its kinematically determinate. This is accomplished by assuming all joints to be fully restrained.
• For this condition of the structure, the fixed end moment is calculated. By releasing them successively, the joint is allowed to deflect rotate.
• It was published in 1930 in an ASCE journal. The method only accounts for flexural effects and ignores axial and shear effects.
#3.2. Kani’s Method-
• Kani’s method was introduced by Gasper Kani in 1940’s. It involves distributing the unknown fixed end moments of structural members to adjacent joints, in order to satisfy the conditions of
continuity of slopes and displacements.
• Kani’s method is also known as the Rotation contribution method.
• In order to satisfy the condition of continuity of slope and displacement, it involves distributing the unknown fixed end moments of structural members to the adjacent point. Kani’s method
distributed the total joint moment at any stage of iteration.
#4. Slope Displacement Method-
• The slope deflection method is a structural analysis method for beams and frames introduced in 1914 by George A. Maney.
• The slope deflection method was widely used for more than a decade until the moment distribution method was developed.
• The slope displacement method is also called stiffness or displacement or equilibrium method. It is expressing the relation between the moments acting at the ends of the members is consist of a
series of the simultaneous equation that is written in term of slope deflection.
• We gate the value of unknown rotation of the joints by a solution of slope deflection equation along with the equilibrium equation. The end moment is calculated slope deflection equation by
knowing this rotation.
Also, Read: Skeleton Frame | What Is Building Skeleton | What Is Steel Structure Building | Use of Steel Frame Structures| Advantages & Disadvantage of Steel Frame
Structural Frame Analysis-
Structural frame analysis determines the effects of loads on the structural components and the physical structure.
This type of analysis includes all that must withstand loads, such as buildings, bridges, aircraft, and ships.
The result of structural frame analysis is used to verify a structure’s fitness for use, often precluding physical tests. In the engineering design of structures, the structural frame analysis is a
key part.
Also, Read: Detail of Beam Connection | Simple Framing Connection | Semi-Rigid Framing Connection | Rigid Frame Connection
Approximate Method of Structural Analysis-
Approximate analysis is conducted by making realistic assumptions about the behavior of the structure.
Approximate Analysis of Indeterminate Trusses During preliminary design and analysis, the actual member dimensions are not usually known. Note the areas of cross-sections of the columns are
The approximate method of structural analysis is useful for determining the moment and forces in the different members of the frame or structure.
By making a realistic assumption about the behavior of the structure, the approximate method of structural analysis is conduct.
The approximate method of structural analysis has below submethods-
• Portal Method.
• Cantilever Method.
#1. Portal Method-
• In the portal method, the point of inflection occurs at the mid-point of the beams.
• The method makes simplifying assumption regarding horizontal shear in columns in low-rise structure.
#2. Cantilever Method-
• On the axial force of columns, this method is based on simplifying assumptions.
• For the high-rise building structure, this method is applicable.
Also, Read: What Is Superstructures | Difference Between Load-Bearing and Framed Structures
Method of Consistent Deformation-
The force method (also called the flexibility method or method of consistent deformation ) is used to calculate reactions and internal forces in statically indeterminate structures due to loads and
imposed deformations. The system thus formed is called the basic determinate structure.
The method of consistent deformation is also called the flexibility method or force method. Method of consistent deformation is used to calculate the reactions and forces of statically indeterminate
Method of consistent deformation is useful for statically indeterminate structures of the single-story building and uncommon geometry building.
Method of consistent deformation is a process where the structure is transformed into a statically determinate system, and then we calculate all the system of forces by applying the boundary
Also, Read: Load Bearing Wall Construction | How to Tell If a Wall Is Load Bearing | Load Bearing Beam | Non-Load Bearing Wall | Non-Load Bearing Wall Framing
Frame Analysis Example:
Before constructing a building, you need to do frame analysis. B frame analysis method, we can easily determine each and every structural condition of that building.
Examples of frame analysis are done by mainly four methods, those four methods are-
• Flexibility Coefficient Method.
• Slope Displacement Method.
• Iterative Method.
• Approximate Method.
Mainly by these methods, the frame analysis is done.
Slope Deflection Method Frame:
The slope deflection method is a method that consists of structural analysis of beams and frames. In the slope deflection method, the frame consists of a simultaneous equation expressed the moments
acted at the end of the member.
In the slope deflection method of frames, rotation of angles is measured with slope deflection equation and joint & shear equilibrium conditions.
The slope deflection method frame process is only effective in structures with small kinematic indeterminacy.
Also, Read: 10 Different Types of Loads on Structures| What Are Structural Loads
Force Method Structural Analysis:
Force method structural analysis is also called the flexibility method or method of consistent deformation.
Force method structural analysis is a process where the structure is transformed into a statically determinate system, and then we calculate all the system of forces by applying the boundary
Force method structural analysis is useful for statically indeterminate structures of the single-story building and uncommon geometry building.
Force method structural analysis is used to calculate the reactions and forces of statically indeterminate structures.
Analysis of Statically Indeterminate Structures by the Force Method:
Force (Flexibility) Method For determinate structures, the force method allows us to find internal forces (using equilibrium i.e. based on Statics) irrespective of the material information.
However, for indeterminate structures, Statics (equilibrium) alone is not sufficient to conduct structural analysis.
Analysis of statically indeterminate structure by force method is a process where the structure is transformed into a statically determinate system, and then we calculate all the system of forces by
applying the boundary condition.
Here statically indeterminate structures of the single-story building and uncommon geometry building are analysed by force method.
The analysis by the force method is used to calculate the reactions and forces of statically indeterminate structures.
Also, Read: Epoxy Flooring Cost Calculator
Moment Distribution Method Frame:
In the moment distribution method, initially, the structure is rigidly fixed at every joint or support. The fixed end moments are calculated for any loading under consideration.
Subsequently, one joint at a time is then released. When the moment is released at the joint, the joint moment becomes unbalanced.
The moment distribution method of the frame is generally suitable for statically indeterminate structures. In the moment distribution method, we consider all the joints are fully restrained.
Here the joints are allowed to rotate one by one after releasing them. The moment distribution method frame analysis method is appropriate for analysing continuous beam, including a non-prismatic
By the use of the moment distribution method, to obtain the Fixed Moment of the unsymmetrical frame, we need to analyse it more than once.
The moment distribution method frame is also applicable for intermediate hinge structures.
Also, Read: Combined Footing | What Is Combined Footing | Advantage & Disadvantage of Combined Footing | Application of Combined Footing | Types of Combined Footing | Combined Footing Design
Portal Frame Analysis:
In the portal frame analysis method, we consider horizontal shear in columns. Here every structure is considered as a portal frame, and the horizontal force is distributed equally.
Portal frame analysis method, we observed that the inflection point is situated at the mid-height of every column.
In frame analysis, in the mid-span of the beam, the inflection points occur. In the portal frame analysis method, the loads are distributed to all the columns, so the outer columns carry the half
forces and the rest of the forces carried by inner columns.
Force Method for Beams:
The force method for beams is also called the flexibility method or method of consistent deformation.
The force method of beams is used to calculate the reactions and forces of statically indeterminate structures.
Force method of beams is useful for statically indeterminate structures of the single-story building and uncommon geometry building.
The force method of beams is a process where the structure is transformed into a statically determinate system, and then we calculate all the system of forces by applying the boundary condition.
Also, Read: Spandrel Beam Definition | Properties of Spandrel Beam | Advantages & Disadvantages of Spandrel Beam | Uses of Spandrel Beam | Spandrel Beam Design
Frame Analysis-
Frame analysis (also called framing analysis) is a multi-disciplinary social science research method used to analyze how people understand situations and activities. Frame analysis looks at images,
stereotypes, metaphors, actors, messages, and more.
Methods of Analysis-
• Regression analysis.
• Monte Carlo simulation.
• Factor analysis.
• Cohort analysis.
• Cluster analysis.
• Time series analysis.
• Sentiment analysis.
Approximate Method-
In this chapter, approximate methods mean analytical procedures for developing solutions in the form of functions that are close, in some sense, to the exact, but usually unknown, solution of the
nonlinear problem. Realistic error bounds or estimates exist for some approximate processes.
Also, Read: Aggregates | Difference Between Coarse And Fine | How to do Shape and Size Matter in Aggregate
Structural Frame Analysis-
Structural analysis is the prediction of the response of structures to specified arbitrary external loads. During the preliminary structural design stage, a structure’s potential external load is
estimated, and the size of the structure’s interconnected members is determined based on the estimated loads.
Approximate Method of Structural Analysis-
Approximate analysis is conducted by making realistic assumptions about the behavior of the structure. Approximate Analysis of Indeterminate Trusses During preliminary design and analysis, the actual
member dimensions are not usually known. Note the areas of cross-sections of the columns are different.
Method of Consistent Deformation-
The force method (also called the flexibility method or method of consistent deformation ) is used to calculate reactions and internal forces in statically indeterminate structures due to loads and
imposed deformations. The system thus formed is called the basic determinate structure.
Also, Read: Aggregates | Difference Between Coarse And Fine | How to do Shape and Size Matter in Aggregate
Force Method Structural Analysis-
The force method of analysis, also known as the method of consistent deformation, uses equilibrium equations and compatibility conditions to determine the unknowns in statically indeterminate
structures. The structure that remains after the removal of the redundant reaction is called the primary structure.
Moment Distribution Method Frame-
In the moment distribution method, initially, the structure is rigidly fixed at every joint or support. The fixed end moments are calculated for any loading under consideration. Subsequently, one
joint at a time is then released. When the moment is released at the joint, the joint moment becomes unbalanced.
Portal Frame Analysis-
The portal method is based on the assumption that, for each storey of the frame, the interior columns will take twice as much shear force as the exterior columns. A column that is twice as stiff will
take twice as much load for the same lateral displacement.
Also, Read: Deshuttering Time | When to Remove Concrete Forms | Concrete Formwork Removal Time | Earliest Time to Remove Concrete Forms | When To Strip Concrete Forms
Force Method for Beams-
The force method or the method of consistent deformation is based on the equilibrium of forces and compatibility of structures. The method entails first selecting the unknown redundant for the
structure and then removing the redundant reactions or members to obtain the primary structure.
Analysis of Statically Indeterminate Structures by the Force Method-
Force (Flexibility) Method For determinate structures, the force method allows us to find internal forces (using equilibrium i.e. based on Statics) irrespective of the material information. However,
for indeterminate structures , Statics (equilibrium) alone is not sufficient to conduct structural analysis.
Portal Frame Method
Portal frame is a construction technique where vertical supports are connected to horizontal beams or trusses via fixed joints with designed-in moment-resisting capacity. The result is wide spans and
open floors.
Frames in Structural Analysis
Frames are structures composed of vertical and horizontal members, as shown in Figure 1.3a. The vertical members are called columns, and the horizontal members are called beams. Frames are classified
as sway or non-sway.
What Are the 5 Basic Methods of Statistical Analysis?
The five basic methods are mean, standard deviation, regression, hypothesis testing, and sample size determination.
a Building Frame Is Subjected to Horizontal Forces Due To
Without more information, it’s difficult to provide a specific answer. However, here are some general points to consider:
• Buildings are designed to withstand various types of loads, including horizontal loads such as wind and seismic forces.
• The frame of a building typically includes the columns, beams, and other structural members that support the weight of the building and transfer loads to the foundation.
• Horizontal loads can cause the building frame to deform or sway, which can lead to structural damage or failure if the loads exceed the design limits.
• To ensure the safety and stability of a building, engineers must carefully consider the anticipated horizontal loads and design the building frame to resist those loads through a combination of
materials, geometry, and connections.
• The specific design considerations and methods used will depend on a variety of factors, including the location and type of building, local building codes and regulations, and the anticipated
loads based on historical data and simulations.
Types of Frames in Structural Analysis
In general, there are two main categories of frame structures, namely the braced frame structure and rigid frame structure.
State the Methods Used for the Analysis of a Building Frame for Horizontal and Vertical Loads.
Building frames can be analyzed by various methods such as force method, displacement method, and approximate method. The method of analysis to adopt depends upon the types of frame, its
configuration (portal bay or multi-bay) in multi-storied frame and degree of indeterminacy.
Structural Analysis of Frames Examples
Here are a few examples of structural analysis of frames:
• Determining member forces and reactions: Engineers may use structural analysis techniques such as the method of joints or the method of sections to calculate the forces in the individual members
of a frame and the reactions at the supports.
• Designing for wind loads: Wind loads can exert significant horizontal forces on a building frame, which can cause the frame to deform or sway. Engineers may use finite element analysis (FEA) or
other computational tools to model the behavior of the frame under wind loads and design the frame to resist these forces.
• Analyzing for seismic loads: Similar to wind loads, seismic loads can also subject a building frame to significant horizontal forces.
• Engineers may use dynamic analysis techniques, such as the response spectrum method or time-history analysis, to model the response of the frame to seismic loads and design the frame to withstand
Types of Structural Analysis Methods
There are three approaches to the analysis: the mechanics of materials approach (also known as strength of materials), the elasticity theory approach (which is actually a special case of the more
general field of continuum mechanics), and the finite element approach.
Also, Read: Structure Audit Definition
Frame Structural Analysis
Frame structural analysis is a process of determining the internal forces and deformations in the structural members of a frame under various loading conditions. The purpose of structural analysis is
to ensure that the frame can safely and effectively resist the applied loads and meet the design requirements.
Factor Method Frame Analysis
The factor method (Wilbur, 1934) is another approximate method for analysing building frames subject to lateral loads. This method is said to be more accurate than either the portal or the cantilever
method. In portal or cantilever methods, certain stress assumptions are made so as to make the structure determinate.
Analysis of Frame
The analysis of a frame involves determining the internal forces and deformations in each member of the frame under various loading conditions. The goal of frame analysis is to ensure that the frame
can safely resist the applied loads without excessive deformation or failure.
Structural Frame Examples
Structural Frames
• Life Cycle Assessment.
• Beam-Column.
• Shear Walls.
• Precast Concrete.
• Reinforced Concrete.
• Embodied Energy.
• Structural Component.
Frame Analysis Structural Engineering
Frame analysis is a fundamental component of structural engineering that involves the determination of the internal forces and deformations in the members of a frame under various loading conditions.
It is used to ensure that a frame can safely and effectively resist the applied loads and meet the design requirements.
Originally posted 2023-04-24 15:45:25.
Leave a Reply Cancel reply | {"url":"https://civiljungles.com/analysis-methods-for-buildings-frames/","timestamp":"2024-11-07T15:55:07Z","content_type":"text/html","content_length":"229502","record_id":"<urn:uuid:678db221-196b-41f6-8235-8ac2bcad947b>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00768.warc.gz"} |
Proof of the Infinitude of Primes
Fürstenberg's Proof of the Infinitude of Primes
By Chris Caldwell
Euclid may have been the first to give a proof that there are infintely many primes. Since then there have been many other proofs given. Perhaps the strangest is the following topological proof by
Fürstenberg [Fürstenberg55]. See the page "There are Infinitely Many Primes" for several other proofs.
There are infinitely many primes.
Define a topology on the set of integers by using the arithmetic progressions (from -infinity to +infinity) as a basis. It is easy to verify that this yields a topological space. For each prime p
let A[p] consists of all multiples of p. A[p] is closed since its complement is the union of all the other arithmetic progressions with difference p. Now let A be the union of the progressions A[
p]. If the number of primes is finite, then A is a finite union of closed sets, hence closed. But all integers except -1 and 1 are multiples of some prime, so the complement of A is {-1, 1} which
is obviously not open. This shows A is not a finite union and there are infinitely many primes.∎
Printed from the PrimePages <t5k.org> © Chris Caldwell. | {"url":"https://t5k.org/notes/proofs/infinite/topproof.html","timestamp":"2024-11-08T22:11:51Z","content_type":"text/html","content_length":"10182","record_id":"<urn:uuid:dfa02b32-67df-43b7-926a-5170a37fa5f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00406.warc.gz"} |
Christoph Spiegel
Christoph Spiegel
I like using computational tools for theoretical mathematics. Lately, I have used flag algebras, semidefinite programming, machine learning, and discrete optimization to improve some long-standing
bounds in extremal combinatorics and discrete geometry. Currently, I am exploring the areas of proof verification and assistance.
Room 3036 at ZIB
spiegel (at) zib.de
spiegel (at) campus.tu-berlin.de
German, English, and French
since 2024
Postdoc Representative at MATH+
since 2022
Deputy Department Head at ZIB
since 2020
Member of MATH+
since 2020
Researcher at ZIB
View More / Less
Jun 2020
Ph.D. in Applied Mathematics at UPC
2015 to 2016
Member of BMS
2014 to 2015
Research Assistant at ZIB
Mar 2015
M.Sc. in Mathematics at FUB
Dec 2012
B.Sc. in Mathematics at FUB
1. Parczyk, O., and Spiegel, C. (2024). An Unsure Note on an Un-Schur Problem. [arXiv] [BibTeX]
archiveprefix = {arXiv},
eprint = {2410.22024},
primaryclass = {math.CO},
year = {2024},
author = {Parczyk, Olaf and Spiegel, Christoph},
title = {An Unsure Note on an Un-Schur Problem}
2. Zimmer, M., Andoni, M., Spiegel, C., and Pokutta, S. (2023). PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs. [arXiv] [code] [BibTeX]
archiveprefix = {arXiv},
eprint = {2312.15230},
primaryclass = {cs.LG},
year = {2023},
author = {Zimmer, Max and Andoni, Megi and Spiegel, Christoph and Pokutta, Sebastian},
title = {PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs},
code = {https://github.com/ZIB-IOL/PERP}
3. Zimmer, M., Spiegel, C., and Pokutta, S. (2022). Compression-aware Training of Neural Networks Using Frank-Wolfe. [arXiv] [BibTeX]
archiveprefix = {arXiv},
eprint = {2205.11921},
primaryclass = {cs.LG},
year = {2022},
author = {Zimmer, Max and Spiegel, Christoph and Pokutta, Sebastian},
title = {Compression-aware Training of Neural Networks Using Frank-Wolfe}
4. Combettes, C., Spiegel, C., and Pokutta, S. (2020). Projection-free Adaptive Gradients for Large-scale Optimization. [arXiv] [summary] [code] [BibTeX]
archiveprefix = {arXiv},
eprint = {2009.14114},
primaryclass = {math.OC},
year = {2020},
author = {Combettes, Cyrille and Spiegel, Christoph and Pokutta, Sebastian},
title = {Projection-free Adaptive Gradients for Large-scale Optimization},
code = {https://github.com/ZIB-IOL/StochasticFrankWolfe},
summary = {https://pokutta.com/blog/research/2020/10/21/adasfw.html}
5. Pokutta, S., Spiegel, C., and Zimmer, M. (2020). Deep Neural Network Training with Frank-Wolfe. [arXiv] [summary] [code] [BibTeX]
archiveprefix = {arXiv},
eprint = {2010.07243},
primaryclass = {cs.LG},
year = {2020},
author = {Pokutta, Sebastian and Spiegel, Christoph and Zimmer, Max},
title = {Deep Neural Network Training with Frank-Wolfe},
code = {https://github.com/ZIB-IOL/StochasticFrankWolfe},
summary = {https://pokutta.com/blog/research/2020/11/11/NNFW.html}
6. Salia, N., Spiegel, C., Tompkins, C., and Zamora, O. (2019). Independent Chains in Acyclic Posets. [arXiv] [BibTeX]
archiveprefix = {arXiv},
eprint = {1912.03288},
primaryclass = {math.CO},
year = {2019},
author = {Salia, Nika and Spiegel, Christoph and Tompkins, Casey and Zamora, Oscar},
title = {Independent Chains in Acyclic Posets}
Conference proceedings
1. Kiem, A., Pokutta, S., and Spiegel, C. (2024). The 4-color Ramsey Multiplicity of Triangles. Proceedings of Discrete Mathematics Days. [URL] [arXiv] [code] [BibTeX]
year = {2024},
booktitle = {Proceedings of Discrete Mathematics Days},
url = {https://dmd2024.web.uah.es/files/abstracts/paper_3.pdf},
archiveprefix = {arXiv},
eprint = {2312.08049},
primaryclass = {math.CO},
author = {Kiem, Aldo and Pokutta, Sebastian and Spiegel, Christoph},
title = {The 4-color Ramsey Multiplicity of Triangles},
code = {https://github.com/FordUniver/kps_trianglemult}
2. Zimmer, M., Spiegel, C., and Pokutta, S. (2024). Sparse Model Soups: A Recipe for Improved Pruning Via Model Averaging. Proceedings of International Conference on Learning Representations. [URL]
[arXiv] [BibTeX]
year = {2024},
booktitle = {Proceedings of International Conference on Learning Representations},
url = {https://iclr.cc/virtual/2024/poster/17433},
archiveprefix = {arXiv},
eprint = {2306.16788},
primaryclass = {cs.LG},
author = {Zimmer, Max and Spiegel, Christoph and Pokutta, Sebastian},
title = {Sparse Model Soups: A Recipe for Improved Pruning Via Model Averaging}
3. Kiem, A., Pokutta, S., and Spiegel, C. (2024). Categorification of Flag Algebras. Proceedings of Discrete Mathematics Days. [URL] [BibTeX]
year = {2024},
booktitle = {Proceedings of Discrete Mathematics Days},
url = {https://dmd2024.web.uah.es/files/abstracts/paper_47.pdf},
author = {Kiem, Aldo and Pokutta, Sebastian and Spiegel, Christoph},
title = {Categorification of Flag Algebras}
4. Mundinger, K., Pokutta, S., Spiegel, C., and Zimmer, M. (2024). Extending the Continuum of Six-Colorings. Proceedings of Discrete Mathematics Days. [URL] [arXiv] [BibTeX]
year = {2024},
booktitle = {Proceedings of Discrete Mathematics Days},
url = {https://dmd2024.web.uah.es/files/abstracts/paper_27.pdf},
archiveprefix = {arXiv},
eprint = {2404.05509},
author = {Mundinger, Konrad and Pokutta, Sebastian and Spiegel, Christoph and Zimmer, Max},
title = {Extending the Continuum of Six-Colorings}
5. Zimmer, M., Spiegel, C., and Pokutta, S. (2023). How I Learned to Stop Worrying and Love Retraining. Proceedings of International Conference on Learning Representations. [URL] [arXiv] [code]
year = {2023},
booktitle = {Proceedings of International Conference on Learning Representations},
url = {https://iclr.cc/virtual/2023/poster/10914},
archiveprefix = {arXiv},
eprint = {2111.00843},
primaryclass = {cs.LG},
author = {Zimmer, Max and Spiegel, Christoph and Pokutta, Sebastian},
title = {How I Learned to Stop Worrying and Love Retraining},
code = {https://github.com/ZIB-IOL/BIMP}
6. Parczyk, O., Pokutta, S., Spiegel, C., and Szabó, T. (2023). Fully Computer-assisted Proofs in Extremal Combinatorics. Proceedings of AAAI Conference on Artificial Intelligence. DOI: 10.1609/
aaai.v37i10.26470 [URL] [arXiv] [code] [BibTeX]
year = {2023},
booktitle = {Proceedings of AAAI Conference on Artificial Intelligence},
doi = {10.1609/aaai.v37i10.26470},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/26470},
archiveprefix = {arXiv},
eprint = {2206.04036},
primaryclass = {math.CO},
author = {Parczyk, Olaf and Pokutta, Sebastian and Spiegel, Christoph and Szabó, Tibor},
title = {Fully Computer-assisted Proofs in Extremal Combinatorics},
code = {https://zenodo.org/record/6602512#.YyvFhi8Rr5g}
7. Rué, J. J., and Spiegel, C. (2023). The Rado Multiplicity Problem in Vector Spaces Over Finite Fields. Proceedings of European Conference on Combinatorics. DOI: 10.5817/CZ.MUNI.EUROCOMB23-108
[URL] [arXiv] [code] [BibTeX]
year = {2023},
booktitle = {Proceedings of European Conference on Combinatorics},
doi = {10.5817/CZ.MUNI.EUROCOMB23-108},
url = {https://journals.muni.cz/eurocomb/article/view/35642},
archiveprefix = {arXiv},
eprint = {2304.00400},
primaryclass = {math.CO},
author = {Rué, Juan José and Spiegel, Christoph},
title = {The Rado Multiplicity Problem in Vector Spaces Over Finite Fields},
code = {https://github.com/FordUniver/rs_radomult_23}
8. Parczyk, O., Pokutta, S., Spiegel, C., and Szabó, T. (2022). New Ramsey Multiplicity Bounds and Search Heuristics. Proceedings of Discrete Mathematics Days. [arXiv] [code] [BibTeX]
year = {2022},
booktitle = {Proceedings of Discrete Mathematics Days},
archiveprefix = {arXiv},
eprint = {2206.04036},
primaryclass = {math.CO},
author = {Parczyk, Olaf and Pokutta, Sebastian and Spiegel, Christoph and Szabó, Tibor},
title = {New Ramsey Multiplicity Bounds and Search Heuristics},
code = {https://zenodo.org/record/6602512#.YyvFhi8Rr5g}
9. Rué, J. J., and Spiegel, C. (2018). On a Problem of Sárközy and Sós for Multivariate Linear Forms. Proceedings of Discrete Mathematics Days. [URL] [arXiv] [BibTeX]
year = {2018},
booktitle = {Proceedings of Discrete Mathematics Days},
url = {https://congreso.us.es/dmd2018/wp-content/uploads/2018/05/DMD2018_paper_21.pdf},
archiveprefix = {arXiv},
eprint = {1802.07597},
primaryclass = {math.CO},
author = {Rué, Juan José and Spiegel, Christoph},
title = {On a Problem of Sárközy and Sós for Multivariate Linear Forms}
10. Kusch, C., Rué, J. J., Spiegel, C., and Szabó, T. (2017). Random Strategies Are Nearly Optimal for Generalized Van Der Waerden Games. Proceedings of European Conference on Combinatorics. [URL]
[arXiv] [BibTeX]
year = {2017},
booktitle = {Proceedings of European Conference on Combinatorics},
url = {https://dmg.tuwien.ac.at/eurocomb2017/index.php/accepted-papers/},
archiveprefix = {arXiv},
eprint = {1711.07251},
primaryclass = {math.CO},
author = {Kusch, Christopher and Rué, Juan José and Spiegel, Christoph and Szabó, Tibor},
title = {Random Strategies Are Nearly Optimal for Generalized Van Der Waerden Games}
Full articles
1. Parczyk, O., Pokutta, S., Spiegel, C., and Szabó, T. (2024). New Ramsey Multiplicity Bounds and Search Heuristics. Foundations of Computational Mathematics. DOI: 10.1007/s10208-024-09675-6
[arXiv] [code] [BibTeX]
year = {2024},
journal = {Foundations of Computational Mathematics},
doi = {10.1007/s10208-024-09675-6},
archiveprefix = {arXiv},
eprint = {2206.04036},
primaryclass = {math.CO},
author = {Parczyk, Olaf and Pokutta, Sebastian and Spiegel, Christoph and Szabó, Tibor},
title = {New Ramsey Multiplicity Bounds and Search Heuristics},
code = {https://zenodo.org/record/6602512#.YyvFhi8Rr5g}
2. Mundinger, K., Pokutta, S., Spiegel, C., and Zimmer, M. (2024). Extending the Continuum of Six-Colorings. Geombinatorics Quarterly, XXXIV. [URL] [arXiv] [BibTeX]
year = {2024},
journal = {Geombinatorics Quarterly},
volume = {XXXIV},
url = {https://geombina.uccs.edu/past-issues/volume-xxxiv},
archiveprefix = {arXiv},
eprint = {2404.05509},
author = {Mundinger, Konrad and Pokutta, Sebastian and Spiegel, Christoph and Zimmer, Max},
title = {Extending the Continuum of Six-Colorings}
3. Kamčev, N., and Spiegel, C. (2022). Another Note on Intervals in the Hales-Jewett Theorem. Electronic Journal of Combinatorics, 29(1). DOI: 10.37236/9400 [URL] [arXiv] [BibTeX]
year = {2022},
journal = {Electronic Journal of Combinatorics},
volume = {29},
number = {1},
doi = {10.37236/9400},
url = {https://combinatorics.org/ojs/index.php/eljc/article/view/v29i1p62},
archiveprefix = {arXiv},
eprint = {1811.04628},
primaryclass = {math.CO},
author = {Kamčev, Nina and Spiegel, Christoph},
title = {Another Note on Intervals in the Hales-Jewett Theorem}
4. Cao-Labora, G., Rué, J. J., and Spiegel, C. (2021-10). An Erdős-Fuchs Theorem for Ordered Representation Functions. Ramanujan Journal, 56, 183–2091. DOI: 10.1007/s11139-020-00326-2 [URL] [arXiv]
year = {2021-10},
journal = {Ramanujan Journal},
volume = {56},
pages = {183-2091},
doi = {10.1007/s11139-020-00326-2},
url = {https://link.springer.com/article/10.1007/s11139-020-00326-2},
archiveprefix = {arXiv},
eprint = {1911.12313},
primaryclass = {math.NT},
author = {Cao-Labora, Gonzalo and Rué, Juan José and Spiegel, Christoph},
title = {An Erdős-Fuchs Theorem for Ordered Representation Functions}
5. Fabian, D., Rué, J. J., and Spiegel, C. (2021-08). On Strong Infinite Sidon and Bₕ Sets and Random Sets of Integers. Journal of Combinatorial Theory, Series A, 182. DOI: 10.1016/
j.jcta.2021.105460 [URL] [arXiv] [BibTeX]
year = {2021-08},
journal = {Journal of Combinatorial Theory, Series A},
volume = {182},
doi = {10.1016/j.jcta.2021.105460},
url = {https://sciencedirect.com/science/article/abs/pii/S0097316521000595},
archiveprefix = {arXiv},
eprint = {1911.13275},
primaryclass = {math.CO},
author = {Fabian, David and Rué, Juan José and Spiegel, Christoph},
title = {On Strong Infinite Sidon and Bₕ Sets and Random Sets of Integers}
6. Corsten, J., Mond, A., Pokrovskiy, A., Spiegel, C., and Szabó, T. (2020-10). On the Odd Cycle Game and Connected Rules. European Journal of Combinatorics, 89. DOI: 10.1016/j.ejc.2020.103140 [URL]
[arXiv] [BibTeX]
year = {2020-10},
journal = {European Journal of Combinatorics},
volume = {89},
doi = {10.1016/j.ejc.2020.103140},
url = {https://sciencedirect.com/science/article/abs/pii/S0195669820300615},
archiveprefix = {arXiv},
eprint = {1906.04024},
primaryclass = {math.CO},
author = {Corsten, Jan and Mond, Adva and Pokrovskiy, Alexey and Spiegel, Christoph and Szabó, Tibor},
title = {On the Odd Cycle Game and Connected Rules}
7. Rué, J. J., and Spiegel, C. (2020). On a Problem of Sárközy and Sós for Multivariate Linear Forms. Revista Matemática Iberoamericana, 36(7), 2107–2119. DOI: 10.4171/RMI/1193 [URL] [arXiv]
year = {2020},
journal = {Revista Matemática Iberoamericana},
volume = {36},
number = {7},
pages = {2107-2119},
doi = {10.4171/RMI/1193},
url = {https://ems.press/journals/rmi/articles/16818},
archiveprefix = {arXiv},
eprint = {1802.07597},
primaryclass = {math.CO},
author = {Rué, Juan José and Spiegel, Christoph},
title = {On a Problem of Sárközy and Sós for Multivariate Linear Forms}
8. Candela, P., Serra, O., and Spiegel, C. (2020). A Step Beyond Freĭman’s Theorem for Set Addition Modulo a Prime. Journal De Théorie Des Nombres De Bordeaux, 32(1), 275–289. DOI: 10.5802/jtnb.1122
[URL] [arXiv] [BibTeX]
year = {2020},
journal = {Journal de Théorie des Nombres de Bordeaux},
volume = {32},
number = {1},
pages = {275-289},
doi = {10.5802/jtnb.1122},
url = {https://jtnb.centre-mersenne.org/item/JTNB_2020__32_1_275_0/},
archiveprefix = {arXiv},
eprint = {1805.12374},
primaryclass = {math.CO},
author = {Candela, Pablo and Serra, Oriol and Spiegel, Christoph},
title = {A Step Beyond Freĭman's Theorem for Set Addition Modulo a Prime}
9. Kusch, C., Rué, J. J., Spiegel, C., and Szabó, T. (2019-09). On the Optimality of the Uniform Random Strategy. Random Structures & Algorithms, 55(2), 371–401. DOI: 10.1002/rsa.20829 [URL] [arXiv]
year = {2019-09},
journal = {Random Structures & Algorithms},
volume = {55},
number = {2},
pages = {371-401},
doi = {10.1002/rsa.20829},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/rsa.20829},
archiveprefix = {arXiv},
eprint = {1711.07251},
primaryclass = {math.CO},
author = {Kusch, Christopher and Rué, Juan José and Spiegel, Christoph and Szabó, Tibor},
title = {On the Optimality of the Uniform Random Strategy}
10. Freĭman, G. A., Serra, O., and Spiegel, C. (2019). Additive Volume of Sets Contained in Few Arithmetic Progressions. INTEGERS, 19. [URL] [arXiv] [BibTeX]
year = {2019},
journal = {INTEGERS},
volume = {19},
url = {https://math.colgate.edu/~integers/t34/t34.mail.html},
archiveprefix = {arXiv},
eprint = {1808.08455},
primaryclass = {math.NT},
author = {Freĭman, Gregory A. and Serra, Oriol and Spiegel, Christoph},
title = {Additive Volume of Sets Contained in Few Arithmetic Progressions}
11. Rué, J. J., Spiegel, C., and Zumalacárregui, A. (2018). Threshold Functions and Poisson Convergence for Systems of Equations in Random Sets. Mathematische Zeitschrift, 288, 333–360. DOI: 10.1007/
s00209-017-1891-2 [URL] [arXiv] [BibTeX]
year = {2018},
journal = {Mathematische Zeitschrift},
month = feb,
volume = {288},
pages = {333-360},
doi = {10.1007/s00209-017-1891-2},
url = {https://link.springer.com/article/10.1007/s00209-017-1891-2},
archiveprefix = {arXiv},
eprint = {1212.5496},
primaryclass = {math.CO},
author = {Rué, Juan José and Spiegel, Christoph and Zumalacárregui, Ana},
title = {Threshold Functions and Poisson Convergence for Systems of Equations in Random Sets}
12. Spiegel, C. (2017). A Note on Sparse Supersaturation and Extremal Results for Linear Homogeneous Systems. Electronic Journal of Combinatorics, 24(3). DOI: 10.37236/6730 [URL] [arXiv] [BibTeX]
year = {2017},
journal = {Electronic Journal of Combinatorics},
volume = {24},
number = {3},
doi = {10.37236/6730},
url = {https://combinatorics.org/ojs/index.php/eljc/article/view/v24i3p38},
archiveprefix = {arXiv},
eprint = {1701.01631},
primaryclass = {math.CO},
author = {Spiegel, Christoph},
title = {A Note on Sparse Supersaturation and Extremal Results for Linear Homogeneous Systems}
LEAN on Me: Transforming Mathematics Through Formal Verification, Improved Tactics, and Machine Learning
Formal proof verification can both ensure proof correctness and provide new tools and insights to mathematicians. The goals of this project include creating resources for students and researchers,
verifying relevant results, improving proof tactics, and exploring Machine Learning approaches.
MATH+ AA5-9
Jan 2024 to Dec 2025
Scaling Up Flag Algebras in Combinatorics
This project aims to obtain new bounds in Extremal Combinatorics through an application of flag algebras. The goal is to both improve the underlying computational aspects for existing problems as
well as to further develop the theory of flag algebras to extend it to new areas of application.
MATH+ EF1-21
Oct 2022 to Sep 2025
Learning Extremal Structures in Combinatorics
Extremal Combinatorics focuses on the maximum or minimum sizes of discrete structures with specific properties, posing significant challenges due to their complexity. Traditional computational
approaches often fail due to exponential growth in search spaces, but recent AI advancements, especially in Reinforcement Learning, offer new potential. Applying these AI methods could provide
insights into combinatorial problems while also enhancing the understanding of AI techniques in complex, sparse reward environments.
MATH+ EF1-12
Jan 2021 to May 2024
Adaptive Algorithms Through Machine Learning: Exploiting Interactions in Integer Programming
The performance of modern mixed-integer program solvers is highly dependent on a number of interdependent individual components. Using tools from machine learning, we intend to develop an integrated
framework that is able to capture interactions of individual decisions made in these components with the ultimate goal to improve performance.
MATH+ EF1-9
Jan 2021 to Dec 2022
Conference and workshop talks
Oct 2024
Sep 2024
Jul 2024
Oct 2023
Aug 2023
View More / Less
Aug 2023
Computational Challenges in Flag Algebra Proofs
ICIAM 2023 Minisymposium: Advances in Optimization I, Tokyo [PDF]
Aug 2023
Flag Algebras in Additive Combinatorics
5th DOxML Conference, Tokyo [PDF]
Jun 2023
Towards Flag Algebras in Additive Combinatorics
FoCM 2023 Workshop I.3 Workshop, Paris [PDF]
May 2023
Towards Flag Algebras in Additive Combinatorics
15th CANT Conference, New York [PDF]
Mar 2023
Computer-assisted Proofs in Extremal Combinatorics
Optimization and ML Workshop, Waischenfeld [PDF]
Feb 2023
Fully Computer-assisted Proofs in Extremal Combinatorics
37th AAAI Conference, Washington, DC [PDF]
Jan 2023
Fully Computer-assisted Proof in Extremal Combinatorics
Combinatorial Optimization Workshop, Aussois [PDF]
Dec 2022
Leveraging Combinatorial Symmetries in Flag Algebra-based SDP Formulations
Recent Advances in Optimization Workshop, Toronto [PDF]
Sep 2022
Proofs in Extremal Combinatorics Through Optimization
6th RIKEN-MODAL Workshop, Tokyo / Fukuoka [PDF]
Jul 2022
New Ramsey Multiplicity Bounds and Search Heuristics
12th DMD Conference, Santander [PDF]
Jun 2019
Odd Cycle Games and Connected Rules
36th Postgraduate Combinatorial Conference, Oxford [PDF]
Jun 2019
GAPCOMB Workshop, Campelles
Jun 2018
On a Problem of Sárközy and Sós for Multivariate Linear Forms
11th DMD Conference, Sevilla [PDF]
May 2018
Going Beyond 2.4 in Freiman's 2.4k-theorem
10th CANT Conference, New York [PDF]
Sep 2017
Rado Positional Games
The Music of Numbers Conference, Madrid [PDF]
Jun 2017
Generalized Positional Van Der Waerden Games
Interactions with Combinatorics [PDF]
Mar 2017
FUB-TAU Workshop
Research seminar talks
Feb 2023
Apr 2022
Nov 2019
Nov 2019
May 2019
View More / Less
May 2019
Intervals in the Hales-Jewett Theorem
Combinatorial Theory Seminar, Oxford
Feb 2019
Extremal Set Theory Seminar, Budapest
Dec 2018
Intervals in the Hales-Jewett Theorem
Research Seminar Combinatorics, Berlin
Mar 2018
Going Beyond 2.4 in Freiman's 2.4k-theorem
GRAPHS at IMPA, Rio de Janeiro
Dec 2017
On a Question of Sárközy and Sós
Research Seminar Combinatorics, Berlin
Oct 2017
Sparse Supersaturation and Extremal Results for Linear Homogeneous Systems
LIMDA Seminar, Barcelona
May 2017
Random Strategies Are Nearly Optimal for Generalized Van Der Waerden Games
LIMDA Seminar, Barcelona
Mar 2016
Threshold Functions for Systems of Equations in Random Sets
LIMDA Seminar, Barcelona
Feb 2016
Van Der Waerden Games (II)
Research Seminar Combinatorics, Berlin
Jan 2016
Van Der Waerden Games (I)
Research Seminar Combinatorics, Berlin
Dec 2015
What Is Discrete Fourier Analysis?
"What Is ...?" Seminar [video]
Oct 2015
Threshold Functions for Systems of Equations in Random Sets
Research Seminar Combinatorics, Berlin
winter 2024
Lecturer for Formal Proof Verification at FUB
summer 2024
Lecturer for Discrete Optimization (ADM II) at TUB
winter 2023
Lecturer for Introduction to Linear and Combinatorial Optimization (ADM I) at TUB
summer 2023
Lecturer for Analysis I und Lineare Algebra für Ingenieurwissenschaften at TUB
autumn 2018
Assistant for Discrete Mathematics and Optimization at UPF
View More / Less
autumn 2017
winter 2013
summer 2012
summer 2011
Jun 2024
Jun 2023
Nov 2022
Apr 2022
Sep 2021
Apr 2021
Sep 2020
Oct 2017
Jul 2016
Jun 2016
You can find some of my photography on instagram: | {"url":"https://iol.zib.de/team/christoph-spiegel.html","timestamp":"2024-11-08T17:45:45Z","content_type":"text/html","content_length":"82982","record_id":"<urn:uuid:b8b809bd-1f76-43e3-bb2a-b5d524ba6ff9>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00070.warc.gz"} |
Search results for: mean free path
Commenced in January 2007
Search results for: mean free path
1516 Geometry Design Supported by Minimizing and Visualizing Collision in Dynamic Packing
Authors: Johan Segeborn, Johan S. Carlson, Robert Bohlin, Rikard Söderberg
This paper presents a method to support dynamic packing in cases when no collision-free path can be found. The method, which is primarily based on path planning and shrinking of geometries, suggests
a minimal geometry design change that results in a collision-free assembly path. A supplementing approach to optimize geometry design change with respect to redesign cost is described. Supporting
this dynamic packing method, a new method to shrink geometry based on vertex translation, interweaved with retriangulation, is suggested. The shrinking method requires neither tetrahedralization nor
calculation of medial axis and it preserves the topology of the geometry, i.e. holes are neither lost nor introduced. The proposed methods are successfully applied on industrial geometries.
Keywords: Dynamic packing, path planning, shrinking.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1384
1515 Decomposition of Graphs into Induced Paths and Cycles
Authors: I. Sahul Hamid, Abraham V. M.
A decomposition of a graph G is a collection ψ of subgraphs H1,H2, . . . , Hr of G such that every edge of G belongs to exactly one Hi. If each Hi is either an induced path or an induced cycle in G,
then ψ is called an induced path decomposition of G. The minimum cardinality of an induced path decomposition of G is called the induced path decomposition number of G and is denoted by πi(G). In
this paper we initiate a study of this parameter.
Keywords: Path decomposition, Induced path decomposition, Induced path decomposition number.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2371
1514 Three-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization
Authors: Lana Dalawr Jalal
This paper addresses the problem of offline path planning for Unmanned Aerial Vehicles (UAVs) in complex threedimensional environment with obstacles, which is modelled by 3D Cartesian grid system.
Path planning for UAVs require the computational intelligence methods to move aerial vehicles along the flight path effectively to target while avoiding obstacles. In this paper Modified Particle
Swarm Optimization (MPSO) algorithm is applied to generate the optimal collision free 3D flight path for UAV. The simulations results clearly demonstrate effectiveness of the proposed algorithm in
guiding UAV to the final destination by providing optimal feasible path quickly and effectively.
Keywords: Obstacle Avoidance, Particle Swarm Optimization, Three-Dimensional Path Planning Unmanned Aerial Vehicles.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2040
1513 Memetic Algorithm Based Path Planning for a Mobile Robot
Authors: Neda Shahidi, Hadi Esmaeilzadeh, Marziye Abdollahi, Caro Lucas
In this paper, the problem of finding the optimal collision free path for a mobile robot, the path planning problem, is solved using an advanced evolutionary algorithm called memetic algorithm. What
is new in this work is a novel representation of solutions for evolutionary algorithms that is efficient, simple and also compatible with memetic algorithm. The new representation makes it possible
to solve the problem with a small population and in a few generations. It also makes the genetic operator simple and allows using an efficient local search operator within the evolutionary algorithm.
The proposed algorithm is applied to two instances of path planning problem and the results are available.
Keywords: Path planning problem, Memetic Algorithm, Representation.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1736
1512 The Same or Not the Same - On the Variety of Mechanisms of Path Dependence
Authors: Jürgen Beyer
In association with path dependence, researchers often talk of institutional “lock-in", thereby indicating that far-reaching path deviation or path departure are to be regarded as exceptional cases.
This article submits the alleged general inclination for stability of path-dependent processes to a critical review. The different reasons for path dependence found in the literature indicate that
different continuity-ensuring mechanisms are at work when people talk about path dependence (“increasing returns", complementarity, sequences etc.). As these mechanisms are susceptible to fundamental
change in different ways and to different degrees, the path dependence concept alone is of only limited explanatory value. It is therefore indispensable to identify the underlying continuity-ensuring
mechanism as well if a statement-s empirical value is to go beyond the trivial, always true “history matters".
Keywords: path dependence, increasing returns, historicalinstitutionalism, lock-in.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1877
1511 Retraction Free Motion Approach and Its Application in Automated Robotic Edge Finishing and Inspection Processes
Authors: M. Nemer, E. I. Konukseven
In this paper, a motion generation algorithm for a six Degrees of Freedom (DoF) robotic hand in a static environment is presented. The purpose of developing this method is to be used in the path
generation of the end-effector for edge finishing and inspection processes by utilizing the CAD model of the considered workpiece. Nonetheless, the proposed algorithm may be extended to be applicable
for other similar manufacturing processes. A software package programmed in the application programming interface (API) of SolidWorks generates tool path data for the robot. The proposed method
significantly simplifies the given problem, resulting in a reduction in the CPU time needed to generate the path, and offers an efficient overall solution. The ABB IRB2000 robot is chosen for
executing the generated tool path.
Keywords: Offline programming, CAD-based tools, edge deburring, edge scanning, path generation.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 910
1510 Using Multi-Thread Technology Realize Most Short-Path Parallel Algorithm
Authors: Chang-le Lu, Yong Chen
The shortest path question is in a graph theory model question, and it is applied in many fields. The most short-path question may divide into two kinds: Single sources most short-path, all apexes to
most short-path. This article mainly introduces the problem of all apexes to most short-path, and gives a new parallel algorithm of all apexes to most short-path according to the Dijkstra algorithm.
At last this paper realizes the parallel algorithms in the technology of C # multithreading.
Keywords: Dijkstra algorithm, parallel algorithms, multi-thread technology, most short-path, ratio.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2105
1509 An UML Statechart Diagram-Based MM-Path Generation Approach for Object-Oriented Integration Testing
Authors: Ruilian Zhao, Ling Lin
MM-Path, an acronym for Method/Message Path, describes the dynamic interactions between methods in object-oriented systems. This paper discusses the classifications of MM-Path, based on the
characteristics of object-oriented software. We categorize it according to the generation reasons, the effect scope and the composition of MM-Path. A formalized representation of MM-Path is also
proposed, which has considered the influence of state on response method sequences of messages. .Moreover, an automatic MM-Path generation approach based on UML Statechart diagram has been presented,
and the difficulties in identifying and generating MM-Path can be solved. . As a result, it provides a solid foundation for further research on test cases generation based on MM-Path.
Keywords: MM-Path, Message Sequence, Object-Oriented Integration Testing, Response Method Sequence, UML Statechart Diagram.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2606
1508 Loop-free Local Path Repair Strategy for Directed Diffusion
Authors: Basma M. Mohammad El-Basioni, Sherine M. Abd El-kader, Hussein S. Eissa
This paper proposes an implementation for the directed diffusion paradigm aids in studying this paradigm-s operations and evaluates its behavior according to this implementation. The directed
diffusion is evaluated with respect to the loss percentage, lifetime, end-to-end delay, and throughput. From these evaluations some suggestions and modifications are proposed to improve the directed
diffusion behavior according to this implementation with respect to these metrics. The proposed modifications reflect the effect of local path repair by introducing a technique called Loop-free Local
Path Repair (LLPR) which improves the directed diffusion behavior especially with respect to packet loss percentage by about 92.69%. Also LLPR improves the throughput and end-to-end delay by about
55.31% and 14.06% respectively, while the lifetime decreases by about 29.79%.
Keywords: Attribute-value based naming scheme, data gathering, data-centric routing, energy-efficiency, locality, wireless sensor network.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1423
1507 Induced Acyclic Path Decomposition in Graphs
Authors: Abraham V. M., I. Sahul Hamid
A decomposition of a graph G is a collection ψ of graphs H1,H2, . . . , Hr of G such that every edge of G belongs to exactly one Hi. If each Hi is either an induced path in G, then ψ is called an
induced acyclic path decomposition of G and if each Hi is a (induced) cycle in G then ψ is called a (induced) cycle decomposition of G. The minimum cardinality of an induced acyclic path
decomposition of G is called the induced acyclic path decomposition number of G and is denoted by ¤Çia(G). Similarly the cyclic decomposition number ¤Çc(G) is defined. In this paper we begin an
investigation of these parameters.
Keywords: Cycle decomposition, Induced acyclic path decomposition, Induced acyclic path decomposition number.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1572
1506 A Feasible Path Selection QoS Routing Algorithm with two Constraints in Packet Switched Networks
Authors: P.S.Prakash, S.Selvan
Over the past several years, there has been a considerable amount of research within the field of Quality of Service (QoS) support for distributed multimedia systems. One of the key issues in
providing end-to-end QoS guarantees in packet networks is determining a feasible path that satisfies a number of QoS constraints. The problem of finding a feasible path is NPComplete if number of
constraints is more than two and cannot be exactly solved in polynomial time. We proposed Feasible Path Selection Algorithm (FPSA) that addresses issues with pertain to finding a feasible path
subject to delay and cost constraints and it offers higher success rate in finding feasible paths.
Keywords: feasible path, multiple constraints, path selection, QoS routing
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1747
1505 Robot Path Planning in 3D Space Using Binary Integer Programming
Authors: Ellips Masehian, Golnaz Habibi
This paper presents a novel algorithm for path planning of mobile robots in known 3D environments using Binary Integer Programming (BIP). In this approach the problem of path planning is formulated
as a BIP with variables taken from 3D Delaunay Triangulation of the Free Configuration Space and solved to obtain an optimal channel made of connected tetrahedrons. The 3D channel is then partitioned
into convex fragments which are used to build safe and short paths within from Start to Goal. The algorithm is simple, complete, does not suffer from local minima, and is applicable to different
workspaces with convex and concave polyhedral obstacles. The noticeable feature of this algorithm is that it is simply extendable to n-D Configuration spaces.
Keywords: 3D C-space, Binary Integer Programming (BIP), Delaunay Tessellation, Robot Motion Planning.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2469
1504 Comparison of GSA, SA and PSO Based Intelligent Controllers for Path Planning of Mobile Robot in Unknown Environment
Authors: P. K. Panigrahi, Saradindu Ghosh, Dayal R. Parhi
Now-a-days autonomous mobile robots have found applications in diverse fields. An autonomous robot system must be able to behave in an intelligent manner to deal with complex and changing
environment. This work proposes the performance of path planning and navigation of autonomous mobile robot using Gravitational Search Algorithm (GSA), Simulated Annealing (SA) and Particle Swarm
optimization (PSO) based intelligent controllers in an unstructured environment. The approach not only finds a valid collision free path but also optimal one. The main aim of the work is to minimize
the length of the path and duration of travel from a starting point to a target while moving in an unknown environment with obstacles without collision. Finally, a comparison is made between the
three controllers, it is found that the path length and time duration made by the robot using GSA is better than SA and PSO based controllers for the same work.
Keywords: Autonomous Mobile Robot, Gravitational Search Algorithm, Particle Swarm Optimization, Simulated Annealing Algorithm.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3116
1503 1−Skeleton Resolution of Free Simplicial Algebras with Given CW−Basis
Authors: Ali Mutlu, Berrin Mutlu
In this paper we use the definition of CW basis of a free simplicial algebra. Using the free simplicial algebra, it is shown to construct free or totally free 2−crossed modules on suitable
construction data with given a CW−basis of the free simplicial algebra. We give applications free crossed squares, free squared complexes and free 2−crossed complexes by using of 1(one) skeleton
resolution of a step by step construction of the free simplicial algebra with a given CW−basis.
Keywords: Free crossed square, Free 2−crossed modules, Free simplicial algebra, Free square complexes, Free 2−crossed complexes CW−basis, 1−skeleton. A. M. S.Classification:[2000] 18D35, 18G30, 18G50
, 18G55, 55Q05, 55Q20.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1097
1502 Module and Comodule Structures on Path Space
On path space
, there is a trivial
-module structure determined by the multiplication of path algebra
and a trivial
-comodule structure determined by the comultiplication of path coalgebra
. In this paper, on path space
, a nontrivial
-module structure is defined, and it is proved that this nontrivial left
-module structure is isomorphic to the dual module structure of trivial right
-comodule. Dually, on path space
, a nontrivial
-comodule structure is defined, and it is proved that this nontrivial right
-comodule structure is isomorphic to the dual comodule structure of trivial left
-module. Finally, the trivial and nontrivial module structures on path space are compared from the aspect of submodule, and the trivial and nontrivial comodule structures on path space are compared
from the aspect of subcomodule.
Keywords: Quiver, path space, module, comodule, dual.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 853
1501 A Review on Comparative Analysis of Path Planning and Collision Avoidance Algorithms
Authors: Divya Agarwal, Pushpendra S. Bharti
Autonomous mobile robots (AMR) are expected as smart tools for operations in every automation industry. Path planning and obstacle avoidance is the backbone of AMR as robots have to reach their goal
location avoiding obstacles while traversing through optimized path defined according to some criteria such as distance, time or energy. Path planning can be classified into global and local path
planning where environmental information is known and unknown/partially known, respectively. A number of sensors are used for data collection. A number of algorithms such as artificial potential
field (APF), rapidly exploring random trees (RRT), bidirectional RRT, Fuzzy approach, Purepursuit, A* algorithm, vector field histogram (VFH) and modified local path planning algorithm, etc. have
been used in the last three decades for path planning and obstacle avoidance for AMR. This paper makes an attempt to review some of the path planning and obstacle avoidance algorithms used in the
field of AMR. The review includes comparative analysis of simulation and mathematical computations of path planning and obstacle avoidance algorithms using MATLAB 2018a. From the review, it could be
concluded that different algorithms may complete the same task (i.e. with a different set of instructions) in less or more time, space, effort, etc.
Keywords: Autonomous mobile robots, obstacle avoidance, path planning, and processing time.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1690
1500 The Problem of Using the Calculation of the Critical Path to Solver Instances of the Job Shop Scheduling Problem
Authors: Marco Antonio Cruz-Chávez, Juan Frausto-Solís, Fernando Ramos-Quintana
A procedure commonly used in Job Shop Scheduling Problem (JSSP) to evaluate the neighborhoods functions that use the non-deterministic algorithms is the calculation of the critical path in a digraph.
This paper presents an experimental study of the cost of computation that exists when the calculation of the critical path in the solution for instances in which a JSSP of large size is involved. The
results indicate that if the critical path is use in order to generate neighborhoods in the meta-heuristics that are used in JSSP, an elevated cost of computation exists in spite of the fact that the
calculation of the critical path in any digraph is of polynomial complexity.
Keywords: Job Shop, CPM, critical path, neighborhood, meta-heuristic.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2298
1499 An Improved Transfer Logic of the Two-Path Algorithm for Acoustic Echo Cancellation
Adaptive echo cancellers with two-path algorithm are applied to avoid the false adaptation during the double-talk situation. In the two-path algorithm, several transfer logic solutions have been
proposed to control the filter update. This paper presents an improved transfer logic solution. It improves the convergence speed of the two-path algorithm, and allows the reduction of the memory
elements and computational complexity. Results of simulations show the improved performance of the proposed solution.
Keywords: Acoustic echo cancellation, Echo return lossenhancement (ERLE), Two-path algorithm, Transfer logic
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1767
1498 The Effect of Tool Path Strategy on Surface and Dimension in High Speed Milling
Authors: A. Razavykia, A. Esmaeilzadeh, S. Iranmanesh
Many orthopedic implants like proximal humerus cases require lower surface roughness and almost immediate/short lead time surgery. Thus, rapid response from the manufacturer is very crucial. Tool
path strategy of milling process has a direct influence on the surface roughness and lead time of medical implant. High-speed milling as promised process would improve the machined surface quality,
but conventional or super-abrasive grinding still required which imposes some drawbacks such as additional costs and time. Currently, many CAD/CAM software offers some different tool path strategies
to milling free form surfaces. Nevertheless, the users must identify how to choose the strategies according to cutting tool geometry, geometry complexity, and their effects on the machined surface.
This study investigates the effect of different tool path strategies for milling a proximal humerus head during finishing operation on stainless steel 316L. Experiments have been performed using MAHO
MH700 S vertical milling machine and four machining strategies, namely, spiral outward, spiral inward, and radial as well as zig-zag. In all cases, the obtained surfaces were analyzed in terms of
roughness and dimension accuracy compared with those obtained by simulation. The findings provide evidence that surface roughness, dimensional accuracy, and machining time have been affected by the
considered tool path strategy.
Keywords: CAD/CAM software, milling, orthopedic implants, tool path strategy.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 994
1497 Induced Graphoidal Covers in a Graph
Authors: K. Ratan Singh, P. K. Das
An induced graphoidal cover of a graph G is a collection ψ of (not necessarily open) paths in G such that every path in ψ has at least two vertices, every vertex of G is an internal vertex of at most
one path in ψ, every edge of G is in exactly one path in ψ and every member of ψ is an induced cycle or an induced path. The minimum cardinality of an induced graphoidal cover of G is called the
induced graphoidal covering number of G and is denoted by ηi(G) or ηi. Here we find induced graphoidal cover for some classes of graphs.
Keywords: Graphoidal cover, Induced graphoidal cover, Induced graphoidal covering number.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1426
1495 An Examination and Validation of the Theoretical Resistivity-Temperature Relationship for Conductors
Authors: Fred Lacy
Electrical resistivity is a fundamental parameter of metals or electrical conductors. Since resistivity is a function of temperature, in order to completely understand the behavior of metals, a
temperature dependent theoretical model is needed. A model based on physics principles has recently been developed to obtain an equation that relates electrical resistivity to temperature. This
equation is dependent upon a parameter associated with the electron travel time before being scattered, and a parameter that relates the energy of the atoms and their separation distance. Analysis of
the energy parameter reveals that the equation is optimized if the proportionality term in the equation is not constant but varies over the temperature range. Additional analysis reveals that the
theoretical equation can be used to determine the mean free path of conduction electrons, the number of defects in the atomic lattice, and the ‘equivalent’ charge associated with the metallic bonding
of the atoms. All of this analysis provides validation for the theoretical model and provides insight into the behavior of metals where performance is affected by temperatures (e.g., integrated
circuits and temperature sensors).
Keywords: Callendar–van Dusen, conductivity, mean free path, resistance temperature detector, temperature sensor.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2181
1494 Thermal Analysis of the Current Path from Circuit Breakers Using Finite Element Method
Authors: Adrian T. Plesca
This paper describes a three-dimensional thermal model of the current path included in the low voltage power circuit breakers. The model can be used to analyse the thermal behaviour of the current
path during both steady-state and transient conditions. The current path lengthwise temperature distribution and timecurrent characteristic of the terminal connections of the power circuit breaker
have been obtained. The influence of the electric current and voltage drop on main electric contact of the circuit breaker has been investigated. To validate the three-dimensional thermal model, some
experimental tests have been done. There is a good correlation between experimental and simulation results.
Keywords: Current path, power circuit breakers, temperature distribution, thermal analysis.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2691
1493 Induced Acyclic Graphoidal Covers in a Graph
Authors: K. Ratan Singh, P. K. Das
An induced acyclic graphoidal cover of a graph G is a collection ψ of open paths in G such that every path in ψ has atleast two vertices, every vertex of G is an internal vertex of at most one path
in ψ, every edge of G is in exactly one path in ψ and every member of ψ is an induced path. The minimum cardinality of an induced acyclic graphoidal cover of G is called the induced acyclic
graphoidal covering number of G and is denoted by ηia(G) or ηia. Here we find induced acyclic graphoidal cover for some classes of graphs.
Keywords: Graphoidal cover, Induced acyclic graphoidal cover, Induced acyclic graphoidal covering number.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1294
1492 Path Planning of a Robot Manipulator using Retrieval RRT Strategy
Authors: K. Oh, J. P. Hwang, E. Kim, H. Lee
This paper presents an algorithm which extends the rapidly-exploring random tree (RRT) framework to deal with change of the task environments. This algorithm called the Retrieval RRT Strategy (RRS)
combines a support vector machine (SVM) and RRT and plans the robot motion in the presence of the change of the surrounding environment. This algorithm consists of two levels. At the first level, the
SVM is built and selects a proper path from the bank of RRTs for a given environment. At the second level, a real path is planned by the RRT planners for the given environment. The suggested method
is applied to the control of KUKA™,, a commercial 6 DOF robot manipulator, and its feasibility and efficiency are demonstrated via the cosimulatation of MatLab™, and RecurDyn™,.
Keywords: Path planning, RRT, 6 DOF manipulator, SVM.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2527
1491 Optimal Path Planning under Priori Information in Stochastic, Time-varying Networks
Authors: Siliang Wang, Minghui Wang, Jun Hu
A novel path planning approach is presented to solve optimal path in stochastic, time-varying networks under priori traffic information. Most existing studies make use of dynamic programming to find
optimal path. However, those methods are proved to be unable to obtain global optimal value, moreover, how to design efficient algorithms is also another challenge. This paper employs a decision
theoretic framework for defining optimal path: for a given source S and destination D in urban transit network, we seek an S - D path of lowest expected travel time where its link travel times are
discrete random variables. To solve deficiency caused by the methods of dynamic programming, such as curse of dimensionality and violation of optimal principle, an integer programming model is built
to realize assignment of discrete travel time variables to arcs. Simultaneously, pruning techniques are also applied to reduce computation complexity in the algorithm. The final experiments show the
feasibility of the novel approach.
Keywords: pruning method, stochastic, time-varying networks, optimal path planning.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1850
1490 Optimizing Network Latency with Fast Path Assignment for Incoming Flows
Various flows in the network require to go through different types of middlebox. The improper placement of network middlebox and path assignment for flows could greatly increase the network latency
and also decrease the performance of network. Minimizing the total end to end latency of all the ows requires to assign path for the incoming flows. In this paper, the flow path assignment problem in
regard to the placement of various kinds of middlebox is studied. The flow path assignment problem is formulated to a linear programming problem, which is very time consuming. On the other hand, a
naive greedy algorithm is studied. Which is very fast but causes much more latency than the linear programming algorithm. At last, the paper presents a heuristic algorithm named FPA, which takes
bottleneck link information and estimated bandwidth occupancy into consideration, and achieves near optimal latency in much less time. Evaluation results validate the effectiveness of the proposed
Keywords: Latency, Fast path assignment, Bottleneck link.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 590
1489 Geometric Data Structures and Their Selected Applications
Authors: Miloš Šeda
Finding the shortest path between two positions is a fundamental problem in transportation, routing, and communications applications. In robot motion planning, the robot should pass around the
obstacles touching none of them, i.e. the goal is to find a collision-free path from a starting to a target position. This task has many specific formulations depending on the shape of obstacles,
allowable directions of movements, knowledge of the scene, etc. Research of path planning has yielded many fundamentally different approaches to its solution, mainly based on various decomposition
and roadmap methods. In this paper, we show a possible use of visibility graphs in point-to-point motion planning in the Euclidean plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second application area, investigated here, is focused on problems of finding minimal networks connecting a set of given points in the
plane using either only straight connections between pairs of points (minimum spanning tree) or allowing the addition of auxiliary points to the set to obtain shorter spanning networks (minimum
Steiner tree).
Keywords: motion planning, spanning tree, Steiner tree, Delaunay triangulation, Voronoi diagram.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1513
1488 Mobile Robot Path Planning Utilizing Probability Recursive Function
Authors: Ethar H. Khalil, Bahaa I. Kazem
In this work a software simulation model has been proposed for two driven wheels mobile robot path planning; that can navigate in dynamic environment with static distributed obstacles. The work
involves utilizing Bezier curve method in a proposed N order matrix form; for engineering the mobile robot path. The Bezier curve drawbacks in this field have been diagnosed. Two directions: Up and
Right function has been proposed; Probability Recursive Function (PRF) to overcome those drawbacks. PRF functionality has been developed through a proposed; obstacle detection function, optimization
function which has the capability of prediction the optimum path without comparison between all feasible paths, and N order Bezier curve function that ensures the drawing of the obtained path. The
simulation results that have been taken showed; the mobile robot travels successfully from starting point and reaching its goal point. All obstacles that are located in its way have been avoided.
This navigation is being done successfully using the proposed PRF techniques.
Keywords: Mobile robot, path planning, Bezier curve.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1458
1487 The Effect of Critical Activity on Critical Path and Project Duration in Precedence Diagram Method
The additional relationships i.e., start-to-start, finish-to-finish, and start-to-finish, between activity in Precedence Diagram Method (PDM) provides a more flexible schedule than traditional
Critical Path Method (CPM). But, changing the duration of critical activities in the PDM network will have an anomalous effect on the critical path and the project completion date. In this study, we
classified the critical activities in two groups i.e., 1. activity on single critical path and 2. activity on multi-critical paths, and six classes i.e., normal, reverse, neutral, perverse,
decrease-reverse and increase-normal, based on their effects on project duration in PDM. Furthermore, we determined the maximum float of time by which the duration each type of critical activities
can be changed without effecting the project duration. This study would help the project manager to clearly understand the behavior of each critical activity on critical path, and he/she would be
able to change the project duration by shortening or lengthening activities based on project budget and project deadline.
Keywords: Construction project management, critical path method, project scheduling, precedence diagram method.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1471 | {"url":"https://publications.waset.org/search?q=mean%20free%20path","timestamp":"2024-11-08T03:00:57Z","content_type":"text/html","content_length":"128420","record_id":"<urn:uuid:090f1e29-d790-440d-a09c-2beba8488298>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00193.warc.gz"} |
Bond Rod Mill Work Index Equipment & Apparatus Review - 911Metallurgist
Commentary on the apparatus of the Bond rod mill Work Index
by Alex Doll
December, 2015
The Bond “Third Theory” of comminution was originally divided into three size classes reflecting the varieties of comminution equipment common during the time period when Bond (and his collaborators)
were gathering the information to calibrate comminution models. The middle size class, represented by rod milling, is fitted to a tumbling test, referred to as the Bond rod mill work index (Wi[RM],
or RWi). The apparatus used to determine this work index was described in 1943 by Bond & Maxton. The author has noted there are some laboratories that have deviated from the apparatus specified by
Bond & Maxton and there are modern comminution models that are calibrated to this non-standard mill geometry.
Unrelated ‘entertaining’ image
The specification for the apparatus to determining a “Bond” rod mill work index is first described in Bond & Maxton (1943). It states that the apparatus is a tumbling rod mill to be operated in a
locked cycle test at a fixed circulating load. The geometry of the grinding chamber is described as:
• mill inside diameter of 12 inches
• mill grinding chamber length of 22 inches (later revised to 24 inches in SME Mudd Series handbook)
• rods of two size classes, 21 inches long
• mill rotation speed of 46 revolutions per minute (approximately 60% of critical speed)
• a wave liner
The test procedure also describes a “rocking” of the mill every 10 revolutions to avoid coarse particles collecting in the empty space between the end of the grinding rods and the end of the grinding
The author is aware of three issues with this specification and the actual implementation of the test in commercial laboratories world-wide. Specifically, the meaning of “mill inside diameter” in
the context of a wave liner, the use of a smooth liner in some laboratories, and the consistent implementation of the rocking behaviour. Of these three issues, the Author believes use of a smooth
liner is the biggest issue with respect to standardization of the test.
Rocking of the laboratory mill
The length of the grinding chamber is longer than the rods, resulting a void space at the ends of the mill where little or no grinding happens. To avoid the collection of coarse material in this
space, the test procedure includes “rocking” the mill every 10 revolutions through a 10° rotation “forward” for one revolution and then “backward” for one revolution, after which the mill is returned
to a level position for the next ten revolutions before being rocked again.
Bond Rod Work Index Test Mill
The laboratory rod mills common in North and South America are all configured to be rocked, and this is believed to be a standard procedure in all the laboratories that the Author is familiar with
(one laboratory presently lacks the rocking mechanism but is working towards implementing it).
Issues with a smooth liner
A “Bond” rod mill work index is to be determined in an apparatus with a wave liner. The author is aware of four laboratories that offer a rod mill work index test where the lining of the mill is
either smooth, or smooth with a small number of primitive lifters that do not constitute a wave liner (one of the four laboratories is known to be investigating replacing the liner with a wave).
Work index values determined by this alternative geometry should not be marketed as “Bond” work index values because they deviate from the specification and calibration of a proper “Bond” rod mill
work index.
There are two problems with the smooth liner:
• The energy per revolution is different to the calibration data set used by Bond in his derivation of the Third Theory fitting of the work index formula to the rod mill apparatus.
• The nature of the breakage will be skewed towards abrasion breakage in the smooth liner designs, whereas the wave liner will have more attrition (and possibly crushing) breakage.
The first problem comes from the equation derived by Bond to calibrate a rod mill work index to the parameters of the rod mill tumbling test. The term in the calibrated formula that is affected by
the liner is the grams (therefore, the energy) evolved per revolution of the laboratory mill. Since the equation expects a certain amount of energy (Joules) per mill revolution, the equation for a
laboratory mill with smooth liner must re-calibrate Bond’s empirical equation (below, converted to Wi metric units) to the Joules per revolution generated in a machine with a smooth liner.
The second problem is related to how a rock breaks inside the apparatus. There are generally three mechanisms of breakage recognized as significant in milling: crushing, attrition and abrasion. A
particular ore in a particular mill will have a characteristic combination of these three mechanisms that describes both the breakage energy consumed and the size distribution of the mill product.
The action of a wave liner in a rod mill is to lift the mill charge and “spread” the charge as the mill rotates. This causes both crushing and attrition action by trapping particles between the rods
as they are alternatively lifted and dropped. This lifting action is greatly diminished in a mill with a smooth liner, meaning that the relative amount of crushing and attrition will be less in the
smooth liner design; most of the breakage will instead consist of abrasion.
Attempts to “convert” work index determinations from one mill geometry to the other must account for this difference in breakage mechanism. It is not enough to say “deduct 2 kWh/t from the smooth
liner result” and expect an equivalent work index for a wave liner design. An empirical conversion determined for a particular type of ore (eg. Paleozoic meta-granitoids) is only valid for rock
types that have similar ratios in the resistance to abrasion, attrition and crushing. A completely different rock type (eg. Tertiary andesite) will have a completely set of ratios, and therefore
will likely require a different empirical calibration between the two styles of mill liner.
Bailey et al. (2009) described a rod mill work index round-robin program between different international laboratories with a normalized standard deviation of 12%. It is believed that the round-robin
is a mixture of smooth and wave liner designs. The Author is aware of three Australian laboratories have have smooth designs, and these are expected to give higher work index determinations than the
wave designs. The paper reveals that the two maximum values are Australian laboratories (therefore, smooth liner designs). If one excludes them and re-calculates the statistics, then the normalized
standard deviation drops to 6.3%.
The wave liner specification
The Bond & Maxton specification gives no guidance to what constitutes a properly designed wave liner. The geometry of a wave would correctly require specification of:
• the wave height from crest to trough, and
• the number of waves.
The author is aware that most laboratories with wave liners have settled on designs involving eight waves with ½ inch wave height. Minor variations in height mostly affects the mill inside diameter
calculation (discussed in the next section) and the Author expects the wave height will not otherwise affect the validity of the rod mill work index calculation (for reasonable heights where the rod
trajectories are normal). If the lifting action of the rods is reasonably similar to what Bond calibrated his equations to, then the geometry is valid.
The meaning of inside diameter
The specification of the inside diameter is not specific about how to measure the diameter in the situation of a wave liner. Three possibilities are:
• measure the 12 inches from the top of a wave to the top of an opposite wave (crest-to-crest),
• measure the 12 inches from the bottom of a wave to the bottom of an opposite wave (trough-to-trough),
• measure the 12 inches from some middle or average point in the liner.
The Author is aware of laboratories who have implemented different versions of these options. It is reasonable to ask what is the expected difference in each situation, and is it significant?
The Nordberg rod mill power draw model (Outokumpu, 2002) is an easy way to check the effect of diameter by comparing the Factor A in the power model against three potential mill diameters. In all
cases, assume a ½ inch lifter height. The “middle” measurement case will be a 12 inch effective diameter; the “trough-to-trough” will be 11½ inches; and the “crest-to-crest” design will be 12½
Calculating the Nordberg Factor A for all three liner cases and assuming that all other parameters are the same (Factor B, filling; Factor C, speed; ore density), then the difference in the Factor A
should equal the difference in the power draw of the laboratory mill (this is over-simplified because the Factors B and C will change slightly). Assuming the “middle” measurement is a base case, the
predicted variation in a work index determination due to measurement position is indicated in Table 1.
Table 1: Effect of mill nominal diameter on power draw
Diam, inch Diam, mm Factor A Difference
11.5 292 0.160 -10%
12 305 0.178 0%
12.5 318 0.197 11%
The ambiguous definition of Bond & Maxton’s specification of the inside diameter of a wave liner can reasonably result in up to 20% variation in the results of a Bond rod mill work index. Which of
the definitions is “correct” depends upon which definition was used in the Allis Chalmers laboratory where Bond and his collaborators calibrated the original work index equations. The Author is not
aware of any publication that describes these details of the apparatus used at Allis Chalmers.
Final thoughts and recommendations
The Author is aware that other comminution consultants will have different opinions. Comminution is not climate science – consensus among practitioners is not mandatory. There are comminution models
calibrated to use the smooth liner test results, and it is completely appropriate to use a smooth liner apparatus when using such a model. Calibration is more important than dogma.
• Ask a laboratory what type of liner is in their apparatus as part of a request for quotation. Also ask about the procedure used to rock the mill.
• If you are using a particular consultant for your project, then use the type of testing apparatus suitable for that consultant’s model.
• If it is necessary to convert work index results between the wave and smooth liner types, then rely on your consultant to decide how to transform test results.
• Laboratories using a smooth liner should not be marketing their test as a “Bond rod mill work index”. It is reasonable to call such tests a “rod mill work index”, but they should not be using the
name “Bond” unless fully compliant with Bond’s specification, notwithstanding the ambiguities discussed in this commentary.
• If you use two consultants, expect three opinions.
More generally, the Industry needs a more exact definition of the specification of a Bond rod mill for work index determination. There is presently (2015-2016) a drive underway to better standardize
the use of Bond work indices and testing procedures under the auspices of the Global Mining Standards Group (GMSG), and this may be a reasonable venue to better standardize the specification of the
rod mill apparatus and the test procedures.
• Bailey, C., Lane, G., Morrell, S., and Staples, P. (2009) `What can go wrong in comminution circuit design?`, Tenth Mill Operator’s Conference, Adelaide, Australia, pp. 143–149
• Bergstrom, B.H. (1985) `SME Mineral Processing Handbook`, ed. Weiss, N.L., Society of Mining Engineers of AIME, New York, USA, pp. 30-65 – 30-71
• Bond, F.C. and Maxton, W. L (1943) `Standard Grindability Tests and Calculations`, Society of Mining Engineers, Vol 153, pp. 362–372. http://www.onemine.org/
• GMSC Bond Standard Working Group,
• [in progress]
• Outokumpu Mills Group (2002), `The Science of Comminution`, technical publication p. 39 | {"url":"https://www.911metallurgist.com/blog/equipment-bond-rod-mill-work-index/","timestamp":"2024-11-05T09:09:14Z","content_type":"text/html","content_length":"132559","record_id":"<urn:uuid:33d6110c-6f5d-48d6-b73c-7c010ddc15bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00817.warc.gz"} |
The Stacks project
Lemma 20.32.1. Let $X$ be a ringed space. Let $U \subset X$ be an open subspace. The restriction of a K-injective complex of $\mathcal{O}_ X$-modules to $U$ is a K-injective complex of $\mathcal{O}_
Proof. Follows from Derived Categories, Lemma 13.31.9 and the fact that the restriction functor has the exact left adjoint $j_!$. For the construction of $j_!$ see Sheaves, Section 6.31 and for
exactness see Modules, Lemma 17.3.4. $\square$
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 08BS. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 08BS, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/08BS","timestamp":"2024-11-08T11:46:34Z","content_type":"text/html","content_length":"14242","record_id":"<urn:uuid:f2c9fa95-106f-4c4d-a4ab-3228eedc6def>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00168.warc.gz"} |
Asymmetric Teeth: Bending Stress Calculation
Management Summary This article includes a brief summary of the characteristics of involute asymmetric teeth and the problems connected with the related bending tests. The authors use an adaptation
of the standard ISO C methodology to determine bending stress calculations for gears with asymmetric teeth. They compare their results with results obtained using modern finite element methods.
A method of achieving either of those goals is the design of gears with asymmetrical teeth. That is, the pressure angle on the drive side is different from the pressure angle on the coast side. It is
possible to design teeth with the greater pressure angle on either the drive side or the coast side, and each method can have its advantages. For example, a greater pressure angle on the drive side
results in gears with higher load-carrying capacity. A greater pressure angle on the coast side results in teeth with higher bending strength (Ref. 1).
Asymmetric teeth are well suited for cases where the torque is transmitted only, or mainly, in one direction. Because of the asymmetric teeth, designers are able to create gear drives capable of
handling greater torque in the same amount of space, or they are able to reduce the amount of space required to handle the same amount of torque.
Since the dimensioning procedures, such as the widely used ISO C procedure, were developed and standardized for symmetric teeth, today we still need to study and fine-tune an ad hoc procedure for
conducting bending tests on asymmetric teeth.
One possibility is to use the finite element method (FEM); for this purpose, the authors of this study have developed an ad hoc modeling system (Ref. 2) for making rapid and extremely accurate
structural numerical analysis, the results of which have been proved through a number of experiments (Ref. 3). Using FEM analysis in dimensioning asymmetric teeth, however, may not be practical for
all gear engineers. In particular, many engineers who are used to designing symmetric teeth do not regularly use finite element methods.
The objective of this work, therefore, is to study a calculation method which makes it possible to carry out the dimensioning of asymmetric teeth using a “modified” ISO C procedure, the same
procedure that is widely used for symmetric teeth.
According to the ISO C procedure, the maximum bending stress at the tooth root may be expressed through the following known relation:
The tooth asymmetry, if any, has no impact on either the overload factors KA, KV, KFβ, KFα, or the corrective factors Yε and Yβ ; hence, the bending stress in asymmetric teeth, on equal tangential
force, face width and module, differs significantly from the bending stress in symmetric teeth, merely because of the different value given to form factor YFa and to notch factor YSa.
In symmetric teeth, the factors’ values are determined through the following relations:
In order to use the ISO C procedure for asymmetric teeth, we need to create a calculation method that is capable of determining two factors, which we will here refer to as YeFa and YeSa, equivalent
to the abovementioned factors YFa and YSa and applicable in Equation 1.
With reference to Figure 1, note the asymmetric tooth HCAK'I,' with the driving side on the left; note, also, the symmetric tooth HCDKI, both sides of which are identical to the driving side of the
asymmetric tooth.
Figure 1--Comparison of symmetric (HCDKI) and asymmetric (HCAK'I') tooth forms.
The methodology of this study is based on two hypotheses, the validity of which will be proven upon analysis of the results: the critical section HK' of the asymmetric tooth is assumed to be at the
same distance from the center of the wheel as the critical section HK, determined on the symmetric tooth by the sixty-degree wedge; we define as axis of the asymmetric tooth the perpendicular to
segment HK', passing through point L' of its center line.
The profile curvature radius, ρF , at the critical point H, is obviously identical for both symmetric and asym- metric teeth.
In conclusion, according to Equation 2, the form and notch factors of asymmetric teeth differ from those of symmetric teeth only inasmuch as the values of sFn differ, equal respectively to HK' and
HK, and that of hFa , equal respectively to L'Y' and LY.
Considering that, for admissible ∆α values (Ref. 4) of the tooth asymmetry, segment L'Y' is only slightly lower than the corresponding segment LY, we have deemed it opportune in this study to assume
the value hFa = LY also for asymmetric teeth, for the benefit of greater accuracy. (In fact, an approximated, rounded-up value is assumed for the arm, which is conventionally defined in procedure ISO
C, of the bending component of the force of contact.)
Therefore, factors YeFa and YeSa for asymmetric teeth can be determined by replacing sFn in Equation 2, with the corresponding value sFnas, equal to the length of segment HK' of Figure 1.
Calculation Software The first thing the user must do in order to use the calculation software, created using the Matlab language, is to enter all the input data necessary for determining the
characteristics of the toothing, namely, the number of teeth, the tool’s geometric characteristics (module, pressure angle of the two sides, addendum, tip radius), and the addendum modification and
addendum reduction coefficients. The user must determine all the geometric parameters (characteristic radius, thickness, etc.) of both the asymmetric tooth being calculated and the symmetric tooth of
reference, as described above. The coordinates of the intersection points between the involutes and the respective tooth fillet profiles are thus identified through the appropriate iterative cycles
(for example, point U for the coast side of the asymmetric tooth in Figure 1); thus, the profiles of the two teeth, the symmetric and the asymmetric one, are fully defined.
Once the coordinates of point U are known, it is possible to calculate the amplitude of angles βU , δU and γU shown in the figure (I’ is the starting point of the trochoid on the inside
circumference). Through the application of widely used procedures (Ref. 5), the coordinates of point H are determined, as well as the thickness sFn = HK of the critical section of the symmetric
tooth. Through another iterative cycle, the coordinates of point K' are determined, from which we can obtain the value of angle δK'.
At this point, we determine the thickness of the critical section of the asymmetric tooth sFnas :
Using the value of sFnas provided by Equation 3, we calculate the form and notch factors, YeFa and YeSa , for asymmetric teeth, through which we can finally determine the maximum bending stress, σF,
at the root of tooth.
Results and Verification As specified in the previous paragraphs, certain hypotheses and approximations were assumed in order to fine-tune the calculation methodology of this study. In order to
assess the validity of such methodology, we have deemed it opportune to make a comparison—through numerous combinations of the tooth parameters—between the bending stress values calculated using FEM
methodology and the values calculated using the modified ISO C procedure.
The test campaign highlights, in particular, how the stress values for both symmetric and asymmetric teeth calculated using the FEM methodology are, as already known for symmetric teeth, generally
lower than the values calculated using the ISO C methodology. This depends mainly on the fact that the ideal stress calculated using the FEM methodology in the most highly stressed point of the tooth
fillet of the driving side (traction area) takes also into account, with great accuracy, the compression resulting from the radial component of the force of contact between the teeth.
The most important point that we can make after having analyzed the results is the fact that the differences between the stress values calculated using the two methodologies are not directly
dependent on the tooth’s degree of asymmetry. It is possible to verify the above by the data in Table 1, which shows, for some of the case studies: the number of teeth z, the pressure angle of the
driving side α01, the pressure angle of the coast side α02, the degree of asymmetry ∆α = α02–α01, the percentage difference ∆σ% ISO / FEM between the stress calculated using the ISO methodology
(modified in the case of asymmetric teeth) and the stress values calculated using the FEM methodology (the symmetric tooth case studies are highlighted in the table).
The values in Table 1, particularly those of ∆σ% ISO / FEM, make it possible to propose the procedure referred to in this paper for determining the equivalent form and notch factors for asymmetric
teeth and, consequently, the use of the ISO C methodology also for this type of teeth.
By using this methodology for a wide range of case studies, we were able to obtain a large number of calculation results. Figure 2 shows a graph—of the several obtained by varying z, x and ρa0—which
indicates, in relation to the degree of asymmetry ∆α, the percentage of stress reduction ∆σ% versus symmetric teeth (x = addendum modification coefficient; ρa0 = tool tip radius).
Figure 2--Percentage decrease of stress calculated using modified ISO C (x=0; ρa0=0.25).
Using graphs like the one shown in Figure 2, the designer of asymmetric teeth can obtain a direct estimate of the expected stress reduction, with respect to traditional symmetric teeth.
Finally, as further proof of the validity of the calculation method proposed in this paper (the “modified” procedure ISO C), we have evaluated, always in relation to the degree of asymmetry, the
difference D between the above said stress reduction ∆σ% and the corresponding stress reduction calculated using the FEM methodology. Also in this case, we have drawn numerous graphs (one example is
shown in Figure 3), which show a very slight difference, in fact, lower than 2–3%.
Figure 3--Difference D between the percentage stress reduction calculated using modified ISO C method and using FEM (x=0; ρa0=0.25).
In other words, for evaluating the stress reduction obtainable through the use of asymmetric teeth, the estimate provided by the proposed procedure does not differ greatly from the one provided by
the FEM procedure.
Conclusions The calculation method created in this study, used for the dimensioning of asymmetric teeth, allows the user to determine valid “equivalent” form, YeFa, and notch, YeSa, factors; the
software created ad hoc simplifies this calculation.
Using the equivalent factors, we are able to estimate the maximum bending stress at the tooth root with an approximation, rounded up, which is very close to that commonly considered acceptable for
symmetric teeth.
In brief, the results of this work clearly show that gear designers may conveniently use the widely used ISO C procedure for verifying the bending stress in the case of asymmetric teeth.
References 1. Di Francesco, G. and S. Marini, “Structural Analysis of Asymmetric Teeth : Reduction of Size and Weight,” Gear Technology, Sept/Oct 1997, Elk Grove Village IL. U.S.A., 1997.
2. Di Francesco, G., M. Linari and S. Marini, “Sistema di modellazione ad Elementi Finiti per Ingranaggi a Fianchi Asimmetrici,” XXXII Convegno Nazionale AIAS, September 3–6, 2003, Università di
3. Di Francesco, G., S. Marini, G. Grasso and C. Clienti, “Asymmetric gear wheel thermographic analysis,” Colloque Photomecanique 2004, étude du comportement des matèriaux et des structures, Albi,
4. Di Francesco, G. and S. Marini, “Asymmetric gear wheel design: determining the bending stress in relation to the degree of asymmetry,” TMT 2004, September 15–19, 2004.
5 Salamoun, C., and M. Suchy, “Computation of helical or spur fillet,” ASME Paper No. RM4, ASME/AGMA/IFTOMM International Symposium, Oct. 1972.
Giulio di Francesco is a professor of machine design at Università degli Studi di Roma “La Sapienza” and at Università di Roma Tre, where he specializes in the study of gears. He has more than 32
years of scientific teaching and research experience. He has written more than 60 professional technical publications and is a consultant to major automotive and other companies. He holds a number of
patents at the Italian and international levels.
Stefano Marini is a professor of machine design at Università degli Studi di Roma “La Sapienza” and at Università di Roma Tre. His research interests include gears, structural analysis, elastic
behavior of metals, design methodologies and mechanical fatigue. He has written more than 40 technical papers and is a consultant to major industries. | {"url":"https://www.geartechnology.com/articles/20309-asymmetric-teeth-bending-stress-calculation","timestamp":"2024-11-02T01:25:59Z","content_type":"text/html","content_length":"86298","record_id":"<urn:uuid:51a01b2f-0895-4e47-bd14-88704ae5d9c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00345.warc.gz"} |
Anna University Chennai 2007 B.A Arabic ME1351 - HEAT AND MASS TRANSFER, /'2008 - Question Paper
Anna University Chennai 2007 B.A Arabic ME1351 - HEAT AND MASS TRANSFER, /'2008 - Question Paper
Saturday, 23 February 2013 02:45Web
/B.TECH DEGREE EXAMINATION, APRIL/MAY 2008.
Sixth Semester
(Regulation 2004)
Mechanical Engineering
ME1351 - HEAT AND MASS TRANSFER
3 hours
100 marks
- (10*2 = 20 MARKS)
A temperature difference of 500 C is applied across a fire-clay brick, 10cm thick having a thermal conductivity of 1W/mK. obtain the heat transfer rate per unit area.
Write the general 3-D heat conduction formula in cylindrical coordinates.
Biot number is the ratio ranging from _________and _________
describe bulk temperature.
A vertical flat plate is maintained at a temperature lower than the surrounding flui
Draw the velocity and temperature profiles assuming natural convection.
What is burnout point? Why is it called so?
What is a compact heat exchanger? provide examples.
What is thermal radiation and what is its wavelength band?
elaborate radiation shields?
discuss the physical meaning of Schmidt number?
- (5*16 = 80 marks)
11. (a)
A composite wall is formed of a 2.5 cm copper plate (k=355W/mK), a 3.2 mm layer of asbestos (k=0.110 W/mK) and a 5cm layer of fiber plate (k=0.049 W/mK). The wall is subjected to an overall
temperature difference of 560 C (560 C on the Cu plate side and 0 C on the fiber plate side).Estimate the heat flux through this composite wall and interface temperature ranging from asbestos and
fiber plate.
When a thermocouple is moved from 1 medium to a different medium at a various temperature, sufficient time must be provided for the thermocouple to come to the thermal equilibrium with the new
conditions before a studying is taken. Consider a 0.1 cm diameter copper thermocouple wire originally at 150
obtain the thermometer response (i.
an approximate plat of temperature vs time for intervals of 0, 40 and 120 seconds) when this wire is suddenly immersed in
water at 40 C (h=80 W/m2K)
air at 40 C (h=40 W/m2K)
presume unit length of wire.
12. (a)
Air at 400 K and 1atm pressure flows at a speed of 1.5 m/s over a flat plate of two m long. The plate is maintained at a uniform temperature of 300 K. If the plate has a width of 0.5 m, estimate the
heat transfer coefficient and the rate of heat transfer from the air stream to the plat
Also estimate the drag force acting on the plate.
Cylindrical cans of 150 mm length and 65 mm diameter are to be cooled from an initial temperature of 20 C by placing them in a cooler containing air at a temperature of one C and a pressure of one
bar. Determine the cooling rates when the cans are kept in
Horizontal position
Vertical position
13. (a)
Water is to be boiled at atmospheric pressure in a mechanically polished stainless steel pan placed on top of a heating unit. The inner surface of the bottom of the pan is maintained at 108
The diameter of the bottom of the pan is 30 cm. Assuming Csf = 0.0130, compute
the rate of heat transfer to the water, and
the rate of evaporation of water.
describe effectiveness of a heat exchanger. Derive an expression for the effectiveness of a double pipe parallel flow heat exchanger. State the assumptions made.
14. (a) (i)
explain briefly the variation of black body emissive power with wavelength of various temperatures. (8)
The spectral emissive function of an opaque surface at 800 K is approximated as
e1 = 0.30 0 = ? < 3µm
e? = e2 = 0.80 3µm = ? < 7µm
e3 = 0.10 7µm = ? < 8
compute the avg. emissivity of the surface and its emissive power. (8)
discuss the following: (5+5+6)
Specular and diffuse reflection
Reflectivity and transmissivity
Reciprocity rule and summation rule.
15. (a)
explain briefly the following: (4+6+6)
Fick’s legal regulations of diffusion
Equimolar counter diffusion
Evaporation process in the atmosphere.
(b) (i)
elaborate the assumptions made in the 1-D transient mass diffusion problems? (4)
An open pan, 20 cm diameter and eight cm deep contains water at 25°C and is exposed to dry atmospheric air. Estimate the diffusion coefficient of water in air, if the rate of diffusion of water is
8.54 × 10-4 kg/h.
Earning: Approval pending. | {"url":"http://www.howtoexam.com/index.php?option=com_university&task=show_paper&paper_id=11527&title=Anna+University+Chennai+2007+B.A+Arabic+ME1351+-+HEAT+AND+MASS+TRANSFER%2C+%2F%272008+++++++-+Question+Paper&Itemid=58","timestamp":"2024-11-10T09:39:47Z","content_type":"application/xhtml+xml","content_length":"29517","record_id":"<urn:uuid:c6cc9ba9-d49f-4393-8eae-8f96110d61c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00227.warc.gz"} |
Addition Subtraction Multiplication Division Of Rational Numbers Worksheet Pdf 2024 - NumbersWorksheets.net
Addition Subtraction Multiplication Division Of Rational Numbers Worksheet Pdf
Addition Subtraction Multiplication Division Of Rational Numbers Worksheet Pdf – A Realistic Amounts Worksheet might help your son or daughter be more acquainted with the principles associated with
this ratio of integers. With this worksheet, students can resolve 12 different troubles linked to logical expressions. They may figure out how to increase several figures, class them in sets, and
figure out their products. They will also process simplifying rational expressions. After they have mastered these methods, this worksheet might be a important resource for furthering their
research. Addition Subtraction Multiplication Division Of Rational Numbers Worksheet Pdf.
Logical Numbers are a rate of integers
There are two forms of amounts: irrational and rational. Logical numbers are understood to be entire figures, whereas irrational amounts usually do not perform repeatedly, and have an limitless
amount of digits. Irrational phone numbers are non-zero, no-terminating decimals, and square origins that are not excellent squares. They are often used in math applications, even though these types
of numbers are not used often in everyday life.
To establish a reasonable amount, you must know such a rational number is. An integer is a total variety, and a logical number is a ratio of two integers. The proportion of two integers is the amount
on the top divided up through the variety on the bottom. For example, if two integers are two and five, this would be an integer. However, there are also many floating point numbers, such as pi,
which cannot be expressed as a fraction.
They could be created into a fraction
A logical variety includes a numerator and denominator which are not absolutely no. Which means that they can be expressed being a small percentage. Along with their integer numerators and
denominators, logical amounts can also have a adverse value. The bad value must be put left of as well as its absolute worth is its range from absolutely no. To simplify this illustration, we shall
say that .0333333 is really a fraction which can be published being a 1/3.
Along with negative integers, a realistic number can be produced in to a small fraction. As an example, /18,572 can be a reasonable number, while -1/ is just not. Any portion composed of integers is
reasonable, so long as the denominator does not contain a and may be written being an integer. Similarly, a decimal that leads to a point can be another reasonable quantity.
They make feeling
In spite of their title, logical phone numbers don’t make very much sensation. In mathematics, they can be individual entities using a special span on the variety collection. Consequently when we
count one thing, we could buy the shape by its percentage to the unique volume. This keeps correct even when you will find limitless logical figures in between two distinct amounts. If they are
ordered, in other words, numbers should make sense only. So, if you’re counting the length of an ant’s tail, a square root of pi is an integer.
If we want to know the length of a string of pearls, we can use a rational number, in real life. To find the length of a pearl, for instance, we might count up its thickness. A single pearl weighs
about ten kilos, which is a rational number. In addition, a pound’s excess weight equals ten kilograms. Therefore, we should be able to divide a lb by 15, with out be concerned about the duration of
an individual pearl.
They could be depicted like a decimal
You’ve most likely seen a problem that involves a repeated fraction if you’ve ever tried to convert a number to its decimal form. A decimal variety can be written as a several of two integers, so
four times 5 various is equal to eight. A similar issue necessitates the recurring small percentage 2/1, and both sides must be divided by 99 to have the right respond to. But how will you have the
transformation? Below are a few illustrations.
A logical variety can be printed in many forms, including fractions as well as a decimal. A great way to symbolize a rational amount in a decimal is always to break down it into its fractional
equivalent. There are actually 3 ways to break down a reasonable variety, and every one of these methods yields its decimal counterpart. One of those methods would be to break down it into its
fractional equivalent, and that’s what’s referred to as a terminating decimal.
Gallery of Addition Subtraction Multiplication Division Of Rational Numbers Worksheet Pdf
Leave a Comment | {"url":"https://www.numbersworksheets.net/addition-subtraction-multiplication-division-of-rational-numbers-worksheet-pdf/","timestamp":"2024-11-12T23:47:16Z","content_type":"text/html","content_length":"60504","record_id":"<urn:uuid:c20534b3-9429-46f5-9d8a-f5860e3f80c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00547.warc.gz"} |
Go deh!
(I tried using the Spyder IDE to write a more literal Python program. Output samples needed to be pasted in to the source, but I didn't particularly want many of Jupyters strengths, so tried Spyder
together with http://hilite.me/)
Best viewed on larger than 7" screens.
# -*- coding: utf-8 -*-
Created on Sat Jan 30 10:24:54 2021
@author: Paddy3118
#from conway_cubes_2_plus_dims import *
#from conway_cubes_2_plus_dims import neighbours
from itertools import product
# %% #Full neighbourhood
In my [previous post][1] I showed how the number of neighbours to a point in
N-Dimensional space increases exponentially:
[1]: https://paddy3118.blogspot.com/2021/01/conways-game-of-life-in-7-dimensions.html
# %% ##Function
def neighbours(point):
return {n
for n in product(*((n-1, n, n+1)
for n in point))
if n != point}
for dim in range(1,10):
n = neighbours(tuple([0] * dim))
print(f"A {dim}-D grid point has {len(n):_} neighbours.")
assert len(n) == 3**dim - 1
# %% ##Output
A 1-D grid point has 2 neighbours.
A 2-D grid point has 8 neighbours.
A 3-D grid point has 26 neighbours.
A 4-D grid point has 80 neighbours.
A 5-D grid point has 242 neighbours.
A 6-D grid point has 728 neighbours.
A 7-D grid point has 2_186 neighbours.
A 8-D grid point has 6_560 neighbours.
A 9-D grid point has 19_682 neighbours.
# %% ##Visuals
Here we define neighbours as all cells, n whose indices differ by at most 1,
i.e. by -1, 0, or +1 from the point X itself; apart from the point X.
In 1-D:
In 2-D:
# 'Axial' neighbours
There is another definition of neighbour that "cuts out the diagonals" in 2-D
to form:
In 3-D this would add two extra cells, one directly above and one below X.
In 1-D this would be the two cells one either side of X.
I call this axial because if X is translated to the origin then axial
neighbours are the 2N cells in which only one coordinate can differ by either
+/- 1.
# Let's calclate them.
# %% #Axial neighbourhood
from typing import Tuple, Set, Dict
Point = Tuple[int, int, int]
PointSet = Set[Point]
def axial_neighbours(point: Point) -> PointSet:
dim, pt = len(point), list(point)
return {tuple(pt[:d] + [pt[d] + delta] + pt[d+1:])
for d in range(dim) for delta in (-1, +1)}
print("\n" + '='*60 + '\n')
for dim in range(1,10):
n = axial_neighbours(tuple([0] * dim))
text = ':\n ' + str(sorted(n)) if dim <= 2 else ''
print(f"A {dim}-D grid point has {len(n):_} axial neighbours{text}")
assert len(n) == 2*dim
# %% ##Output
A 1-D grid point has 2 axial neighbours:
[(-1,), (1,)]
A 2-D grid point has 4 axial neighbours:
[(-1, 0), (0, -1), (0, 1), (1, 0)]
A 3-D grid point has 6 axial neighbours
A 4-D grid point has 8 axial neighbours
A 5-D grid point has 10 axial neighbours
A 6-D grid point has 12 axial neighbours
A 7-D grid point has 14 axial neighbours
A 8-D grid point has 16 axial neighbours
A 9-D grid point has 18 axial neighbours
# %% #Another way of classifying...
The 3-D axial neighbourhood around X can be described as:
Think of X as a cube with six sides oriented with the axes of the 3
dimensions. The axial neighbourhood is six similar cubes with one face
aligned with each of X's faces. A kind of 3-D cross of cubes.
In 3-D, the "full" neighbourhood around point X describes a 3x3x3 units cube
centered on X.
In 3-D: Thinking of that 3x3x3 cube around X:
What if we excluded the six corners of that cube?
* Those excluded corner points have all their three coordinates different
from those of X,
i.e. if excluded e = Point(e_x, e_y, e_z) and X = Point(x_x, x_y, x_z):
e_x != x_x and e_y != x_y and e_z != x_z
|e_x - x_x| == 1 and |e_y - x_y| == 1 and |e_z - x_z| == 1
* We _Keep_ all points in the cube that have 1 and 2 different coords to
those of X
Another definition of the _axial_ neighbourhood case is
* We _Keep_ all points in the cube that have only 1 different coord to
those of X
This can be generalised to thinking in N dimensions of neighbourhood types
keeping from 1 to _up to_ N differences in coordinates (DinC).
def axial_neighbours(point: Point) -> PointSet:
dim, pt = len(point), list(point)
return {tuple(pt[:d] + [pt[d] + delta] + pt[d+1:])
for d in range(dim) for delta in (-1, +1)}
#%% #Neighbourhood by Differences in Coordinates
from collections import defaultdict
from pprint import pformat
from math import comb, factorial as fact
def d_in_c_neighbourhood(dim: int) -> Dict[int, PointSet]:
Split neighbourhood cube around origin point in dim-dimensions mapping
count of coords not-equal to origin -> those neighbours
origin = tuple([0] * dim)
cube = {n
for n in product(*((n-1, n, n+1)
for n in origin))
if n != origin}
d_in_c = defaultdict(set)
for n in cube:
d_in_c[dim - n.count(0)].add(n)
return dict(d_in_c)
def _check_counts(d: int, c: int, n: int) -> None:
Some checks on counts
d : int
c : int
Count of neighbour coords UNequal to origin.
n : int
Number of neighbour points with exactly c off-origin coords.
# # From OEIS
# if c == 1: assert n == d* 2
# elif c == 2: assert n == d*(d-1)* 2
# elif c == 3: assert n == d*(d-1)*(d-2)* 4 / 3
# elif c == 4: assert n == d*(d-1)*(d-2)*(d-3)* 2 / 3
# elif c == 5: assert n == d*(d-1)*(d-2)*(d-3)*(d-4)* 4 / 15
# # Getting the hang of it
# elif c == 6: assert n == d*(d-1)*(d-2)*(d-3)*(d-4)*(d-5)* 4 / 45
# Noticed some powers of 2 leading to
# if c == 6: assert n == d*(d-1)*(d-2)*(d-3)*(d-4)*(d-5)* 2**6 / fact(6)
# if c == 6: assert n == fact(d) / fact(c) / fact(d - c) * 2**c
# Finally:
assert n == fact(d) / fact(c) / fact(d - c) * 2**c
print("\n" + '='*60 + '\n')
for dim in range(1,10):
d = d_in_c_neighbourhood(dim)
tot = sum(len(n_set) for n_set in d.values())
print(f"A {dim}-D point has a total of {tot:_} full neighbours of which:")
for diff_count in sorted(d):
n_count = len(d[diff_count])
print(f" {n_count:4_} have exactly {diff_count:2} different coords.")
_check_counts(dim, diff_count, n_count)
# %% ##Output
A 1-D point has a total of 2 full neighbours of which:
2 have exactly 1 different coords.
A 2-D point has a total of 8 full neighbours of which:
4 have exactly 1 different coords.
4 have exactly 2 different coords.
A 3-D point has a total of 26 full neighbours of which:
6 have exactly 1 different coords.
12 have exactly 2 different coords.
8 have exactly 3 different coords.
A 4-D point has a total of 80 full neighbours of which:
8 have exactly 1 different coords.
24 have exactly 2 different coords.
32 have exactly 3 different coords.
16 have exactly 4 different coords.
A 5-D point has a total of 242 full neighbours of which:
10 have exactly 1 different coords.
40 have exactly 2 different coords.
80 have exactly 3 different coords.
80 have exactly 4 different coords.
32 have exactly 5 different coords.
A 6-D point has a total of 728 full neighbours of which:
12 have exactly 1 different coords.
60 have exactly 2 different coords.
160 have exactly 3 different coords.
240 have exactly 4 different coords.
192 have exactly 5 different coords.
64 have exactly 6 different coords.
A 7-D point has a total of 2_186 full neighbours of which:
14 have exactly 1 different coords.
84 have exactly 2 different coords.
280 have exactly 3 different coords.
560 have exactly 4 different coords.
672 have exactly 5 different coords.
448 have exactly 6 different coords.
128 have exactly 7 different coords.
A 8-D point has a total of 6_560 full neighbours of which:
16 have exactly 1 different coords.
112 have exactly 2 different coords.
448 have exactly 3 different coords.
1_120 have exactly 4 different coords.
1_792 have exactly 5 different coords.
1_792 have exactly 6 different coords.
1_024 have exactly 7 different coords.
256 have exactly 8 different coords.
A 9-D point has a total of 19_682 full neighbours of which:
18 have exactly 1 different coords.
144 have exactly 2 different coords.
672 have exactly 3 different coords.
2_016 have exactly 4 different coords.
4_032 have exactly 5 different coords.
5_376 have exactly 6 different coords.
4_608 have exactly 7 different coords.
2_304 have exactly 8 different coords.
512 have exactly 9 different coords.
# %% #Manhattan
What, up until now, I have called the Differences in Coordinates is better
known as the Manhattan distance, or [Taxicab geometry][2].
[2]: https://en.wikipedia.org/wiki/Taxicab_geometry
# %% #Formula for Taxicab counts of cell neighbours
Function _check_counts was originally set up as a crude check of taxicab
distance of 1 as the formula was obvious.
The formulea for taxcab distances 2, 3, and 4 were got from a search on [OEIS][3]
The need for factorials is obvious. The count for a taxicab distance of N in
N-D space is 2**N which allowed me to work out a final formula:
d is the number of spatial dimiensions.
t is the taxicab distance of a neighbouring point to the origin.
n, the count of neighbours with exactly this taxicab distance is
n = f(d, t)
= d! / t! / (d-t)! * 2**t
We can use the assertion from the exercising of function neighbours() at the
beginning to state that:
sum(f(d, t)) for t from 0 .. d
= g(d)
= 3**d - 1
[3]: https://oeis.org/
# %% ##Taxicab visualisation
Lets create a visualisation of the difference in coords for neighbours in <= 4-D.
(The function is general, but I'm lost after 3-D)!
The origin will show as zero, 0; and neighbours surround it as digits which are
the taxicab distance from 0.
def to_str(taxi: Dict[int, PointSet], indent: int=4) -> str:
taxi : Dict[int, PointSet]
Map taxicab distance from origin
-> set off neighbours with that difference.
indent : int
indent output with spaces
Visusalisation of region.
if not taxi:
return ''
ap = set() # all points
neighbour2taxi = {}
for taxi_distance, nbrs in taxi.items():
ap |= nbrs
for n in nbrs:
neighbour2taxi[n] = taxi_distance
# Dimensionality
dims = len(n)
# Add in origin showing as 0
origin = tuple([0] * dims)
neighbour2taxi[origin] = 0
# Plots neighbourhood of origin (plus extra space)
space = 1
minmax = [range(-(1 + space), (1 + space) + 1)
for dim in range(dims)]
txt = []
indent_txt = ' ' * indent
for plane_coords in product(*minmax[2:]):
ptxt = ['\n' + indent_txt
+ ', '.join(f"dim{dim}={val}"
for dim, val in enumerate(plane_coords, 2))]
ptxt += [''.join((str(neighbour2taxi[tuple([x, y] + list(plane_coords))])
if tuple([x, y] + list(plane_coords)) in ap
else '.')
for x in minmax[0])
for y in minmax[1]]
# Don't plot planes with no neighbours (due to extra space).
if ''.join(ptxt).count('.') < (3 + space*2)**2:
txt += ptxt
return '\n'.join(indent_txt + t for t in txt)
print("\n" + '='*60 + '\n')
for dim in range(2,5):
d = d_in_c_neighbourhood(dim)
tot = sum(len(n_set) for n_set in d.values())
print(f"\nA {dim}-D point has a total of {tot:_} full neighbours of which:")
for diff_count in sorted(d):
n_count = len(d[diff_count])
print(f" {n_count:4_} have taxicab distance {diff_count:2} from the origin.")
_check_counts(dim, diff_count, n_count)
# %% ##Output
A 2-D point has a total of 8 full neighbours of which:
4 have taxicab distance 1 from the origin.
4 have taxicab distance 2 from the origin.
A 3-D point has a total of 26 full neighbours of which:
6 have taxicab distance 1 from the origin.
12 have taxicab distance 2 from the origin.
8 have taxicab distance 3 from the origin.
A 4-D point has a total of 80 full neighbours of which:
8 have taxicab distance 1 from the origin.
24 have taxicab distance 2 from the origin.
32 have taxicab distance 3 from the origin.
16 have taxicab distance 4 from the origin.
dim2=-1, dim3=-1
dim2=-1, dim3=0
dim2=-1, dim3=1
dim2=0, dim3=-1
dim2=0, dim3=0
dim2=0, dim3=1
dim2=1, dim3=-1
dim2=1, dim3=0
dim2=1, dim3=1
I stumbled across the site "Advent of Code" and its puzzle Conways cube by accident. I really liked the idea of the problem, even though it took me some time to understand the given examples, but
then I really got into it.
I wrote a 3D answer; updated it to a 4D answer, then set about changing the Python source to handle any dimension >= 2.
The problem
This is the gist of what is requested
• Conways Game of life, (GOL), is usually played on a 2-D rectangular grid of cells that are in one of two states: inactive or active at the end of any cycle.
• We map a coordinate system on the grid where grid intersections are atall whole integer coordinates. (Negative too).
• The neighbours of a cell on the grid are defined as any cell with all coordinates that differs from the target cell by either -1, 0, or +1 independently - apart from the target cell in the center
of course.
For the original 2-D GOL, each cell has 8 neighbours.
• The grid, (called a Pocket Domain in the puzzle), is first seeded/initialised with an arrangement of activated cells in the 2-D plane coresponding to cycle(0).
• The configuration of active cells in the next cycle cycle(n+1) is calculated from the configuration in the current cycle cycle(n) by applying the following two simple rules to all cells:
1. If a cell is active in cycle(n) it will stay active if it has 2 or 3 active neighbours; otherwise it becomes inactive in cycle(n+1).
2. If a cell is inactive in cycle(n) it will become active if it has exactly 3 active neighbours; otherwise it becomes inactive in cycle(n+1).
• Note that although the pocket domain/grid is infinite, the rules need only be applied in the neighbourhood of any active cedlls as all cells outside that are inactive and can only remain inactive
in the next cycle.
• The puzzle defines a starting configuration/format and asks how many cells are active in some future cycle.
The textual format
For 2-D it is a rectangular plane of full stops '.' and hashes '#' where hashes represent active cells on the plane, and full-stops inactive cells. Lets call this the x-y coordinate plane
• The rectangle shown is only large enough to show all active cells.
A 3-D pocket domain is visualised as one or more x-y slices through the cubic x,y,z extent of the domain,The z coordinate for each slice in the cycle is different and stated above each slice like so:
The range of x and y coords shown, of all x-y slices for a cycle stay the same; but may be different between cycles as the extent of all active cells changes between cycles.
Their 4-D textual format introduces w as the forth dimension and now x-y planes through the encompassing hyper-rectangle of all active cells need both the z and w coords specified for each plane.
z=-1, w=-1
z=0, w=-1
z=1, w=-1
z=-1, w=0
N-Dimensional Solution
My N-Dimensional extension to the Text Format
My format is a slight change in that it will read the above, but names the dimensions greater than two, (assuming the x-y plane as dim0 and dim1), dim2, dim3, ... for example:
dim2=0, dim3=0, dim4=0
The internal grid data-structure
I started with no idea if there was to be a limit to the extent of the 3-D pocket dimension, and new that 2-D GOL looked pretty sparse w.r.t. active cells.
So I opted for a sparse grid.
A bit of thought and I confirmed I only needed to keep active cells, from which I could calculate all neighbours that could be affected then calculate what the active cells of the next cycle become.
If I keep the coordinate of cells as a tuple then I will be doing a lot of lookup on whether a coord is active or not. I made the pocket domain/grid/pd a set of active points where points were tuples
. Checking if a neighbour is active amounts to checking if it is in the set.
Reading the textual format
My initial 3-D reader only reads the 3-d format. It uses a regexp to extract the Z and XY sections and turns them into a set of the active points defined.
from itertools import product
import re
#%% to/From textual format
# Help: https://regex101.com/codegen?language=python
pocket_plane_re = r"""
# NOTE: add an extra \n\n to end of string being matched
^z=(?P<Z>[-+0-9]+?) # Single z-plane coord
(?P<XY>.*?\n)\n # Following xy-plane description
def _plane_from_str(text, z):
"""single pocket dim z-plane to set of active cell coords"""
pd = {(x, y, z) # Pocket dimension
for y, row in enumerate(text.strip().split('\n'))
for x, state in enumerate(row)
if state == '#'}
return pd
def from_str(text):
"""Pocket dim text to set of active cell coords, the internal format"""
pd = set() # pocket dim active cells accumulator
matches = re.finditer(pocket_plane_re, text + '\n\n',
re.MULTILINE | re.DOTALL | re.VERBOSE)
for match in matches:
z = int(match.groupdict()['Z'])
xy = match.groupdict()['XY']
pd |= _plane_from_str(xy, z)
return pd
function from_str separates and extracts the z coord and the xy plane from the text then passes them to function _plane_from_str to return a set of active cell coordinates to add to the pocket
The N-Dimensional reader
from itertools import product
import re
#%% to/From textual format
# Help: https://regex101.com/codegen?language=python
# Parses textaul version of state.
pocket_plane_re = r"""
# Note: add an extra \n\n to end of string being matched,
# and a \n to the beginning.
# Coordinates of plane, (optional for 2D).
^(?P<PLANE>(?: [a-z][a-z0-9]*=(?P<NUM1>-?\d+))
(?:,\s+ (?: [a-z][a-z0-9]*=(?P<NUM2>-?\d+)))*)?
(?P<XY>[.#\n]+?\n)\n # xy-plane description
def _plane_from_str(text, plane_coords):
"""single pocket dim z-plane to set of active cell coords"""
pcoords = list(plane_coords)
pd = {tuple([x, y] + pcoords) # Pocket dimension
for y, row in enumerate(text.strip().split('\n'))
for x, state in enumerate(row)
if state == '#'}
return pd
def from_str(text: str) -> set:
"""Pocket dim text to set of active cell coords, the internal format"""
pd = set() # pocket dim active cells accumulator
matches = re.finditer(pocket_plane_re, '\n' + text + '\n\n',
re.MULTILINE | re.DOTALL | re.VERBOSE)
for match in matches:
# Extra coords for the xy plane
plane_coords_txt = match.groupdict()['PLANE'] or ''
plane_coords = [int(matchp.group()[1:])
for matchp in
re.finditer(r"=-?\d+", plane_coords_txt)]
# xy plane
xy = match.groupdict()['XY']
# Accumulate active cells in plane
pd |= _plane_from_str(xy, plane_coords)
return pd
An extended regexp handles their being no extra coord needed for the x-y plane, when working in 2-D, as well as the unbounded list of extra dimension coords in more than 2-D.
Writing the textual format.
Creates a textual representation of the internal format of a cycle.
In the 3-D case I first finds the minimum/maximum extents of activity on all three axis in minmax. Stepping through the range of the z axis it then finds and prints all the activity in the x-y plane
for that value of z
I had had difficulty in understanding that the extend of x and y changes between cycles so argument print_extent was used in debugging.
def to_str(pd, print_extent=False):
"From set of active cell coords to output textual format"
# Extent of pocket dimenson
minmax = [range(min(pd, key=lambda pt:pt[dim])[dim],
max(pd, key=lambda pt:pt[dim])[dim] + 1)
for dim in range(3)]
txt = [f"\n// FROM: {tuple(r.start for r in minmax)}"
f" TO: {tuple(r.stop -1 for r in minmax)}"] if print_extent else []
append = txt.append
for z in minmax[2]:
for y in minmax[1]:
append(''.join(('#' if (x, y, z) in pd else '.')
for x in minmax[0]))
return '\n'.join(txt)
The N-Dimensional writer
def to_str(pd, print_extent=False) -> str:
"From set of active cell coords to output textual format"
if not pd:
return ''
# Dimensionality of pocket dimension
point = pd.pop()
dims = len(point)
pd |= {point} # Put that point back
# Extent of pocket dimenson
minmax = [range(min(pd, key=lambda pt:pt[dim])[dim],
max(pd, key=lambda pt:pt[dim])[dim] + 1)
for dim in range(dims)]
txt = [f"\n// FROM: {tuple(r.start for r in minmax)}"
f" TO: {tuple(r.stop -1 for r in minmax)}"] if print_extent else []
for plane_coords in product(*minmax[2:]):
ptxt = ['\n' + ', '.join(f"dim{dim}={val}"
for dim, val in enumerate(plane_coords, 2))]
ptxt += [''.join(('#'
if tuple([x, y] + list(plane_coords)) in pd
else '.')
for x in minmax[0])
for y in minmax[1]]
if '#' in ''.join(ptxt):
txt += ptxt
return '\n'.join(txt)
To find out how many dimensions there are, a point is poped from, (and later added back to), the set of active points, pd, and stored in dims. minmax and assignments are similar,
The creation of the text for each x-y plane is separated out. many of the x-y planes can be of all inactive cells - this modification will not display x-y planes if they have no set cells.
Generating all neighbours to a cell
An important part of the program that is repeated many times, as well as many times for the same cell coordinates.
3-D neighbour function exploration
Knowing that it is calculated many times for the same cell I decided to first have a cached function. I skipped using lru_cache for a simple, fast homebrew of using a dict _neighbour_cache.
_neighbour_cache = {}
def neighbours(point):
"return the points 26 neighbour coords"
if point not in _neighbour_cache:
x, y, z = point
_neighbour_cache[point] = {xyz
for xyz in product((x-1, x, x+1),
(y-1, y, y+1),
(z-1, z, z+1))
if xyz != point}
return _neighbour_cache[point]
That cache was large and in higher dimensions would get huge or inefficient, so I decided to time some alternate functions for the 3-D case.
Alternate implementations and timings
def neighboursX(point):
x, y, z = point
return {
(x-1, y-1, z-1), (x , y-1, z-1), (x+1, y-1, z-1),
(x-1, y , z-1), (x , y , z-1), (x+1, y , z-1),
(x-1, y+1, z-1), (x , y+1, z-1), (x+1, y+1, z-1),
(x-1, y-1, z ), (x , y-1, z ), (x+1, y-1, z ),
(x-1, y , z ), (x+1, y , z ),
(x-1, y+1, z ), (x , y+1, z ), (x+1, y+1, z ),
(x-1, y-1, z+1), (x , y-1, z+1), (x+1, y-1, z+1),
(x-1, y , z+1), (x , y , z+1), (x+1, y , z+1),
(x-1, y+1, z+1), (x , y+1, z+1), (x+1, y+1, z+1),
def neighboursY(point):
x, y, z = point
return {xyz
for xyz in product((x-1, x, x+1),
(y-1, y, y+1),
(z-1, z, z+1))
if xyz != point}
In [80]: p = (1, 1, 1)
In [81]: neighbours(p) == neighboursX(p) == neighboursY(p)
Out[81]: True
In [82]: %timeit neighbours(p)
719 ns ± 20.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [83]: %timeit neighboursX(p)
9.75 µs ± 82 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [84]: %timeit neighboursY(p)
11.2 µs ± 265 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [85]:
Caching using a simple global dict in neighbours is an order of magnitude
faster than calculations, but calculations take no extra memory
neighboursY seems to hit the sweet spot of maintainability speed and memory use and was adopted for use in the 4-D version.
4-D neighbour function
def neighbours(point):
x, y, z, w = point
return {xyzw
for xyzw in product((x-1, x, x+1),
(y-1, y, y+1),
(z-1, z, z+1),
(w-1, w, w+1))
if xyzw != point}
It is just a 4-D extension of the 3-D neighboursY function.
The N-D neighbour function
def neighbours(point):
return {n
for n in product(*((n-1, n, n+1)
for n in point))
if n != point}
Working that out was sweet :-)
Checking if a cell becomes active in the next cycle
Take a point and the set of all active points in the current cycle then checks the points neighbours to see if this point will be active or not in the next cycle
def is_now_active(point, pd):
"Is point active w.r.t. pd"
ncount = 0 # active neighbours count
for npoint in neighbours(point):
ncount += npoint in pd
if ncount >3:
if point in pd: # Currently active
return 2 <= ncount <= 3
else: # Currently inactive
return ncount == 3
The above didn't need changing from being written for the 3-D case and being reused in the 4-D and N-D versions.
in higher dimensions points can have many neighbours and many possible active neighbours but we only need a count of up to 4 for the following checks to work.
Generating the next cycle from the current.
This again did not need changing from the 3-D case
def next_cycle(pd: set) -> set:
"generate next cycle from current Pocket Dimension pd"
possibles = set() # cells that could become active
for active in pd:
possibles |= neighbours(active)
possibles |= pd
return {point for point in possibles if is_now_active(point, pd)}
it checks all current points and their neighbours to see if they will be active in the next cycle.
Examples of N-Dimensional results
if __name__ == '__main__':
cycle0 = """
pd2 = from_str(cycle0)
start_pd_for_dim = {d: {tuple(list(pt) + [0]*(d-2)) for pt in pd2}
for d in range(2, 8)}
for dim in range(2, 8):
n, pd = 0, start_pd_for_dim[dim]
lines, active = 0, 1
while True:
text = to_str(pd)
active = text.count('#')
active_planes = ('\n' + text).count('\n\n')
lines = text.count('\n')
print(f"\n{dim}-D: After {n} cycles, {active:_} cells on "
f"{active_planes:_} active plane(s):\n"
f"{text if lines < 60 else ''}")
if not (lines <2**12 and 0 < active < 2_000 and n < 10):
n, pd = n + 1, next_cycle(pd)
Just using the one 2-D initial cycle to create the same initial x-y plane in 2 to seven pocket domains in dict start_pd_for_dim.
Then for increasing pocket domain dimensions run the Conway cycles. cycle printing is stopped for large outputs. and whole dimensions are truncated when they get "large"
A sample from the end of the output for the __main__ above
6-D: After 0 cycles, 5 cells on 1 active plane(s):
dim2=0, dim3=0, dim4=0, dim5=0
6-D: After 1 cycles, 245 cells on 81 active plane(s):
6-D: After 2 cycles, 464 cells on 48 active plane(s):
6-D: After 3 cycles, 15_744 cells on 1_504 active plane(s):
7-D: After 0 cycles, 5 cells on 1 active plane(s):
dim2=0, dim3=0, dim4=0, dim5=0, dim6=0
7-D: After 1 cycles, 731 cells on 243 active plane(s):
7-D: After 2 cycles, 1_152 cells on 112 active plane(s):
7-D: After 3 cycles, 106_400 cells on 10_320 active plane(s):
Extra: Just how many neighbours does a point have?
>>> for d in range(1,15):
n = len(neighbours(tuple([0] * d)))
print(f"A {d}-D grid point has {n:_} neighbours.")
assert n == 3**d - 1
A 1-D grid point has 2 neighbours.
A 2-D grid point has 8 neighbours.
A 3-D grid point has 26 neighbours.
A 4-D grid point has 80 neighbours.
A 5-D grid point has 242 neighbours.
A 6-D grid point has 728 neighbours.
A 7-D grid point has 2_186 neighbours.
A 8-D grid point has 6_560 neighbours.
A 9-D grid point has 19_682 neighbours.
A 10-D grid point has 59_048 neighbours.
A 11-D grid point has 177_146 neighbours.
A 12-D grid point has 531_440 neighbours.
A 13-D grid point has 1_594_322 neighbours.
A 14-D grid point has 4_782_968 neighbours.
In [3]:
There's a lot of neighbours in higher dimensions! | {"url":"https://paddy3118.blogspot.com/2021/01/","timestamp":"2024-11-11T13:34:03Z","content_type":"application/xhtml+xml","content_length":"181374","record_id":"<urn:uuid:e6276417-57ad-4b16-adda-9a930e5a41ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00535.warc.gz"} |
Two-Way Tables (5 of 5)
Learning Objectives
• Create a hypothetical two-way table to answer more complex probability questions.
In our previous work with probability, we computed probabilities using a two-way table of data from a large sample. Now we create a hypothetical two-way table to answer more complex probability
Will It Be a Boy or a Girl?
A pregnant woman often opts to have an ultrasound to predict the gender of her baby.
Assume the following facts are known:
• Fact 1: 48% of the babies born are female.
• Fact 2: The proportion of girls correctly identified is 9 out of 10.
• Fact 3: The proportion of boys correctly identified is 3 out of 4.
(Source: Keeler, Carolyn, and Steinhorst, Kirk. “New Approaches to Learning Probability in the First Statistics Course,” Journal of Statistics Education 9(3):1–24, 2001.)
Here are the questions we want to answer:
• Question 1: If the examination predicts a girl, how likely is it that the baby will be a girl?
• Question 2: If the examination predicts a boy, how likely is it that the baby will be a boy?
Let’s consider what the possibilities are.
• The ultrasound examination predicts a girl, and either (a) a girl is born or (b) a boy is born.
• The ultrasound exam predicts a boy, and either (a) a girl is born or (b) a boy is born.
Let’s represent these four possible outcomes in a two-way table. On the left we have the categorical variable prediction, and on the top the categorical variable gender of baby.
Girl Boy
Predict Girl
Predict Boy
Now we find ourselves in an interesting situation. A two-way table without data!
The key idea is to create a two-way table consistent with the stated facts, then use the table to answer our questions.
To get started, let’s assume we have ultrasound predictions for 1,000 random babies. We could have picked any number here, but 1,000 will make our calculations easier to keep track of.
Starting with this number, we work backwards with our three facts to fill in this “hypothetical” table.
The first step is to put 1,000 as the overall total in the bottom right corner.
Girl Boy Row Totals
Predict Girl
Predict Boy
Column Totals 1,000
Let’s consider Fact 1: 48% of the babies born are female.
The bottom row gives the distribution of the categorical variable gender of baby. We can use this fact to compute the total number of girls and boys.
• 48% girls means that 0.48 (1,000) = 480 are girls.
• 52% are boys. (If 48% are girls, then 100% − 48% = 52% are boys.) So, 0.52(1,000) = 520 boys.
Fill these values into the bottom row of table.
• Note: These are marginal totals.
• You can check your work: These numbers should add to 1,000. If we add all the girls and boys together, we get the total number of babies.
Girl Boy Row Totals
Predict Girl
Predict Boy
Column Totals 0.48(1,000) = 480 0.52(1,000) = 520 1,000
Now let’s move on to Fact 2: The proportion of girls correctly identified is 9 out of 10.
• 9 out of 10 is 90% (9 ÷ 10 = 0.90 = 90%).
• 90% of the girls are correctly identified: 0.90(480) = 432.
• 10% of the girls are misidentified (predicted to be a boy): 0.10(480) = 48.
Fill these values into the table.
• You can check your work: These numbers should add to the total number of girls.
• (Girls who are correctly identified as girls ) + (Girls who are misidentified as boys) = Total girls
Girl Boy Row Totals
Predict Girl 0.90(480)= 432
Predict Boy 0.10(480) = 48
Column Totals 480 520 1,000
Finally, we use Fact 3: The proportion of boys correctly identified is 3 out of 4.
• 3 out of 4 is 75% (3 ÷ 4 = 0.75 = 75%).
• 75% of the boys are correctly identified: 0.75(520) = 390.
• 25% of the boys are misidentified (predicted to be a girl): 0.25(520) = 130.
Fill these values into the table.
• You can check your work: These numbers should add to the total number of boys.
• (Boys who are correctly identified as boys ) + (Boys who are misidentified as girls) = Total boys
Girl Boy Row Totals
Predict Girl 432 0.25(520) = 130
Predict Boy 48 0.75(520) = 390
Column Totals 480 520 1,000
Filling in the Row Totals, we now have a complete hypothetical two-way table based on our given information.
Girl Boy Row Totals
Predict Girl 432 130 562
Predict Boy 48 390 438
Column Totals 480 520 1,000
We are now in a position to answer our two questions:
Question 1: If the examination predicts a girl, how likely is it that the baby will be a girl?
Answer: We are asked to find the probability of a girl given that the examination predicts a girl.
This is the conditional probability: P(girl | predict girl).
So our answer to Question 1 is P(girl | predict girl) = 432 / 562 = 0.769.
Question 2: If the examination predicts a boy, how likely is it that the baby will be a boy?
Answer: We are asked to find the probability of a boy given that the examination predicts a boy.
This is the conditional probability: P(boy | predict boy).
So our answer to Question 2 is P(boy | predict boy) = 390 / 438 = 0.890.
Conclusion: If an ultrasound examination predicts a girl, the prediction is correct about 77% of the time. In contrast, when the prediction is a boy, it is correct 89% of the time.
Are you surprised at the answers to these questions? Looking just at the three given facts, you might have intuitively expected a different result. This is exactly why a two-way table is so useful.
It helps us organize the relevant information in a way that permits us to carry out a logical analysis. When it comes to probability, sometimes our intuition needs some help.
Use the following context for the next Learn By Doing activity.
A large company has instituted a mandatory employee drug screening program. Assume that the drug test used is known to be 99% accurate. That is, if an employee is a drug user, the test will come back
positive (“drug detected”) 99% of the time. If an employee is a non-drug user, then the test will come back negative (“no drug detected”) 99% of the time. Assume that 2% of the employees of the
company are drug users.
In constructing the hypothetical two-way table, it is convenient to start by assuming that the company has 10,000 employees (10,000 is a large enough number to ensure that all calculations result in
whole numbers).
Candela Citations
CC licensed content, Shared previously | {"url":"https://courses.lumenlearning.com/atd-herkimer-statisticssocsci/chapter/two-way-tables-5-of-5/","timestamp":"2024-11-14T11:19:09Z","content_type":"text/html","content_length":"40645","record_id":"<urn:uuid:1ddb1766-5466-4757-adab-230c724ecf59>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00320.warc.gz"} |
I was involved in the search of the $\nu_{\mu}$ to $\nu_{e}$ oscillation research with the OPERA detector within the group of the Bern University. In particular I developed the analysis and defined
an extended scanning protocol dedicated to the identification of electromagnetic showers, thus to increase the efficiency to $\nu_{e}$ charged current interactions.
CUORE and Cuoricino
Cuoricino was a detector made by an array of 62 TeO2 bolometers operating at 10 mK. The mechanical vibrations of the cryogenic system significantly affected the noise; the structure vibrates and the
noise induced is correlated between different channels. In order to optimise the energy resolution I developed an algorithm to remove the correlated noise. It is based on an analysis of the
multichannel covariance matrix in the frequency domain and efficiently removes the correlated noise, such algorithm is still in use for the data analysis of CUORE. To develop such algorithm I also
implemented all the libraries for complex mathematics in Diana, the CUORE analysis framework.
Server Time
• Thu 07 November 2024
• 23:26:49
• Timezone: Europe/Rome | {"url":"https://www.roma1.infn.it/~mancinit/?action=2Research/Fundamental_Physics","timestamp":"2024-11-07T22:26:49Z","content_type":"text/html","content_length":"13378","record_id":"<urn:uuid:e520f9d3-ecb3-4c71-8578-9b42797fb4cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00523.warc.gz"} |
The Stacks project
Definition 42.68.13. Let $R$ be a local ring with residue field $\kappa $. Let $(M, \varphi , \psi )$ be a $(2, 1)$-periodic complex over $R$. Assume that $M$ has finite length and that $(M, \varphi
, \psi )$ is exact. The determinant of $(M, \varphi , \psi )$ is the element
\[ \det \nolimits _\kappa (M, \varphi , \psi ) \in \kappa ^* \]
such that the composition
\[ \det \nolimits _\kappa (M) \xrightarrow {\gamma _\psi \circ \sigma \circ \gamma _\varphi ^{-1}} \det \nolimits _\kappa (M) \]
is multiplication by $(-1)^{\text{length}_ R(I_\varphi )\text{length}_ R(I_\psi )} \det \nolimits _\kappa (M, \varphi , \psi )$.
Comments (0)
Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 02PJ. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 02PJ, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/02PJ","timestamp":"2024-11-12T13:05:17Z","content_type":"text/html","content_length":"14461","record_id":"<urn:uuid:649fcf31-a87d-426b-9080-1e67a26c7395>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00227.warc.gz"} |
Chart Body
Spin is the intrinsic angular momentum of particles. Spin is given in units of h-bar, which is the quantum unit of angular momentum, where hbar = h/2pi = 6.58 x 10^-25 GeV s = 1.05 x 10^-34 J s.
Electric charges are given in units of the proton's charge. In SI units the electric charge of the proton is 1.60 x 10^-19 coulombs.
The energy unit of particle physics is the electron volt (eV), the energy gained by one electron in crossing a potential difference of one volt. Masses are given in GeV/c^2 (remember E = mc^2), where
1 GeV = 10^9 eV = 1.60 x 10^-10 joule. The mass of the proton is 0.938 GeV/c^2 = 1.67 x 10^-27 kg. | {"url":"https://ccwww.kek.jp/pdg/particleadventure/frameless/chart_fermion.html","timestamp":"2024-11-04T14:01:48Z","content_type":"text/html","content_length":"1656","record_id":"<urn:uuid:9a17090a-f3f1-49bb-ad0e-7aeb2cfe9799>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00235.warc.gz"} |
slaed9: finds the roots of the secular equation, as defined by the values in D, Z, and RHO, between KSTART and KSTOP - Linux Manuals (l)
slaed9 (l) - Linux Manuals
slaed9: finds the roots of the secular equation, as defined by the values in D, Z, and RHO, between KSTART and KSTOP
SLAED9 - finds the roots of the secular equation, as defined by the values in D, Z, and RHO, between KSTART and KSTOP
K, KSTART, KSTOP, N, D, Q, LDQ, RHO, DLAMDA, W, S, LDS, INFO )
INTEGER INFO, K, KSTART, KSTOP, LDQ, LDS, N
REAL RHO
REAL D( * ), DLAMDA( * ), Q( LDQ, * ), S( LDS, * ), W( * )
SLAED9 finds the roots of the secular equation, as defined by the values in D, Z, and RHO, between KSTART and KSTOP. It makes the appropriate calls to SLAED4 and then stores the new matrix of
eigenvectors for use in calculating the next level of Z vectors.
K (input) INTEGER
The number of terms in the rational function to be solved by SLAED4. K >= 0.
KSTART (input) INTEGER
KSTOP (input) INTEGER The updated eigenvalues Lambda(I), KSTART <= I <= KSTOP are to be computed. 1 <= KSTART <= KSTOP <= K.
N (input) INTEGER
The number of rows and columns in the Q matrix. N >= K (delation may result in N > K).
D (output) REAL array, dimension (N)
D(I) contains the updated eigenvalues for KSTART <= I <= KSTOP.
Q (workspace) REAL array, dimension (LDQ,N)
LDQ (input) INTEGER
The leading dimension of the array Q. LDQ >= max( 1, N ).
RHO (input) REAL
The value of the parameter in the rank one update equation. RHO >= 0 required.
DLAMDA (input) REAL array, dimension (K)
The first K elements of this array contain the old roots of the deflated updating problem. These are the poles of the secular equation.
W (input) REAL array, dimension (K)
The first K elements of this array contain the components of the deflation-adjusted updating vector.
S (output) REAL array, dimension (LDS, K)
Will contain the eigenvectors of the repaired matrix which will be stored for subsequent Z vector calculation and multiplied by the previously accumulated eigenvectors to update the system.
LDS (input) INTEGER
The leading dimension of S. LDS >= max( 1, K ).
INFO (output) INTEGER
= 0: successful exit.
< 0: if INFO = -i, the i-th argument had an illegal value.
> 0: if INFO = 1, an eigenvalue did not converge
Based on contributions by
Jeff Rutter, Computer Science Division, University of California
at Berkeley, USA | {"url":"https://www.systutorials.com/docs/linux/man/l-slaed9/","timestamp":"2024-11-12T12:28:32Z","content_type":"text/html","content_length":"10714","record_id":"<urn:uuid:cf87e260-383e-44e0-a5d7-3733f4c1f7d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00037.warc.gz"} |
LESSON 5, COMPARISION SIGNS - Unified Arabic Braille Portal
Less than sign “<”
Dots 26-45
Greater than sign “>”
Dots 35-45
Less than or equal to sign “≤”
Dots 26-45-456
Greater than or equal to sign “≥”
Dots 35-45-456
Unlike most mathematical signs, these signs do not depend on the dots 56 as a prefix.
The less than sign in braille consists of two cells, the first cell is a prefix containing the two dots 45 followed by the next cell with the dots 26.
The greater than sign also consists of two cells, the dots 45 as a prefix, followed by the other cell with the dots 35.
While the less than or equal to and greater than or equal to signs each consists of three cells. The less than or equal to sign contains the dots 456 followed by 45, and then, the dots 26, i.e.,
adding the dots 456 before the less than sign. The greater than or equal to sign consists of the dots 456, then, 45 and 35, i.e., adding the dots 456 before the greater than sign.
You must leave a space before and after the comparison sign, but if the printing or conversion to braille is done from normal writing that does not contain spaces, the braille translator/ coder will
put the signs without a space in line with normal writing.
A space terminates the number mode, so you must type the number sign just before the number that follows the comparison sign.
####Example 1
7 > 3
####Example 2
6 < 10
In the new Arabic Braille, it is now possible to use two signs that are less than and greater than in the normal text content without causing any conflict with the characters as was the case in the
####Example 3
Use the sign > or the sign <
####Example 4
$1 > 5 Yen.
####Example 5
4 ≤ 5
Example 6
6 ≤ 6
####Example 7
8 ≥ 4
####Example 8
8 ≥ 8 | {"url":"https://braille.mada.org.qa/lesson-5-comparision-signs/?lang=en","timestamp":"2024-11-07T12:38:44Z","content_type":"text/html","content_length":"111523","record_id":"<urn:uuid:557f9ad2-fce9-4300-b450-6712513fc717>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00501.warc.gz"} |
How to Help Diffuse Math Homework AngstHow to Help Diffuse Math Homework Angst
I recently spent way too much time on Facebook talking parents off the math homework ledge. I'm sure you've see these posts from time to time - the ones where they post a picture of a math problem
and ask, "Can you believe what they want my kid to do?"
Our scene opens with Johnny at the kitchen table. It plays out something like this:
Johnny: Can you help me with this math? I don't get it.
Parent takes 3 or 4 or 12 deep breaths, looks at the problem, tries to sound calm but remembers their own math challenges as a kid.
Parent: I don't know. This isn't the way I learned math. I can't believe they're asking you to do this! It's ridiculous.
Johnny starts crying and Parent vows to call the school in the morning, complaining about these impossible problems. Scene fades...
But wait! Before you make that phone call, here's why this teacher gives math homework...
Math homework is a way for the teacher to see what Johnny can do independently, as he practices a math concept.
I've always let parents know that the math homework I send home is to either finish something we've done in class, or as practice to see who is independent with that skill or concept.
I assure students and parents that this homework should take no more than 20 minutes of their time - if they understand it.
Challenge: "I don't get it."
But what if Johnny is stumped? I want to know where he is getting stuck, beyond, "I don't get it." Students tend to throw out the entire problem, not thinking about what they
I'm not denying he's stuck, but this holds him accountable for thinking about the problem and analyzing where he's getting lost in the process. Being able to to verbalize what he knows, or thinks he
knows, lets me figure out how to help him. This also builds self-confidence in his problem-solving abilities.
Clearing up some parent misconceptions... "Why don't they have a math book?"
Frankly, I've always preferred to not have one. The reason being, textbooks walk kids through every step of the problem, cutting out any trial-and-error discovery, and (generally) only showing one
way to get a solution, when in actuality, there might be several approaches to a single solution.
Back in the day... (see how I refrained from saying "the olden days?") we were taught do this step first, then that step next, and maybe one more step until, voilà, you get your answer - and that's
how the math magic happens. However, that "magic" is strictly algorithm-based.
Students need a lot of hands-on opportunities to develop a concrete understanding of what the manipulation of those numbers actually means. There needs to be time to let students explore and make
connections to what they've already learned. It's the first step in learning a new math concept. To add to the problem, textbooks frequently skip that step, jumping straight into the algorithm.
"Whatever happened to the worksheets with story problems and timed tests?"
Rest assured, there are still problems to solve. The format looks a little different from story problems of old. As an example, back in the day... (well, there I go again!) kids would have a
worksheet of addition facts, followed by a page of story problems where they had to add, or occasionally subtract something to find the answer. The problems generally stayed true to the current math
Now, "story problems" look more like problems adults might solve everyday, using the tools (addition, subtraction, multiplication and division) in their math toolboxes. For example, you're planting a
garden in a 10'x14' space. How much topsoil will you need? How much fencing do you need to surround the garden, keeping in mind there is a 4' gate on one end. Fencing is $4.50 per foot and the gate
is $12.95, but on sale at 30% off.
I would consider this one problem with a central theme, something kids should be able to break apart to solve. Just look at all the different types of math that are included: area, perimeter,
multiplication, subtraction, addition, and finding percentages.
Which brings me to timed tests...
The purpose of timed tests is to show students how quickly they can recall math facts. I believe
math games
are a better way to practice math facts than timed tests. Games, such as one of my favorites,
PEMDAS Bowling
, give students a purpose for using their facts. Quick recall improves with purposeful use of their facts, whether it's playing games or solving multi-step problems like the one above.
I tell my students, if they don't have quick recall of math facts, it slows them down. It's like needing a crutch when you have a sprain or a broken leg. The crutch helps you get around better, but
ultimately, it slows you down. So please, use your crutch (a table, fingers, or a calculator) until you don't need it anymore. It takes a lot of pressure off of them and clears the way so we can get
down to the business of problem solving.
"How can we help our kids with their math homework when we don't understand it ourselves?"
I suggest one of the reasons parents struggle helping their kids with math homework is because they did not get a chance to develop a concrete understanding of what those algorithms meant, when they
learned it. This was and is especially true for spatial learners. We are expecting kids to be able to explain what they're doing and why, thus creating a disconnect, when parents can't help.
Have you ever heard parents lament, "I was never good at math?" It doesn't help their child to reinforce their uncertainty or fear of math. In fact, just the opposite happens. It gives kids
permission to "not be good at math" because their parents weren't.
Just PLEASE don't ever let them know you'd rather have a root canal than figure out this homework with them. Instead, check out a possible solution to your dilemma, below.
Instead of commiserating over the futility of math homework, try this:
Sit down with your child, and ask questions such as, "What do we know about this problem that might help us solve it?" Work along side him, talking about what might and might not work.
If he still doesn't understand, help him verbalize what he does understand and where he's getting stuck. Have him write that down to turn in. It's much more useful feedback for the teacher than a
note from mom or dad saying, "We don't understand the homework." Before long, he'll be teaching you new math tricks.
Above all, please remember, these are our future engineers, scientists, architects, pilots, doctors, computer programers, builders, musicians and artists. They are going to know and be able to do so
much more than our generation, and will learn it much more quickly than we can, if they don't have the parent filter in their heads saying, "this is too hard."
Good luck! Be sure to check out some of my math games on the right, designed to give that all-important math practice! I leave you with one of my favorite quotes.
"Do not worry about your difficulties in Mathematics. I can assure you, mine are still greater." Albert Einstein | {"url":"https://www.desktoplearningadventures.com/2017/01/how-to-help-diffuse-math-homework-angst.html","timestamp":"2024-11-07T11:51:38Z","content_type":"application/xhtml+xml","content_length":"77635","record_id":"<urn:uuid:9ec08006-7ee0-427b-8e32-62dec68e0f13>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00681.warc.gz"} |
How to solve distance word problems using linear systems
how to solve distance word problems using linear systems Related topics: pre-algebra worksheet
graphing calculater
sum and difference of cubes worksheets
how to put a factor on a ti 84 plus
prentice hall algebra 1 textbook answers
how do you add scientific notation
"sample tests" maths volume area
dividing decimal equation
fifth grade math and science worksheets free
answers to children's algebra
positive and negative integer worksheets
solution in special factoring in math binomials
Author Message
siskmote Posted: Monday 16th of Aug 11:37
Hi there I have almost taken the decision to hire a math private teacher, because I've been having a lot of stress due to math homework lately . Every day when I come home from school I
waste all my time with my algebra homework, and in the end I still seem to be getting the wrong answers. However I'm also not certain if a math private teacher is worth it, since it's so
expensive , and who knows, maybe it's not even that good . Does anyone know anything about how to solve distance word problems using linear systems that can help me? Or maybe some
explanations about angle complements,least common denominator or point-slope? Any ideas will be valued.
Back to top
AllejHat Posted: Wednesday 18th of Aug 08:53
Algebrator is what you are looking for. You can use this to enter questions pertaining to any math topic and it will give you a step-by-step answer to it. Try out this software to find
answers to questions in converting decimals and see if you get them done faster.
Back to top
fveingal Posted: Thursday 19th of Aug 07:50
I must agree that Algebrator is a cool thing and the best software of this kind you can get . I was so surprised when after weeks of anger I simply typed in roots and that was the end of
my difficulties with math. It's also so good that you can use the software for any level: I have been using it for several years now, I used it in Remedial Algebra and in Intermediate
algebra too ! Just try it and see it for yourself!
From: Earth
Back to top
MoveliVadBoy Posted: Friday 20th of Aug 08:48
I just pray this thing isn’t very complex . I am not so good with the computer stuff. Can I get the product details , so I know what it has to offer?
Mobile, AL
Back to top
Vild Posted: Saturday 21st of Aug 10:54
Sure, why not! You can grab a copy of the program from https://softmath.com/about-algebra-help.html. You are bound to get addicted to it. Best of Luck.
Back to top | {"url":"https://softmath.com/algebra-software/exponential-equations/how-to-solve-distance-word.html","timestamp":"2024-11-11T13:27:36Z","content_type":"text/html","content_length":"41402","record_id":"<urn:uuid:816ea584-5879-46bf-a64b-72eef0649bd6>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00133.warc.gz"} |
Vertical transport properties of weakly-coupled Ac-driven GaAs/AlAs superlattices - Rare & Special e-Zone
The vertical transport properties of ac-driven weakly-coupled superlattices have been investigated experimentally in this thesis. These include the frequency-locking phenomena, the ac-induced
generation and annihilation of self-sustained current oscillations (SSCOs), and the appearance of current steps in dynamic voltage bands (DVBs) which exhibit the period-adding bifurcations.
In the study of the frequency-locking phenomena, it has been found that an ac-driven SSCO is frequency-locked into f
/n when the external ac frequency f
equals to nf
/m, where f
is the intrinsic frequency of the free SSCO and both n and m are integers with no common factors. In addition, a locking region for f
exists in the vicinity of nf
/m. As long as f
lies in this locking region, the frequency-locking into f
/n is observed. These phenomena are first explained qualitatively in terms of the limit cycle theory. Then a numerical simulation is performed using the discrete drift model. An excellent agreement
between the experimental observations and the numerical results is obtained.
The ac-induced generation of self-sustained current oscillations is manifested by the expansion of dynamical voltage bands upon applying an external ac signal. The width of a DVB increases with the
increase of the ac amplitude while the ac frequencies are not much higher than the intrinsic ones. Quantitatively, the ac-induced expansion of DVBs is successfully simulated using the discrete drift
model. Qualitatively, a discussion of related phase portraits in the phase space offers a straightforward way to account for this DVB expansion. Furthermore, when an ac signal with frequency much
higher than the intrinsic one is applied, the width of a DVB shrinks with increasing ac frequencies, indicating the annihilation of SSCOs. The annihilation of SSCOs at high ac frequencies is ascribed
to the ac-induced localization of domain walls.
A series of current steps are observed in the dc current-voltage characteristics of DVBs under certain external ac driven conditions. On each current step the ac response is frequency locked to f
/ n, where n is an integer larger than 1. When the applied ac frequency is equal to m X 3.6 KHz, where m is an integer and 3.6 KHz is close to the intrinsic frequency of free SSCOs within DVBs, these
frequency-locked current steps are found to exhibit the so-called period-adding bifurcations represented by an arithmetical series of n = ml + 1, where l is an integer increasing from 1. The
formation of the current steps in DVBs is explained based on the particular waveforms of the temporal current traces measured on current steps.
The results in this thesis not only demonstrate the richness of the nonlinear dynamics of weakly-coupled superlattices, but also reveal in details the crucial role of an external ac signal in the
vertical transport properties of superlattices. | {"url":"https://lbezone.hkust.edu.hk/rse/?p=11505","timestamp":"2024-11-09T03:21:53Z","content_type":"text/html","content_length":"56766","record_id":"<urn:uuid:b2f8c038-a8ee-47a6-9ca7-3edaf09de219>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00643.warc.gz"} |
PID Basics in a dsPIC® DSC Microcontroller
Last modified by Microchip on 2024/02/16 11:17
PID Introduction
Proportional, Integral and Derivative (PID) values can be used to control power conversion systems using the feedback control loop. Before discussing the details of PID control, Figure 1 shows a
functional block diagram of a home heating temperature control system using a feedback control loop.
Figure 1
The plant is the physical heating and cooling part of the system. The setpoint measures variables within the plant, the error is the difference between the response of the plant and the desired
response, i.e. setpoint. For example, the current temperature is 65 degrees, the thermostat is set to 70 degrees. The resulting error = setpoint - current = 70 - 65 = 5 degrees. The controller is the
most significant element of the control system. The controller is responsible for several tasks and is the link that connects all of the physical and non-physical elements. It measures the output
signal of the Plant’s Sensors, processes the signal, and then derives an error based on the signal measurement and the set point. Once the sensor data has been collected and processed, the result
must be used to find PID values, which then must be sent out to the plant for error correction. The rate at which all of this happens is dependent upon the controller’s processing power. This may or
may not be an issue depending on the response characteristic of the plant. A temperature control system is much more forgiving on a controller’s processing capabilities than a motor control system.
PID as a Power Converter Controller
Figure 2 shows the high-level block diagram of the power converter system using PID. The first diagram can be found on the "Developing Digital System Transfer Functions for a Power Converter" page.
The second diagram collapses the two blocks, Pulse Width Modulation (PWM) Generator and Power Stage, into one only, called process (or plant, from the previous example).
Figure 2
PID in dsPIC® DSC
In dsPIC^® Digital Signal Controller (DSC), the PID controller is made up of three basic blocks:
• Proportional: the output is proportional to the input.
• Integral: the output is the integral of the input.
• Derivative: the output is the derivative of the input.
Although there are several ways these blocks can be interconnected, we will investigate the most traditional technique where the three blocks are connected in parallel as in Figure 3.
Figure 3
The PID is inserted in the block diagram representing a system. The goal of the PID block is to generate an output u(t) that drives the system we have at hand (the process or plant) so that its
output y(t) matches a reference signal x(t). The input to the PID is the error between the reference signal (ideal or desired behavior of the plant) and the real output. The target is to operate in
such a way as to get an error that is as close to zero as possible, using the feedback control loop.
PID Equation
PID equations are shown in Figure 4. The new PID output value (e.g., the new active PWM period value) is computed as the sum of the previous time value plus the correction term that takes into
consideration the three values of error (i.e., the current value, the value of the previous sampling period, and the error value two sampling periods prior). The PID control loop weights them with
the coefficient that has been previously computed and eventually calculates the future value of the duty cycle.
Figure 4
On This Page
Microchip Support
Query Microchip Forums and get your questions answered by our community:
Microchip Forums
AVR Freaks Forums
If you need to work with Microchip Support staff directly, you can submit a technical support case. Keep in mind that many questions can be answered through our self-help resources, so this may not
be your speediest option. | {"url":"https://developerhelp.microchip.com/xwiki/bin/view/applications/power/dspic-33f-power-converter/pid-basics/","timestamp":"2024-11-09T16:28:28Z","content_type":"application/xhtml+xml","content_length":"51479","record_id":"<urn:uuid:60806966-7278-4f3c-a6f7-66b0dc1acc73>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00154.warc.gz"} |
279 research outputs found
We investigate the condensate density and the condensate fraction of conduction electrons in weak-coupling superconductors by using the BCS theory and the concept of off-diagonal-long-range-order. We
discuss the analytical formula of the zero-temperature condensate density of Cooper pairs as a function of Debye frequency and energy gap, and calculate the condensate fraction for some metals. We
study the density of Cooper pairs also at finite temperature showing its connection with the gap order parameter and the effects of the electron-phonon coupling. Finally, we analyze similarities and
differences between superconductors and ultracold Fermi atoms in the determination of their condensate density by using the BCS theory.Comment: 14 pages, 1 figure, 1 table, to be published in
'Fermions: Flavors, Properties, and Types' (Nova Science Publishers, New York)
In this report we investigate the macroscopic quantum tunneling of a Bose condensate falling under gravity and scattering on a Gaussian barrier that could model a mirror of far-detuned sheet of
light. We analyze the effect of the inter-atomic interaction and that of a transverse confining potential. We show that the quantum tunneling can be quasi-periodic and in this way one could generate
coherent Bose condensed atomic pulses. In the second part of the report, we discuss an effective 1D time-dependent non-polynomial nonlinear Schrodinger equation (NPSE), which describes cigar-shaped
condensates. NPSE is obtained from the 3D Gross-Pitaevskii equation by using a variational approach. We find that NPSE gives much more accurate results than all other effective 1D equations recently
proposed.Comment: 9 pages, 5 figures, report for the X International Laser Physics Workshop, Seminar on Bose-Einstein Condensation of Trapped Atoms, Moscow, July 3-7, 200
We study the classical and quantum perturbation theory for two non--resonant oscillators coupled by a nonlinear quartic interaction. In particular we analyze the question of quantum corrections to
the torus quantization of the classical perturbation theory (semiclassical mechanics). We obtain up to the second order of perturbation theory an explicit analytical formula for the quantum energy
levels, which is the semiclassical one plus quantum corrections. We compare the "exact" quantum levels obtained numerically to the semiclassical levels studying also the effects of quantum
corrections.Comment: 11 pages, Latex, no figures, to be published in Meccanic
We investigate the effect of a surface plasmon resonance on Goos-Hanchen and Imbert-Fedorov spatial and angular shifts in the reflection of a light beam by considering a three-layer system made of
glass, gold and air. We calculate these spatial and angular shifts as a function of the incidence angle showing that they are strongly enhanced in correspondence of the resonant angle. In particular,
we find giant spatial and angular Goos-Hanchen shits for the p-wave light close to the plasmon resonance. We also predict a similar, but less pronounced, resonant effect on spatial and angular
Imbert-Fedorov shifts for both s-wave and p-wave light.Comment: 4 pages, 4 figures, accepted for publication in Phys. Rev.
We study the stability of a scalar inflaton field and analyze its point attractors in the phase space. We show that the value of the inflaton field in the vacuum is a bifurcation parameter and prove
the possible existence of a limit cycle by using analytical and numerical arguments.Comment: Latex, 11 pages, 3 figures (available upon request), to be published in Modern Physics Letters
As pointed out by the authors of the comment quant-ph/9712046, in our paper quant-ph/9712030 we studied in detail the metastability of a Bose-Einstein Condensate (BEC) confined in an harmonic trap
with zero-range interaction. As well known, the BEC with attractive zero-range interaction is not stable but can be metastable. In our paper we analyzed the role of dimensionality for the
metastability of the BEC with attractive and repulsive interaction.Comment: 4 pages, Latex, no figure
We analyze a spatially homogeneous SU(2) Yang-Mills-Higgs system both in classical and quantum mechanics. By using the Toda criterion of the Gaussian curvature we find a classical chaos-order
transition as a function of the Higgs vacuum, the Yang-Mills coupling constant and the energy of the system. Then, we study the nearest-neighbour spacing distribution of the energy levels, which
shows a Wigner-Poisson transition by increasing the value of the Higgs field in the vacuum. This transition is a clear quantum signature of the classical chaos-order transition of the system.Comment:
Latex, 10 pages, 1 table, talk to the VIII International Conference on Symmetry Methods in Physics, 27 July -- 2 August 1997, Joint Institute for Nuclear Physics, Dubna (Russia
We investigate the critical temperature of an interacting Bose gas confined in a trap described by a generic isotropic power-law potential. We compare the results with respect to the non-interacting
case. In particular, we derive an analytical formula for the shift of the critical temperature holding to first order in the scattering length. We show that this shift scales as $N^{n\over 3(n+2)}$,
where $N$ is the number of Bosons and $n$ is the exponent of the power-law potential. Moreover, the sign of the shift critically depends on the power-law exponent $n$. Finally, we find that the shift
of the critical temperature due to finite-size effects vanishes as $N^{-{2n\over 3(n+2)}}$.Comment: 9 pages, 1 figure, 1 table, to be published in Int. J. Mod. Phys. B, related papers can be found at
We study the classical chaos--order transition in the spatially homogenous SU(2) Yang--Mills--Higgs system by using a quantal analog of Chirikov's resonance overlap criterion. We obtain an analytical
estimation of the range of parameters for which there is chaos suppression.Comment: LaTex, 10 pages, to be published in Phys. Rev. | {"url":"https://core.ac.uk/search/?q=authors%3A(Salasnich%2C%20Luca)","timestamp":"2024-11-14T02:42:01Z","content_type":"text/html","content_length":"138452","record_id":"<urn:uuid:646da17c-b1fc-4893-9b3c-4e6bfa2d5f7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00358.warc.gz"} |
Long-run Inflation Targets | RDP 9806: Policy Rules for Open Economies
RDP 9806: Policy Rules for Open Economies 6. Long-run Inflation Targets
This section presents the good news about inflation targets. The problems described in the previous section can be overcome by modifying the target variable. In light of earlier results, a natural
modification is to target long-run inflation, π^*.
6.1 The Policies
Strict long-run inflation targeting is defined as the policy that minimises the variance of π^* = π + γe[−1]. To see its implications, note that Equation (2) can be rewritten as:
This equation is the same as a closed-economy Phillips curve, except that π^* replaces π. The exchange rate is eliminated, so policy affects π^* only through the output channel. Thus policy affects π
^* with a two-period lag, and strict targeting implies:
In contrast to a two-period-ahead target for total inflation, Equation (13) defines a unique policy.
There are two related motivations for targeting π^* rather than π. First, since π^* is not influenced by the exchange rate, policy uses only the output channel to control inflation. This avoids the
exchange-rate ‘whiplashing’ discussed in the previous section. Second, as discussed in Section 3, π^* gives the level of inflation with transitory exchange-rate effects removed. π^* targeting keeps
underlying inflation on track.
In addition to strict π^* targeting, I consider gradual adjustment of π^*:
This rule is the similar to the gradual-adjustment rule that is optimal in a closed economy. Policy adjusts Eπ^*[+][2] part of the way to the target from Eπ^*[+1], which it takes as given. The
motivation for adjusting slowly is to smooth the path of output.
In practice, countries with inflation targets do not formally adjust for exchange rates in the way suggested here. However, adjustments may occur implicitly. For example, a central-bank economist
once told me that inflation was below his country's target, but that this was desirable because the currency was temporarily strong, and policy needed to ‘leave room’ for the effects of depreciation.
Keeping inflation below its official target when the exchange rate is strong is similar to targeting π^*.
6.2 Results
To examine π^* targets formally, I substitute Equations (12) and (1) into condition Equation (14). This leads to the instrument rule implied by π^* targets:
where w′= (β / (β +δ), a′ = (1 − q + λ) / (β +δ), b′ = (1−q) / [α(β +δ)]
This equation includes the same variables as the optimal rule in Section 3, but the coefficients are different. The MCI weights are given exactly by the relative sizes of β and δ; for base
parameters, w'=0.75. The coefficients on y and π^* depend on the adjustment speed q.
Appendix B calculates the variances of output and inflation under π^* targeting. Figure 3 plots the results for q between zero and one. The case of strict π^* targeting corresponds to the Northwest
corner of the curve. For comparison, Figure 3 also plots the set of efficient policies from Figure 1.
The Figure shows that targeting π^* produces more stable output than targeting π. This is true even for strict π^* targets, which produce an output variance of 8.3, compared to 25.8 for π targets.
Figure 4 shows the dynamic effects of an inflation shock under π^* targets, and confirms that this policy avoids oscillations in output. Strict π^* targeting is, however, moderately inefficient.
There is an efficient instrument rule that produces an output variance of 8.3 and an inflation variance of 1.2. Strict π^* targets produce the same output variance with an inflation variance of 1.9.
Figure 4: Strict π* Targets – Responses to an Inflation Shock
As the parameter q is raised, so adjustment becomes slower, we move Southeast on the frontier defined by π^* targeting. This frontier quickly moves close to the efficient frontier. Thus, as long as
policy-makers put a non-negligible weight on output variance, there is a version of π^* targeting that closely approximates the optimal policy. For example, for equal weights on inflation and output
variances, the optimal policy has an MCI weight w of 0.70, and output and π^* coefficients of 1.35 and 1.06. For a π^* target with q = 0.66, the corresponding numbers are 0.75, 1.43, and 1.08. The
variances of output and inflation are 2.50 and 2.44 under the optimal policy and 2.48 and 2.48 under π^* targeting. | {"url":"https://www.rba.gov.au/publications/rdp/1998/1998-06/long-run-inflation-targets.html","timestamp":"2024-11-09T12:34:48Z","content_type":"application/xhtml+xml","content_length":"32477","record_id":"<urn:uuid:05a13a28-1629-40d3-923f-03dc4ee9549c>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00288.warc.gz"} |
On the boundary classification of $\Lambda$-Wright–Fisher processes with frequency-dependent selection
Annales Henri Lebesgue, Volume 6 (2023), pp. 493-539.
Keywords $\Lambda $-Wright–Fisher process, selection, $\Lambda $-coalescent, fragmentation-coalescence, duality, explosion, coming down from infinity, entrance boundary, regular boundary,
continuous-time Markov chains
We construct extensions of the pure-jump $\Lambda$-Wright–Fisher processes with frequency-dependent selection ($\Lambda$-WF with selection) with different behaviors at their boundary $1$. Those
processes satisfy some duality relationships with the block counting process of simple exchangeable fragmentation-coagulation processes (EFC processes). One-to-one correspondences are established
between the nature of the boundaries $1$ and $\infty$ of the processes involved. They provide new information on these two classes of processes. Sufficient conditions are provided for boundary $1$ to
be an exit boundary or an entrance boundary. When the coalescence measure $\Lambda$ and the selection mechanism verify some regular variation properties, conditions are found in order that the
extended $\Lambda$-WF process with selection makes excursions out from the boundary $1$ before getting absorbed at $0$. In this case, $1$ is a transient regular reflecting boundary. This corresponds
to a new phenomenon for the deleterious allele, which can be carried by the whole population for a set of times of zero Lebesgue measure, before vanishing in finite time almost surely.
[AN04] Branching Processes, Dover Publications, 2004 (reprint of the 1972 original [Springer, New York; MR0373040]) | Zbl
[Ber96] Lévy Processes, Cambridge Tracts in Mathematics, 121, Cambridge University Press, 1996 | Zbl
[Ber04] Exchangeable fragmentation-coalescence processes and their equilibrium measures, Electron. J. Probab., Volume 9 (2004), pp. 770-824 | MR | Zbl
[BLG03] Stochastic flows associated with coalescent processes, Probab. Theory Relat. Fields, Volume 126 (2003) no. 2, pp. 261-288 | DOI | MR | Zbl
[BLG05] Stochastic flows associated to coalescent processes. II. Stochastic differential equations, Ann. Inst. Henri Poincaré, Probab. Stat., Volume 41 (2005) no. 3, pp. 307-333 | DOI | Numdam | MR |
[BLW16] The common ancestor type distribution of a $\Lambda$-Wright Fisher process with selection and mutation, Electron. Commun. Probab., Volume 21 (2016), 59, 16 pages | MR | Zbl
[BOP20] Branching processes with pairwise interactions (2020) (https://arxiv.org/abs/2009.11820v1)
[BP15] The $\Lambda$-lookdown model with selection, Stochastic Processes Appl., Volume 125 (2015) no. 3, pp. 1089-1126 | MR | Zbl
[CR84] A duality relation for entrance and exit laws for Markov processes, Stochastic Processes Appl., Volume 16 (1984), pp. 141-156 | MR | Zbl
[CS18] Duality and fixation in $\Xi$-Wright-Fisher processes with frequency-dependent selection, Ann. Appl. Probab., Volume 28 (2018) no. 1, pp. 250-284 | MR | Zbl
[DK99] Particle representations for measure-valued population models, Ann. Probab., Volume 27 (1999) no. 1, pp. 166-205 | MR | Zbl
[DL12] Stochastic equations, flows and measure-valued processes, Ann. Probab., Volume 40 (2012) no. 2, pp. 813-857 | MR | Zbl
[Don84] A note on some results of Schuh, J. Appl. Probab., Volume 21 (1984) no. 1, pp. 192-196 | DOI | MR | Zbl
[EG09] A coalescent dual process in a Moran model with genic selection, Theor. Popul. Biol., Volume 75 (2009) no. 4, pp. 320-330 | DOI | Zbl
[EK86] Markov processes. Characterization and convergence, Wiley Series in Probability and Mathematical Statistics: Probability and mathematical statistics, John Wiley & Sons, 1986 | DOI | Zbl
[Eth12] Some Mathematical Models from Population Genetics: École d’Été de Probabilités de Saint-Flour XXXIX-2009, Lecture Notes in Mathematics, Springer, 2012 | Zbl
[Fou13] The impact of selection in the $\Lambda$-Wright-Fisher model, Electron. Commun. Probab., Volume 18 (2013), 72, 10 pages | MR | Zbl
[Fou19] Continuous-state branching processes with competition: duality and reflection at infinity, Electron. J. Probab., Volume 24 (2019), 33, 38 pages | MR | Zbl
[Fou22] A phase transition in the coming down from infinity of simple exchangeable coalescence-fragmentation processes, Ann. Appl. Probab., Volume 32 (2022) no. 1, pp. 632-664 | MR | Zbl
[FZ22] On the explosion of the number of fragments in simple exchangeable fragmentation-coagulation processes, Ann. Inst. Henri Poincaré, Probab. Stat, Volume 58 (2022) no. 2, pp. 1182-1207 | MR |
[GCPP21] Branching processes with interactions: the subcritical cooperative regime, Adv. Appl. Probab., Volume 53 (2021) no. 1, pp. 251-278 | DOI | MR | Zbl
[Gri14] The $\Lambda$-Fleming–Viot process and a connection with Wright-Fisher diffusion, Adv. Appl. Probab., Volume 46 (2014) no. 4, pp. 1009-1035 | DOI | MR | Zbl
[Har63] The Theory of Branching Processes, Grundlehren der Mathematischen Wissenschaften, 119, Springer, 1963 | DOI | MR | Zbl
[HP20] Markov branching processes with disasters: Extinction, survival and duality to $p$-jump processes, Stochastic Processes Appl., Volume 130 (2020) no. 4, pp. 2488-2518 | DOI | MR | Zbl
[KN97] Ancestral processes with selection, Theor. Popul. Biol., Volume 51 (1997) no. 3, pp. 210-237 | DOI | Zbl
[Kol11] Markov processes, semigroups and generators, de Gruyter Studies in Mathematics, 38, Walter de Gruyter, 2011 | Zbl
[KPRS17] A phase transition in excursions from infinity of the fast fragmentation-coalescence process, Ann. Probab., Volume 45 (2017) no. 6A, pp. 3829-3849 | MR | Zbl
[KT81] A second course in stochastic processes, Academic Press Inc., 1981 | Zbl
[LP12] Strong solutions of jump-type stochastic equations, Electron. Commun. Probab., Volume 17 (2012) no. 13, 33 | MR | Zbl
[LT15] Second-order asymptotics for the block counting process in a class of regularly varying Lambda-coalescents, Ann. Probab., Volume 43 (2015) no. 3, pp. 1419-1455 | Zbl
[Sch00] A necessary and sufficient condition for the $\Lambda$-coalescent to come down from infinity, Electron. Commun. Probab., Volume 5 (2000), pp. 1-11 | MR | Zbl | {"url":"https://ahl.centre-mersenne.org/articles/10.5802/ahl.170/","timestamp":"2024-11-13T17:36:18Z","content_type":"text/html","content_length":"46010","record_id":"<urn:uuid:e6a2afab-73c2-4c90-bceb-b3bafa361f0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00680.warc.gz"} |
Strike Rate Calculator - Savvy Calculator
Strike Rate Calculator
About Strike Rate Calculator (Formula)
The Strike Rate Calculator is an essential tool in cricket to determine the efficiency of a batsman by calculating how many runs they score per 100 balls faced. It is a crucial statistic in
limited-overs cricket, where quick scoring is important. A higher strike rate indicates that a batsman can score faster, making them valuable in formats like T20s and One Day Internationals.
The formula to calculate the strike rate is:
Strike Rate (SR) = (Runs Scored / Balls Faced) * 100
• Runs Scored is the total number of runs the batsman has made.
• Balls Faced is the total number of balls the batsman has faced during their innings.
How to Use
1. Gather Data: Know the total runs the batsman has scored and the number of balls they faced in their innings.
2. Input Values: Enter the total runs and balls faced into the Strike Rate Calculator.
3. Calculate: The calculator will use the formula to compute the strike rate, giving you the result as runs per 100 balls faced.
If a batsman scores 75 runs off 50 balls, the strike rate can be calculated as follows:
Strike Rate (SR) = (75 / 50) * 100 = 150
This means the batsman has a strike rate of 150, meaning they score 150 runs per 100 balls faced.
1. What is a strike rate in cricket?
Strike rate is a measure of how quickly a batsman scores runs, calculated as the number of runs per 100 balls faced.
2. Why is strike rate important in limited-overs cricket?
In limited-overs cricket, scoring quickly is important due to the limited number of deliveries. A higher strike rate indicates a batsman’s ability to score quickly, which is crucial in T20 and
ODI formats.
3. What is a good strike rate in T20 cricket?
A strike rate of 130 or higher is considered good in T20 cricket, as the format emphasizes quick scoring.
4. How is strike rate different from batting average?
Batting average measures consistency (total runs divided by dismissals), while strike rate measures scoring speed (runs per 100 balls faced).
5. Can a bowler have a strike rate?
Yes, in bowling, the strike rate is the number of balls bowled per wicket taken. However, the term is most commonly associated with batting.
6. Is strike rate important in Test cricket?
While strike rate is less crucial in Test cricket compared to limited-overs formats, it still reflects how quickly a batsman scores, especially in certain match situations.
7. What happens if a batsman faces fewer than 100 balls?
The strike rate formula adjusts to reflect the runs scored per 100 balls, regardless of how many balls the batsman actually faced.
8. What is a high strike rate for an opening batsman?
For an opening batsman, a strike rate of 90-100 in ODIs is considered good, as they are expected to balance scoring with preserving wickets.
9. How does strike rate affect team strategy?
Teams often select aggressive batsmen with high strike rates in limited-overs formats to maintain a fast scoring rate and set competitive totals.
10. Can a batsman’s strike rate change during an innings?
Yes, as a batsman scores runs and faces more balls, their strike rate will change based on their performance.
11. How is a strike rate over 100 possible?
A strike rate over 100 means the batsman is scoring more than one run per ball, often achieved with frequent boundaries.
12. Is it possible to have a strike rate of 0?
Yes, if a batsman faces balls but does not score any runs, their strike rate would be 0.
13. How can a batsman improve their strike rate?
A batsman can improve their strike rate by playing more aggressively, focusing on hitting boundaries, and rotating the strike by taking singles.
14. What is the highest strike rate ever recorded?
Some players, particularly in T20 cricket, have achieved strike rates exceeding 300 in short, explosive innings.
15. What does a strike rate of 200 mean?
A strike rate of 200 means the batsman scores 200 runs for every 100 balls faced, or 2 runs per ball on average.
16. Why is a high strike rate valuable in the death overs?
In the final overs of a limited-overs game, a high strike rate helps the team score rapidly, maximizing runs in the limited remaining deliveries.
17. Is strike rate a factor in player selection?
Yes, especially in limited-overs formats, selectors often look for players with high strike rates to ensure quick scoring.
18. How do strike rates differ between formats?
In T20s, strike rates of 130+ are common, while in ODIs, rates around 90-100 are good. In Test cricket, strike rates tend to be lower due to the focus on endurance and technique.
19. Does strike rate matter in low-scoring matches?
Yes, even in low-scoring matches, a higher strike rate can make a significant difference by allowing a team to reach a defendable total quickly.
20. Can a batsman have a strike rate without getting out?
Yes, a strike rate is calculated based on runs scored and balls faced, regardless of whether the batsman is dismissed.
The Strike Rate Calculator is a valuable tool for understanding a batsman’s scoring efficiency. It is especially important in limited-overs formats where quick scoring is crucial for success. By
using the formula for strike rate, cricket enthusiasts, players, and analysts can assess a player’s performance and contribution to the game more effectively. Whether you’re playing or watching,
understanding strike rates can enhance your appreciation of the game’s dynamics.
Leave a Comment | {"url":"https://savvycalculator.com/strike-rate-calculator","timestamp":"2024-11-08T08:26:53Z","content_type":"text/html","content_length":"147645","record_id":"<urn:uuid:99fbe824-ff03-4539-b4e2-298dd54a5d66>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00391.warc.gz"} |
Fall 2018
Mandelbulb set
The Mandelbulb is a set generated similarly as the Mandelbrot set in two dimensions. These pictures were generated with
Mandelbulb3D version 1.9
running on Mac OS 10 (the original Windows program was packaged by the Mandelbubl3D developers using Wine and runs nicely on the Mac (for some reason, it does not display on my macbook (and support
logs seem to point to graphics card issues), but the following pictures were rendered on the Imac using that program)). Click on a picture to see a 7680 x 5760 resolution version. There is a nice
citation (used as a logo) on that MB3D website:
"Mighty is geometry; joined with art, resistless."
As pointed out in
this blog of a nameless math teacher
, the quote is often attributed to Euripides (480-406 BC) but it has been deformed. Euripides wrote in his work ``Hecuba":
"Numbers are a fearful thing, and joined to craft a desperate foe".
[ We can read that Euripides was a tragic poet, whose work tragically has been mostly lost (to 80 percent) and (also tragic) was mocked by comic poets. Aristotle called him the "the most tragic of
poets". It would be only consistent that also his work has been tragically deformed and distorted by future generations. ] If the etymology assessment of the blog is correct, then the quote deformed
``numbers" to ``geometry", ``craft" to ``art" and ``fearful" to ``mighty".
In this quotation
, the writer attributes the citation to Morris Kline "Mathematics for the Non-mathematician (1967)". Indeed, that is
where it appears in 1967 [PNG]
. The unknown Math teacher furthermore tells in that blog:
This just proves how our culture has shifted. Today, it's all about the one-liner, the punchline. But most worrisome is the lean toward the quick and easily digestible without contextualization or
I agree that one should give attributions where due. I myself do not know the work of Euripides and definitely can not answer whether the quote has indeed appeared in some of his work. Morris Kline
(1908-1992) was a legendary mathematician and writer. Whether he overreached with that quote is not clear. [His assessments about math education from the 70ies have been vastly confirmed (like
An example: Why Johnny can't add: the failure of new math
or (even better)
Why the Professor Can't teach
.] Reverberating maybe bit with Morris Kline, I don't quite agree with the Hetpadecagon blog critique about the one-liner. I think that
we need the punchline. We need the easily digestible in education. These can be an entry points.
The Mandelbulb is of that kind. There is not much contextualization, nor much scholarship yet about this set, simply because there is nothing known about it. In
this collection of theorems [PDF]
there is only one theorem about the Mandelbulb set: and the theorem tells "there is no theorem about the Mandelbulb set!". The Mandelbulb is not only beautiful, it has huge potential for new
mathematics which currently does not appear to exist yet. A starter would be the analogue of the
Douady-Hubbard theorem
about the connectedness of the Mandelbrot set. And what is more exciting about having some objects we do know nothing about? A bit about the history of the Mandelbulb is given
in this talk [PDF] | {"url":"https://people.math.harvard.edu/~knill/teaching/math22a2018/exhibits/mandelbulb/index.html","timestamp":"2024-11-14T05:14:05Z","content_type":"application/xhtml+xml","content_length":"7435","record_id":"<urn:uuid:946c5a2a-f297-4563-b02d-d97fae9f6d50>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00243.warc.gz"} |
Spring 2019: Cryptography (Math 481/581)
Dr. Armin Straub
Instructor MSPB 313
(251) 460-7262 (please use e-mail whenever possible)
Office hours MWF, 9-10am, 11am-noon, or by appointment
Lecture MWF, 1:25-2:15pm, in MSPB 235
The tentative dates for our two midterm exams are:
Midterm exams Wednesday, February 20
Wednesday, April 3
Final exam Wednesday, May 1 — 1:00-3:00pm
Text Introduction to Cryptography with Coding Theory
by Wade Trappe and Lawrence C. Washington (Prentice Hall, 2nd Ed., 2006)
Online grades USAonline
Syllabus syllabus.pdf
Lecture sketches and homework
To help you study for this class, I am posting lecture sketches. These are not a substitute for your personal lecture notes or coming to class (for instance, lots of details and motivation are not
included in the sketches). I hope that they are useful to you for revisiting the material and for preparing for exams.
After most classes, homework is assigned and posted below.
• You should aim to complete the problems right after class, and before the next class meets.
A 15% penalty applies if homework is submitted after the posted due date.
• Homework is submitted online, and you have an unlimited number of attempts. Only the best score is used for your grade.
Most problems have a random component (which allows you to continue practicing throughout the semester without putting your scores at risk).
• Collect a bonus point for each mathematical typo you find in the lecture notes (that is not yet fixed online), or by reporting mistakes in the homework system. Each bonus point is worth 1%
towards a midterm exam.
The homework system is written by myself in the hope that you find it beneficial. Please help make it as useful as possible by letting me know about any issues!
For more involved calculations, we will explore the open-source free computer algebra system Sage.
If you just want to run a handful quick computations (without saving your work), you can use the text box below.
A convenient way to use Sage more seriously is https://cocalc.com. This free cloud service does not require you to install anything, and you can access your files and computations from any computer
as long as you have internet. To do computations, once you are logged in and inside a project, you will need to create a "Sage notebook" as a new file.
Exams and practice material
There will be two in-class midterm exams and a comprehensive final exam. Notes, books, calculators or computers are not allowed during any of the exams. The exam schedule is posted in the syllabus
and at the top of this page.
The following material will help you prepare for the exams.
• Midterm Exam 1:
• Midterm Exam 2:
• Final Exam:
Quizzes and solutions
1. quiz01.pdf, quiz01-solution.pdf
If you take this class for graduate credit, you need to complete a project. The idea is to gain additional insight into a topic that you are particularly interested in. Some suggestions for projects
are listed further below.
• The outcome of the project should be a short paper (about 4 pages)
□ in which you introduce the topic, and then
□ describe how you explored the topic.
Here, exploring can mean (but is not limited to)
□ computations or visualizations you did in, say, Sage,
□ working out representative examples, or
□ combining different sources to get an overall picture.
• Let me know before spring break which topic you choose for your project.
Each project should have either a computational part (this is a great chance to play with Sage!) or have a more mathematical component. Here are some ideas:
• Compute and investigate the number of Fermat liars and/or strong liars. For instance, a theoretical result states that at most a quarter of the residues can be strong liars. What proportions do
you observe numerically?
• Using frequency analysis (letters, digrams, trigrams and such), can you (more or less automatically) distinguish, say, different languages or maybe even individual authors. This would be a
computational project. The exact focus is up to you.
• What are the periods of LFSRs and LCGs? When are they maximal? Discuss mathematical results in the spirit of what is hinted at in Example 51 (for LCGs) and Example 59 (for LFSRs).
• When we say that a pseudorandom generator should have good statistical properties, what exactly do we mean? What tests do people apply in practice to evaluate pseudorandom generators?
• Go into more detail on the prime number theorem. How is it related to the Riemann zeta function and the Riemann hypothesis (this is advanced math)? What goes into its proof? Explore it
• Discuss finite fields and their classification. This would be a more mathematical project and should include proving basic results on finite fields.
• Introduce RSA-OAEP, which is RSA in randomized form with padding.
• Discuss Frobenius pseudoprimes, which feature in a 1998 primality test by Jon Grantham. You could either include mathematical details, such as proofs, or implement the primality test and
experimentally analyze the failure rate.
You are also very welcome to come up with your own ideas for a project. Please talk to me early with your idea, so I can give you a green light. | {"url":"http://arminstraub.com/teaching/cryptography-spring19","timestamp":"2024-11-13T22:18:16Z","content_type":"text/html","content_length":"19283","record_id":"<urn:uuid:a7572321-704b-4cf1-a2a9-2fc2051a932d>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00103.warc.gz"} |
An improved estimate of self-employment hours for quarterly labor productivity
Labor productivity is a Principal Federal Economic Indicator published by the U.S. Bureau of Labor Statistics (BLS), and its correct measurement is crucial for tracking the sources of U.S. economic
growth. Sometimes, however, quarterly labor productivity is unduly volatile because of large changes in measured self-employment hours. In this article, we discuss this volatility and BLS efforts to
improve labor productivity measures by changing the measurement of hours worked by unincorporated self-employed and unpaid family workers.
Quarterly labor productivity is defined as the ratio of output to hours worked. Data on employee hours come from the BLS Current Employment Statistics (CES) survey, which is an establishment survey.^
1 The CES sample is drawn from the longitudinal database of employer records for business establishments covered by the Unemployment Insurance program. Therefore, the sample does not cover workers
who work for themselves in their own unincorporated businesses or as independent contractors (referred to as unincorporated self-employed workers or proprietors), nor does it cover those who work as
unpaid family workers.^2 To include hours of work for these classes of workers in the total hours measure, the BLS productivity program supplements the CES hours with hours worked by the
unincorporated self-employed and unpaid family workers from the BLS Current Population Survey (CPS), which is a household survey.^3 For simplicity, we henceforth use “self-employed” as shorthand for
unincorporated self-employed and unpaid family workers. Similarly, we use “self-employment” as shorthand for unincorporated self-employment and unpaid family work.
Hours worked by the self-employed can be volatile for two broad reasons. First, these hours can increase or decrease from one period to the next, as people start businesses and shut them down or as
they use self-employment as a bridge between wage-and-salary job spells.^4 Second, because the self-employed constitute such a small share of the labor force, it is harder to measure their hours by
using the CPS.
Chart 1 shows the percent change in quarterly self-employment hours from 2000 to 2019.^5 Self-employment was procyclical over this period, with substantial decreases in self-employment hours during
recessions.^6 In this chart, and in most of the analyses that follow, we begin the series in 2000 because productivity measures are consistently estimated back to 2000 and the current seasonal
adjustment process uses data starting in 2000. We end the series in 2019 because the scale of changes in self-employment during the COVID-19 pandemic dwarfed any earlier visible volatility.^7
Although the series is considerably volatile, the source of that volatility is not obvious—no economic theory beyond business cycle theories predicts sudden quarterly fluctuations in self-employment.
A concern with productivity measurement is that sometimes the volatility in self-employment hours is so outsized that it noticeably affects the measure of total hours, even though self-employment
hours make up only a small share of total hours (about 7.3 percent in the fourth quarter of 2022). Chart 2 shows hours growth for employees, the self-employed, and all workers from the second quarter
of 2000 through the fourth quarter of 2019. In several quarters, spikes in the self-employment hours series substantially affect the hours for all workers, creating a divergence between the hours
growth rate for employees and all workers. For example, in the third quarter of 2005, employee hours grew by 2.5 percent, but hours for all workers grew by only 1.3 percent once a 10.0-percent fall
in self-employment hours was factored in. Similarly, in the fourth quarter of 2014, employee hours grew by 3.4 percent, but hours for all workers grew by 4.6 percent because of a 19.3-percent
increase in self-employment hours.
Chart 3 illustrates another way of looking at the impact of growth in self-employment hours on growth in total hours, comparing the relative percentage-point contributions of employees and the
self-employed to the percent change in hours for all workers. Here, for illustrative purposes, we restrict the data to the first quarter of 2010 through the fourth quarter of 2019. Because
self-employment hours account for such a small share of total hours, we would expect the dots representing hours growth for all workers to be close to the top of the bars measuring the contribution
of employees, and the size of the bars measuring the contribution of the self-employed to be small relative to the bars measuring the contribution of employees. For some data points, however, the
dots are quite far from the end of the employee-contribution bars. For example, there are substantial gaps in the first three quarters of 2019.
Sources of volatility
In a companion working paper (forthcoming), Cindy Michelle Cunningham and Sabrina Wulff Pabilonia investigate the sources of volatility in self-employment, focusing on sources relating to the CPS
sample design that might be addressed to create a smoother series.^8 The authors identify three sources of volatility in the CPS estimate of self-employment, which is measured by a respondent’s
reported class of worker for the job that he or she worked at last week, for which adjustments could feasibly be made to reduce volatility. These sources of volatility include proxy responses,
imputations, and sample rotation.
In the CPS, one household member age 15 years and older often provides labor force information on behalf of other household members. This member is a proxy reporter for other household members. For
example, a mother may answer for her college-age children, one spouse for another, and so on. Cunningham and Pabilonia look at how proxy responses in the CPS might affect the volatility of
self-employment hours—whether differences between how a worker describes their class of worker and the way another member of the household responding on their behalf does might cause self-employment
measures to change over time. In general, the authors find that while the self-employment trends are similar whether measured by proxy reporters or self-reporters, in all periods proxy reporters
report less self-employment (for the respondents on whose behalf they are answering) than do self-reporters. In chart 4, we find somewhat more volatility in the annualized quarter-to-quarter growth
of self-employment hours when that information is collected by a proxy.^9
A second source of volatility examined by Cunningham and Pabilonia results from the imputation of class of worker when it is missing in the data because the respondent answered “don’t know” or
“refused” to the class-of-worker question (in other words, a case of item nonresponse as opposed to survey nonresponse). The number of imputed instances of self-employment in the CPS has been rising
slowly since 2000, although the weighted count of the imputed self-employed is still less than a tenth of the weighted count of the nonimputed self-employed.^10 Chart 5 compares the volatility of
self-employment hours from imputed responses with that from nonimputed responses. The effect of imputed responses is occasionally large enough to exacerbate an already large change in the nonimputed
responses. For example, in the second quarter of 2010, hours grew by 31.7 percent for the nonimputed self-employed and by 176.1 percent for the imputed self-employed, resulting in an overall growth
in self-employment hours of 34.9 percent; in the second quarter of 2017, hours grew by 22.5 percent for the nonimputed self-employed and by 68.9 percent for the imputed self-employed, resulting in an
overall growth in self-employment hours of 30.0 percent. And in a few quarters, the growth rate of the imputed values moves in the opposite direction of the nonimputed values and is large enough to
slightly offset what would otherwise be a large increase in self-employment hours. For example, in the third quarter of 2006, hours grew by 9.5 percent for the nonimputed self-employed but fell by
33.8 percent for the imputed self-employed, resulting in an overall growth of 5.4 percent in self-employment hours.
Following CPS respondents across consecutive months, Cunningham and Pabilonia examine respondents’ transitions into and out of self-employment that coincide with changes both between self- and proxy
responses and between imputed and nonimputed responses. They find that these transitions from one month to the next occur at lower rates when the reporter type stays the same (either a self-reporter
in both months or a proxy reporter in both months) than when the reporter type changes from self to proxy, as well as when the responses are nonimputed in both consecutive months compared with
imputed in both months or changes in imputed status. In particular, the likelihood of transitioning is largest between self- and proxy responses for those switching between unincorporated
self-employed and not employed, suggesting that household respondents do not always know about the self-employment work another household member does.
Cunningham and Pabilonia investigate methods to correct for this source of volatility by directly editing the underlying CPS data. For example, they replace one month’s proxy response with an
adjoining month’s self-response. This smoothing strategy requires subjective decision making regarding which response is more likely to be accurate, how many months forward or backward to look for
edits, and whether observed transitions might be genuine. Ultimately, the authors conclude that the edits to the underlying CPS data have minimal impact on the volatility of self-employment hours.
Given the subjective nature of these edits and the complexity of implementing them into official statistics, we do not pursue this smoothing strategy. However, researchers interested in studying the
dynamics of self-employment may choose to test whether making such edits affects their results.
Finally, Cunningham and Pabilonia investigate how the CPS sample rotation design—respondents moving into and out of the sample—might lead to increased volatility in measures of self-employment. In
the CPS, respondents answer questions for 4 consecutive months (months in sample (MIS) 1–4), then are out of the sample for the next 8 consecutive months, and finally return to the sample for another
4 consecutive months (MIS 5–8). Thus, from one calendar month to the next, one-quarter of respondents exit the sample and are replaced with entering (or, in the case of MIS 5, reentering)
respondents. This sample rotation creates the potential for two sources of volatility. First, the fraction of individuals who are self-employed may differ between entering and exiting rotations. This
difference can arise by chance because of sampling error.^11 The second source, known as rotation-group bias, is due to systematic differences by MIS in how respondents report self-employment status,
with workers in earlier MIS being more likely to report self-employment.^12 For the 2000–22 period, the average percentage of self-employed workers is about 7.0 percent in MIS 1, 6.8 percent in MIS
2–4, 6.7 percent in MIS 5, and 6.6 percent in MIS 6–8. These differences can arise for various reasons, including respondent fatigue, differences in unit nonresponse by class of worker, and in-person
versus telephone interviewing.
Chart 6 shows the hours growth in self-employment, in total and broken into two sources—the growth that comes from the difference between those who entered the sample in a quarter and those who
exited it in the previous quarter and the growth that comes from those who were surveyed in both quarters (continuers). We find evidence of both sample rotation effects and rotation-group bias.
Sample rotation effects are evident in the considerably greater volatility in the quarterly growth rates for respondents who were not present in both quarters relative to continuers. The average
growth rate is 8.8 percent for those who were not present in both quarters and −4.1 percent for continuers.^13 The impact of rotation-group bias can be seen in chart 6 by looking at the
self-employment growth rate for respondents present in both quarters. Because respondents are more likely to report self-employment in MIS 1 than in subsequent months, the quarter-to-quarter changes
in the number of self-employed workers in the sample tend to be negative. These fluctuations, together with rotation-group sampling errors that affect the relative distribution of self-employment in
incoming and outgoing rotation groups, suggest that the sample rotation design plays an important role in the volatility of self-employment hours.
Given these findings, we describe below how we apply two adjustments to the self-employment hours series to reduce excess volatility. The first adjustment involves directly compositing
self-employment hours as is done for national employment and unemployment statistics also calculated from the CPS. The second adjustment involves removing a component of seasonal adjustment—the final
irregular component adjusted for extreme values—which we show is primarily picking up sampling error.
Directly compositing self-employment hours
In 1954, BLS began using a composite estimate for reporting employment and unemployment measures. Morris H. Hansen, William N. Hurwitz, Harold Nisselson, and Joseph Steinberg describe the composite
estimate as a weighted average of two different level estimates: one based solely on the current month and one that accounts for information from the previous month for the three-quarters of the
sample that is surveyed in both months.^14 The composite estimates reduce the variance of estimates of levels and changes, with the largest reductions occurring for estimates of changes.
Expanding on Hansen et al.’s description of the two weighted components of the composite estimate, Margaret Gurney and Joseph F. Daly, as well as Elizabeth T. Huang and Lawrence R. Ernst, show that
this estimate can be improved by adding a bias-correction term that gives slightly more weight to data from respondents in MIS 1 and 5—months when unemployment responses have been found to be much
higher.^15 BLS uses these adjusted composite estimates, referred to as AK-composite estimates, for reporting national employment, unemployment, and those not in labor force, but not for reporting
self-employment hours. We use this method to directly composite self-employment hours for the quarterly labor productivity series.
The composite estimate of self-employment hours in calendar month t, Ŷ[t]^C, is measured by
where Ŷ[i,t] is an estimate of self-employment hours for the population in month t using only responses in rotation group i, and A and K are parameters. The first bracketed term is the direct
estimate of self-employment, an average of the estimate over all eight rotation groups i, ignoring any possible differences in responses across rotation groups. The second bracketed term adds to the
previous month’s composite estimate, Ŷ[t−1]^C, the change in estimated self-employment among those continuing in the survey from the previous month (i ∈ S (= 2, 3, 4, 6, 7, 8)). The final bracketed
term is the bias correction that reweights the estimates from the incoming (i ∉ S) and continuing (i ∈ S) parts of the current month’s sample.
In an AK-composite estimate, the parameters A and K are selected to minimize the variance of the AK estimator relative to the direct estimate. How closely a set of parameters approximates the best
linear unbiased estimate for a labor force characteristic depends both on the pattern of responses across rotation groups and on the correlation over time in the labor force estimates. This means
that the optimal values for estimating the level of employment may not be optimal for estimating the level of unemployment, or the month-to-month change in unemployment. Janice Lent, Stephen Miller,
and Patrick Cantwell find that A = 0.3 and K = 0.4 are optimal values for estimating unemployment levels and close to optimal for month-to-month changes, whereas A = 0.4 and K = 0.7 perform the best
for estimating employment levels and changes.^16 For our composite estimate of self-employment hours, we use A = 0.4 and K = 0.7, assuming that self-employment most closely resembles employment in
terms of the correlation over time. We directly composite only self-employment on main jobs because class-of-worker information for second jobs is only collected in MIS 4 and 8. To the directly
composited estimates, we add hours on secondary jobs, which have been weighted with CPS outgoing rotation weights.
Chart 7 compares the quarterly growth rate for the composited self-employment hours series with the growth rate for the published series from the second quarter of 2000 to the fourth quarter of 2019.
Compositing reduces the size of most spikes in the growth rate. For example, in the second quarter of 2014, the growth rate of self-employment hours was −12.1 percent without compositing and only
−6.0 percent with compositing. In the third quarter of 2019, the rate was 19.0 percent without compositing and 8.5 percent with compositing. Thus, it appears that compositing substantially reduces
volatility in self-employment hours.
Seasonal adjustment and removing irregulars
To obtain quarterly self-employment hours for productivity measures, BLS seasonally adjusts the basic monthly CPS self-employment hours series and then averages these monthly data to obtain a
quarterly series. Although the primary purpose of seasonal adjustment is to provide a clearer picture of underlying trends and cyclical movements distinct from regular seasonal movements, this
adjustment can also be used to dampen unusual movements in the data. The X-13 ARIMA-SEATS program first models the time-series properties of the data, then uses that model to adjust and extrapolate
forward the series,^ and finally decomposes the adjusted series into a trend-cycle component, a seasonal component, and an irregular component.^17 Removing the seasonal component leaves the
seasonally adjusted series, which consists of the trend-cycle component plus the irregular component.
The irregular component consists of ordinary noise in the series that can be due to both sampling and nonsampling error, as well as extreme abnormal events such as unseasonable weather, natural
disasters, pandemics, or strikes. Outliers in the original data series are typically identified in the time-series model—either a priori or by the program’s automatic outlier-detection feature. In
this way, the seasonal component can be estimated without being distorted by the outliers, but the seasonally adjusted data series will continue to include their impact.
We might expect greater sampling error in estimates of self-employment hours compared with estimates of hours for all workers, mainly because the sample sizes for the self-employed are relatively
smaller. Response error also can arise because differences between self-employment and contract work and between incorporated and unincorporated self-employment are subtle and may be subject to
greater differences in responses between the worker and a proxy respondent, and even from one interview to the next. Because our objective is to reduce the volatility in our estimate of
self-employment hours, we want to remove the part of the irregular component that may be due to sampling error. The X-13 ARIMA-SEATS program used for seasonal adjustment produces a final irregular
component series that excludes the impact of extreme values—which in most cases are due to abnormal events. Thus, we can remove this irregular component (hereafter referred to as the
extreme-value-adjusted (EVA) irregular), along with the seasonal component, from the original series in order to obtain a smoother seasonally adjusted self-employment hours series.
To check whether the EVA irregular series is a reasonable estimate of the sampling error, we compare the size of the series’ irregulars with the standard errors of our estimates of self-employment
obtained from generalized variance functions (GVF).^18 Because the CPS publishes GVF model parameters for employment only, we restrict our analysis to the number of self-employed workers. We assume
that the conclusions we draw from this analysis can be applied to self-employment hours.
Chart 8 shows the EVA irregular series obtained from a seasonal adjustment decomposition, standardized by dividing the EVA irregulars by the estimated standard errors of our composited
self-employment estimates.^19 Nearly all the values of the irregular component fall within 2 standard errors of our self-employment estimate, suggesting that the EVA irregular is due to sampling
error and not economic outliers. This implies that removing this irregular component from the self-employment series can potentially further smooth the series without removing real movements in
self-employment hours.
Charts 9a and 9b show the combined effects of compositing as well as removing the EVA irregular on the quarterly growth in self-employment hours. While chart 9a focuses on the second quarter 2000
through the fourth quarter of 2019, chart 9b extends the series through the fourth quarter of 2022 to show that the new method of calculating self-employment hours does not diminish the effects of
the COVID-19 pandemic on the series. Removing the EVA irregular further reduces the volatility of the series beyond the reductions obtained from compositing the series alone (chart 7), without
oversmoothing and distorting important economic outliers. If we were to instead remove the entire irregular series, we would remove most of the effects of the pandemic and potentially oversmooth the
series during the 2000–01 recession and the 2007–09 Great Recession. (See chart 10.)
Effects of adjustments to self-employment hours
The outsized effect of self-employment hours volatility on total hours growth is the primary motivation behind Cunningham and Pabilonia’s working paper and the new method implemented here. An
important check, then, is to see how this method of estimating self-employment hours affects the volatility in total hours growth. Chart 11 shows the published quarterly total hours growth and our
estimate of quarterly total hours growth that uses the adjusted self-employment hours series. For most of the large spikes in the series, the adjustment reduces the volatility of total hours growth,
especially toward the end of the series. In two noticeable instances, however, the adjusted series has larger spikes than the published series—in the second quarter of 2014 and in the third quarter
of 2016. Looking back to those data points in chart 9a, we see that, in the second quarter of 2014, self-employment hours growth was −12.1 percent before the adjustments and −2.7 percent after; in
the third quarter of 2016, that growth was −10.5 percent before the adjustments and −0.5 percent after. These large negative spikes in the unadjusted self-employment hours series were distorting
underlying positive spikes in employee hours growth that now are evident after the self-employment hours series has been smoothed.
To see the reduction in volatility better, in chart 12, we reexamine the contributions to total hours growth that we presented in chart 3. The bars showing the contribution of the self-employed to
hours growth are much smaller than they were in chart 3. For example, the contribution of the self-employed is substantially reduced in the third quarter of 2019. Under this new method of estimating
self-employment hours, the self-employed contributed 0.2 percentage point to a 0.9-percent growth in total hours, whereas before the adjustments they contributed 1.4 percentage points to a
1.9-percent growth in total hours. Similarly, in the second quarter of 2016, the self-employed contributed 0.8 percentage point to a 1.2-percent growth in total hours under the old method, but they
contributed 0.5 percentage point to a 1.0-percent growth in total hours under the new method.
Finally, in chart 13, we examine the effects of smoothing the self-employment hours series on annualized quarter-to-quarter growth in labor productivity. Over the 2000–22 period, the absolute change
in productivity was greater than 0.5 percentage point in 21 of the period’s 91 quarters. In 8 of the 21 quarters, the productivity rate under the new method was larger in magnitude than the rate
under the old method. This difference occurred in quarters that, prior to adjustment, had unusually large spikes in self-employment hours relative to employee hours. For example, in the third quarter
of 2019, productivity grew at a 2.4-percent annual rate under the old method, but at a 3.4-percent rate under the new method. In two instances (the fourth quarter of 2002 and the second quarter of
2011), productivity growth changed direction from a small decrease to a small increase. Although we see changes in measured productivity using the new method, adopting the method does not affect
long-run productivity growth, which averaged 1.9 percent annually over the 2000–22 period.
We find that self-employment hours are volatile because of survey-error-related volatility, which sometimes has outsized impacts on quarterly labor productivity. In 2024, to improve its measure of
productivity, BLS will implement two adjustments to self-employment hours estimates: (1) directly compositing the estimates and (2) removing the EVA irregular component from the series. Because these
adjustments will reduce survey-related volatility without oversmoothing the data, we will still be able to capture cyclical changes in self-employment. Consequently, the adjustments will improve our
measure of labor productivity.
ACKNOWLEDGMENT: We thank Lucy Eldridge, Marina Gindelsky, Nick Johnson, Justin McIllece, Brian Monsell, Drake Palmer, Matthew Russell, Jay Stewart, and Zoltan Wolf for their helpful comments. | {"url":"https://www.bls.gov/opub/mlr/2023/article/an-improved-estimate-of-self-employment-hours.htm","timestamp":"2024-11-02T10:52:13Z","content_type":"text/html","content_length":"110116","record_id":"<urn:uuid:da343b81-9eed-445e-b073-b5f722dcfe1a>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00715.warc.gz"} |
Reacting Flows using PDF Methods
November 24, 2005, Re: Reacting Flows using PDF Methods #6
Guest Well the velocit field obtained will not satisfy continuity. Best (low Mach) ---------
Posts: n/a 1)Compute u,v,w 2) Then P (or pressure correction depends method) 3) Correct velocities u.v,w (now they wil satisfy continuity) 4) Advance Z (and Z") using new u,v,w 5) Compute
T(Z),rho(Z) using flamelets
Go to (1) and repeat until steady state or until you want to advance the solution one time step, in the latter yopur new rho will enter in equation 2) torugh the term drho/dt.
If your code is unsteady and have large denity changes ypu may have isntability problems | {"url":"https://www.cfd-online.com/Forums/main/10331-reacting-flows-using-pdf-methods.html","timestamp":"2024-11-06T18:38:50Z","content_type":"application/xhtml+xml","content_length":"87358","record_id":"<urn:uuid:51e10420-b28a-4c88-ab4f-926c6c3361d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00607.warc.gz"} |
How to determine annual percentage rate
Multiply the principal amount by one plus the annual interest rate to the power of the number of The Consumer Federation of America explains how to calculate it: Divide the finance charge by the
loan amount. In this case, $50 divided by $500 equals 0.1. Multiply the result by 365 to get 36.5. Divide the result by the term of the loan. In this case, 36.5 divided by 14 is 2.6071. Multiply the
11 Jul 2019 How do I calculate my APR? Calculating APR often sounds more intimidating than it really is. In fact, most of the time your lender will provide you When you enter any figure the
calculator will automatically return the APR. First enter the APY in percent. Some banks also refer to this as the effective annual rate Your estimated annual interest rate. Interest rate variance
range. Range of interest rates (above and below the rate set above) that you desire to see results for. Disclaimer: Annual Percentage Rate (APR) calculator is provided to compute annualised credit
costs which includes interest rate and processing fees. The APR
Solving for annual percentage rate(APR). Inputs: interest rate (i). times per year compounded(q)
APR Calculator. The Annual Percentage Rate (APR) is a method to compute annualised credit cost, which includes interest rate and loan origination charges. The interest on most credit accounts is
usually stated as an annual percentage rate, or APR. If you charge an APR of 4 percent on customers' outstanding accounts This calculator first calculates the monthly payment using C+E and the
original interest rate r = R/1200: The APR (a = A/1200) is then calculated iteratively by 6 Jun 2019 Though the APR can be calculated in several ways depending on the terms of the loan, the formula
which includes the basic components is: The interest rate charged to the borrower, excluding expenses such as account opening and account keeping fees. The APR is the basic cost of your credit as a
The mortgage APR calculator will help you to determine the annual percentage rate (APR) that you will be charged on your mortgage.
Multiply the daily percentage rate by 365 to convert it to an annual percentage rate. Step. Multiply the result by 100 if the answer came out as a decimal and you want to express it as a percent. For
example, if you found the daily rate is 0.000274, multiply by 365 to find that your annual rate is 0.1.
26 Aug 2019 The annual percentage yield formula is (1 + (i / n))n – 1. In that equation, i is equal to the annual interest rate and n is equal to the number of times APR – Calculate the Annual
Percentage Rate of a existing loan or before applying loan. How to reduce the APR on your existing loan? How to get a personal
In this video, we calculate the effective APR based on compounding the APR daily. Created by Sal Khan. Google Classroom Facebook
Calculate the APR (Annual Percentage Rate) of a loan with pre-paid or added finance charges.
Divide your interest rate by the number of payments you'll make in the year ( interest rates are expressed annually). So, for example, if you're making monthly
27 Mar 2019 Learn what a loan's APR means and how it's calculated. You're not alone if you' ve ever asked “How does APR work?”. The This version includes relevant finance charge and APR tolerances
for verifying the accuracy of annual percentage rates and finance charges on loans secured by As a New York Life policyholder, you can often choose among several different premium payment options
(annual, semi-annual, monthly - automatic bank draft). 15 Nov 2019 An annual percentage rate (APR) reflects the mortgage interest rate plus other charges. Loan APR Calculator. This calculator will
help you compute the average combined interest rate you are paying on up to fifteen of your outstanding debts. 23 Jul 2013 Annual Interest Rate Equation. If the lender offers a loan at 1% per month
and it compounds monthly, then the annual percentage rate (APR) on
26 Aug 2019 The annual percentage yield formula is (1 + (i / n))n – 1. In that equation, i is equal to the annual interest rate and n is equal to the number of times APR – Calculate the Annual
Percentage Rate of a existing loan or before applying loan. How to reduce the APR on your existing loan? How to get a personal However, the APR can be calculated in different ways and can sometimes
cause rather than eliminate confusion. LOANS AND INTEREST RATES. A loan is the 27 Mar 2019 Learn what a loan's APR means and how it's calculated. You're not alone if you' ve ever asked “How does APR
work?”. The This version includes relevant finance charge and APR tolerances for verifying the accuracy of annual percentage rates and finance charges on loans secured by | {"url":"https://optionsebzyixj.netlify.app/hultz45810gero/how-to-determine-annual-percentage-rate-jeve","timestamp":"2024-11-02T09:10:37Z","content_type":"text/html","content_length":"33350","record_id":"<urn:uuid:953d820e-3e03-460e-8791-e632843d084c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00406.warc.gz"} |
Interpreting a result obtain with Reduce that includes pure function
4035 Views
2 Replies
1 Total Likes
Interpreting a result obtain with Reduce that includes pure function
I used Reduce on a set of inequalities and I got the message
Reduce::ratnz: Reduce was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result
plus a result that included the following condition
c < Root[19. T^3 - 30. T^2 #1 + 8. #1^3 &, 3]
Does it mean that "c" must be lower than the third root of the equation
19.T^3-30.x T^2+8.x^3==0
where x is the unknown? Is 19. an abbreviation for an irrational number close to 19?
Thank you in advance for your help
2 Replies
Thank you! The graph was interesting to look at
>Does it mean that "c" must be lower than the third root of the equation
Yes, Root is aften returned in results by Solve and Reduce
To get a sense of how the root behaves relative to T:
Plot[Root[19. T^3 - 30. T^2 #1 + 8. #1^3 &, 3], {T, -5, 5}]
But since this is just a third-degree polynomial you can get the roots explicitly by telling Reduce to expand cubics (similar option exists for quartics)
Reduce[19 T^3 - 30 T^2 x + 8 x^3 == 0, x, Cubics->True]
>Is 19. an abbreviation for an irrational number close to 19?
It represents a machine-precision number | {"url":"https://community.wolfram.com/groups/-/m/t/113960?sortMsg=Recent","timestamp":"2024-11-08T01:02:39Z","content_type":"text/html","content_length":"99941","record_id":"<urn:uuid:54ecaed7-0fde-4f26-be43-8c5b22a642c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00735.warc.gz"} |
Adaptive Booth Algorithm for Three-integers Multiplication for Reconfigurable Mesh
This paper presents a three-integers multiplication algorithm R = A∗X∗Y for Reconfigurable Mesh (RM). It is based on a three-integer multiplication algorithm for faster FPGA implementations. We show
that multiplying three integers of n bits can be performed on a 3D RM of size (3n + logn + 1) × (2√n+1 +3) × √n+1 using 44 + 18 · log log MNO steps, where MNO is a bound which is related to the
number of sequences of '1's in the multiplied numbers. The value of MNO is bounded by n but experimentally we show that on the average it is √n. Two algorithms for solving multiplication on a RM
exists and their techniques are asymptotically better time wise, O(1) and O(log∗n), but they suffer from large hidden constants and slow data insertion time (O(√n)) respectively. The proposed
algorithm is relatively simple and faster on the average (via sampling input values) then the previous two algorithms thus contributes in making the RM a practical and feasible model. Our experiments
show a significant improvement in the expected number of elementary operations for the proposed algorithm.
Bibliographical note
Publisher Copyright:
© 2016 World Scientific Publishing Company.
• Reconfigurable mesh
• booth multiplication
• cartesian addition
• extended summing
ASJC Scopus subject areas
• Computer Networks and Communications
Dive into the research topics of 'Adaptive Booth Algorithm for Three-integers Multiplication for Reconfigurable Mesh'. Together they form a unique fingerprint. | {"url":"https://cris.haifa.ac.il/en/publications/adaptive-booth-algorithm-for-three-integers-multiplication-for-re","timestamp":"2024-11-02T12:44:59Z","content_type":"text/html","content_length":"54508","record_id":"<urn:uuid:95674aa4-21f5-4aa8-b8c0-85b170655833>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00843.warc.gz"} |
The Transvoxel Algorithm
The Transvoxel Algorithm is a method for seamlessly stitching together neighboring triangle meshes generated from voxel data at differing resolutions so that level of detail (LOD) can be used with
large voxel-based datasets such as volumetric terrain in next-generation video games. The algorithm was invented by Eric Lengyel in 2009 and implemented in the C4 Engine. This web page provides a
high-level overview of the Transvoxel Algorithm, links to additional information, and data tables that can assist in an implementation.
Patent status
The Transvoxel Algorithm is free of patent claims.
The problem it solves
Voxel-based terrain systems are increasing in popularity because they remove the topographical limitations of conventional elevation-based terrain systems and provide the ability to create more
complex structures like caves, overhangs, and arches. Triangle meshes are typically generated from voxel data using the Marching Cubes algorithm, and the larger numbers of triangles that it generates
makes a level-of-detail (LOD) system even more important for high-performance rendering of large terrains. A natural way to implement level of detail for voxel-based terrain is to simply triangulate
the voxel data at multiple resolutions, but this leads to the well-known problem of cracks forming along the boundary between meshes representing different resolutions, as shown in Figure 1. While it
is relatively simple to patch these types of cracks in a robust manner for elevation-based terrain, the topographical freedom allowed by voxel-based terrain makes the patching process vastly more
difficult because the structure of edge mismatches on the boundary plane can be much more complex. The Transvoxel Algorithm introduces the concept of a “transition cell” to smoothly and seamlessly
connect voxel terrain meshes across multiresolution boundaries. The algorithm is designed so that only local voxel data is required for each transition cell, and this allows fast retriangulation in
cases where the voxel data is dynamically changing in a real-time application.
How it works
The Transvoxel Algorithm works by inserting special transition cells in between regular cells along the boundary between voxel data sampled at one resolution and voxel data sampled at exactly half
that resolution. Instead of considering all possible combinations of voxel state for both the full-resolution and half-resolution data at a transition boundary, which would require that approximately
1.2 million cases be handled, we consider only nine samples of the high-resolution data. This gives us a much more manageable 512 cases to handle, and these cases happen to fall into the 73
equivalence classes shown in Figure 2. Every transition cell is filled with one of these triangle patterns in order to perfectly fill the seams, cracks, and holes that appear between meshes of
different resolutions, as was done for the cave in Figure 1.
In Figure 2, black dots represent voxels that lie inside solid space, and corners without a dot represent voxels that lie outside in empty space. Green triangles are front-facing polygons, and red
triangles are back-facing polygons.
Get print version (18 × 24 inch wall poster)
Look-up tables
Like the Marching Cubes algorithm, the Transvoxel Algorithm is efficiently implemented using a set of look-up tables that provide information about each of the 512 possible cases that can arise.
These tables are not easy to generate, so I am providing the tables that I made in the following repository. There are comments in the .cpp file that describe what the numbers mean.
Transvoxel data tables on GitHub
For more information
• Eric Lengyel’s Dissertation, University of California at Davis, 2010. This dissertation goes into great detail about the Transvoxel Algorithm and multiresolution voxel terrain in general. It also
describes how the algorithm can be efficiently implemented.
Download PDF document (50.7 MB)
• Lengyel, Eric. “Transition Cells for Dynamic Multiresolution Marching Cubes”. Journal of Graphics, GPU, and Game Tools. Vol. 15, No. 2 (2010), A K Peters.
DOI: 10.1080/2151237X.2011.563682
• A complete implementation of the Transvoxel Algorithm is included with the C4 Engine.
How to cite the Transvoxel Algorithm
Please cite the dissertation in which this algorithm was originally described:
Lengyel, Eric. “Voxel-Based Terrain for Real-Time Virtual Simulations”. PhD diss., University of California at Davis, 2010.
Copyright © 2009–2023, Terathon Software LLC | {"url":"http://transvoxel.org/","timestamp":"2024-11-04T20:36:58Z","content_type":"text/html","content_length":"9277","record_id":"<urn:uuid:bb024850-cd55-473d-9fd5-5d3d90829b43>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00586.warc.gz"} |
Feedspot - A fast, free, modern RSS Reader. Its a simple way to track all your favorite websites in one place.
Transum Mathematics Puzzles
2,634 FOLLOWERS
This Transum podcast features a puzzle of the month as well as information about everything new on Transum Mathematics.
Transum Mathematics Puzzles
2d ago
How many moons does Nymeria have given some mean clues ..
Transum Mathematics Puzzles
1w ago
What is the best way to settle the debts of the four friends ..
Transum Mathematics Puzzles
1M ago
What was the mean number of pumpkins that could be bought for £20 ..
Transum Mathematics Puzzles
2M ago
What fraction of the group wore all three items: hats, scarves, and gloves ..
Transum Mathematics Puzzles
3M ago
Given information about the mean, median and range, figure out what the three numbers are ..
Transum Mathematics Puzzles
5M ago
A judge is twice as old as her wig was when she was as old as her wig is now ..
Transum Mathematics Puzzles
6M ago
What percentage of the class brought all the required maths equipment to the lesson ..
Transum Mathematics Puzzles
7M ago
From the clues about A, B and C can you work out the value of D ..
Transum Mathematics Puzzles
8M ago
What is the largest number of eggs that cannot be purchased using a combination of the given basket sizes ..
Transum Mathematics Puzzles
11M ago
How many days in January will Justin be able to go for a jog .. | {"url":"https://www.feedspot.com/infiniterss.php?_src=feed_title&followfeedid=5227720&q=site:https%3A%2F%2Fwww.transum.org%2Fpodcast%2FStream.rss","timestamp":"2024-11-06T07:42:23Z","content_type":"text/html","content_length":"44432","record_id":"<urn:uuid:edc88635-f431-49a8-830c-e0538cbefe59>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00139.warc.gz"} |
(PDF) Symplectic orbit propagation based on Deprit’s radial intermediary
Author content
All content in this area was uploaded by Pini Gurfil on Nov 11, 2018
Content may be subject to copyright.
Astrodynamics Vol. 2, No. 4, 375–386, 2018 https://doi.org/10.1007/s42064-018-0033-x
Symplectic orbit propagation based on Deprit’s radial intermediary
Leonel Palacios (B), Pini Gurfil
Faculty of Aerospace Engineering, Technion – Israel Institute of Technology, Haifa 3200003, Israel
The constantly challenging requirements for orbit prediction have opened the need for better
onboard propagation tools. Runge–Kutta (RK) integrators have been widely used for this
purpose; however RK integrators are not symplectic, which means that RK integrators may
lead to incorrect global behavior and degraded accuracy. Emanating from Deprit’s radial
intermediary, obtained by the elimination of the parallax transformation, we present the
development of symplectic integrators of different orders for spacecraft orbit propagation.
Through a set of numerical simulations, it is shown that these integrators are more accurate
and substantially faster than Runge–Kutta-based methods. Moreover, it is also shown that
the proposed integrators are more accurate than analytic propagation algorithms based
on Deprit’s radial intermediary solution, and even other previously-developed symplectic
symplectic integration
spacecraft orbit propagation
Deprit’s radial intermediary
Hamiltonian dynamics
Research Article
Received: 05 November 2017
Accepted: 26 May 2018
©2018 Tsinghua University
1 Introduction
The need to propagate a satellite orbit, possibly for long
time spans, without onboard position measurements,
is a topic of special interest, because in some small
satellites there are no GPS receivers due to insufficient
power. Even if a GPS receiver is available, it is
susceptible to malfunctions or degraded accuracy due
to geometric dilution of precision. In these cases, orbit
prediction starts from a given epoch, and propagated by
integrating orbital models. To that end, Runge–Kutta
integrators are widely used. However, these integrators
are not symplectic, meaning that the resulting solutions
exhibit continuously growing energy errors and loss of
physical fidelity, compromising the global behavior of
the dynamical system in consideration, and leading to
accuracy degradation. A relatively high computational
cost is also associated with these types of integrators,
especially when complex dynamical models are used
and/or long time spans are required.
Analytic alternatives to numerical propagation, such
as the use of intermediary orbits in the main problem
of artificial satellite theory, have been presented
in the literature [1,2]. One of the main features of
using intermediaries is that the phase space dimension of
the dynamical system is reduced, guaranteeing that
the residual between the original problem and the
intermediary is free from first-order secular effects.
Additionally, the use of intermediary solutions implies
that no differential equations have to be integrated
onboard. Relevant work on intermediaries has been
developed by Cid and Lahulla [3,4] using polar-
nodal variables and a contact transformation. This
operation removes the argument of latitude from the
main problem Hamiltonian, resulting in an integrable
problem whose solution is expressed in terms of elliptic
integrals. Then, a more general class, called natural
intermediaries, was introduced by Deprit [5], rendering
the main problem Hamiltonian integrable after a contact
transformation that turns such a problem into the
intermediary, while allowing the inclusion of additional
first-order, short-periodic effects. Specifically, Deprit’s
radial intermediary (DRI) provides analytic solutions
free of elliptic integrals, as opposed to the Cid-
Lahulla intermediary, leading to faster computational
evaluation. Given that fast computations are required
in onboard propagators, Gurfil and Lara [6] proposed a
method that reorganizes the DRI solution into a more
convenient way to improve computational time. They
also presented a comparison against direct numerical
376 L. Palacios, P. Gurfil
integration, showing that their algorithm is faster than
Runge–Kutta methods and even more accurate when
using large time steps. Later, Lara [7] has introduced
an improved version that includes second-order secular
and periodic terms of the main problem.
Continuing with the efforts to find alternatives to
generic numerical integrators for onboard propagation,
in the present research we propose the use of geometric
numerical integrators as propagation tools. Contrary
to generic integration methods, this type of integrators
provide high processing speed and stability by
incorporating the underlying geometric properties of
the dynamical system under consideration, leading
to improved qualitative behavior, favorable error pro-
pagation, accurate and faster long-time integration [8].
A comprehensive work has been published in treatises
such as the one published by Blanes and Casas [8],
Hairer and Warner [9], Feng and Qin [10]. Specifically,
symplectic integrators were shown to be structure-
preserving numerical integrators. There have been
advances within the context of such integrators, for
instance, in the work done by Simo et al. [11]
the necessary conditions for the conservation of
symplecticity and energy and momentum were provided.
Other examples of such advances are the works by
Yoshida [12], with the construction of high-order
symplectic methods through composition or Blanes et
al. [13] and Farres et al. [14] with the development of
high-precision symplectic integrators for astronomy.
Symplectic integration techniques have also been
developed to propagate the motion of spacecraft. An
example of such work is the one done by Tsuda
and Scheeres [15], where a numerical method for
deriving a symplectic state transition matrix for an
arbitrary Hamiltonian dynamical system was presented.
This matrix was applied to long-term propagation of
spacecraft, among other applications. Additionally, the
work of Imre and Palmer [16] described a symplectic
numerical method to propagate relative orbits using an
arbitrary number of zonal and tesseral terms in the
In this paper, we develop 4th and 6th order symplectic
integrators for the propagation of satellite orbit based
on Deprit’s radial intermediary. Then, its performance
is compared against Runge–Kutta numerical methods,
the DRI solution algorithm presented in [6], and a
symplectic integrator in Cartesian variables developed
by the Authors in a previous work [17] (in which
the effect of drag on the symplectic and variational
integrators has been extensively investigated, showing
their superior performance over other integrators).
This paper is organized as follows. The main problem
in artificial satellite theory as well as background on
Deprit’s radial intermediary are presented in Section
2. Next, in Section 3, the development of symplectic
integrators is presented. Numerical simulations and
propagator performance analysis can be found in
Section 4.
2 The main problem Hamiltonian
The motion of a spacecraft around an oblate Earth can
be described using the Whittaker polar-nodal chart [18]
with the variables
r, θ, ν, R, Θ, N
where ris distance to the attraction center, θis the
argument of latitude, νis the right ascension of the
ascending node, R= dr/dtis the radial velocity, Θ is
the modulus of the angular momentum vector, and N
is the polar component of the angular momentum. It is
then possible to map from polar-nodal variables to the
classical orbital elements:
a, e, I, Ω, ω, f
namely the semimajor axis, eccentricity, inclination,
right ascension of the ascending node, argument of
periapsis and true anomaly, respectively, through the
r2R2+ Θ2−2µr (1)
µa (2)
I= cos−1N
Ω = ν(4)
cos f=Θ2−µr
µre ,sin f=ΘR
µe (5)
where µis the gravitational parameter. Then, the J2-
perturbed dynamics of the spacecraft, also known as
the main problem in artificial satellites theory, can be
Symplectic orbit propagation based on Deprit’s radial intermediary 377
defined by the main problem Hamiltonian:
H(r, θ, R, Θ, N )
r23 sin2θ1−N2
where the constans αand J2are the mean equatorial
radius and the second order zonal harmonic coefficient,
respectively. Next, from Hamilton equations [19], we
can obtain the relations:
d (r, θ, ν)
∂(R, Θ, N )and d (R, Θ, N)
∂(r, θ, ν)
which lead to the final form of the equations of motion,
˙ν=−2λcos Isin2θ
r41−3 sin2Isin2θ(12)
Θ = −λsin2Isin 2θ
N= 0 (14)
with the constant λdefined as
In inertial Earth-centered Cartesian coordinates,
defined by the unit vectors (ix,iy,iz), the motion of
a spacecraft may also be described with the Cartesian
variables x, y, z, ˙x, ˙y, ˙z
where the first three terms correspond to the
components of the position vector of the spacecraft
r=xix+yiy+ziz, the next ones to the velocity
vector v= ˙xix+ ˙yiy+ ˙zizand ˙
ζ= dζ/dt. With this
in consideration, the Cartesian J2-perturbed absolute
motion of a spacecraft around the Earth is described by
the Hamiltonian [20]
H(x, y, z, ˙x, ˙y, ˙z) = 1
2˙x2+ ˙y2+ ˙z2−
or by the Lagrangian
L(x, y, z, ˙x, ˙y, ˙z) = 1
2˙x2+ ˙y2+ ˙z2+
Next, from the the Euler-Lagrange equations [19], we
obtain the relations
∂( ˙x, ˙y, ˙z)−∂L
∂(x, y, z)= 0 (17)
and the equations of motion are expressed as
r2−1 (18)
r2−1 (19)
r2−3 (20)
The relations between the Cartesian coordinates and
the polar-nodal variables can be found as follows [6]. Let
the angular momentum vector be h=r×v. Define the
unit vectors n1and n2as
kh×izkif h×iz6=0
ixif h×iz=0
and n2=ˆ
r=krk(cos θ=ˆ
sin θ=ˆ
r·n2(cos ν=ix·n1
sin ν=iy·n1
rΘ = khkN=h·iz(23)
The inverse transformation can be found using the
cos ν−sin ν0
sin νcos ν0
0 cos I−sin I
0 sin Icos I
cos θ−sin θ0
sin θcos θ0
r=RνRIRθhr0 0 iT
378 L. Palacios, P. Gurfil
cos I=N
Θsin I=s1−N
It is well known that the main problem presented in
this section is not integrable except for the equatorial
case [21,22], and several approximate solutions to the
dynamics obtained from the Hamiltonian in Eq. (7)
have been proposed. In this work, we are particularly
interested in the solution obtained by using Deprit’s
radial intermediary [5]. Deprit’s radial intermediary is
obtained after reducing the main problem Hamiltonian
in Eq. (7) by applying the elimination of the parallax,
which is a canonical transformation formulated as
(r, θ, ν, R, Θ, N )7→ r0, θ0, ν0, R0,Θ0, N 0
which removes short-periodic terms of the original
Hamiltonian without reducing the number of degrees
of freedom. The main problem Hamiltonian is written
using Deprit’s radial intermediary as
Hr0, R0,Θ0, N 0=1
2 R02+Θ02
µand sin2I0= 1 −N02
and the first-order contact transformation required to
obtain the primed variables is presented in Appendix
A. Then, from Hamilton’s equations, we obtain the
d (r0, θ0, ν0)
∂(R0,Θ0, N 0)and
dR0,Θ0, N 0
∂(r0, θ0, ν0)(29)
and the equations of motion
˙ν0=−3κN 0
Θ0= 0 (34)
N0= 0 (35)
with the constant κdefined as
With Deprit’s intermediary, it is also possible to
obtain closed-form solutions in terms of trigonometric
functions accurate up to the first order of J2. Such
solutions not only account for secular and long-period
terms, but also for short-periodic effects.
3 Symplectic numerical integration
Several types of geometric integrator have been
developed and presented in literature, however the most
known could be the so-called symplectic propagation
algorithms, which are obtained from Hamiltonian
systems [8]. In this work, we pay special attention to this
type of integrators, because they have shown favorable
performance for different mechanical systems, including
the motion of spacecraft.
3.1 Basic construction of symplectic
The basic method Φh
2used in this work is known as
the St¨ormer–Verlet method [23]. This method has
second-order accuracy and it can be obtained using the
composition operation
2= Φh/2
where Φh/2
1is the map corresponding to the first-order
symplectic Euler method
qn+1 =qn+h∇pH(qn+1, pn) (38)
pn+1 =pn−h∇qH(qn+1, pn) (39)
and Φh/2
1∗is its adjoint method,
qn+1 =qn+h∇pH(qn, pn+1) (40)
pn+1 =pn−h∇qH(qn, pn+1) (41)
the constant hbeing the selected time step [8,17]. The
operation in Eq. (37) leads to
2∇pHqn+1/2, pn(42)
Symplectic orbit propagation based on Deprit’s radial intermediary 379
pn+1 =pn−h
2∇qHqn+1/2, pn+
∇qHqn+1/2, pn+1(43)
qn+1 =qn+1/2+h
2∇pHqn+1/2, pn+1(44)
where qcorresponds to position, pto momenta and the
subindex nstands for the current number of iteration.
It is also possible to use the composition operation
2= Φh/2
to obtain the alternative method
2∇qHqn, pn+1/2(46)
qn+1 =qn+h
2∇pHqn, pn+1/2+
∇pHqn+1, pn+1/2(47)
pn+1 =pn+1/2−h
2∇qHqn+1, pn+1/2(48)
Arbitrarily high-order versions of basic symplectic
methods can be obtained using specific sequences of
compositions, while preserving the desirable geometric
properties of the basic method. One of the commonly
used approaches to increase the order of the basic
method is known as triple jump composition or the
Suzuki–Yoshida technique [12,24]. If, for instance, a
fourth-order method is to be obtained with Φh
2as the
basic method, then the operation
4= Φγh
may be used along with the constants γand βdefined
2−21/3and β= 1 −2γ(50)
It is important to mention that, in order to build
a fourth-order method using γand βas defined in
Eq. (50), it is necessary that the basic method be at
least second order. However, this composition method
can be applied, in general, to a method Φh
2kof order
2k+2 = Φγh
2−21/(2k+1) (52)
and βdefined as in Eq. (50). Following a similar
line of thought, a sixth-order method [12] can also
be obtained through composition operations departing
from the basic method Φh
6= Φw3h
using the constants
w1=−1.17767998417887 (54)
w2= 0.235573213359357 (55)
w3= 0.784513610477560 (56)
w0= 1 −2 (w1+w2+w3) (57)
3.2 Symplectic integrator for Deprit’s radial
In this section, we derive a symplectic integrator derived
from Deprit’s radial intermediary in Eq. (27) and the
basic symplectic method Φh
2presented in Eqs. (46–
48). Dropping the primes in the polar nodal variables
and using the notation simplification qn+1/2=q1/2and
qkn+1 =q+we have
Θ2 (58)
Θ1/2= Θ (59)
The new positions are
and the new momenta
Θ2 (64)
Θ+= Θ1/2(65)
It is important to remember that all variables in this
symplectic integrator are primed. This integrator is of
second order, fully explicit, and requires the contact
transformation from primed to original variables, as
presented in Appendix A.
380 L. Palacios, P. Gurfil
3.3 Cartesian symplectic integrator
The symplectic integrator presented in this section,
obtained from Cartesian variables, was developed by the
Authors in Ref. [17]. Using the Hamiltonian in Eq. (15),
the second-order symplectic method Φh
2presented in
Eqs. (42–44) yields the half-indexed positions
Next, the new momenta are defined as
with fx,fyand fzdefined as
1/2 5z2
−1!# (73)
1/2 5z2
−1!# (74)
1/2 5z2
−3!# (75)
and r1/2=qx2
1/2. Finally, the new
positions complete the method:
This algorithm is fully explicit and its performance
will be examined in a set of simulated scenarios in
the next section, along with the symplectic integrator
presented in Section 3.2.
4 Numerical simulations
In this section, several performance parameters are
obtained to test the efficiency of the integration methods
presented in this work. These are provided in terms of
Hamiltonian, position and velocity errors defined as
kH0kand ∆A=kA−Aref k(79)
where the generic variable Astands for either the
position or velocity vector. The performance parameters,
obtained by the symplectic integrators through different
time steps, are compared to those obtained by
two generic integrators, the DRI solution algorithm
presented in Ref. [6], and the symplectic integrator
in Cartesian variables developed by the Authors in
Ref. [17]. The generic integrators selected for the
comparison are the 4th order, fixed-step, classical
Runge–Kutta integrator (CRK4) and the 4th order,
fixed-step, Runge–Kutta due to Dormand and Prince
(DP4). Additionally, reference “ground truth” is
obtained using an 8th order, variable step, Runge–Kutta
integrator due to Dormand and Prince [25]. These last
three integrators will propagate the motion obtained
from the full J2equations in inertial Earth–centered
Cartesian coordinates presented in Eqs. (18–20).
All the simulations and figure plots presented in this
section were carried out on a PC with a processor
Intel(R) Core(TM) i5-3570 with 3.40 GHz, 4.00 GB of
RAM and MATLAB 2017a. In the following sections,
the simulation time corresponds to 100 orbits and the
time step is 50 seconds (unless otherwise told). To
facilitate reading, the acronyms presented in Table 1 are
used. Additionally, the implementation algorithm for
the symplectic integrator in Section 3.2 is summarized
in Table 2. The spacecraft initial osculating orbital
elements and initial Cartesian absolute state can be
found in Table 3 and Table 4, respectively, and the
parameter values used in the simulations are
µ= 398 600.4415 km3/s2, α = 6378.1363 km,
J2= 1.0826266 ×10−3
4.1 4th Order symplectic integrator
The simulations starts with the 4th order symplectic
algorithms. Figure 1 shows that SY4, SYC4 and DGL
preserve the Hamiltonian, with SY4 and DGL having
almost the same maximum Hamiltonian error values of
5.9043×10−7and 5.886×10−7, respectively, while SYC4
yields a maximum ∆Hof 5.51753 ×10−8. In the same
figure, it is also noticed that DP4 and CRK4 present
Symplectic orbit propagation based on Deprit’s radial intermediary 381
Table 1 Acronyms used in the simulations
Acronym Meaning
DP8 8th order, adaptive step, Runge–Kutta method
due to Dormand and Prince
DP4 4th order, fixed step, Runge–Kutta method due
to Dormand and Prince
CRK4 Classical 4th order Runge–Kutta method
SY4, SY6 4th & 6th order explicit symplectic integrator
as in Eqs. (58–66)
SYC4, SYC6 4th & 6th order explicit symplectic integrator
as in Eqs. (67–78)
DGL Deprit-Gurfil-Lara analytic propagator as in
Appendix B
Table 2 Algorithm 1: Propagation of the symplectic integrator
1: input: initial and final time, time step, initial state
(r0, θ0, ν0, R0,Θ, N ) of the spacecraft
2: Transform initial state from original to primed variables
using Appendix A
3: for every time step
4: if desired order is 4 then
5: Use composition in Eq. (49) with constants from Eq. (50)
6: Propagate the state of the spacecraft with Eqs. (58–66)
7: Transform the state from primed to original space using
Appendix A
8: else if desired order is 6 then
9: Use composition in Eq. (53) with constants from
Eq. (54–57)
10: Repeat steps 6 and 7
11: end if
12: end for
Table 3 Summary of initial osculating orbit elements
a(km) e I Ωω θ
7000 0.005 55◦0◦10◦25◦
Table 4 Summary of initial Cartesian absolute positions and
x(km) y(km) z(km) ˙x(km/s) ˙y(km/s) ˙z(km/s)
6313.5040 1688.6292 2411.6125 −3.1956 3.9440 5.6327
continuous error growth reaching, to maximum values
of 1.6134 ×10−7and 7.9941 ×10−6, respectively.
Figure 2 shows that DGL and CRK4 produce the
largest position errors, with maximum final values of
0.5796 km and 0.1664 km, respectively. SYC4 yields a
maximum position error value of 0.0489 km. The lowest
errors are provided by SY4 and DP4, showing values of
Fig. 1 Hamiltonian error.
Fig. 2 Position and velocity errors.
0.0098 km and 0.0035 km, respectively. Similar results
are found in terms of velocity errors. DGL presents
the largest error with a value of 1.243 ×10−3km/s,
followed by CRK4 with 1.6793 ×10−4km/s, SYC4 with
7.3752 ×10−5km/s, SY4 with 8.2043 ×10−6km/s
and finally DP4 with 3.4963 ×10−6km/s. Despite
the apparent advantage presented by DP4 in terms of
position and velocity errors, it is the slowest of the
methods with a computational time of 0.1706 seconds.
On the other hand, DGL, SY4 and SYC4 required the
shortest computational times with 0.0247, 0.0183 and
0.0152 seconds, which is ∼85.50%, ∼89.27% and
∼91.07 faster than DP4, respectively. Compared to
CRK4, whose computational time is 0.1136 seconds,
these last propagators are ∼78.23%, ∼83.89% and
∼86.59% faster.
The error trends presented in the previous simulation
382 L. Palacios, P. Gurfil
are maintained though different time steps. For the
next simulations, the same initial conditions and final
simulation time are used, but the time steps are 0.1, 1,
20, 40, 60, 80, 100, 120, 140, 160, 180 and 200 seconds.
The maximum Hamiltonian error is observed in Fig. 3,
where it can be noticed that both SY4 and DGL are
the only integrators preserving the Hamiltonian, while
SYC4, CRK4 and DP4 show a continuous error growth
rate through the different time steps.
Figure 4 shows that DGL presents the lowest position
and velocity errors, with almost constant values, closely
followed by SY4, DP4 and SYC4. However, the
error values for CRK4 are notably large, with final
position and velocity errors above 120 km and 0.1 km/s,
respectively. The computational time advantage is also
retained by SYC4, SY4 and DGL, as indicated in Fig. 5,
SYC4 being the fastest and DP4 the slowest.
Fig. 3 Maximum Hamiltonian error vs time step.
Fig. 4 Maximum position and velocity errors vs time step.
Fig. 5 Computational time vs time step.
4.2 6th Order symplectic integrator
Next, the performance of the 6th order symplectic
integrators is tested using the previous scenario
conditions. SY6 presents no significant improvement
with respect to Hamiltonian error with a maximum
error value of 5.9043 ×10−7, but SYC6 does improve
substantially with a value of 1.4279 ×10−11 as observed
in Fig. 6.
Both integrators also present better error values in
terms of position and velocity as in Fig. 7. For instance,
the SY6 maximum position and velocity error values
are 7.4631 ×10−3km and 6.6537 ×10−6km/s, while
for SYC6 they are 3.1188 ×10−5km and 4.0833 ×
10−8km/s, respectively. Although SY6 error values
are not visible in Fig. 7 in comparison with Fig. 2,
they represent an improvement of 2.4 m in position
and 0.0016 m/s in velocity error. Nevertheless, SYC6
is the integrator with the highest precision for the
current selection of final time and time step. Even
though the composition procedure to increase the order
of the symplectic integrators requires additional steps,
this process does not have a significant impact on the
computational time. SY6 presents a computational time
of 0.0219 seconds, that is ∼78.71% faster than CRK4
and ∼85.95% faster than DP4 (similar to those values
obtained by DGL). On the other hand, SYC6 requires
Fig. 6 Hamiltonian error.
Symplectic orbit propagation based on Deprit’s radial intermediary 383
Fig. 7 Position and velocity errors.
0.0187 seconds, which is ∼81.80% faster than CRK4
and ∼87.99% faster than DP4.
SY6 preserves the maximum Hamiltonian error
through different time steps, as observed in Fig. 8, and
although SYC6 produces a growing Hamiltonian error
for larger time step, the resulting values are the lowest
among the integrators in consideration. In terms of
position errors, SY6 present improvements producing
now the smallest error for all time steps, followed closely
by SYC6 and DGL, as indicated in the magnified view,
within Fig. 9. A similar trend can be observed in terms
of velocity errors in the same figure. Next, in Fig. 10 it is
noticed that SYC6 is slightly faster than SY6 and DGL,
and this tendency is sustained during the whole set of
time steps. The same figure shows that SY6, SYC6 and
DGL have a clear gap in terms of computational time
Fig. 8 Maximum Hamiltonian error vs time step.
Fig. 9 Maximum position and velocity errors vs time step.
Fig. 10 Computational time vs time step.
with respect to CRK4 and DP4. A summary of the
results obtained in this section is presented in Table 5.
Table 5 Summary of results (only maximum values and h=
50 sec.)
Method Pos. error Vel. error H error Comp.
(km) (km/s) (Norm.) time (s)
CRK4 0.1664 1.6793×10−04 7.9941×10−06 0.1136
DP4 3.5064×10−03 3.4963×10−06 1.6134×10−07 0.1706
DGL 0.57965 1.243×10−03 5.8860×10−07 0.0247
SY4 9.8683×10−03 8.2043×10−06 5.9043×10−07 0.0183
SYC4 0.048952 7.3752×10−05 5.5175×10−08 0.0152
SY6 7.4631×10−03 6.6537×10−06 5.9043×10−07 0.0219
SYC6 3.1188×10−05 4.0833×10−08 1.4279×10−11 0.0187
384 L. Palacios, P. Gurfil
5 Conclusions
Explicit symplectic integrators of different orders were
obtained based on DRI. These were applied to propagate
the orbital motion of spacecraft including the effect of J2,
yielding more accurate results and substantially faster
processing speeds in comparison with Runge–Kutta
integrators. Furthermore, the developed integrators
produce more accurate results than those obtained with
DRI-based analytic propagators.
A set of simulated scenarios was used to verify the results
in terms of Hamiltonian, position and velocity errors
using a set of different time steps. Throughout these
simulations, the symplectic integrators consistently
sustained the lowest errors and computational times
with respect to the selected contenders.
Appendix A. First order contact trans-
formation for Deprit’s radial intermediary
The transformation from prime polar-nodal variables to
original ones, and vice versa, requires the computation
of the first order corrections [6,7]:
∆ξ=δξ (A1)
where ξis any of the polar nodal variables and the
variable δis defined as
These corrections, as well as δ, must be expressed in
prime variables for the direct transformation
and in original variables for the inverse transformation
Then, the first order corrections are the same for both
the direct and inverse transformations, and they are
expressed as
2sin2Icos 2θ(A5)
4sin2I+2−3 sin2Iϕsin 2θ−
5−6 sin2I+1−2 sin2Icos 2θσ(A6)
∆ν= cos I(3 + cos2θ)σ−3
2+ 2ϕsin 2θ(A7)
p(1 + ϕ)2sin2Isin 2θ(A8)
∆Θ = −Θ sin2I3
2+ 2ϕcos 2θ+σsin 2θ(A9)
∆N= 0 (A10)
where σ=pR/Θ and ϕ=p/r −1. It is important to
remember that the right-hand side terms in Eqs. (A5–
A10) must be expressed in prime variables for the direct
transformation, and in original variables in the case of
the inverse one.
Appendix B. Analytical propagation of
Deprit’s radial intermediary
The variables used in DRI are primed variables.
Therefore, all the variables presented in this
Appendix should be understood as primed variables.
Departing from primed polar nodal initial conditions
(r0, θ0, ν0, R0), with Θ and Nas constants, the analytic
propagation of the spacecraft is obtained using the
algorithm developed by Gurfil and Lara in Ref. [6].
First, from initial conditions, auxiliary constants are
defined as
cos I=N
Θ = Θp1−(2 −6 cos2I)ε(B3)
Θ1 + 2−12 cos2Iε(B4)
χ= 6εN
as well as the initial values of the anomalies
f0from ecos f0=˜p
−1, e sin f0=R0s˜p
Symplectic orbit propagation based on Deprit’s radial intermediary 385
u0= 2 arctan r1−e
1 + etan f0
`0=u0−esin u0(B11)
Then, for a given time step, the motion is propagated
using the sequence
ufrom `=u−esin u(B13)
f= 2 arctan r1 + e
1−etan u
r=a(1 −ecos u) (B15)
θ=θ0+τ(f−f0) (B16)
ν=ν0+χ(f−f0) (B17)
Θesin f(B18)
together with the constant values of Θ and N.
This work was supported by the European Commission
Horizon 2020 Program in the framework of the Sensor
Swarm Sensor Network Project under grant agreement
Ferrer, S., Lara, M. On roto-translatory motion:
reductions and radial intermediaries.
The Journal of
the Astronautical Sciences,2012, 59(1–2): 22–40.
Ferrer, S., Molero, F. J. Intermediaries for gravity-
gradient attitude dynamics I. Action-angle variables.
Second IAA Conference on Dynamics and Control of
Space Systems, 2014.
Cid, R., Lahulla, J. F. Perturbaciones de corto periodo
en el movimento de un satelite artificial, en funcion de
las variables de Hill.
Publicaciones de la Revista de la
Academia de Ciencias de Zaragoza
, 24: 159–165.
Gurfil, P., Seidelmann, P. K. Celestial mechanics and
astrodynamics: theory and practice. Springer–Verlag,
Deprit, A. The elimination of the parallax in satellite
theory. Celestial Mechanics,1981, 24(2): 111–153.
Gurfil, P., Lara, M. Satellite onboard orbit propagation
using Deprit’s radial intermediary.
Celestial Mechanics
and Dynamical Astronomy,2014, 120(2): 217–232.
Lara, M. LEO intermediary propagation as a feasible
alternative to Brouwer’s gravity solution.
Advances in
Space Research,2015, 56(3): 367–376.
Blanes, S., Casas, F. A concise introduction to geometric
numerical integration. CRC Press, 2016.
Hairer, E., Lubich, C., Wanner, G. Geometric numerical
integration: algorithms for ordinary differential equations,
2nd ed. Springer, 2006.
Feng, K., Qin, M. Z. Symplectic geometric algorithms
for Hamiltonian systems. Springer, 2010.
Simo, J. C., Tarnow, N., Wong, K. K. Exact energy-
momentum conserving algorithms and symplectic
schemes for nonlinear dynamics.
Computer Methods in
Applied Mechanics and Engineering
, 100(1): 63–
Yoshida, H. Construction of higher order symplectic
Physics Letters A
, 150(5–7): 262–268.
Blanes, S., Casas, F., Farr´es, A., Laskar, J.,
Makazaga, J., Murua, A. New families of symplectic
splitting methods for numerical integration in dynamical
Applied Numerical Mathematics
, 68:
Farres, A., Laskar, J., Blanes S., Casas, F., Makazaga,
J., Murua, A. High precision symplectic integrators for
the Solar system.
Celestial Mechanics and Dynamical
Astronomy,2013, 116(2): 141–174.
Tsuda, Y., Scheeres, D. J. Computation and applications
of an orbital dynamics symplectic state transition matrix.
Advances in the Astronautical Sciences
, 134(4):
Imre, E., Palmer, P. L. High-precision, symplectic
numerical, relative orbit propagation.
Journal of
Guidance, Control, and Dynamics
, 30(4): 965–973.
Palacios, L., Gurfil, P. Variational and symplectic
integrators for satellite relative orbit propagation
including drag.
Celestial Mechanics and Dynamical
, 130(4): 31, DOI: 10.1007/s10569-
Lara, M., Gurfil, P. Integrable approximation of
J2-perturbed relative orbits.
Celestial Mechanics and
Dynamical Astronomy,2012, 114(3): 229–254.
Schaub, H., Junkins, J. L. Analytical mechanics of space
systems, 2nd ed. American Institute of Aeronautics and
Astronautics, Inc., 2009.
Alfriend, K., Vadali, S. R., Gurfil, P., How, J., Breger,
L. Spacecraft formation flying: dynamics, control and
navigation. Butterworth–Heinemann, 2009.
386 L. Palacios, P. Gurfil
Irigoyen, M., Sim´o, C. Non integrability of the
J2 problem.
Celestial Mechanics and Dynamical
Astronomy,1993, 55(3): 281–287.
Celletti, A., Negrini, P. Non-integrability of the problem
of motion around an oblate planet.
Celestial Mechanics
and Dynamical Astronomy,1995, 61(3): 253–260.
Hairer, E., Lubich, C., Wanner, G. Geometric numerical
integration illustrated by the St¯ormer–Verlet method.
Acta Numerica,2003, 12: 399–450.
Suzuki, M. Fractal decomposition of exponential
operators with applications to many-body theories and
Monte Carlo simulations.
Physics Letters A
146(6): 319–323.
Hairer, E., Nrsett, S. P., Wanner, G. Solving ordinary
differential equations I: nonstiff problems, 2nd ed.
Springer–Verlag, 1993.
Leonel M. Palacios
is a postdoctoral
research associate at the High Contrast
Imaging Laboratory of Princeton
University. He received his Ph.D.
degree in aerospace engineering from
the University of Glasgow, the United
Kingdom, in 2016. In May 2016, he joined
the Asher Space Research Institute of
the Technion–Israel Institute of Technology as a postdoctoral
fellow working for the Satellite Swarm Sensor Network
(S3NET), a project aimed for the development of the full
potential of “swarms” of satellites through the optimized
and enhanced use of their on-board resources. His areas of
expertise include astrodynamics and dynamics and control of
distributed space systems. E-mail: lmmoreno@princeton.edu.
Pini Gurfil
is a full professor of
aerospace engineering at the Technion–
Israel Institute of Technology, and
director of the Asher Space Research
Institute. He received his Ph.D. in
aerospace engineering from the Technion
in March 2000. From 2000 to 2003, he was
with the Department of Mechanical and
Aerospace Engineering, Princeton University. In September
2003, he joined the Faculty of Aerospace Engineering at the
Technion. Dr. Gurfil is the founder and director of the
Distributed Space Systems Laboratory, a research laboratory
aimed at development and validation of spacecraft formation
flying algorithms and technologies. He has been conducting
research in astrodynamics, distributed space systems, and
satellite dynamics and control. | {"url":"https://www.researchgate.net/publication/327961008_Symplectic_orbit_propagation_based_on_Deprit's_radial_intermediary","timestamp":"2024-11-03T23:40:06Z","content_type":"text/html","content_length":"721136","record_id":"<urn:uuid:b0f8f7c4-72c5-4949-a811-5eb1ede7c9f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00257.warc.gz"} |
Bootstrap:Data Science Pathway
Teaching Remotely?
If you’re teaching remotely, we’ve assembled an Implementation Notes page that makes specific recommendations for in-person v. remote instruction.
Ordering Student Workbooks?
While we give our workbooks away as a PDF (see below), we understand that printing them yourself can be expensive! You can purchase beautifully-bound copies of the student workbook from Lulu.com.
Click here to order.
We provide all of our materials free of charge, to anyone who is interested in using our lesson plans or student workbooks.
Lesson Plans
Students learn about Categorical and Quantitative data, are introduced to Tables by way of the Animals Dataset, and consider what questions can and cannot be answered with available data.
Students begin to program, explorings how Numbers, Strings, Booleans and operations on those data types work in this programming language.
Students learn how to apply Functions in the programming environment and interpret the information contained in Contracts: Name, Domain and Range. Image-producing functions provide an engaging
context for this exploration.
Students learn to generate and compare pie charts & bar charts, explore other plotting & display functions, and (optionally) design an infographic.
Students use displays to answer questions, focusing on which displays make sense for the data they are working with. They also learn how to extract individual rows from a table, and columns from
a row.
Students learn about table methods, which allow them to order, filter, and build columns to extend the animals table.
Students discover that they can make their own functions and are introduced to a structured approach to building them called the Design Recipe.
Students use the Design Recipe to define operations on tables, developing a structured approach to answering questions by transforming tables.
Students learn how to chain Methods together, and define more sophisticated subsets.
Image-scatter-plots explose deeper insight into subgroups within a population, motivating the need for more advanced analysis and adding if-expressions to students' programming toolkit.
Students learn about random samples and statistical inference, as applied to the Animals Dataset. In the process, students get a light introduction to the role of sample size and the importance
of statistical inference.
Students practice creating subsets and think about why it might sometimes be useful to answer questions about a dataset through the lens of specific subsets.
Students select a real world dataset to investigate for the remainder of the course. They begin their analysis by identifying categorical and quantitative columns, and defining a few random and
logical subsets.
Students are introduced to Histograms by comparing them to bar charts, learning to construct them by hand and in the programming environment.
Students explore the concept of "shape", using histograms to determine whether a dataset has skewness, and what the direction of the skewness means. They apply this knowledge to the Animals
Dataset, and then to their own.
Students are introduced to mean, median and mode(s) and consider which of these measures of center best describes various quantitative data.
Students are introduced to box plots, learn to evaluate the spread of a quantitative column, and deepen their perspective on shape by matching box plots to histogram.
Students consider the concept of trust and testing—how do we know if a particular analysis is trustworthy?
Students investigate scatter plots as a method of visualizing the relationship between two quantitative variables. In the programming environemt, points on the scatter plot can be labelled with a
third variable!
Students deepen their understanding of scatter plots, learning to describe and interpret direction and strength of linear relationships.
Students compute the “line of best fit” using the function for linear regression, and summarize linear relationships in a dataset.
Students consider ethical issues and privacy in the context of data science.
Students consider possible threats to the validity of their analysis.
This is a single page that contains all the lessons listed above.
Other Resources
Of course, there’s more to a curriculum than software and lesson plans! We also provide a number of resources to educators, including standards alignment, a complete student workbook, an answer key
for the programming exercises and a forum where they can ask questions and share ideas.
• Glossary—A list of vocabulary words used in this pathway.
• Standards Alignment—Find out how our materials align with Common Core Content and Practice Standards, as well as the TEK and CSTA Standards.
• Student Workbook—Sometimes, the best way for students to get real thinking done is to step away from the keyboard! Our lesson plans are tightly integrated with the Student Workbook, allowing
for paper-and-pencil practice and activities that don’t require a computer.
• Teacher-Only Resources—We also offer several teachers-only materials, including an answer key to the student workbook, a quick-start guide to making the final project, and pre- and post-tests
for teachers who are participating in our research study. For access to these materials, please fill out the password request form. We’ll get back to you soon with the necessary login
• Online Community (Discourse)—Want to be kept up-to-date about Bootstrap events, workshops, and curricular changes? Want to ask a question or pose a lesson idea for other Bootstrap teachers?
These forums are the place to do it.
These materials were developed partly through support of the National Science Foundation, (awards 1042210, 1535276, 1648684, and 1738598). Bootstrap:Data Science by the Bootstrap Community is
licensed under a Creative Commons 4.0 Unported License. This license does not grant permission to run training or professional development. Offering training or professional development with
materials substantially derived from Bootstrap must be approved in writing by a Bootstrap Director. Permissions beyond the scope of this license, such as to run training, may be available by
contacting contact@BootstrapWorld.org. | {"url":"https://bootstrapworld.org/materials/fall2021/en-us/courses/data-science/","timestamp":"2024-11-07T07:20:52Z","content_type":"text/html","content_length":"15586","record_id":"<urn:uuid:d52bcfd2-3f52-4014-a8ea-39a8837b2da3>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00415.warc.gz"} |
Found 52 statistics tutors in Baltimore, MD 21201 – page 2 of 6.
Ben D. · Masters in Engineering
Baltimore 21201 · 0 miles from downtown · $60/hour · teaches p Business, Statistics
BS Civil Engineering, University of Vermont. Gaduated Magna Cum Laude. MS Civil Engineering, Stanford University MBA Corporate Finance, Ohio State University I have been a private tutor since June
member for 10 years and 6 months
Ronald B.
Lutherville Timonium 21093 · 10 miles from 21201 · $75/hour · teaches Astronomy -
...Statistic, Algebra, Excel and Power Point all help me in obtaining my Ph.D. and in becoming my international role in Biotechnology. The two essential traits any tutor MUST have are 1) knowledge of
the subject...
Cornelia M.
Cockeysville 21030 · 14 miles from 21201 · $99/hour · teaches Trigonometry - Statistics
I am a professional tutor with over 15 years of experience and my favorite subjects are Algebra, Geometry, Precalc, Trig, College Algebra, Statistics, and Calculus! I have tutored all levels of
middle school math...
Tim S.
Towson 21204 · 8 miles from 21201 · $60/hour · teaches Algebra 1 - Algebra 2 - Calculus
Subjects that I have tutored extensively over the years are algebra, geometry, precalculus, calculus I and II, statistics, and discrete mathematics. I also am well-versed in the SAT, ACT, and GRE
mathematics sections...
Minyarn S.
Columbia 21045 · 13 miles from 21201 · $35/hour · teaches Microsoft Excel - Statistics
I have a strong foundation in statistical analysis, particularly using the R programming language, which I have utilized extensively throughout my coursework and projects. These experiences have
honed my analytical...
Nathan C.
Lutherville Timonium 21093 · 10 miles from 21201 · $30/hour · teaches Algebra 1 -
The University of Notre Dame, Applied Mathematics Hi I’m Nathan! I’m a current undergraduate student at the University of Notre Dame and am studying Applied and Computational Mathematics and
Daniel K.
Laurel 20723 · 17 miles from 21201 · $347/hour · teaches Trigonometry - Statistics -
My specialties include financial and managerial accounting, CPA exam prep (I’m a CPA), Microsoft Excel, statistics, and SAT Math prep. I scored an average of 93 across all four parts of the CPA exam
and have helped...
Bryant N.
Columbia 21044 · 15 miles from 21201 · $40/hour · teaches Precalculus - Statistics -
I also scored 1520 on the SAT (with a 790 in Math) and achieved top scores of 5 in AP Calculus BC, Statistics, Computer Science A, Physics, and both Macro and Microeconomics. Subjects I Specialize
In: - Math: Algebra...
Jona H. · Bachelor in Nursing
Baltimore 21206 · 6 miles from downtown · $30/hour · teaches Spelling, Statistics
Jona is a nursing major in Baltimore, MD. He tutors math and college subjects.
member for 1 year and 10 months
Lily G.
Glen Arm 21057 · 13 miles from 21201 · $100/hour · teaches Prealgebra - Statistics -
University of Maryland, Baltimore County, Math Sam Houston State University, Masters Welcome to my page! I'm Lily, or Ms. G-S, and I'm a Math Instructor at a Baltimore University, with experience | {"url":"https://www.tutorz.com/find/statistics-baltimore-md/2","timestamp":"2024-11-07T01:28:55Z","content_type":"text/html","content_length":"25155","record_id":"<urn:uuid:564834fb-d0dd-4aa3-a61a-53b64ead8694>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00461.warc.gz"} |
How Do You Write 1 Crore In Numbers? - Eco Canna Biz
Don’t purchase from some agent, purchase directly from builders/owners and that too only trusted builders. Buy a property that is both ready for posession or shall be prepared within a yr. Anything
longer is a risk as a outcome of delays can occur and you gained’t be round to deal with things. As per the Indian numbering system, one lakh is a unit which is the identical as one hundred,000 in
the worldwide unit system. 10 lakh in Indian forex makes 1,000,000 in International foreign money.
From the beneath record of unit divisions in both the numbering methods, you’ll have the ability to see the distinction. 1 crore is the same as 100 lakhs and ten million based on the western system.
It can be broken down into tens, tons tyler q. houlton of, 1000’s and ten 1000’s for easier understanding. Are you questioning how many zeros seem in 1 crore? There are 7 zeros in 1 crore and it can
be written as 1,00,00,000. Furthermore, according to the worldwide numbering system, 1 crore is equal to 10 million.
Numerically each the numbers each characterize the identical digit but in several place worth methods. The worldwide place value system is the numeral system used throughout the globe except for a
few international locations which use, e.g., the Indian place-value system. Crore is the numeric equivalent of 10 million within the International System and within the Indian system it’s also equal
to 100 lakh. It is used to discuss with massive sums of money in the Indian place-value system and in all of the nations that comply with this method. Is the least 6 digit number which is a perfect
sq.. This could be calculated by utilizing the following steps.
While all international locations use the precise term Crore, Sri Lankans normally use the word “Kodi” as a substitute of Crore. However, the meaning of both the phrases is similar. The crore is
known by numerous regional names. Comments, questions and every little thing else you might need about what number of zero in 1 crore are actually appreciated, and could be left within the designated
type on the backside.
All you want to do is to maintain reading with out hopping on to the conversion process. It helps to interrupt down large numbers into smaller denominations to grasp what number of zeros are there.
There are 7 zeros in a single crore and the number may also be written as 1,00,00,000.
A crore is the same as 100 lakhs and is expressed as 1,00,00,000. If you come across 1.5 lakhs, that simply merely means one hundred fifty,000. On the other hand, crore means ten million. In one
crore, there are seven zeros, which implies that it’s equal to 10 million. 10 crores in numbers are 10,00,00,000 and one crore is equivalent to 100 lakhs.
A crore (/krɔːr/; abbreviated cr), karod, karor, or koti denotes ten million and is equal to a hundred lakh in the South Asian numbering system. According to many books 2 the googol is doubtless
certainly one of the largest numbers ever named. The googolplex is 1 adopted by a googol zeros. More recently, Skewer’s quantity is the largest number ever used in a mathematical proof. Therefore, 1
billion is the identical as ten thousand lakhs .
But the number of zeros in both the Indian and the international number techniques is the same. So, one million within the Indian and worldwide number methods contains the primary, followed by six
zeros. Quantity converter rapidly alters between nine units of amount, together with dozen, score and gross. Yes, you’ll find a way to perform the conversion of 1,000,000 to crore each ways. | {"url":"https://ecocannabiz.com/how-do-you-write-1-crore-in-numbers/","timestamp":"2024-11-03T16:11:17Z","content_type":"text/html","content_length":"46523","record_id":"<urn:uuid:f8c26546-4d0e-48b7-bce2-65d16c814fcd>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00637.warc.gz"} |
Math Hopscotch Skip Counting Or Multiplication Facts 1 S
Math Hopscotch Skip Counting Or Multiplication Facts 1 s
Multiplication Games Finger Hopscotch Printable
Numbers Hopscotch ESL Worksheet By Angelitateacher
Math Hopscotch Skip Counting Or Multiplication Facts 1 S is a free printable for you. This printable was uploaded at December 31, 2021 by tamble in Color by Number.
Here is the Math Hopscotch Skip Counting Or Multiplication Facts 1 S from Hopscotch Numbers Printable that you can print for free. We do hope that this really helps all of you get what you are
looking for.
If you are looking for Hopscotch Numbers Printable, you are arriving at the correct site. 1000+ free
Math Hopscotch Skip Counting Or Multiplication Facts 1 S can be downloaded to your computer by right clicking the image. If you love this printable, do not forget to leave a comment down below. | {"url":"https://www.colorbynumberprintable.net/hopscotch-numbers-printable/math-hopscotch-skip-counting-or-multiplication-facts-1-s/","timestamp":"2024-11-05T09:03:50Z","content_type":"text/html","content_length":"49164","record_id":"<urn:uuid:f15b050f-c429-4ab8-b017-e29767e7e55a>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00528.warc.gz"} |
Applied Software - NCSA - National Center for Supercomputing Applications
Applied Software widely used for
supercomputer simulations and virtual experiments
Bioinformatics and Genomics
National Human Genome Research Institute and National Institute of Allergy and Infectious Diseases
Massively parallel DNA sequencing technologies are revolutionizing genomics by making it possible to generate billions of relatively short (∼100-base) sequence reads at very low cost.
LLPATHS-LG implements the algorithm for genome assembly and its application to massively parallel DNA sequence data from the human and mouse genomes, generated on the Illumina platform. The
resulting draft genome assemblies have good accuracy, short-range contiguity, long-range connectivity, and coverage of the genome. In particular, the base accuracy is high (≥99.95%) and the scaffold
sizes (N50 size = 11.5 Mb for human and 7.2 Mb for mouse) approach those obtained with capillary-based sequencing.
BLAST is Basic Local Alignment Search Tool is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA
A BLAST search enables a researcher to compare a query sequence with a library or database of sequences, and identify library sequences that resemble the query sequence above a certain threshold.
ClustalW2 is platform for multiple alignment of nucleic acid to a genomic DNA sequence and protein sequences, compares a sequences, allowing for introns and frame shifting errors .Parallel ClustalW2
software package to allow faster alignment of very large data sets and to increase alignment accuracy. This software calculates the best matches and aligns the sequences according to the identified
The number of nucleotides are compared to the genomic DNA sequence with the number of nucleotides in 10^8 .
BEAST 2 is a cross-platform program for Bayesian phylogenetic analysis of molecular sequences. It estimates rooted, time-measured phylogenies using strict or relaxed molecular clock models. It can be
used as a method of reconstructing phylogenies but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology. BEAST 2 uses Markov chain Monte Carlo (MCMC)
to average over tree space, so that each tree is weighted proportional to its posterior probability. BEAST 2 includes a graphical user-interface for setting up standard analyses and a suit of
programs for analysing the results.
BWA is a software package for mapping low-divergent sequences against a large reference genome, such as the human genome. It consists of three algorithms: BWA-backtrack, BWA-SW and BWA-MEM. The first
algorithm is designed for Illumina sequence reads up to 100bp, while the rest two for longer sequences ranged from 70bp to 1Mbp. BWA-MEM and BWA-SW share similar features such as long-read support
and split alignment, but BWA-MEM, which is the latest, is generally recommended for high-quality queries as it is faster and more accurate. BWA-MEM also has better performance than BWA-backtrack for
70-100bp Illumina reads..
Pacific Biosciences
Falcon a set of tools for fast aligning long reads for consensus and assembly. The Falcon tool kit is a set of simple code collection which I use for studying efficient assembly algorithm for haploid
and diploid genomes
6. FreeBayes 1.3.1
FreeBayes is a Bayesian genetic variant detector designed to find small polymorphisms, specifically SNPs (single-nucleotide polymorphisms), indels (insertions and deletions), MNPs (multi-nucleotide
polymorphisms), and complex events (composite insertion and substitution events) smaller than the length of a short-read sequencing alignment.
Large phylogenomics data sets require fast tree inference methods, especially for maximum-likelihood (ML) phylogenies. Fast programs exist, but due to inherent heuristics to find optimal trees, it is
not clear whether the best tree is found. Thus, there is need for additional approaches that employ different search strategies to find ML trees and that are at the same time as fast as currently
available ML programs.
IQ-TREE is an efficient phylogenomic inference software designed for the reconstruction of trees by maximum likelihood and assessing branch supports. This fast and effective stochastic algorithm is
widely used in molecular systematics and IQ-TREE has essentially greater results when compared to RAxML and PhyML in terms of likelihood.
IQ-TREE found higher likelihoods between 62.2% and 87.1% of the studied alignments, thus efficiently exploring the tree-space. If we use the IQ-TREE stopping rule, RAxML and PhyML are faster in
75.7% and 47.1% of the DNA alignments and 42.2% and 100% of the protein alignments, respectively. However, the range of obtaining higher likelihoods with IQ-TREE improves to 73.3–97.1%. IQ-TREE is
freely available at http://www.cibiv.at/software/iqtree.
University College London
A method capable of automatically identifying pore-lining regions in transmembrane proteins from sequence information alone, which can then be used to determine the pore stoichiometry. MEMSAT-SVM
provides a way to characterise pores in transmembrane proteins and may even provide a starting point for discovering novel routes of therapeutic intervention in a number of important diseases.
Ray is a parallel de novo genome assemblies for parallel DNA sequencing. It uses the message passing interface (MPI) for passing messages, while assembling reads obtained with new sequencing
technologies (Illumina, 454, SOLiD ) for next-generation sequencing data .
The NIH HPC group
Samtools is a suite of programs for interacting and post-processing with high-throughput sequencing data and short DNA sequence read alignments in SAM format. The SAM generic format is used for
storing large nucleotide sequence alignments. Also, the tools provided support complex tasks, such as variant calling and alignment viewing as well as sorting, indexing, data extraction .
Samtools is a suite of applications for processing high throughput sequencing data:
samtools is used for working with SAM ,BAM, and CRAM files containing aligned sequences; bcftools is used for working with BCF2, VCF, and gVCF files containing variant calls ; htslib is a library for
reading and writing the formats mentioned above. samtools and bcftools are based on htslib and format conversion.
Computational Chemistry
Pacific Northwest National Laboratory, US Department of Energy
NWChem is the DOE flagship quantum chemistry – molecular mrchanics code, which was designed from scratch to run on supercomputers.
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use
of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters
The NWChem development strategy is focused on providing new and essential scientific capabilities to its users in the areas of kinetics and dynamics of chemical transformations, chemistry at
interfaces and in the condensed phase, and enabling innovative and integrated research at EMSL.
NWChem software can handle:
- Biomolecules, nanostructures, and solid-state
- From quantum to classical, and all combinations
- Ground and excited-states
- Gaussian basis functions or plane-waves
-Ab-initio molecular dynamics (Carr-Parinello)
-In general: single-point calculations, geometry optimizations, vibrational analysis.
-Extended (solid-state) systems DFT
-Classical force-fields (Molecular Mechanics: AMBER, CHARMM, etc.)
Classical molecular dynamics capabilities provide for the simulation of macromolecules and solutions, including the computation of free energies using a variety of force fields.
NWChem is scalable, both in its ability to treat large problems efficiently, and in its utilization of available parallel computing resources. NWChem has been optimized to perform calculations on
large molecules using large parallel computers, and it is unique in this regard. This document is intended as an aid to chemists using the code for their own applications. Users are not expected to
have a detailed understanding of the code internals, but some familiarity with the overall structure of the code
12. Open Babel
Open Babel is a free, open-source version of the Babel chemistry file translation program. Open Babel is a project designed to pick up where Babel left off, as a cross-platform program and library
designed to interconvert between many file formats used in molecular modeling, computational chemistry, and many related areas.
DIRAC a relativistic ab initio electronic structure program,
Program for Atomic and Molecular Direct Iterative Relativistic All-electron Calculations
The DIRAC computes molecular properties using relativistic quantum chemical methods. It is named after P.A.M. Dirac, the father of relativistic electronic structure theory.
Molecular Dynamics, Molecular Mechanics and Molecular interactions
Centre of Excellence for Computational Biomolecular Research ( BioExcel)
GROMACS is an engine to perform molecular dynamics simulations and energy minimization. These are two of the many techniques that belong to the realm of computational chemistry and molecular
modeling. Molecular modeling indicates the general process of describing complex chemical systems in terms of a realistic atomic model, with the aim to understand and predict macroscopic properties
based on detailed knowledge on an atomic scale. Often molecular modeling is used to design new materials, drugs, nano structure and atomic clusters for which the accurate prediction of physical
properties of realistic systems is required
Macroscopic physical properties can be distinguished in (a) static equilibrium properties, such as the binding constant of an inhibitor to an enzyme, the average potential energy of a system, or the
radial distribution function in a liquid, and (b) dynamic or non-equilibrium properties, such as the viscosity of a liquid, diffusion processes in membranes, the dynamics of phase changes, reaction
kinetics, or the dynamics of defects in crystals. The choice of technique depends on the question asked and on the feasibility of the method to yield reliable results at the present state of the art.
15. NAMD
The Theoretical and Computational Biophysics Group at the University of Illinois
NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. Simulation preparation and analysis is integrated into the
visualization package VMD.
NAMD pioneered the use of hybrid spatial and force decomposition, a technique used by most scalable programs for biomolecular simulations, including Blue Matter.
NAMD is developed using Charm++ and benefits from its adaptive communication-computation overlap and dynamic load balancing. We describe some recent optimizations including: pencil decomposition of
the Particle Mesh Ewald method, reduction of memory footprint, and topology sensitive load balancing. Unlike most other MD programs, NAMD not only runs on a wide variety of platforms ranging from
commodity clusters to supercomputers, but also scales to over one hundred thousand processors thousands . NAMD was tested 1.07 billion-atom complex benchmark up to 64,000 processors .
16. LAMMPS
Sandia National Laboratories, Department of Energy,USA
LAMMPS is a classical molecular dynamics simulation code designed to run efficiently on parallel computers. It was developed at Sandia National Laboratories, a US Department of Energy facility, with
funding from the DOE.
In the most general sense, LAMMPS integrates Newton's equations of motion for collections of atoms, molecules, or macroscopic particles that interact via short − or long − range forces with a variety
of initial and/or boundary conditions.
On parallel machines, LAMMPS uses spatial decomposition techniques to partition the simulation domain into small 3d sub domains, one of which is assigned to each processor. LAMMPS is most efficient
(in a parallel sense) for systems whose particles fill a 3d rectangular box with roughly uniform density.
Kinds of systems LAMMPS can simulate :
-bead−spring polymers
-united−atom polymers or organic molecules
-all−atom polymers, organic molecules, proteins, DNA
-granular materials
-coarse−grained mesoscale models
-ellipsoidal particles
-point dipolar particles
-hybrid systems
17. DL_POLY
DL POLY was developed at Daresbury Laboratory, the Science & Technology Facilities Council UK, with support from the Engineering and Physical Sciences Research Council the Natural Environment
Research Council (NERC ).
DL_POLY is a general purpose classical molecular dynamics (MD) simulation software . The package is used to model the atomistic (coarse-grained or DPD) evolution of the full spectrum of models
commonly employed in the materials sciences as solid state chemistry, biological simulation and soft condensed-matter communities. Calculates molecular dynamics and solves problems of molecular
mechanics for very large molecules containing up to 10^9 atoms.
DL POLY is a fully data distributed code, employing methodologies such as spatial domain decomposition (DD), link-cells (LC) built Verlet neighbour . The particle density of the modelled systems
close to uniform in space and time (ensuring load balancing)
18. GENESIS MD
GENESIS (GENeralized-Ensemble SImulation System) mainly developed by the Sugita groups in RIKEN, Japan. (Computational Biophysics Research Team (R-CCS), Theoretical Molecular Science Laboratory
(CPR), and Laboratory for biomolecular function simulation (BDR))
The GENESIS program package is composed of two MD programs (ATDYN, SPDYN) and trajectory analysis tools:
-CHARMM force field, AMBER force field, MARTINI model, and Go models ;
-Energy minimization and molecular dynamics simulations.
-SHAKE/RATTLE, SETTLE, and LINCS algorithms for bond constraint Bussi,
Langevin, and Berendsen thermostat/barostat ;
-Replica-exchange molecular dynamics method (REMD) in temperature, pressure, and surface-tension space;
-Generalized replica-exchange with solute tempering (gREST) and replica-exchange umbrella sampling (REUS) with collective variables ;
-Multi-dimensional REMD methods ;
-Gaussian accelerated molecular dynamics method.
-String method for reaction pathway search;
-Hybrid quantum mechanics/molecular mechanics calculation ( QM/MM)
-Implicit solvent model (Generalized Born/Solvent Accessible Surface Area model);
-Free-energy perturbation method (FEP);
-An harmonic vibrational analysis using SINDO;
-Steered MD and Targeted MD simulations;
-Restrained MD simulations (Distance, angle, dihedral angle, position, etc);
-Hybrid MPI+OpenMP, hybrid CPU+GPGPU, mixed double+single precision calculations
-Scalable MD simulations for huge systems > 100,000,000 atoms
-Spatial decomposition analysis (SPANA)
19. ls1 mardyn:
The molecular dynamics (MD) simulation program ls1 mardyn mainly developed by High Performance Computing Center Stuttgart; Leibniz Supercomputing Centre, Scientific Computing in Computer Science;
University of Kaiserslautern, Laboratory of Engineering Thermodynamics
ls1 mardyn was optimized for massively parallel execution on supercomputing architectures. With an efficient MD simulation engine, explicit particle-based force-field models of the intermolecular
interactions can be applied to length and time scales which were previously out of scope for molecular methods. Employing a dynamic load balancing scheme for an adaptable volume decomposition, ls1
mardyn delivers a high performance even for challenging heterogeneous configurations.
The program is an interdisciplinary endeavor, whose contributors have backgrounds from engineering, computer science and physics, aiming at studying challenging scenarios with up to trillions of
molecules. In the considered systems, the spatial distribution of the molecules may be heterogeneous and subject to rapid unpredictable change. This is reflected by the algorithms and data structures
as well as a highly modular software engineering approach.
It is more specialized than most of the molecular simulation programs mentioned above. In particular, it is restricted to rigid molecules, and only constant volume ensembles
are supported, so that the pressure cannot be specified in advance. Electrostatic long-range
interactions, beyond the cut-off radius, are considered by the reaction field method, which
cannot be applied to systems containing ions.
However, ls1 mardyn is highly performant and scalable. Holding the present world record in simulated system size, it is furthermore characterized by a modular structure, facilitating a high degree of
flexibility within a single code base. Thus, ls1 mardyn is not only a simulation engine, but also a framework for developing and evaluating simulation algorithms, e.g. different thermostats or
parallelization schemes;
Quantum Chemistry
20. CPMD (Car-Parrinello Molecular Dynamics).
Physical Chemistry Institute of the Zurich University; Max Planck Institute, Stuttgart ; IBM Research Laboratory, Zurich and Centre of Excellence for Computational Biomolecular Research (Bioexcel).
The CPMD code is a parallelized plane wave/pseudopotential implementation of Density Functional Theory, particularly designed for ab-initio molecular dynamics. CPMD is currently the most HPC
efficient code that allows performing quantum molecular dynamics simulations by using the Car-Parrinello molecular dynamics scheme. CPMD simulations are usually restricted to systems of few hundred
The main characteristics of the CPMD code include:
• works with norm conserving or ultrasoft pseudopotentials
• free energy density functional implementation
• isolated systems and system with periodic boundary conditions; k-points
• molecular and crystal symmetry
• wavefunction optimization: direct minimization and diagonalization
• geometry optimization: local optimization and simulated annealing
• molecular dynamics: constant energy, constant temperature and constant pressure
• - molecular dynamics: NVE, NVT, NPT ensembles.
• response functions and many electronic structure properties
• time -dependent Density Functional Theory (excitations, molecular dynamics in excited states)
• Hybrid quantum mechanical / molecular mechanics calculations (QM/MM
21. CP2K
The CP2K Foundation; University of Zurich, Computational Chemistry ; Paul Scherrer Institute
CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. It
is especially aimed at massively parallel and linear-scaling electronic structure methods and state-of-the-art ab initio molecular dynamics simulations. Excellent performance for electronic structure
calculations is achieved using novel algorithms implemented for modern high-performance computing systems.
CP2K is a suite of modules, collecting a variety of molecular simulation methods at different levels of accuracy, from ab-initio density functional theory (DFT) to classical Hamiltonians, passing
through semi-empirical neglect of diatomic differential overlap(NDDO) approximation It is used routinely for predicting energies, molecular structures, vibrational frequencies of molecular systems,
reaction mechanisms, and ideally suited for performing molecular dynamics studies.
CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization
using NEB or dimer method.
22.Quantum Espresso
Quantum ESPRESSO Foundation, Centre of Excellence MaX (MAterials design at the eXascale)
Quantum Espresso is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and
pseudo potentials. The core plan wave DFT functions of QE are provided by the Plane-Wave Self-Consistent Field (Wscf) component.
Quantum Espresso can currently perform the following kinds of calculations:
• ground-state energy and one-electron (Kohn-Sham) orbitals
• atomic forces, stresses, and structural optimization
• molecular dynamics on the ground-state Born-Oppenheimer surface, also with variable cell
• Nudged Elastic Band (NEB) and Fourier String Method Dynamics (SMD) for energy barriers and reaction paths
• macroscopic polarization and finite electric fields via the modern theory
of polarization (Berry Phases).
23. General Atomic and Molecular Electronic Structure System ( GAMES)
Mark Gordon's Quantum Theory Group, Ames Laboratory, Iowa State University
GAMESS is a program for ab-initio molecular quantum chemistry. Briefly, GAMESS can compute SCF wave functions ranging from RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wave
functions include Configuration Interaction, second order perturbation Theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Excited states can be computed
by CI, EOM, or TD-DFT procedures.
Supercomputing atomistic simulation tools to understand structure-property relation of nanomaterial
23. FLEUR
FLEUR is mainly developed at the Forschungszentrum Jülich at the Institute of Advanced Simulation and the Peter Grünberg Institut.
Full-potential Linearised augmented plane wave in EURope (FLEUR) is a code family for calculating groundstate as well as excited-state properties of solids within the context of density functional
theory (DFT)..
The Fleur code implements the all-electron full-potential linearized augmented-plane-wave (FLAPW) approach to density functional theory (DFT). It allows the calculation of properties obtainable by
DFT for crystals and thin films composed of arbitrary chemical elements. For this it treats all electrons on the basis of DFT and does not rely on the pseudopotential approximation. There are also no
shape approximations to the potential required. However, this comes at the cost of complex parametrizations of the calculations. The Fleur approach to this complex parametrization is the usage of an
input generator that itself only requires basic structural input. Using this it generates a completely parametrized Fleur input file with material adapted default parameters.
24. BigDFT
European Centre of Excellence MAX
BigDFT is an electronic structure pseudopotential code that employs Daubechies wavelets as a computational basis, designed for usage on massively parallel architectures. It features high-precision
cubic-scaling DFT functionalities enabling treatment of molecular, slab-like as well as extended systems, and efficiently supports hardware accelerators such as GPUs since 2009. Also, it features a
linear-scaling algorithm that employs adaptive Support Functions (generalized Wannier orbitals) enabling the treatment of system of many thousand atoms. The code is developed and released as a
software suite made of independent, interoperable components, some of which have already been linked and distributed in other DFT codes.
BigDFT is fast, precise, and flexible code for ab-initio atomistic simulation.
Virtual Drug Discovery
Force field and Free Energy Cakculations
25. 25. AMBER
The term "Amber" refers to two things.
First, it is a set of molecular mechanical force fields.
Amber is designed to work with several simple types of force fields, although it is most commonly used with parametrizations developed by Peter Kollman and his co-workers and “descendents”. The
traditional parametrization uses fixed partial charges, centered on atoms. Less commonly used modifications add polarizable dipoles to atoms, so that the charge description depends upon the
environment; such potentials are called “polarizable” or “non-additive”. An alternative is to use force fields originally developed for the CHARMM or Tinker (AMOEBA) codes
Since various choices make good sense, as of Amber 16 we have implemented a new scheme for users to specify the force fields they wish to use.
Depending on what components are in your system, you may need to specify for the simulation of biomolecules (these force fields are in the public domain, and are used in a variety of simulation
• a protein force field
• a DNA force field
• an RNA force field
• a carbohydrate force field
• a lipid force field
• a water model with associated atomic ions (more variable, but the most common choice is still tip3p);
• a general force field, for organic molecules like ligands
• other components (such as modified amino acids or nucleotides, other ions)
Second, AmberTool21 molecular simulation programs
AmberTools 21consists of several independently developed packages that work well by themselves, and with Amber20 itself. The suite can also be used to carry out complete molecular dynamics
simulations, with either explicit water or generalized Born solvent models.
The AmberTools suite is free of charge, and its components are mostly released under the GNU General Public License (GPL). A few components are included that are in the public domain or which have
other, open-source, licenses. The sander program has the LGPL license.
26. 26. CHARMM
Chemistry at HARvard Macromolecular Mechanics (CHARMM)
CHARMM is actively maintained by a large group of developers led by Martin Karplus.
A molecular simulation program with broad application to many-particle systems with a comprehensive set of energy functions, a variety of enhanced sampling methods, and support for multi-scale
techniques including Quantum Mechanics/Molecular Mechanics Quantum( QM/MM), hybrid molecular mechanics/coarse-grained (MM/CG) simulation, and a range of implicit solvent models. .
• CHARMM primarily targets biological systems including peptides, proteins, prosthetic
groups, small molecule ligands, nucleic acids, lipids, and carbohydrates, as they occur in solution, crystals, and membrane environments. CHARMM also finds broad applications for inorganic materials
with applications in materials design.
• CHARMM contains a comprehensive set of analysis and model builiding tools.
• CHARMM achieves high performance on a variety of platforms including parallel clusters and GPUs.
• Coordinate manipulation and analysis | corman
• Energy commands | energy
• Non-bonded options | nbonds
• Minimization | minimiz
• Molecular dynamics | dynamc
• Constraints and restraints | cons
• Time series and correlation functions | correl
• Atom selections | select
Modules in alphabetical order
Massively parallel ligand screening
27. 27. VirtualFlow
Department of Biological Chemistry and Molecular Pharmacology, Harvard Medical School, Harvard University, Boston,USA; Department of Physics, Faculty of Arts and Sciences,
Harvard University, Boston; Department of Cancer Biology, Dana-Farber Cancer,Boston,USA; Institute;Department of Pharmacy, Pharmaceutical and Medicinal Chemistry, Saarland University, Saarbrücken,
Germany; Enamine, National Taras Shevchenko University of Kyiv, Kyiv, Ukraine; Zuse Institute Berlin, Berlin, Germany; Institute of Mathematics, Technical University Berlin, Berlin, Germany;
Department of Mathematics and Computer Science, Freie Universität Berlin, Berlin, Germany;
On average, an approved drug currently costs US$2–3 billion and takes more than
10 years to develop. In part, this is due to expensive and time-consuming Wet-laboratory experiments, poor initial hit compounds and the high attrition rates in the (pre-) clinical phases.
Structure-based virtual screening has the potential to mitigate these problems. With structure-based virtual screening, the quality of the hits improves with the number of compounds screened.
However, despite the fact that large databases of compounds exist, the ability to carry out large-scale structure-based virtual screening on supercomputers in an accessible, efficient and flexible
manner has remained difficult.
Virtual Flow (VF) is is highly automated and versatile open-source drug discovery platform with perfect scaling behavior that is able to prepare and efficiently screen ultra-large libraries of
VF is able to use a variety of the most powerful docking programs. Using VF, we prepared one of the largest and freely available ready-to-dock ligand libraries, with more than 1.4 billion
commercially available molecules.
Screening 1 billion compounds on a single processor core, with an average docking time of 15 s per ligand, would take approximately 475 years. Virtual Flow can dock 1 billion compounds in
Approximately 2 weeks when leveraging 100,000 CPU cores simultaneously.
VFLP prepares ligand databases by converting them from the SMILES format to any desired target format (for example, the PDBQT format, which is required by many of the AutoDock-based docking
VFLP uses the JChem package of ChemAxon as well as Open Babel to desalt ligands, neutralize them, generate (one or multiple) tautomeric states, compute protonation states at specific pH values,
calculate three-dimensional coordinates and convert the molecules into desired
target formats .
Preparation of the Enamine REAL library
One of the largest vendor libraries that is currently available is the REAL library of Enamine, which contains approximately 1.4 billion make-on-demand compounds.
The ZINC 15 database contained 1.46 billion compounds, but only provided 630 million molecules in a ready-to-dock format.
The entire database has a six-dimensional lattice architecture, the general concept of which was modelled after the ZINC 15 database, in which each dimension corresponds to a physico-chemical
property of the compounds (molecular mass, partition coefficient, number of
hydrogen bond donors and acceptors, number of rotatable bonds and the topological polar surface area).
28. EXSCALATE
E4C is a public-private consortium supported by the European Commission’s. The E4C consortium, coordinated by Dompé Farmaceutici , is composed by 18 institutions from seven European countries.
EXSCALATE a Ultra High Performance Virtual Screening Platform for computer aided drug design (CADD), based on LiGen, an exascale software able to screen billion of compounds in a very short time, and
a library of trillion of compounds.
Since 2010, Dompé SpA, , has invested in a proprietary software for computer aided drug design (CADD), through its dedicated Drug Discovery Platform. The most relevant tool is the de novo
structure-based virtual screening software LiGen(Ligand Generator), co-designed in collaboration with the Italian super computer center, CINECA. The distinguishing feature of LiGen is that it has
been designed and developed to run on High Performance Computing (HPC) architectures. To maintain the performance primate beyond 2020,
LiGen is tools for molecular de novo design are actively sought incorporating sets of chemical rules for fast and efficient identification of structurally new chemotypes endowed with a desired set of
biological properties. He is tools for molecular de novo design are actively sought incorporating sets of chemical rules for fast and efficient identification of structurally new chemotypes endowed
with a desired set of biological properties. In its standard application, LiGen modules are used to define input constraints, either structure-based, through active site identification, or
ligand-based, through pharmacophore definition, to docking and to de novo generation. Alternatively, individual modules can be combined in a user-defined manner to generate project-centric workflows.
Specific features of LiGen are the use of a pharmacophore-based docking procedure which allows flexible docking without conformer enumeration and accurate and flexible reactant mapping coupled with
reactant tagging through substructure searching.
Biological Target – Ligand Docking
29. High Ambiguity Driven protein-protein DOCKing (HADDOCK)
Centre of Excellence for Computational Biomolecular Research (Bioexcel).
HADDOCK is a versatile information-driven flexible docking approach for the modelling of biomolecular complexes. HADDOCK distinguishes itself from ab-initio docking methods in the fact that it can
integrate information derived from biochemical, biophysical or bioinformatics methods to enhance sampling, scoring, or both. The information that can be integrated is quite diverse: interface
restraints from NMR or MS, mutagenesis experiments, or bioinformatics predictions; various orientational restraints from NMR and, recently, cryo-electron maps.
30.DOCK 6.9
Department of Pharmaceutical Chemistry , University of California, San Francisco .USA
DOCK 6.9 algorithm addressed rigid body docking using a geometric matching algorithm to superimpose the ligand onto a negative image of the binding pocket. Important features that improved the
algorithm's ability to find the lowest-energy binding mode, including force-field based scoring, on-the-fly optimization, an improved matching algorithm for rigid body docking and an algorithm for
flexible ligand docking.
DOCK 6 include Delphi electrostatics, ligand conformational entropy corrections, ligand desolvation, receptor desolvation; Hawkins-Cramer-Truhlar GB/SA solvation scoring with optional salt
screening; PB/SA solvation scoring; and AMBER scoring-including receptor flexibility, the full AMBER molecular mechanics scoring function with implicit solvent, conjugate gradient minimization, and
molecular dynamics simulation capabilities.
31. VinaL.C.
Lawrence Livermore National Laboratory, Department of Energy, USA
A mixed parallel scheme that combines message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was
tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. Parallel performance analysis of the VinaLC program shows that the code scales up to more
than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated
against the DUD data set by the re‐docking of X‐ray ligands and an enrichment study, 64.4% of the top scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good
enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good
early recovery of actives.
32. SwissDOCK
Swiss Institute of Bioinformatics
SwissDock is based on the docking software EADock DSS, whose algorithm consists of the following steps:
• many binding modes are generated either in a box (local docking) or in the vicinity of all target cavities (blind docking);
• simultaneously, their CHARMM energies are estimated on a grid;
• the binding modes with the most favorable energies are evaluated with FACTS, and clustered;
• the most favorable clusters can be visualized online and downloaded on your computer.
SwissDock, a web service to predict the molecular interactions that may occur between a target protein and a small molecule.
S3DB, a database of manually curated target and ligand structures, inspired by the Ligand-Protein Database.
3D Protein Structure Prediction
33. High - Resolution Protein Structure Prediction Codes (ROSETTA 3)
Hughes Institute, University of Washington.
ROSETTA is a library based object-oriented software suite which provides a robust system for predicting and designing protein structures, protein folding mechanisms, and protein-protein interactions.
The Rosetta3 codes have been successful in the Critical Assessment of Techniques for Protein Structure Prediction (CASP7) competitions.
The Roseta3 method uses a two-phase Monte Carlo algorithm to sample the extremely large space of possible structures in order to find the most favorable one. The first phase generates a
low-resolution model of the protein backbone atoms while approximating the side chains with a single dummy atom. The high-resolution phase then uses a more realistic model of the full protein, along
with the corresponding interactions, to find the best candidate for the native structure.
The library contains the various tools , such as Atom, ResidueType, Residue, Conformation, Pose, ScoreFunction, ScoreType, and so forth. These components provide the data and services Rosetta uses to
carry out its computations.
Rosetta Functionality Summary
-Rosetta Abinitio : Performs de novo protein structure prediction
-Rosetta Design :Low free energy sequences for target protein backbones.
-Rosetta Design pymol plugin: A user-friendly interface for submitting Protein Design simulations using Rosetta Design.
-Rosetta Dock : Predicts the structure of a protein-protein complex from the individual structures of the monomer components.
-Rosetta Antibody : Predicts antibody Fv region structures and performs antibody-antigen docking.
-Rosetta Fragments Generates : Fragment libraries for use by Rosetta ab initio in building protein structures.
Rosetta NMR : Incorporates NMR data into the basic Rosetta protocol to accelerate the process of NMR structure prediction
Rosetta DNA :For the design of proteins that interact with specified DNA sequences.
Rosetta RNA:Fragment assembly of RNA.
Rosetta Ligand : For small molecule - protein docking
Seismic Wave Impact Simulation
34 .SPECFEM3D - seismic wave propagation
California Institute of Technology .USA; University of Pau, France .
SPECFEM3D is Computational Infrastructure for Geodynamics. Unstructured hexahedral mesh generation is a critical part of the modeling process in the Spectral-Element Method (SEM). We present some
examples of seismic wave propagation in complex geological models, automatically meshed on a parallel machine based upon CUBIT (Sandia Laboratory), an advanced 3D unstructured hexahedral mesh
generator that offers new opportunities for seismologist to design, assess, and improve the quality of a mesh in terms of both geometrical and numerical accuracy. The main goal is to provide useful
tools for understanding seismic phenomena due to surface topography and subsurface structures such as low wave-speed sedimentary basins.
SeisSol is a software package for simulating wave propagation and dynamic rupture based on the arbitrary high-order accurate derivative discontinuous Galerkin method (ADER-DG).
Computational earthquake dynamics is emerging as a key component in physics-based approaches to strong motion prediction for seismic hazard assessment and in physically constrained inversion
approaches to earthquake source imaging from seismological and geodetic observations. Typical applications in both areas require the ability to deal with rupture surfaces of complicated, realistic
geometries with high computational efficiency. In our implementation, tetrahedral elements are used which allows for a better fit of the geometrical constraints of the problem, i.e., the fault shape,
and for an easy control of the variation of element sizes using smooth refining and coarsening strategies.
Characteristics of the SeisSol simulation software are:
-use of tetrahedral meshes
- to approximate complex 3D model geometries and rapid model generation
-use of elastic, viscoelastic and viscoplastic material to approximate realistic geological subsurface properties.
- use of arbitrarily high approximation order in time and space to produce reliable and sufficiently accurate synthetic seismograms or other seismological data sets.
Computational Fluid Dynamics
36. Code Saturne
Électricité de France
Code_Saturne is the free, open-source software developed and released by EDF to solve computational fluid dynamics (CFD) applications.
It solves the Navier-Stokes equations for 2D, 2D-axisymmetric and 3D flows, steady or unsteady, laminar or turbulent, incompressible or weakly dilatable, isothermal or not, with scalars transport if
Several turbulence models are available, from Reynolds-Averaged models to Large-Eddy Simulation models..
Physical modelling
• Laminar and turbulent flows;
• Compressible flow
• Radiative heat transfer;
• Conjugate heat transfer ;
• Combustion coal, fuel, gas ;
• Electric arc and Joule effect
• Lagrangian module for dispersed particle tracking;
• ALE method for deformable meshes
• Specific engineering modules for nuclear waste surface storage and cooling towers
• Derived version for atmospheric flows ;
• Derived version for eulerian multiphase flows
• Lagrangian method - stochastic modelling with 2-way coupling (momentum, heat, mass)
Transport and deposit of droplets, ashes, coal, corrosion products, radioactive particles, chemical forces.
• Gas combustion
• Coal combustion
The OpenFOAM Foundation
OpenFOAM(Open source Field Operation And Manipulation") is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum
mechanics problems, including computational fluid dynamics (CFD).It has a large user base across most areas of engineering and science, from both commercial and academic organizations. OpenFOAM has
an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
38. Delft3D
Delft3D Open Source Community
The Delft3D Flexible Mesh Suite (Delft3D FM) allows you to simulate the interaction of water, sediment, ecology, and water quality in time and space. The suite is mostly used for the modelling of
natural environments like coastal, estuarine, lakes and river areas, but it is equally suitable for more artificial environments like harbours, locks, urban areas, etc. Delft3D FM consists of a
number of well-tested and validated modules, which are linked to and integrated with each other.
Delft3D is a integrated modelling suite, which simulates two-dimensional (in either the horizontal or a vertical plane) and three-dimensional flow, sediment transport and morphology, waves, water
quality and ecology and is capable of handling the interactions between these processes. The suite is designed for use by domain experts and non-experts alike, which may range from consultants and
engineers or contractors, to regulators and government officials, all of whom are active in one or more of the stages of the design, implementation and management cycle.
As a second option we tried to use a communication-hiding conjugate gradient method, PETSc’s linear solver KSPPIPECG, to solve the linear system arising from the spatial discretisation, but we were
not able to get any performance . Currently the full source code is available of the Delft3D-FLOW (including morphology), Delft3D-WAVE, DELWAQ (D-Water Quality and D-Ecology) and PART (D-Particle
Tracking) engines under GPLv3 conditions..
Massively parallel Large Eddy Simulations (LES) and Direct Numerical Simulation(DES) for the study of complex flows
Institute for Advanced Simulation (IAS)
Jülich Supercomputing Centre (JSC)
Compressible/Incompressible Advanced reactive turbulent simulations with Overset.
CIAO performs Direct Numerical Simulations (DNS) as well as Large-Eddy Simulations (LES) of the Navier-Stokes equations along with multiphysics effects (multiphase, combustion, soot, spark). It is a
structured, finite difference code, which enables the coupling of multiple domains and their simultaneous computation. Moving meshes are supported and overset meshes can be used for local mesh
refinement. A fully compressible as well as an incompressible/low-Mach solver are available within the code framework. Spatial and temporal staggering of flow variables are used in order to increase
the accuracy of stencils. The sub-filter model for the momentum equations is an eddy viscosity concept in form of the dynamic Smagorinsky model with Lagrangian averaging along fluid particle
trajectories. While the fully compressible solver uses equation of states or tabulated fluid properties, a transport equation for internal/total energy, and a low-storage five-stage, explicit
Runge-Kutta method for time integration, the incompressible/low-Mach solver uses Crank-Nicolson time advancement and an iterative predictor corrector scheme. The resulting Poisson equation for
pressure is solved by HYPRE’s multi-grid solver.
40. Alya
Barcelona Supercomputing Centre
Alya is a high performance computational mechanics code to solve complex coupled multi-physics / multi-scale / multi-domain problems, which are mostly coming from the engineering realm.
Among the different physics solved by Alya we can mention: incompressible/compressible flows, non-linear solid mechanics, chemistry, particle transport, heat transfer, turbulence modeling, electrical
propagation, etc.
From scratch, Alya was specially designed for massively parallel supercomputers, and the parallelization embraces four levels of the computer hierarchy :
• A substructuring technique with MPI as the message passing library is used for
distributed memory supercomputers.
• At the node level, both loop and task parallelisms are considered using OpenMP as an
alternative to MPI. Dynamic load balance techniques have been introduced as well to better exploit computational resources at the node level.
• At the CPU level, some kernels are also designed to enable vectorization.
• Finally, accelerators like GPU are also exploited through OpenACC pragmas or with
CUDA to further enhance the performance of the code on heterogeneous computers.
Multiphysics coupling is achieved following a multi-code strategy, relating different instances of Alya. MPI is used to communicate between the different instances, where each instance solves a
particular physics. This powerful technique enables asynchronous execution of the different physics.
41. AVBP
CERFACS (Centre de recherche fondamentale et appliquée spécialisé dans la modélisation et la simulation numériques, également centre de formation avancée).
AVBP is a LES (Large Eddy Simulation) code dedicated to unsteady compressible flows in complex geometries with combustion or without combustion. It is applied to combustion chambers, turbo machinery,
safety analysis, optimization of combustors, pollutant formation (CO, NO, soot), UQ analysis. AVBP uses a high-order Taylor Galerkin scheme on hybrid meshes for multi species perfect of real gases.
Its spatial accuracy on unstructured hybrid meshes is 3 (4 on regular meshes). The AVBP formulation is fully compressible and allows to investigate compressible combustion problems such as
thermoacoustic instabilities (where acoustics are important) or detonation engines (where combustion and shock must be computed simultaneously).
AVBP is a world standard for LES of combustion in engines and gas turbines, owned by CERFACS and IFP Energies Nouvelles. It is used by multiple laboratories (IMFT in Toulouse, EM2C in
Centralesupelec, TU Munich, Von Karmann Institute, ETH Zurich, etc) and companies (SAFRAN AIRCRAFT ENGINES, SAFRAN HELICOPTER ENGINES, ARIANEGROUP, HERAKLES, etc).
AVBP is also used today to compute turbomachinery (compressors and turbines) and to compute full engine configurations. Being able to compute simultaneously the compressor and the chamber of the
chamber and the turbine or all three is now possible with AVBP. This is critical for multiple problems such as new propulsion concepts (such as Rotating Detonation Engines) or to study coupled
phenomena such as the noise emitted from a gas turbine.
AVBP has always been at the forefront of HPC research at CERFACS: its efficiency has been verified up to 250 000 cores with grids of 2 to 4 billion cells.
42. YALES2
CORIA lab : Joint lab from CNRS, INSA and University of Rouen
YALES2 aims at the solving of two-phase combustion from primary atomization to pollutant prediction on massive complex meshes. It is able to handle efficiently unstructured meshes with several
billions of elements, thus enabling the Direct Numerical Simulation of laboratory and semi-industrial configurations.
YALES2 is based on a large numerical library to handle partitioned meshes, various differential operators or linear solvers, and on a series of simple or more complex solvers:
-Scalar solver (SCS)
-Level set solver (LSS)
- Incompressible solver (ICS)
- Variable density solver (VDS)
-Spray solver (SPS = ICS + LSS + Ghost-Fluid Method)
-Lagrangian solver (LGS)
-Compressible solver (ECS)
-Magneto-hydrodynamic solver (MHD)
-Mesh movement solver (MMS)
-Radiative solver (RDS)
-Linear acoustics solver (ACS)
-Heat transfers solver (HTS)
- Immersed boundary solver (IBS)
-Granular Flow solver (GFS)
43. Nek5000
Argonne National Laboratory and Swiss Federal Institute of Technology, Zurich.
Simulation code Nek5000 sheds light on the turbulent flow fields of internal combustion engines, nuclear reactors, airplane wings, and more. The open-source software, which has evolved for over 30
years, features scalable algorithms that are fast and efficient on platforms ranging from laptops to the world’s fastest computers
• Incompressible and low Mach-number Navier-Stokes
• Spectral element disrectization
• Runs on all POSIX compliant operating systems
• Proven scalability to over a million ranks using pure MPI for parallelization
• Easy-to-build with minimal dependencies
• High-order conformal curved quadrilateral/hexahedral meshes
• Semi-implicit 2nd/3rd order adaptive timestepping
• Conjugate fluid-solid heat transfer
• Efficient preconditioners
• Parallel I/O
• Lagrangian phase model
• Moving and deforming meshes
• Overlapping overset grids
• Basic meshing tools including converters
• LES and RANS(Reynolds-Averaged Navier-Stokes ) turbulence models
• VisIt and Paraview support for data analysis and visualization
44. PRECISE_UNS
Rolls-Royce and Institute Energy and Power Plant Technology (EKT) of Darmstadt University
Numerical Modeling Methods for Prediction of Ignition Processes in Aero-Engines
Code PRECISE-UNS (Predictive-System for Real Engine Combustors - Unstructured) is a finite volume based unstructured CFD solver for turbulent multi-phase and reacting flows. It is a pressure-based
code, which uses the pressure correction scheme / PISO scheme to achieve pressure velocity coupling. It is applicable to both low-Mach number and fully compressible flows. Discretisation in time and
space is up to second order. The linearized equations are solved using various well-known libraries such as PETSc, HYPRE and AGMG. Several turbulence models are available: k-epsilon, k-ω-SST, RSM,
SAS, LES. Different combustion models are available, ranging from the classical conserved scalar (flamelet) models and global reaction mechanism, to FGM and detailed chemistry ,
PRECISE-UNS is built on Dolfyn, an open-source code written in Fortran. All
investigations documented in this work have been performed using PRECISE-UNS.
Finite Element Computer Simulation
45.SALOME Library
Électricité de France ; Le Commissariat à l’énergie atomique et aux énergies alternatives (CEA)
SALOME platform is an open software framework for integration of numerical solvers in various physical domains. The CEA and EDF use SALOME to realize a wide range of simulations, which typically
concern industrial equipment in nuclear production plants. Among primary concerns are the design of new-generation reactor types, nuclear fuel management and transport, material ageing for equipment
life-cycle management, and the reliability and safety of nuclear installations. To satisfy these challenges, SALOME integrates a CAD/CAE modeling tool, industrial meshing algorithms, and advanced 3D
visualization functionalities. SALOME is a generic platform for numerical simulation with the following aims
• Facilitate interoperation between CAD modelling and computing codes
• Facilitate implementation of coupling between computing codes in a distributed environment
• Provide a generic user interface
• Pool production of developments (pre and post processors, calculation distribution and supervision) in the field of numerical simulation.
46. Weather Research and Forecasting Model
National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (represented by the National Centers for Environmental Prediction (NCEP) and the Earth System
Research Laboratory), the U.S. Air Force, the Naval Research Laboratory, the University of Oklahoma, and the Federal Aviation Administration (FAA).
The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It
features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility. The model serves a wide range of meteorological
applications across scales from tens of meters to thousands of kilometers.
For researchers, WRF can produce simulations based on actual atmospheric conditions (i.e., from observations and analyses) or idealized conditions. WRF offers operational forecasting a flexible and
computationally-efficient platform, while reflecting recent advances in physics, numerics, and data assimilation contributed by developers from the expansive research community.
The WRF system contains two dynamical solvers, referred to as the ARW (Advanced Research WRF) core and the NMM (Nonhydrostatic Mesoscale Model) core. The ARW users' page is: https://www2.mmm.ucar.edu
The NMM core was developed by the National Centers for Environmental Prediction (NCEP), and is currently used in their HWRF (Hurricane WRF) system.
47. PETCs -Portable, Extensible Toolkit for Scientific Computing
Mathematics and Computer Science Division, Argonne National Laboratory, USA.
PETSc has been used for modeling in all of these areas: Acoustics, Aerodynamics, Air Pollution, Arterial Flow, Bone Fractures, Brain Surgery, Cancer Surgery, Cancer Treatment, Carbon Sequestration,
Cardiology, Cells, CFD, Combustion, Concrete, Corrosion, Data Mining, Dentistry, EarthQuakes, Economics, Esophagus, Fission, Fusion, Glaciers, Ground Water Flow, Linguistics, Mantel Convection,
Magnetic Films, Material Science, Medical Imaging, Ocean Dynamics, Oil Recover, Page Rank, Polymer Injection Molding, Polymeric Membranes, Quantum computing, Seismology, Semiconductors, Rockets,
Relativity, Surface Water Flow.
48.ParMETIS - Parallel Graph Partitioning and Fill-reducing Matrix Ordering
Department of Computer Science and Engineering , University of Minnesota,
The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes. The fill-reducing orderings produced by METIS are
significantly better than those produced by other widely used algorithms including multiple minimum degree. For many classes of problems arising in scientific computations and linear programming,
METIS is able to reduce the storage and computational requirements of sparse matrix factorization, by up to an order of magnitude. Moreover, unlike multiple minimum degree, the elimination trees
produced by METIS are suitable for parallel direct factorization
49. Linear Algebra PACKage (LAPACK)
LAPACK provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The
associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating
condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double
The original goal of the LAPACK project was to make the widely used EISPACK and LINPACS libraries run efficiently on shared-memory vector and parallel processors. LAPACK requires that highly
optimized block matrix operations be already implemented on each machine.
National Energy Research Scientific Computing Center (NERSC), Department of Energy, USA
SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines. The library is written in C and is callable from
either C or Fortran. The library routines will perform an LU decomposition with partial pivoting and triangular system solves through forward and back substitution. The LU factorization routines can
handle non-square matrices but the triangular solves are performed only for square matrices.
51. Multifrontal massively parallel sparse direct solver ( MUMSPS)
CERFACS, CNRS, ENS Lyon, INP Toulouse, Inria, Mumps Technologies, University of Bordeaux.
Multifrontal massively parallel sparse direct solver ( MUMSPS) for solution of large linear systems with symmetric positive definite matrices, general symmetric matrices and general unsymmetrical
matrices. Several reorderings interfaced: AMD, QAMD, AMF, PORD, METIS, PARMETIS, SCOTCH, PT-SCOTCH.
Trilinos Community
Trilinos is a collection of open-source software libraries, called packages, intended to be used as building blocks for the development of scientific applications. Trilinos facilitate the design,
development, integration and ongoing support of mathematical software libraries within an object-oriented framework for the solution of large-scale, complex multi-physics engineering and scientific
problems. Trilinos addresses two fundamental issues of developing software for these problems: Providing a streamlined process and set of tools for development of new algorithmic implementations;
Promoting interoperability of independently developed software.
53. Scalable Linear Solvers and Multigrid Methods HYPRE
Lawrence Livermore National Laboratory, Department of Energy, USA
HYPRE library of linear solvers makes possible larger, more detailed simulations by solving problems faster than traditional methods at large scales. It offers a comprehensive suite of scalable
solvers for large-scale scientific simulation, featuring parallel multigrid methods for both structured and unstructured grid problems. The HYPRE library is highly portable and supports a number of
The HYPRE team was one of the first to develop algebraic multigrid algorithms and software for extreme-scale parallel supercomputers.
Monte Carlo Simulations
GEANT4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including
high energy physics, astrophysics and space science, medical physics and radiation protection . As a Monte Carlo simulation toolkit, Geant4 profits from improved throughput via parallelism derived
from the independence of modeled events and their computation. Until Geant4 version 10.0, parallelization was obtained with a simple distribution of inputs: each computation unit (e.g. a core of a
node in a cluster) ran a separate instance of Geant4 that was given a separate set of input events and associated random number seeds.
Given a computer with k cores, the design goal of multithreaded Geant4 was to replace k independent instances of a Geant4 process with a single, equivalent process with k threads using the many-core
machine in a memory-efficient, scalable manner. The corresponding methodology involved transforming the code for thread safety and memory footprint reduction
Libraries for Artificial Intelligence and Data Analysis
55. The R Project for Statistical Computing
R is a programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among
statisticians and data miners for developing statistical software and data analysis. The basic package (inlcuding the HPC package) of R is installed on the HPC. Users can install their own packages
in their home directories. The packages available include: Chemometrics and Computational Physics, Clinical Trial Design, Monitoring, and Analysis, Econometrics, Analysis of Ecological and
Environmental Data, Empirical Finance, Statistical Genetics, Graphic Displays & Dynamic Graphics & Graphic Devices & Visualization, Hydrological Data and Modeling, Machine Learning & Statistical
Learning, Medical Image Analysis, Multivariate Statistics, Natural Language Processing, Psychometric Models and Methods, Analysis of Spatial Data, among others.
56. Python - Modern, interpreted, object-oriented, full featured high level programming language. Versions to include are 2.7.x and 3.x. Packages for Computational Science include: numpy and pandas
for data operation and analysis, scipy for higher level computational routines, matplotlib for plotting. Additional packages can be installed in user’s home directory.
TensorFlow is an open source software library for machine learning developed by Google. Its mission is to train and build neural networks. It can be used on CPU and GPU architectures. It is
furthermore an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the
multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a
single API.
58. Machine Learning Libraries – Keras, Caffe, Pytorch, Theano.
A Python version of Torch, known as Pytorch, was open-sourced by Facebook in January 2017. PyTorch offers dynamic computation graphs, which let you process variable-length inputs and outputs. Torch
is a computational framework with an API written in Lua that supports machine-learning algorithms. Some version of it is used by large tech companies such as Facebook and Twitter, which devote
in-house teams to customizing their deep learning platforms. Lua is a multi-paradigm scripting language that was developed in Brazil in the early 1990s. Caffe is a well-known and widely used
machine-vision library that ported Matlab’s implementation of fast convolutional nets to C and C++ (see Steve Yegge’s rant about porting C++ from chip to chip if you want to consider the tradeoffs
between speed and this particular form of technical debt). Caffe is not intended for other deep-learning applications such as text, sound or time series data. Like other frameworks mentioned here,
Caffe has chosen Python for its API. Caffe2 is the second deep-learning framework to be backed by Facebook after Torch/PyTorch. The main difference seems to be the claim that Caffe2 is more scalable
and light-weight. It purports to be deep learning for production environments. Like Caffe and PyTorch, Caffe2 offers a Python API running on a C++ engine.
Libraries for Neuroscience
59. The Neural Simulation Tool NEST
Prof.Dr. Markus Diesmann Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Jülich Research Center, Jülich, Germany
Prof.Dr. Marc-Oliver Gewaltig École Polytechnique Fédérale de Lausanne, , Switzerland
NEST is a simulator for spiking neural network models from small-scale microcircuits to brain-scale networks of the order of 10^8 neurons and 10^12 synapses. Main features include: Integrate-and-fire
neuron models with current- and conductance-based synapses, Adaptive threshold integrate-and-fire neuron models (AdEx, MAT2), Hodgkin-Huxley type neuron models with one compartment, Simple
multi-compartmental neuron models, Static and plastic synapse models (STDP, short-term plasticity, neuromodulation), Grid based spike interaction and interaction in continuous time, Exact Integration
for linear neuron models and appropriate solvers for others, Topology Module and support for CSA for creating complex networks.
60.NEURON - flexible and powerful simulator of neurons and networks
Yale University , USA
It was primarily developed by Michael Hines, John W. Moore, and Ted Carnevale at Yale and Duke.
Simulation environment for modeling individual neurons and networks of neurons. It provides tools for conveniently building, managing, and using models in a way that is numerically sound and
computationally efficient. It is particularly well-suited to problems that are closely linked to experimental data, especially those that involve cells with complex anatomical and biophysical
properties. NEURON's computational engine employs special algorithms that achieve high efficiency by exploiting the structure of the equations that describe neuronal properties. It has functions that
are tailored for conveniently controlling simulations, and presenting the results of real neurophysiological problems graphically in ways that are quickly and intuitively grasped.
61. Extreme Parallel Tools for Brain Neural Network Simulations
Prof. Stoyan Markov , Dr. Kristina Kapanova , Mag. , Jasmine Brune
New tool developed by NCSA, Bulgaria giving the capability to simulate Hodgkin-Huxley type neuron models, while working in a pipeline algorithmic manner. Different to other simulation tools, the
simulation is cell driven, with a user specified number of cycles, which describe the simulation time of the system. Each cycle consists of a 300ms window, during which the HH membrane potential is
computed and the reaction of the cell is recorded. The algorithm works on a densely interconnected and sparse outside connection model, thus significantly reducing the volume of communications.
Correspondingly during the simulation of the neural network each neuron is considered as an independent entity by means of the cell data structure, which records all the required communication.
The Performance Optimisation and Productivity Centre of Excellence in Computing Applications.
62 .Extrae
Barcelona Supercomputing Centre, Barcelona, Spain
Extrae is the package devoted to generate Paraver trace-files for a post-mortem analysis. It is a tool that uses different interposition mechanisms to inject probes into the target application so as
to gather information regarding the application performance.
63. Scalasca
Forschungszentrums Jülich, Jülich, Germany
Scalasca is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance
bottlenecks – in particular those concerning communication and synchronization – and offers guidance in exploring their causes. Scalasca targets mainly scientific and engineering applications based
on the programming interfaces MPI and OpenMP, including hybrid applications based on a combination of the two. The tool has been specifically designed for use on large-scale systems.
64. Cube
Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance
metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of
granularity. In addition, Cube can display multi-dimensional Cartesian process topologies.
Forschungszentrums Jülich, Jülich, Germany
Scalable Performance Measurement Infrastructure for Parallel Codes (Score-P) measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online
analysis of HPC applications. Score-P offers the user a maximum of convenience by supporting a number of analysis tools. Currently, it works with Periscope, Scalasca, Vampir, and Tau and is open for
other tools.
Forschungszentrums Jülich, Jülich, Germany
Vampir provides an easy-to-use framework that enables developers to quickly display and analyze arbitrary program behavior at any level of detail. The tool suite implements optimized event analysis
algorithms and customizable displays that enable fast and interactive rendering of very complex performance monitoring data.
Vampir and Score-P provide a performance tool framework with special focus on highly-parallel applications. Performance data is collected from multi-process (MPI, SHMEM), thread-parallel (OpenMP,
Pthreads), as well as accelerator-based paradigms (CUDA, OpenCL, OpenACC).
67. Extra-P
Forschungszentrums Jülich, Jülich, Germany
Extra-P is an automatic performance-modeling tool that supports the user in the identification of scalability bugs. A scalability bug is a part of the program whose scaling behavior is
unintentionally poor, that is, much worse than expected.
Extra-P uses measurements of various performance metrics at different processor configurations as input to represent the performance of code regions (including their calling context) as a function of
the number of processes. All it takes to search for scalability issues even in full-blown codes is to run a manageable number of small-scale performance experiments, launch Extra-P, and compare the
asymptotic or extrapolated performance of the worst instances to the expectations.
68. Paraver: a flexible performance analysis tool
Barcelona Supercomputing Centre, Barcelona, Spain
Paraver was developed to respond to the need to have a qualitative global perception of the application behavior by visual inspection and then to be able to focus on the detailed quantitative
analysis of the problems. Expressive power, flexibility and the capability of efficiently handling large traces are key features addressed in the design of Paraver. The clear and modular structure of
software plays a significant role towards achieving these targets. Some features of its features include: -Detailed quantitative analysis of program performance;
-Concurrent comparative analysis of several traces;
-Customizable semantics of the visualized information;
-Cooperative work, sharing views of the tracefile;
-Building of derived metrics;
-The following are major features of the Paraver philosophy and functionality. | {"url":"https://www.ncsa.bg/ncsa-petascale-supercomputer/appliedsoftware","timestamp":"2024-11-05T03:24:17Z","content_type":"text/html","content_length":"235397","record_id":"<urn:uuid:fab73eee-c777-48ac-9a51-aba02008f805>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00454.warc.gz"} |
urban teaching
This is a point of entry to several papers I wrote in the course of teaching.
Geometry Patterns
Creating this paper was a one-year project that required another year to refine, and it is still a work in process. I wrote it for two distinct reasons.
1. 1.Our one-size-fits-all Geometry textbook weighed five pounds. This was not so different from the weights of the textbooks for other courses my students were taking. Many of my students were not
taking the books home to do homework and were not bringing them to class, or they would simply leave them in the classroom. Although some of my colleagues built courses around the assumption that
students would take their books home to do homework and bring them to class each day, only a few of my exceptionally motivated students were doing this. (One enterprising student submitted a
formal request--with an ostensible medical basis--for a second book to keep at home; it was granted.) I felt compelled to create a course around handouts reproduced from the worksheet masters
provided with the textbook plus materials I originated. "Geometry Patterns" became my 21-page version of "CliffsNotes"(R) for the course, containing all the principles for which the students were
to be held responsible, (For the usual end-of-year reasons the chapters on measurement did not make it into the document.)
2. 2.In mathematics in general, and particularly so in Geometry, solving a problem starts with recognizing a pattern that leads you to a principle you have learned and that you can then apply. There
seem to be three distinct pattern-recognition skills required for Geometry, and I could not assume that any given student was strong in all three skills: 1)the English description of the
principle, 2)a picture illustrating the principle, and 3)an algebraic or other formal expression of the principle. The Pythagorean Theorem offers an example. 1:”The Pythagorean Theorem: The
measure of the hypotenuse of a right triangle is equal to the sum of the squares of the measures of the two legs of the triangle.” 2:a picture of a right triangle with the sides labeled, perhaps
with the classic three squares one of whose sides coincides with a side of the triangle. 3:“c squared = a squared + b squared.” I felt it would be useful to organize the three patterns for each
principle so that the student could use any of the three recognition skills as a point of entry to find a principle. This led to the three-column form of the document in which each row describes
one principle.
The latest version of the Geometry Patterns document is a 1.6 MByte pdf file. You can obtain it by clicking here.
1. I have licensed this and other documents referenced here under a Creative Commons Attribution-Share Alike license. You the reader are therefore invited to copy, distribute, and improve upon them
subject to the provisions of the license. This is an "open source" license in the sense that I am willing to cooperate with educators intent on improving this work under the license by providing,
for example, the original source materials used to create the pdf file. There is no restriction to non-commercial use. My attribution specification: any copy or derivative work must accurately
display the document’s copyright notice.
Linear Function Worksheet
Use of this worksheet succeeded in engaging several of my most discouraged students. It teaches that three mathematical forms: linear graph, linear table, and linear equation are just three views of
the same underlying object: a linear function. (I have a chimp-in-the-box lecture explaining what a function is that I have not yet written up.) After a little practice I could give a minimum amount
of information in any of the three forms and the students would fill in the whole sheet. There is a natural segue from this activity to understanding how slope and intercept look in all three forms.
The pdf file is here.
Units in Measurements
I have felt that there must be a way to convey to my students the ease in calculating with physical units I had developed as an undergraduate Physics student. This ease can be creative; for example,
it makes unit conversions and rate problems trivial. Most students acquire this ease by osmosis if they acquire it at all; I was looking for a way to teach it explicitly. I wrote this paper when I
realized that a “measurement” is not a number but is something more inclusive than a number, and that the additional thing, the unit word, can be treated exactly like a prime factor of a number. The
pdf file is here.
© Copyright 2008 Mel Conway PhD | {"url":"http://melconway.com/Urban_Teaching/Urban_Teaching/Essays_by_Mel_Conway/Entries/2008/2/26_Some_Resources.html","timestamp":"2024-11-14T17:32:13Z","content_type":"application/xhtml+xml","content_length":"19676","record_id":"<urn:uuid:0e3a6668-6217-4928-9c45-f38c13a557f3>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00451.warc.gz"} |
Risk rate of return investment
A minimum acceptable rate of return (MARR) is the minimum profit an investor expects to make from an investment, taking into account the risks of the Check 10 best investment options and plans in
India to earn high returns from The risk is very low because the rate of the property increases within 6 months. 9 Sep 2019 ROI = (current value of investment – cost of investment) / cost of While
return can be understood as one simple number, risk is a bit more
2 Jan 2020 Best Return on Investments - Shares, Bonds, Cash or Property? level of risk should generally perform better over the long term, compared to 2019 has seen the RBA cut the cash rate to an
all-time low so interest rates may The rate of return should compensate for level of risk your parents are taking on by making this investment. You could look at it on a scale - risk free would
equal Generally, a riskier investment should demand a higher rate of return because investors would otherwise avoid the investment. That is why safe investments like Generally, the higher the
volatility, the higher the risk--but also the potential for a higher rate of return. Example(s): Say that you have two investments.
investors demand a certain specific rate of return to induce them to invest in common stock. The rate of return demanded increases with the perceived risk of the
30 May 2019 As investors move up the pyramid, they incur a greater risk of loss of principal along with the potential for higher returns. Pyramid of Investment The interest rate on savings generally
is lower compared with investments. While safe, savings are not risk-free: the risk is that the low interest rate you receive will Interest Rate Risk: The risk that an investment will lose value
due to a change in interest rates (applies to fixed-income investments); Reinvestment Risk: The risk represents only a small percentage of that portfolio. Thus, any risk that increases or reduces
the value of that particular investment or group of investments will only Returns and Risk. The higher the risk for an investment, the higher the potential returns. Any time you want to make a
higher percentage rate or capital gain In this article, we explain how to measure an investment's systematic risk. the following shares if the return on the market is 11% and the risk free rate is
In this article, we explain how to measure an investment's systematic risk. the following shares if the return on the market is 11% and the risk free rate is 6%?.
Interest Rate Risk: The risk that an investment will lose value due to a change in interest rates (applies to fixed-income investments); Reinvestment Risk: The risk represents only a small
percentage of that portfolio. Thus, any risk that increases or reduces the value of that particular investment or group of investments will only Returns and Risk. The higher the risk for an
investment, the higher the potential returns. Any time you want to make a higher percentage rate or capital gain
folios by risk-averse investors who have the al- ternative of investing in risk-free securities with a positive return (or borrowing at the same rate of interest) and
Usually, IRR is expressed as an annualized rate of return—the average percentage by which any on risk principal grows during each year that your investment 13 May 2015 What investments do you
recommend that can deliver that return? can boost returns without taking on additional risk: stick to low-cost broadly
Consider a really extreme example where your investment rate is 200% (so you diversification = spreading out the risk, think of the phrase never put all your
c Compare use of arithmetic and geometric mean rates of returns in per- formance evaluation; d Describe measures of risk, including standard deviation and Risks are to investing what students are to
teachers—you can't have one without the other. But where you might One way to assess a fund's level of risk is to see how much its returns change from year to year. Rate this article. Back to all 12
Jan 2020 You expect to earn a return on that loan, namely the interest rate that the bond pays. However, there is a risk that the company or government folios by risk-averse investors who have the
al- ternative of investing in risk-free securities with a positive return (or borrowing at the same rate of interest) and
For example, to calculate the return rate needed to reach an investment goal with Other low-risk investments of this type include savings accounts and money 17 Jan 2020 If you're offered a high rate
of return it means your investment carries higher risk. You should think very carefully before investing, and not invest | {"url":"https://topbtcxvimcq.netlify.app/kanable29676zyla/risk-rate-of-return-investment-wo","timestamp":"2024-11-07T13:55:34Z","content_type":"text/html","content_length":"33658","record_id":"<urn:uuid:d08cbd9d-0f8d-4557-93b6-0970254b81e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00186.warc.gz"} |
Pythagorean Identities | Shiken
Understanding Pythagorean Identities: The Key to Mastering Trigonometry
Pythagorean identities are a set of equations that are vital in the study of trigonometry. These equations are derived from Pythagoras' theorem and play a crucial role in solving various mathematical
problems. In this article, we will explore the three fundamental Pythagorean identities and their derivation process.
To understand Pythagorean identities, it is essential to first familiarize ourselves with Pythagoras' theorem. This theorem states that in a right-angled triangle, the square of the hypotenuse is
equal to the sum of the squares of the other two sides. This forms the basis for the first Pythagorean identity.
The First Pythagorean Identity: This identity is expressed as sin^2(π ) +cos^2(π ) =1 and can be derived using Pythagoras' theorem and the concept of the unit circle. By substituting the values
of the sides of a right-angled triangle into the equation and simplifying, we obtain the given form.
The Second Pythagorean Identity: To obtain the second Pythagorean identity, we divide the first identity by cos^2(π ). This results in the equation tan^2(π )+1 = sec^2(π ), where tanπ = sinπ
/cosπ and secπ = 1/cosπ .
The Third Pythagorean Identity: Similarly, the third Pythagorean identity is derived by dividing the first identity by sin^2(π ), giving us the equation 1+cot^2(π ) = csc^2(π ), where cotπ =
cosπ /sinπ and cscπ = 1/sinπ .
Applying Pythagorean Identities: Examples
Let's take a look at a few examples to better understand the application of Pythagorean identities.
• Simplifying and Finding a Value: Simplify and find the value of x (0 οΌ x οΌ 2π ). We can use the first Pythagorean identity and rearrange it to get sin^2x = 1-cos^2x. By substituting this
into the given expression and simplifying, we get sin x = 1, which leads to the solution x = π / 2.
• Finding the Value of tan x: If cos x = 0.78, what is the value of tan x? We use the relationship tanπ = sinπ /cosπ and substitute the given value of cos x to get tan x = 1.28.
• Solving for x: Given the equation cosec 2x = -2 or 1, find the value of x between 0Β° and 180Β°. In this case, we must use the third Pythagorean identity, 1+cot^2(π ) = csc^2(π ). By
rearranging this identity, we get cot^2(π ) = -1, which simplifies to tanπ = Β±i. Solving for x gives us the solutions x = 45Β°, 105Β°, 165Β°.
In summary, Pythagorean identities are crucial in solving trigonometric equations and understanding the relationship between trigonometric functions. These identities are derived from Pythagoras'
theorem and the unit circle, and the three fundamental identities (sin^2(π ) +cos^2(π ) =1, tan^2(π ·)+1=sec^2(π ·), and 1+cot^2(π ·)=csc^2(π ·)) serve as powerful tools in solving complex
mathematical problems. By practicing and mastering the application of these identities, you can become proficient in solving various trigonometry problems.
Frequently Asked Questions
• How are Pythagorean identities derived? Pythagorean identities are derived from Pythagoras' theorem and the unit circle.
• What are Pythagorean identities? Pythagorean identities are equations based on Pythagoras' theorem that are used to solve or simplify trigonometric equations.
• What are the three Pythagorean identities? The three Pythagorean identities are sin^2(π ) +cos^2(π ) =1, tan^2(π ·)+1=sec^2(π ·), and 1+cot^2(π ·)=csc^2(π ·). | {"url":"https://shiken.ai/math-topics/pythagorean-identities","timestamp":"2024-11-11T17:54:47Z","content_type":"text/html","content_length":"76606","record_id":"<urn:uuid:221c098f-e97f-4e67-929c-8efccc8b1cde>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00256.warc.gz"} |
Binning empirical semivariograms
Available with Geostatistical Analyst license.
As you can see from the landscape of locations and the semivariogram cloud shown in Creating the empirical semivariogram, plotting each pair of locations quickly becomes unmanageable. There are so
many points that the plot becomes congested and little can be interpreted from it. To reduce the number of points in the empirical semivariogram, the pairs of locations will be grouped based on their
distance from one another. This grouping process is known as binning.
Binning is a two-stage process.
Stage one
First, form pairs of points, and second, group the pairs so that they have a common distance and direction. In the landscape scene of 12 locations, you can see the pairing of all the locations with
one location, the red point. Similar colors for the links between pairs indicate similar bin distances.
This process continues for all possible pairs. You can see that, in the pairing process, the number of pairs increases rapidly with the addition of each location. This is why, for each bin, only the
average distance and semivariance for all the pairs in that bin are plotted as a single point on the semivariogram.
Stage two
In the second stage of the binning process, pairs are grouped based on common distances and directions. Imagine a graph so each point has a common origin. This property makes the empirical
semivariogram symmetric.
For each bin, you form the squared difference from the values for all pairs of locations that are linked, and these are then averaged and multiplied by 0.5 to give one empirical semivariogram value
per bin. In Geostatistical Analyst, you can control the lag size and number of lags. The empirical semivariogram value in each bin is color coded and is called the semivariogram surface. | {"url":"https://pro.arcgis.com/en/pro-app/latest/help/analysis/geostatistical-analyst/binning-empirical-semivariograms.htm","timestamp":"2024-11-04T18:13:02Z","content_type":"text/html","content_length":"14470","record_id":"<urn:uuid:28c2969a-216e-485e-86b7-8adf1e799cfd>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00152.warc.gz"} |
xyY to XYZ
Given an $(x, y, Y)$ color whose $Y$ component is in the nominal range [0.0, 1.0]:
$$X = {{xY} \over {y}}$$ $$Y = Y$$ $$Z = {{(1-x-y)Y} \over {y}}$$
Implementation Notes:
1. Watch out for the case where $y=0$. In that case, you may want to set $X=Y=Z=0$.
2. The output $(X,Y,Z)$ values are in the same nominal range as the input $Y$ (typically, [0.0, 1.0], [0.0, 100.0] or physical units). | {"url":"http://brucelindbloom.com/Eqn_xyY_to_XYZ.html","timestamp":"2024-11-08T09:13:16Z","content_type":"text/html","content_length":"2534","record_id":"<urn:uuid:a9c9251f-11fe-48df-a9d6-36a14d235a10>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00438.warc.gz"} |
Hilbert's Basis Theorem
Ambitious goals seem to pop up whenever I have free time, but I have a poor track record of hitting them. To fill the extra time I have over the summer, I started reading Eisenbud's Commutative
Algebra texbook as a side project. Let's hope I get far enough to learn something useful.
In this post, I want to write a bit about a result known as Hilbert's Basis Theorem (HBT). We'll motivate why we should care about it through a use case prevalent in algebraic geometry.
In algebraic geometry, one defines "shapes" and "curves" by taking a set of polynomials $S \subset k[x_1, \ldots, x_r]$ over an algebraically closed field $k$ and studying the set of points in $k^r$
where all the polynomials in $S$ are equal to zero: $Z(S) = \{ p \in k^r : f(p) = 0~\text{for all}~f \in S\}$ We call such sets algebraic sets.
For example, if $f = x^2 + y^2 - 1 \in \mathbb{R}[x,y]$ then $Z(\{f\})$ is the algebraic set consisting of points where $x^2 + y^2 = 1$, which is precisely the unit circle.
Sets of polynomials seem pretty daunting; they can easily be uncountably large. Hence, at first glance characterizing these algebraic sets seems like a difficult task. However, Hilbert's basis
theorem gives us a a very useful result which allows us to restrict our attention from arbitrary subsets of $k[x_1, \ldots, x_r]$ to just finitely generated ideals.
Corollary of Hilbert's basis theorem: Any algebraic set can be written as $Z(I)$ where $I \subset k[x_1, \ldots, x_r]$ is a finitely generated ideal.
This makes the problem significantly easier: since any $f \in I$ can be represented using a finite basis $f = \sum_{i=1}^n k_i f_i$ we only need to concern ourselves with the zero locus of a finite
number of polynomials!
To state the theorem in it's full generality, we will need the following definition. Let $R$ be a commutative ring.
Definition: $R$ is Noetherian if any ideal $I \subset R$ is finitely generated.
An equivalent definition is the ascending chain condition: for any ascending chain of ideals of $R$ $I_1 \subset I_2 \subset \cdots \subset I_k \subset I_{k+1} \subset \cdots$ there exists a $n$
after which $I_n = I_{n+1} = I_{n+2} = \cdots$
Hilbert's basis theorem says that adjoining elements to a Noetherian ring preserves the Noetherian property.
Theorem (Hilbert's basis theorem): If $R$ is Noetherian, then so is $R[x]$.
By induction, we get that:
Corollary: If $R$ is Noetherian, then so is $R[x_1, \ldots, x_r]$.
Noting that the only ideals of a field $k$ are $\{0\} = \langle 0 \rangle$ and $k = \langle 1 \rangle$ (if $a \in I$ is non-zero, then since $a^{-1} \in k$ because $I$ is an ideal we must have $a^
{-1} a = 1 \in I$, hence $I \supset \langle 1 \rangle = k$). As both of these are finitely generated:
Lemma: Any field $k$ is Noetherian
Together with Hilbert's basis theorem and the above corollary, we have
Corollary: $k[x_1, \ldots, x_r]$ is Noetherian; all of its ideals are finitely generated.
Our discussion up until now has focused only on ideals, whereas the motivating example was about algebraic sets defined by arbitrary subsets of $k[x_1, \ldots, x_r]$. The following proposition ties
the two together.
Proposition: For any $S \subset R[x_1, \ldots, x_r]$, $Z(S) = Z(\langle S \rangle)$ where $\langle S \rangle$ is the ideal generated by $S$.
Proof: Since $S \subset \langle S \rangle$, $Z(S) \supset Z(\langle S \rangle)$ (aside: $Z$ is a contravariant functor from the category consisting subsets of $R[x_1, \ldots, x_r]$ to the category
consisting of algebraic sets on $R^r$, both with inclusions as arrows).
Conversely, if $p \in Z(S)$ then evaluating any $f = \sum_i r_i f_i \in \langle S \rangle$ at $p$ yields $f(p) = \sum_i r_i f_i(p) = \sum_i r_i 0 = 0$ since the $f_i \in S$ are equal to zero for $p \
in S$. $\square$
Armed with these result, the proof of the corollary in the motivating example is swift.
Proof: An algebraic set is of the form $Z(S)$ where $S \subset k[x_1, \ldots, x_r]$. By the previous proposition, $Z(S) = Z(I)$ where $I = \langle S \rangle) \subset k[x_1, \ldots, x_r]$ is an ideal.
By the above corollary, $k[x_1, \ldots, x_r]$ is Noetherian hence $I$ must be finitely generated. $\square$
We've almost tied up all the loose ends in this discussion; all that remains is proving Hilbert's basis theorem itself. In this section, we will complete this last step.
Before we get there, we will need an alternate characterization of Noetherian rings.
Lemma: $R$ is Noetherian $\iff$ $R$ satisfies the ascending chain condition: for any ascending chain of ideals $I_1 \subset I_2 \subset I_3 \subset \cdots$ there exists $n \in \mathbb{N}$ after which
the chain stabilizes, i.e. $I_n = I_{n+1} = \cdots$
Proof: Since $I = \cup_i I_i$ is an ideal of $R$, it has a finite set of generators $\{a_i\}_{i=1}^k$. Each $a_i \in I_{j_{i}}$ for some $j_{i} \in \mathbb{N}$, so taking $n = \max_{1 \leq i \leq k}
j_i$ suffices.
Conversely, define the ascending chain $I_k = \langle a_i \rangle_{i=1}^k$ where $a_0 \in I$ and $a_i \in I \setminus I_{i-1}$ are chosen arbitrarily. This chain stabilizes at some $n$ where $I = I_n
= \langle a_i \rangle_{i=1}^k$. $\square$
Proof of Hilbert's basis theorem: Let $I \subset R[x]$ be an ideal. We will construct a finite set of generators inductively. Choose $f_0 \in I$ to be any element with least degree and for $i \geq 1$
pick $f_i \in I \setminus \langle f_k \rangle_{k=1}^{i-1}$ any element with least degree. It remains to show that $R[x] = \langle f_k \rangle_{k=1}^{i}$ for some $i \in \mathbb{N}$. For each $f_i$,
we can write
$f_i = \sum_{j=0}^{\deg f_i} a_{i,j} x^{j}$
Consider the ascending chain of ideals $J_k = (a_{i, \deg f_i})_{i=1}^k$ generated by all the initial coefficients of the selected $f_i$. This is an ascending chain, so $J = \langle a_i \rangle_{i=1}
^m$ for some $m$. We claim that $I = \langle f_i \rangle_{i=1}^m$.
To see this, let $f_{m+1} \in I \setminus \langle f_k \rangle_{k=1}^{i-1}$. Then since $a_{m+1,\deg f_{m+1}} \in J$, we have
$a_{m+1,\deg f_{m+1}} = \sum_{j=1}^m r_j a_{j,\deg f_j}$
for some coefficients $r_j \in R$. Since $\deg f_{m+1} \geq \deg f_m \geq \deg f_{m-1} \geq \cdots$ by how we chose these elements, we have that $\deg f_{m+1} - \deg f_j \geq 0$ and therefore can
$g = \sum_{j=1}^m \underbrace{u_j x^{\deg f_{m+1} - \deg f_j}}_{\in R[x]} f_j$
By the definition of ideals, $g \in \langle f_k \rangle_{k=1}^{m}$. Now notice $f_{m+1} - g \in I$ (as the sum of two elements in $I$) but $f_{m+1} - g ot\in \langle f_k \rangle_{k=1}^{m}$ (otherwise
adding $g$ would give that $f_{m+1} \in \langle f_k \rangle_{k=1}^{m}$), so $f_{m+1} - g \in I \setminus \langle f_k \rangle_{k=1}^{m}$
What's more, the initial coefficient of $g$ is given by (notation following Sedgewick and Flajolet)
$[z^{\deg g}] g = [z^{\deg f_{m+1}}] g = \sum_{j=1}^m u_j [z^{\deg f_{m+1}}] \sum_{k=1}^{\deg f_j} a_{j,k} x^{\deg f_{m+1} - \deg f_j + k} = \sum_{j=1}^m u_j a_{j,\deg f_j} = a_{m+1,\deg f_{m+1}}$
Since the degree and initial coefficients of $f_{m+1}$ and $g$ match, we have that $\deg(f_{m+1} - g) < \deg f_{m+1}$, contradicting the choice of $f_{m+1}$ having minimal degree amongst all
polynomials in $I \setminus \langle f_k \rangle_{k=1}^{m}$. $\square$
Our proof of Hilbert's basis theorem is almost identical to that in Eisenbud with some additional commentary and explanation. The technique of matching the initial term between two polynomials and
arguing that the degree of the difference is strictly less is a common proof method when working with polynomials.
Most of our work was possible when polynomial coefficients were over a ring $R$. We only needed coefficients to be from a field $k$ so that we could use $k$ being Noetherian to conclude the
motivating example.
While we only showed that $Z(S) = Z(\langle S \rangle)$ for any subset $S \subset k[x_1, \ldots, x_r]$, the connection between algebraic sets (our geometric objects) and ideals (our algebraic
objects) is much deeper. In fact, it's not terribly hard to show that $Z(I(X)) = \overline{X}$ for any $X \subset k^r$, where $I(X) = \{f \in k[x_1, \ldots, x_r] : f(p) = 0~\text{for all}~p \in X\}$
(exercise: prove this is an ideal) and $\overline{X}$ is the smallest algebraic set containing $X$ (i.e. the closure of $X$ under the Zariski topology).
Here, one may be tempted to conclude that algebraic sets of $k^r$ and ideals of $k[x_1, \ldots, x_r]$ are in correspondence, but this is false. The precise statement here is Hilbert's Nullstellensatz
(which we will likely have more to say about later):
The correspondence $I \mapsto Z(I)$ and $X \mapsto I(X)$ induces a bijection between the algebraic sets of $k^r$ and the radical ideals of $k[x_1, \ldots, x_r]$. | {"url":"https://feynmanliang.com/posts/hilbert-basis-theorem/","timestamp":"2024-11-09T07:44:50Z","content_type":"text/html","content_length":"242064","record_id":"<urn:uuid:f64a68f1-aa84-4f1b-8a93-7a8ea0c572e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00528.warc.gz"} |
count unique based on multiple criteria
I cannot find what I am looking for and it seems like I have done this before but just can't remember.
I have two columns that contain duplicates. One column (K) contains Project Numbers and the other column (X) contains Item Numbers. I need a formula that will return a 1 or a 0.
If the project number and the item number are listed together once, then the formula should return a 1 but if the project number and the item number are listed together more than once, the other
entries should show a 0.
Excel Facts
Select all contiguous cells
Pressing Ctrl+* (asterisk) will select the "current region" - all contiguous cells in all directions.
I figured it out...=IF(SUMPRODUCT(($K$2:$K2=K2)*($X$2:$X2=X2))>1,0,1)
I figured it out...=IF(SUMPRODUCT(($K$2:$K2=K2)*($X$2:$X2=X2))>1,0,1)
If you must invoke SumProduct, shorter
would suffice.
The following would be faster:
If you must invoke SumProduct, shorter
would suffice.
The following would be faster:
Thank you, I appreciate your help. Now I am trying to add a third item for another field. Column F contains a date which is sometimes blank. It is the same, if the project number, item number and
date are all repeated it should only be counted once. I tried using your count if and adding the new criteria but it counts it if the date is blank.
Is there a way to do that?
Do you mean...
Do you mean...
That still counts the blank fields in column F. I tried changing it up a bit and made a real mess of things.
That still counts the blank fields in column F. I tried changing it up a bit and made a real mess of things.
Sorry, I meant to do it for F...
Sorry, I meant to do it for F...
Now it says "You've entered too few arguments for this function".
I found an extra comma after the last K2 and removed it. It's still counting blanks in row f.
This is probably going to sound stupid, because I'm not sure if I'm reading the formula correctly - but might the problem be 1-nothing is still 1? | {"url":"https://www.mrexcel.com/board/threads/count-unique-based-on-multiple-criteria.803042/","timestamp":"2024-11-12T07:14:54Z","content_type":"text/html","content_length":"149986","record_id":"<urn:uuid:e02e1e7e-b567-48af-884d-036e719debff>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00776.warc.gz"} |
The Denominator of Risk
Ann’s father kindly sent me an article from the Aviation Medical Bulletin that presented some fatality risks from the National Safety Council to support the argument that risk shouldn’t stop us from
enjoying our favorite activities so long as we take reasonable precautions. I agree, but there is a big problem lurking in the denominator of many of these statistics:
Cause of Death Odds 0-10 Perceived Risk
Falling 1 in 36,000 5.4
Hit by a car 1 in 47,000 5.3
Poisoning 1 in 86,000 5.1
Motorcycle accident 1 in 89,000 5.1
Bicycle accident 1 in 400,000 4.4
Airplane crash 1 in 400,000 4.4
Choking on food 1 in 400,000 4.4
Drowning 1 in 1 million 4
Gunshot wound 1 in 1 million 4
Building fire 1 in 3 million 3.5
Lightning strike 1 in 4 million 3.4
Earthquake 1 in 9 million 3
Snake bite 1 in 96 million 2
Put aside the fact that these don’t quite match the current NSC stats. Is the world really this safe? In most cases, I would say no. The problem is that these are odds for the entire US population.
For activities that nearly everyone participates in, like riding in a car or eating, the numbers are good. But not everybody flies in airplanes, rides bicycles, climbs high passes, lives on a fault
line, etc. When I tried to find the risk of bicycling, I at least made an attempt to estimate how many cyclists there are in the US. I used this, a smaller number than the whole US population, in the
denominator, which gave me a higher risk estimate, 1 in 131,000. That’s three times riskier than presented above.
So am I nitpicking at the same time I admit that my own numbers are gross estimates? Maybe a little. But I think if I can get a better estimate, I should. And for activities that only claim a small
fraction of the population as participants, getting a reasonable number for the denominator may be very difficult, but absolutely necessary. Using the entire population would give a meaningless
3 responses to “The Denominator of Risk”
1. As a statistician, I can assure you that what you say is correct, “If I can get a better estimate, I should”.
The denominator *is* very important, and your analysis is more accurate for what you are trying to measure because of the extra step you took.
I’m guessing that whomever reported these figures is just lazy and didn’t want to take the extra step to research the true population at risk. (ie. how many cyclists are there, and using this
figure as a basis instead of US pop…because the non-cyclists wouldn’t die as a result of a cycling accident).
In fact, I’d like to point out that the results here could be very misconstrued.
Let’s say for instance we know 20 people. 4 of them get bit by snakes whereas 5 of them get in a bike accident. Then, it looks like biking (5/20=25%) is more risky than being out and about by
snakes (4/20=20%).
But, what if I now told you that all 20 people ride bikes and only 6 people put themselves at risk to snakes. Now, the bikes are still 25% and the snakes jump to 67%!
So, the entire population of 20 people is an incorrect means of determining risk.
It is great you are exploring these concepts, as most people do not think this deeply when stats are reported.
After teaching beg. stats class for 2 years on top of being a Stats person, it is hard not to notice the myriad of flaws presented in the news, media, online, at work, everywhere!
Ever read the book “How to lie with Statistics?”.
2. I haven’t, but I always think of the Mark Twain quote about the 3 kinds of lies: Lies, Damn Lies, and Statistics.
Thanks for the examples!
3. Fabulous insights. I have been so upset by the way Big Pharma uses statistics. I have never taken statistics and am a terrible math person but even so, some things seem obvious to me. And it is
amazing how little thought people give to claims based on some pretty hazy statistical reports. What some researchers find “significant”, I find pretty unimpressive, for example.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://www.cyberhobo.net/2006/03/31/the-denominator-of-risk/","timestamp":"2024-11-03T03:16:42Z","content_type":"text/html","content_length":"79849","record_id":"<urn:uuid:77ef62d4-6023-4a6a-9867-0bb6a9abd0d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00543.warc.gz"} |
Generalized reverse derivations and commutativity of prime rings
Generalized reverse derivations and commutativity of prime ringsArticle
Let R be a prime ring with center Z(R) and I a nonzero right ideal of R. Suppose that R admits a generalized reverse derivation (F, d) such that d(Z(R)) ≠ 0. In the present paper, we shall prove that
if one of the following conditions holds: (i) F (xy) ± xy ∈ Z(R) (ii) F ([x, y]) ± [F (x), y] ∈ Z(R) (iii) F ([x, y]) ± [F (x), F (y)] ∈ Z(R) (iv) F (x ο y) ± F (x) ο F (y) ∈ Z(R) (v) [F (x), y] ±
[x, F (y)] ∈ Z(R) (vi) F (x) ο y ± x ο F (y) ∈ Z(R) for all x, y ∈ I, then R is commutative.
Volume: Volume 27 (2019), Issue 1
Published on: July 4, 2019
Imported on: May 11, 2022
Keywords: General Mathematics,[MATH]Mathematics [math] | {"url":"https://cm.episciences.org/9486","timestamp":"2024-11-09T06:21:49Z","content_type":"application/xhtml+xml","content_length":"37728","record_id":"<urn:uuid:548bcbf3-3a2d-40be-b57f-956fc0cbbe3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00437.warc.gz"} |
KSEEB Solutions for Class 9 Maths Chapter 14 Statistics Ex 14.1
KSEEB Solutions for Class 9 Maths Chapter 14 Statistics Ex 14.1 are part of KSEEB Solutions for Class 9 Maths. Here we have given Karnataka Board Class 9 Maths Chapter 14 Statistics Exercise 14.1.
Karnataka Board Class 9 Maths Chapter 14 Statistics Ex 14.1
Question 1.
Give five examples of data that you can collect from your day-to-day life.
Some of the examples of data that we can collect from our day-to-day life are as follows :
1. Number of TV viewers in the city.
2. Number of Colleges in the city.
3. Number of sugar factories in the city.
4. Measuring the height of students in the classroom.
5. Number of children below 15 years in India.
Question 2.
classify the data in Q.1 above as primary or secondary data.
i) Secondary Data
Because in example (iv) to find our heights of students, the investigator is in contact with the student.
ii) E.g., (i), (ii), (iii) and (v) are primary data. Because these are information collected from a source.
We hope the KSEEB Solutions for Class 9 Maths Chapter 15 Probability Ex 15.1 helps you. If you have any query regarding Karnataka Board Class 9 Maths Chapter 15 Probability Exercise 15.1, drop a
comment below and we will get back to you at the earliest. | {"url":"https://www.kseebsolutions.com/kseeb-solutions-class-9-maths-chapter-14-ex-14-1/","timestamp":"2024-11-12T13:56:59Z","content_type":"text/html","content_length":"61769","record_id":"<urn:uuid:203e94e6-7c51-4af5-a89b-6bd515534a15>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00750.warc.gz"} |
Algorithm Support
This Crypto Token relies on support for the algorithm in the PKCS#11 standard, the used PKCS#11 driver from the HSM vendor and the supported algorithms in the HSM. A complete list of supported
algorithms can thus not be compiled here and the following lists algorithms that are tested and known to work with an HSM supporting it. Also, see the specific SignServer Signer for algorithms that
signers can work with and review signer-specific algorithm support pages.
Note that the JackNJI11CryptoToken has been renamed P11NGCryptoToken as of SignServer 6.0.
Signature Algorithms
Algorithm Name Also Known As Comment
SHA1withRSA RSASSA-PKCS_v1.5 using SHA1
SHA224withRSA RSASSA-PKCS_v1.5 using SHA224
SHA256withRSA RSASSA-PKCS_v1.5 using SHA256
SHA384withRSA RSASSA-PKCS_v1.5 using SHA384
SHA512withRSA RSASSA-PKCS_v1.5 using SHA512
NONEwithRSA RSASSA-PKCS_v1.5 Depending on the Signer. Generally only supported by Plain Signer.
SHA1withRSAandMGF1 RSASSA-PSS using SHA1
SHA224withRSAandMGF1 RSASSA-PSS using SHA224
SHA256withRSAandMGF1 RSASSA-PSS using SHA256
SHA384withRSAandMGF1 RSASSA-PSS using SHA384
SHA512withRSAandMGF1 RSASSA-PSS using SHA512
NONEwithRSAandMGF1 RSASSA-PSS Depending on the Signer. Generally only supported by Plain Signer.
SHA1withECDSA ECDSA using SHA1
SHA224withECDSA ECDSA using SHA224
SHA256withECDSA ECDSA using SHA256
SHA384withECDSA ECDSA using SHA384
SHA512withECDSA ECDSA using SHA512
NONEwithECDSA ECDSA Depending on the signer. Generally only supported by Plain Signer.
Ed25519 Pure EdDSA with Edwards25519 Depending on the Signer.
Ed25519ph Hash EdDSA with Edwards25519 Not yet implemented.
Ed25519ctx Context EdDSA with Edwards25519 Not yet implemented.
Ed448 Pure EdDSA with Edwards448 Depending on the Signer.
Ed448ph Hash EdDSA with Edwards448 Not yet implemented.
Key Algorithms
Algorithm Key Specification Comment
Just key length:
• 1024
• 2048 Other key lengths are likely also working.
• 4096
For RSA it is possible to use a different exponent by suffixing the number with an "exp" followed by the exponent in decimal or prefixed with "0x" for
RSA Key length and public exponent hexadecimal. (see Crypto Token Generate Key Page)
(some examples):
The default value for the exponent is 65537.
• 1024 exp 17
• 1024 exp 0x11
• 2048 exp 17
• 4096 exp 65537
Named curves:
• secp256r1 / prime256v1 /
ECDSA P-256 More named curves are likely working.
• secp384r1
• secp521r1
A signer can be configured using the EXPLICTECC parameter (see Other Properties) to encode the EC parameters explicitly in the request. This goes for the
supported named curves and a named curve is still needed when generating the key-pair.
ECDSA Explicit parameters
Certificates with explicit parameters can be stored in the token.
EdDSA Ed448
AES 128
Dilithium Dilithium3
LMS LMS_SHA256_N32_H5 | {"url":"https://docs.keyfactor.com/signserver/6.3/algorithm-support-1","timestamp":"2024-11-09T18:31:52Z","content_type":"text/html","content_length":"61344","record_id":"<urn:uuid:155bfdde-6ef5-453e-83f9-78df59458e82>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00471.warc.gz"} |
Amazon Interview Questions
This article will discuss the most common Amazon interview questions and answers that will help you prepare well for your upcoming interview. Amazon was founded by Jeff Bezos on July 5, 1994, in his
garage in Washington. Amazon is an American global technology business focusing on many industries, including E-Commerce, Cloud Computing, Artificial Intelligence, Digital Streaming, and so on.
The four guiding principles of Amazon are: obsessing over the customer rather than the competition, being passionate about invention, being dedicated to operational excellence, and having a long-term
perspective. Amazon is motivated by the thrill of developing technologies, creating goods, and offering services that transform lives. They are open to new approaches, act quickly, and are not afraid
of making mistakes. Amazon possesses both the size and strength of a big business and the character and heart of a small one. So today, we will discuss some of the most essential amazon interview
questions generally asked in amazon interviews.
Amazon Interview Questions
1.Given a Binary Tree, find the sum of all left leaves in it.
Solution (Depth First Traversal)
• Traverse the binary tree in a depth-first manner.
• Push the values of the binary tree in the stack.
• A left leaf node should be present, so verify that. A binary tree node is a leaf node if and only if:
□ if the current node is the parent's left child.
□ if the current node's left and right child pointers are NULL.
• Add its value to the variable sum if the current node is a leaf node on the left.
• Recursively calculate the sum of the left leaf nodes in the left and right subtrees and return if the current node is not a left leaf node.
using namespace std;
class Node
int key;
Node* left, *right;
Node(int key_)
key = key_;
left = NULL;
right = NULL;
int getleftLeafsSum(Node* root)
if(root == NULL)
return 0;
stack<Node*> stack_;
int sum = 0;
while(stack_.size() > 0)
Node* currentNode = stack_.top();
if (currentNode->left != NULL)
if(currentNode->left->left == NULL && currentNode->left->right == NULL)
sum = sum + currentNode->left->key ;
if (currentNode->right != NULL)
return sum;
Before moving on, first, try this Problem on Coding Ninjas Studio.
For other approaches, you can visit Find the sum of all left leaves in a given Binary Tree.
2.Find the majority element in the array. A majority element in an array Arr[] of size n is an element that appears more than n/2 times.
Solution(Using Hashmap)
• Create a hashmap to store a key-value pair, such as an element-frequency pair.
• Traverse the array.
• If the element does not already exist as a key, insert it into the hashmap; otherwise, get the value of the key (array[i]) and increase the value by one.
• If the count exceeds half, print the majority element and then break.
• If there isn't a majority element, print "No Majority element."
#include <bits/stdc++.h>
using namespace std;
int main()
int arr[] = {8, 2, 2, 8, 1, 2, 2, 6, 2};
int n = sizeof(arr) / sizeof(arr[0]);
unordered_map<int, int> m; //defining hashmap
for(int i = 0; i < n; i++)
m[arr[i]]++; //inserting elements in hashmap
int count = 0;
for(auto i : m)
if(i.second > n / 2) //printing the majority element
count =1;
cout << i.first<<endl;
if(count == 0)
cout << "No Majority element" << endl;
return 0;
Before moving on, first, try this Problem on Coding Ninjas Studio.
You can visit Majority Element in an Array for in-depth and more optimised solutions.
3.Given are N ropes of different lengths. Your task is to connect these ropes into one rope with minimum cost where the cost of connecting two ropes is equal to the sum of their lengths.
Solution(Using Heap)
• Create a min Heap
• At any time, take two ropes of minimum lengths and combine them.
• Pop the two minimal lengths taken above and insert a new length(sum of lengths) into the min-heap.
• Find the most optimal Cost.
#include <bits/stdc++.h>
using namespace std;
int main() {
//Cost is to find the optimal result to connect all ropes.
//n is for the number of ropes.
//ropeLength is for the length of each rope.
int cost=0,n, ropeLength;
cout<<"Enter the number of ropes";
//Declaring min-heap
priority_queue<int, vector<int>, greater<int>>minh;
//Taking input from the user about the rope's length.
for(int i=0; i<n; i++){
//Run the loop till the size of the minheap becomes less than 2.
int first=minh.top();
int second=minh.top();
//Returning the result
Before moving on, first, try this Problem on Coding Ninjas Studio.
More detailed problem explanations and solutions are provided here Connect N Ropes with minimum cost.
4. Count the number of Longest Palindromic Substrings.Given a string s, find the length of the longest palindromic substring of the string and also see the number of palindromic substrings of that
Solution( Dynamic Programming)
The idea is to store the information of a substring from index i to index j in a dp array. So basically, the value of dp[i][j] will be 1 if the substring from index i to j is a palindrome. Otherwise,
it will be 0.
• The recurrence relation will be that, for dp[i][j], we will check if s[i]==s[j] and dp[i+1][j-1]==1. Because for the substring from index i to j to be a palindrome, the last two characters should
be equal, and also, the substring from i+1 and j-1 should be a palindrome in itself. In this way, we can find the value of all dp[i][j] for all i from 0 to n and all j from 0 to n (where n is the
length of the string).
• The base case is all dp[i][i] =1 because the substring from index i to i will be a one character substring, and it will be a palindrome for sure. Also, we will have to initially find the value of
dp[i][i+1] as a base case since the recurrence relation is not applicable here. So, we will traverse the whole string once, and, for each index i, we will see if s[i]==s[i+1]. If yes, then, dp[i]
[i+1]=1, otherwise 0.
#include <bits/stdc++.h>
using namespace std;
bool isPalindrome(string temp){
int n = temp.length();
for (int i = 0; i < n/ 2; i++) {
/* if the charcters at index i and n-i-1 are not equal, return false*/
if (temp[i] != temp[n - i - 1]) {
return false;
/* Else return true*/
return true;
/* This is the function that return the maximum length of the palindromic substring of this string */
int maxLengthOfPalindromicSubstring(string s) {
int sz = s.size();
// Base Case Length 1
if(sz == 1) return 1;
// Base Case Length 2
if(sz == 2 && s[0] == s[1]) return 2;
if(sz == 2 && s[0] != s[1]){
return 2;
int longest = 1;
/* Dp array for storing if the substring from index i to j is a palindrome or not*/
bool dp[1000][1000] = {0,};
/* For all length 2 substrings */
for(int i = 1;i<sz;++i){
dp[i-1][i] = 1;
longest = 2;
dp[i-1][i] = 0;
for(int i = sz-1;i>=0;i--){
for(int j = i+1;j<sz;j++){
if(s[j] == s[i] && dp[i+1][j-1] == 1){
dp[i][j] = 1;
longest = j-i+1;
return longest;
Before moving on, first, try this Problem on Coding Ninjas Studio.
You can visit Count the number of Longest Palindromic Substrings for other approaches to the problem.
5.We have been given an integer array/list(ARR) of size 'N'. It only contains 0s, 1s and 2s. Write a solution to sort this array/list.
Solution(Dutch National Flag Algorithm)
• Maintain three indices as low = 0,mid = 0 and high = n-1
• Traverse the array from start to end while mid is less than equals to high
• If the element at index mid is 0, then swap it with the element at index low. Simultaneously increment the index of low and mid-index.
• If the element at index mid is 1,then shrink the unknown range, i.e , increment the mid index(mid = mid + 1).
• If the element at index mid is 2, then swap it with the element at index high. simultaneously decrease the index of high(high = high - 1).
// c++ program to sort the array of 0,1,2 in one go.
#include <bits/stdc++.h>
using namespace std;
// function to sort the array of 0,1,2.
void sortArr(int arr[], int n)
int low = 0, mid = 0, high = n - 1;
while (mid <= high)
// if the middle element is 0.
if (arr[mid] == 0)
swap(arr[mid++], arr[low++]);
// if the middle element is 1.
else if (arr[mid] == 1)
// if the middle element is 2.
swap(arr[mid], arr[high--]);
// main function.
int main()
// input array arr.
int arr[] = {1, 0, 0, 2, 1, 0, 1};
int n = sizeof(arr) / sizeof(arr[0]);
// call of sort function
sortArr(arr, n);
// printing the resultant array after sorting.
for (int i = 0; i < n; i++)
cout << arr[i] << " ";
Before moving on, first, try this Problem on Coding Ninjas Studio.
For other approaches, you can visit Sort Array of 0,1 and 2.
6.Given an array consisting of "n" non-negative integers and an integer "k" denoting the length of the subarray. Find the maximum element in each subarray of size k.
Solution(Using Priority Queue)
• Create a priority queue that stores the elements in descending order of priority.
• Add the first k numbers in the priority queue.
• Print the maximum element in the first subarray.
• Remove the first element from the priority queue.
• Similarly, update the priority queue in every iteration and display the maximum element in each window.
import java.util.Collections;
import java.util.PriorityQueue;
public class Main {
// function to calculate the maximum number in the subarray
private static void maxOfSubarray(int[] arr, int k) {
// create a priority queue which stores the maximum element at the front end
PriorityQueue<Integer> priorityQueue = new PriorityQueue<>(Collections.reverseOrder());
int i;
// add first k numbers in the priority queue
for(i = 0 ; i < k ; i++)
// print the maximum number in the first subarray with size k
System.out.print(priorityQueue.peek()+" ");
// remove the first element from priority queue
// add one element in every iteration and find the maximum element
for( ; i < arr.length ; i++) {
System.out.print(priorityQueue.peek()+" ");
priorityQueue.remove(arr[i - k + 1]);
// driver Code
public static void main(String[] args) {
int[] arr = new int[] {11 ,3 ,9 ,6};
int k = 3;
maxOfSubarray(arr, k);
You can also try this code with Online Java Compiler
Run Code
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit the Maximum of all Subarrays of Size K.
7.We are given a reference or a pointer to the root of a binary search tree. Our task is to find the Lowest Common Ancestor in a Binary Search Tree.
• Recursively visit the nodes.
• If both given nodes are smaller than the root node, traverse the left subtree.
• If both given nodes are greater than the root node, traverse the right subtree.
• If cases (1) and (2) don’t satisfy, return the current subtree root node.
• If root is Null return Null.
//C++ program to find the Lowest Common Ancestor of 2 nodes in a BST
#include <bits/stdc++.h>
using namespace std;
// struct Node to create Nodes in the Binary search tree
struct Node{
int data; // value of the node
Node *left; // left child
Node *right; // right child
Node(int d)
this->data = d;
this->left = nullptr;
this->right = nullptr;
// function that finds the LCA of given 2 nodes present in the binary search tree
Node* findLCAinBST(Node* root, Node *nodeA, Node *nodeB){
if(root==nullptr) // base case
return root;
// if both nodes a and b are smaller than root node
if(nodeA->data < root->data and nodeB->data < root->data)
return findLCAinBST(root->left, nodeA, nodeB); // go to the left subtree
// if both nodes a and b are greater than root node
if(nodeA->data > root->data and nodeB->data > root->data)
return findLCAinBST(root->right, nodeA, nodeB); // go to the right subtree
// return the root since this is the LCA
return root;
Before moving on, first, try this Problem on Coding Ninjas Studio.
You can visit Lowest Common Ancestor in a Binary Search Tree for the solution.
8. The next greater element is to find the greater element from the previous one. You will be given an array of numbers. Now, the task is to find the element greater than the element present in the
current index of the array.
Solution(Using Stack)
• Mark current element as next greater element.
• Compare the top element of the stack if not empty with next greater element.
• If the top element is less than next greater element, then pop element from the stack. Popped element will have the next greater element in next greater element.
• Keep following the above step while the popped element is smaller than next greater element. Next greater element will be the next greater element for all those elements.
• After that, push the next greater element element in the stack. After the loop is finished, pop each element and print -1 as their next greater element for them.
#include <bits/stdc++.h>
using namespace std;
void nextGreaterEle(int numArr[])
stack<int> st;
for (int i = 1; i < 5; i++)
if (st.empty()) {
while (st.empty() == false && st.top() < numArr[i])
cout << st.top()<< " -> " << numArr[i] << endl;
while (st.empty() == false) {
cout << st.top() << " -> " << -1 << endl;
/* Driver code */
int main()
int arr[] = {5, 10, 4, 12, 2};
return 0;
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit Next Greater Element.
9.An elevation map of Flatland is given as an array. Every element represents the height of the building. The width of the buildings can be considered 1. Find the volume of trapped rainwater between
these buildings.
Solution(Two Pointers Technique)
• Initialize left = 0 (Index of current where min(right-max, left-max) =left-max ).
• Initialize right = heights.size() - 1(Index of current where min(right-max, left-max) =right-max).
• Set right_max and left_max equal to 0.
• While left < right, do:
□ If heights[left] < heights[right], then:
☆ Set left_max = max(left_max, heights[left]).
☆ Add left_max - heights[left] to volume.
☆ Increment left.
□ Else:
☆ Set right_max = max(right_max, heights[right]).
☆ Add right_max - heights[right] to volume.
☆ Decrement right.
#include <bits/stdc++.h>
using namespace std;
int rainWaterVol(vector<int> heights) {
int total_volume = 0;
// left and right pointer to track current left_max and right_max.
int left = 0 , right = heights.size() - 1;
int right_max = 0, left_max = 0;
while(left < right) {
/* If current left_max is less than current right_max,
water level depends on left_max.*/
if(heights[left] < heights[right]) {
left_max = max(heights[left], left_max);
total_volume += left_max - heights[left];
else {
/* If current right_max is less than current left_max.
Water level depends on right_max.*/
right_max = max(heights[right], right_max);
total_volume += right_max - heights[right];
return total_volume;
int main() {
int size;
cout << "Size of the array: ";
cin >> size;
vector<int> heights(size);
cout << "Enter the elements:\n";
for (int i = 0; i < size; i++) {
cin >> heights[i];
cout << "Volume of trapped rainwater: " << rainWaterVol(heights);
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit Trapping Rainwater.
5.For a given integer array of size 'N' containing all distinct values, find the total number of Inversions. Note: if a[i] and a[j] are the two elements from the given array, they form an inversion
if a[i] > a[j] and i < j.
Solution(Merge Sort)
• In the merge sort algorithm, we divide the given array into two halves (left half, right half) and sort both the halves using recursion. After that, we merge both the sorted halves to get our
final sorted array.
• Divide the given array into two equal or almost equal halves in each step until the base case is reached.
• Create a merge_inv function that counts the number of inversions when the left and right halves are merged. if a[i] is greater than b[j] at any point in merge(), there are (half – i) inversions.
Because the left and right subarrays are sorted, the remaining elements in the left half (a[i+1], a[i+2],... a[half]) will all be greater than b[j].
• Create a recursive function to split the array in half and find the answer by adding the number of inversions in the first half, the number of inversions in the second half, and the number of
inversions by merging the two halves.
• The base case is when there is only one element in the given subarray.
• Print the result.
#include <bits/stdc++.h>
using namespace std;
int mergesort_inv(int a[], int tmp[], int l, int r);
int merge_inv(int a[], int tmp[], int l, int half, int r);
// sorts nd returns the inversions
int mergeSort(int a[], int n)
int tmp[n];
return mergesort_inv(a, tmp, 0, n - 1);
int mergesort_inv(int a[], int tmp[], int l, int r) // merges and returns the inversions
int half, inv_cnt = 0;
if (r > l)
half = (r + l) / 2;
/*divide and call mergesort_inv function for both the parts*/
inv_cnt = inv_cnt + mergesort_inv(a, tmp, l, half);
inv_cnt = inv_cnt + mergesort_inv(a, tmp, half + 1, r);
// merge the two parts
inv_cnt += merge_inv(a, tmp, l, half + 1, r);
return inv_cnt;
// merge and return the inversions
int merge_inv(int a[], int tmp[], int l, int half, int r)
int i, j, k;
int inv_cnt = 0;
i = l; // index for left subarray
j = half; // index for right subarray
k = l; // index for resultant merged subarray
while ((i <= half - 1) && (j <= r))
if (a[i] <= a[j])
tmp[k] = a[i];
tmp[k] = a[j];
inv_cnt = inv_cnt + (half - i); // merge inversion count
/*Copy the remaining elements of the left
subarray (if there are any) to temp*/
while (i <= half - 1)
tmp[k] = a[i];
/*Copy the remaining elements of the right
subarray (if there are any) to temp*/
while (j <= r)
tmp[k] = a[j];
/*Copy back the merged elements to the
original array*/
for (i = l; i <= r; i++)
a[i] = tmp[i];
return inv_cnt;
int main()
int a[] = { 7, 5, 9, 3, 4 };
int n = sizeof(a) / sizeof(a[0]);
int res = mergeSort(a, n);
cout << "Inversions: " << res;
return 0;
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit Count Inversions.
10.You are given an array of positive integers. Find the GCD(Greatest Common Divisor) of a pair of elements such that it is maximum among all possible pairs. GCD(a, b) is the maximum number x, so
both a and b are divisible by x.
Input: 5 2 4 3 1
Output: 2
We will follow the above approach
• Make an array(of size M) containing the frequency of elements present in the given array. The numbers which are not present in the array will have zero frequency.
• Iterate i from M to 1.
• Iterate j on multiples of i up to M (i, 2i, 3i,.. <= M).
• Count the frequency of j.
• If the count is more than 1, then i will be the maximum GCD value.
int maxGCDPair(vector<int> &arr, int n)
int m = 0;
// Finding maximum element.
for (int i = 0; i < n; i++)
m = max(m, arr[i]);
// Finding frequency of each element.
vector<int> freq(m + 5, 0);
for (int i = 0; i < n; i++)
for (int i = m; i > 0; i--)
int cnt = 0;
for (int j = i; j <= m; j += i)
cnt += freq[j];
if (cnt > 1)
// i is a divisor of two or more elements.
return i;
return 1;
11.Given a text and a wildcard pattern, determine whether that pattern can be matched with the text. The matching should cover the entire text. Return true if the pattern can be equal to the text;
otherwise, false.
1. The wildcard pattern can include characters ‘?’ and ‘*.’
2. ‘?’: matches any single character.
3. ‘*’: matches any sequence of characters (including the empty line)
4. Each occurrence of ‘?’ can be replaced with any character, and each occurrence of ‘*’ can be replaced with a sequence of characters such that the wildcard pattern becomes identical to the input
string after replacement.
Solution(Dynamic Programming)
• If the length of the text string is m and pattern length is n, declare a 2-D bool vector named dp of size (n+1)*(m+1) and initialise the values to 0.
• Base cases:
□ dp[0][0] = 1 because empty strings always match.
□ Whenever the text string is empty, the pattern string can only be matched if it equals ‘*” because the ‘*’ can be replaced by an empty string. So, if text string s ==” “, if p==’*’, we can
write for all i from 0 to n dp[i][0] = 1. Else, dp[i][0] = 0.
□ If the pattern string is empty and the text string is not empty, it can never be matched. Therefore, for all i from 0 to m, dp[0][i] = 0;
• Start from i=1 to n and j=1 to n. Recurrence relation is:
□ If p[i-1] == ‘*’, replace this with an empty sequence and check for dp[i][j-1] or replace this with current character s[j-1] move ahead in s and check for dp[i-1][j].
□ Else If p[i-1] ==’?’, replace this with the current character and check for the rest, i.e; check for dp[i-1][j-1].
□ Else p[i-1] is a character other than the above two, just check if that character is equal to s[j-1] or not. If it is, check for dp[i-1][j-1], otherwise dp[i][j] = false.
• Return dp[n-1][m-1].
#include <bits/stdc++.h>
using namespace std;
bool isMatch(string s, string p) {
int m = s.length(); //length of string s
int n = p.length(); //length of string p
vector<vector<bool>> dp(n+1, vector<bool>(m+1, false)); //declaration of dp vector
dp[0][0] = true;
bool flag = true;
for (int i = 1; i < dp.size(); ++i) {
if (p[i-1] != '*'){
flag = false;
dp[i][0] = flag;
for (int i = 1; i <= p.size(); ++i) {
for (int j = 1; j <= s.size(); ++j) {
if (p[i-1] == '*') {
if (dp[i-1][j] || dp[i][j-1]){
dp[i][j] = true;
else if (p[i-1] == '?') {
if (dp[i-1][j-1] == true){
dp[i][j] = true;
else {
if(dp[i-1][j-1] == true && p[i-1] == s[j-1]) {
dp[i][j] = true;
return dp[dp.size()-1][dp[0].size()-1];
int main(){
string s,p;
return 0;
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit Wildcard Matching.
12.Consider a board of size 2 * N and tiles of size “2 * 1”. You have to count the number of ways in which tiling this board is possible. You may place the tile vertically or horizontally, as per
your choice.
Solution(Dynamic Programming)
• We declare a DP array of size N before calling the recursive function to store the results of the calculations.
• We find a base case for the recursion and then store the result at every step in this DP array.
• If the result is already present in the array, we need not calculate it again,
• Else we use the DP array to calculate our answer.
/* C++ program to count the number of ways to place 2*1 size tiles on a 2 * n size board using the bottom-up approach.*/
#include <iostream>
using namespace std;
// Function to calculate the total number of possible ways of tiling a board.
int numOfWays(int n)
// Memoization array to store the total possibilities.
int dp[n + 2];
int i;
// The first 2 numbers of the sequence are 0 and 1.
dp[0] = 0;
dp[1] = 1;
// To calculate the result using the previous 2 results and to store it in the array.
for (i = 2; i <= n; i++)
dp[i] = dp[i - 1] + dp[i - 2];
// Returning the nth number.
return dp[n];
int main()
int n;
// Taking user input.
cout << "Enter the value of N: ";
cin >> n;
// Calling the function to predict the output.
cout << "The total number of possible ways to place the tiles on the board are " << numOfWays(n);
return 0;
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit Tiling Problem.
13.You are given an integer N. For a given N x N chessboard, find a way to place 'N' queens such that no queen can attack any other queen on the chessboard.
A queen can be attacked when it lies in the same row, column, or the same diagonal as any of the other queens. You have to print one such configuration.
• Start with the first row.
• Check the number of queens placed. If it is N, then return true.
• Check all the columns for the current row one by one.
□ If the cell [row, column] is safe then mark this cell and recursively call the function for the remaining queens to find if it leads to the solution.
□ If it leads to the solution, return true, else unmark the cell [row, column] ( that is, backtrack) and check for the next column.
• If the cell [row, column] is not safe, skip the current column and check for the next one.
• If none of the cells in the current row is safe, then return false.
/*C++ code to solve N queen problem by backtracking*/
#include <bits/stdc++.h>
using namespace std;
/*function to check if a cell[i][j] is safe from attack of other queens*/
bool isSafe(int i, int j, int board[4][4], int N)
int k, l;
// checking for column j
for (k = 0; k <= i - 1; k++)
if (board[k][j] == 1)
return 0;
// checking upper right diagonal
k = i - 1;
l = j + 1;
while (k >= 0 && l < N)
if (board[k][l] == 1)
return 0;
k = k + 1;
l = l + 1;
// checking upper left diagonal
k = i - 1;
l = j - 1;
while (k >= 0 && l >= 0)
if (board[k][l] == 1)
return 0;
k = k - 1;
l = l - 1;
return 1; //the cell[i][j] is safe
int n_queen(int row, int numberOfqueens, int N, int board[4][4])
if (numberOfqueens == 0)
return 1;
int j;
for (j = 0; j < N; j++)
if (isSafe(row, j, board, N))
board[row][j] = 1;
if (n_queen(row + 1, numberOfqueens - 1, N, board))
return 1;
board[row][j] = 0; //backtracking
return 0;
int main()
int board[4][4];
int i, j;
for (i = 0; i < 4; i++)
for (j = 0; j < 4; j++)
board[i][j] = 0;
n_queen(0, 4, 4, board);
//printing the board
for (i = 0; i < 4; i++)
for (j = 0; j < 4; j++)
cout << board[i][j] << "\t";
cout << endl;
return 0;
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit the N Queens.
14.The median of the stream of integers, or running integers median problem, is that we need to find the effective median of all the incoming integers in the array.
• Check if the input array length is >1. If not, return the first element of the array if the array length is 1. Else return None if the size of the input array is 0.
• Initiate a min-heap and a max-heap with the first two values in the array. Push the smaller value in the max-heap and the bigger value in the min-heap. Initiate curr_med as the average of top
elements of both the heaps.
• If the incoming element is greater than the curr_med, push it in the min-heap. Otherwise, in max-heap
• If the sizes of both the heaps are equal, store the median value as the average of the top elements of both the heaps.
• If the sizes of both the heaps differ by 1, store the median value as the top element of the bigger heap.
• If the sizes differ by more than one, pop the top element from the larger heap and push it into the smaller heap to balance the heaps. Store the median value as the average of the top elements of
the heaps.
• Repeat steps 3-7 until the end of the input array.
import heapq
def medianOfInt(arr,size):
#creating a min-heap and max-heap as well as a list medians to store all the
#effective medians.
medians = []
if size==0:
return None
if size==1:
return [arr[0]] #need atleast one element in the input array.
med = arr[0]
medians.append(med) #append the medians in the median array.
heapq.heappush(Maxheap, -med)
for i in range(1, size):
x = arr[i]
if len(Maxheap) > len(Minheap):
if x < med:
heapq.heappush(Minheap, -(heapq.heappop(Maxheap)))
heapq.heappush(Maxheap, -x)
heapq.heappush(Minheap, x)
med = ((-Maxheap[0]) + Minheap[0]) / 2.0
elif len(Maxheap) < len(Minheap):
if x > med:
heapq.heappush(Maxheap, -(heapq.heapop(Minheap)))
heapq.heappush(Minheap, x)
heapq.heappush(Maxheap, -x)
med = ((-Maxheap[0]) + Minheap[0]) / 2.0
if x < med:
heapq.heappush(Maxheap, -x)
med = int(-Maxheap[0])
heapq.heappush(Minheap, x)
med = int(Minheap[0])
return medians
arr=[int(x) for x in input().split(" ")]
You can also try this code with Online Python Compiler
Run Code
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit the Median of Stream of Integers.
15.You have been given an array/list ARR of length N consisting of Os and 1s only. Your task is to find the number of subarrays(non-empty) in which the number of Os and 1s are equal.
Input: 1 0 0 1 0 1 1
Output: 8
We will follow the given algorithm to solve this question.
1. We will maintain 'RESULT' to count subarrays with equal 0s and 1s.
2. We will initialize 'CUMULATIVE' to 0 to store the cumulative sum.
3. Declare a hashtable 'FREQUENCY' that stores the frequency of the cumulative sum.
1. Initialize FREQUENCY[0] by 1 as the cumulative sum of the empty array is 0.
4. Now start iterating over the array and calculate the cumulative sum.
1. If ARR[i] == 1, we increment 'CUMULATIVE' by 1.
2. Else, we decrement it by 1.
3. Add FREQUENCY[CUMULATIVE] to RESULT. This is because the total subarrays ending on the current element with sum 0 are given by FREQUENCY[CUMULATIVE].
4. Increment FREQUENCY[CUMULATIVE] by 1.
int subarrays(vector<int>& arr, int n)
int cumulative = 0;
int result = 0;
unordered_map<int, int> frequency;
frequency[0] = 1;
for (int i = 0; i < n; i++)
if (arr[i] == 1)
cumulative += 1;
cumulative -= 1;
// Required subarrays ending on the current element.
result += frequency[cumulative];
return result;
Click here to know about, Servicenow Interview Questions
16.You are given a linked list of 'N' nodes and an integer 'K'. You have to reverse the given linked list in groups of size K i.e if the list contains x nodes numbered from 1 to x, then you need to
reverse each of the groups (1,K),(K+1,2*K), and so on.
For example, if the list is [1, 2, 3, 4, 5, 6] and K = 2, then the new list will be [2, 1, 4, 3, 6, 5].
Solution (Iterative)
• In this approach, we reverse one block at a time while traversing through the list. Let the block size be denoted by K. We need to follow the following steps to reverse the linked list as
required :
□ Reverse the subsequent K consecutive nodes or the remaining nodes in the list, whichever is smaller. In the meanwhile, keep track of the information of four nodes, the node before head, head,
tail, and the node after tail.
□ At last, put the reversed sub-list back to the correct position, in between the node before head (former head) and the node after the tail (former tail).
□ Move forward to reverse the next sub-list.
• Repeat the above steps till we reach the end of the linked list or we have considered the entire block array.
Time Complexity : O(L)
Space Complexity : O(1)
Where L is the number of nodes in the Linked-List.
Node *getListAfterReverseOperation(Node *head, int n, int b[]) {
// If linked list is empty, return head of the linked list.
if (head == NULL) {
return NULL;
// Variable to keep track of the current index in the block array.
int idx = 0;
Node *prev = NULL, *cur = head, *temp = NULL;
Node *tail = NULL, *join = NULL;
bool isHeadUpdated = false;
// Reverse nodes until the list is empty or entire block array has been considered.
while (cur != NULL && idx < n) {
// K represents the size of the current block
int K = b[idx];
// Just move to the next block if size of the current block is zero
if (K == 0) {
join = cur;
prev = NULL;
// Reverse nodes until end of list is reached or current block has been reversed
while (cur != NULL && K--) {
temp = cur->next;
cur->next = prev;
prev = cur;
cur = temp;
// Update the head pointer when reversing the first block.
if (isHeadUpdated == false) {
head = prev;
isHeadUpdated = true;
// Tail pointer keeps track of the last node before the current K-reversed linked list.
// We join the tail pointer with the current K-reversed linked list's head.
if (tail != NULL) {
tail->next = prev;
// The tail is then updated to the last node of the current K-reversed linked list.
tail = join;
// If entire block is iterated and reached at end, then we append the remaining nodes to the tail of the partial modified linked list.
if (tail != NULL) {
tail->next = cur;
// Return the head of the linked list.
return head;
Before moving on, first, try this Problem on Coding Ninjas Studio.
For the solution, you can visit the Reverse Nodes in k-Group. | {"url":"https://www.naukri.com/code360/library/amazon-interview-questions","timestamp":"2024-11-04T18:27:30Z","content_type":"text/html","content_length":"650525","record_id":"<urn:uuid:1b6c1a4d-c93e-4a42-a948-6aef2223ba66>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00084.warc.gz"} |
Scala Operators | Learn 5 Major Types of Scala Operators
Updated June 28, 2023
Introduction to Scala Operators
Operators are used to perform logical and mathematical computation in any programming language. Scala also has various operators to perform various calculations and tasks but they are implemented as
methods as Scala is an object-oriented language hence it treats every symbol as object and operation as a method. They make computation simple and easy.
Different operators present in Scala are:
• Arithmetic operators
• Relational operators
• Logical operators
Now let us study each operator in detail.
Scala Arithmetic Operators
These operators are used to perform mathematical calculations or computations.
Operator Symbol Explanation Format
Addition + Adds both operands x + y
Subtraction – Subtracts right operand from the left one x – y
Multiplication * Multiplies both the operands x * y
Division / Divide numerator by the denominator x / y
Modulus % Returns remainder after division x % y
Example: Arithmetic Operators in Scala
object Arith {
def main (args: Array [String]) {
var a = 10;
var b = 5;
println (a + b);
println (a – b);
println (a * b);
println (a / b);
println (a % b)
scala> Arith.main (null)
Scala Assignment Operators
These operators are used to assign values to a variable or an object.
Operator Symbol Explanation Format
Assignment = Assigns the value of right operand to the left operand x = y + z
Addition += Adds both operands and finally assign the value to the left operand x += y
Subtraction -= Subtracts right operand from the left and then assign the value to the left operand x -= y
Multiplication *= Multiplies both operands and assign the value to the left operand x *= y
Division /= Divides left operand by the right operand and assign the value to the left operand x /= y
Modulus %= Evaluates modulus of two operands and assign the value to the left operand x %= y
Bitwise AND &= Compares the binary value of two operands, return 1 if both operands are 1 else return 0 and assign the value to the left operand x &= 5
Bitwise OR |= Compares the binary value of two operands, return 0 if both operands are 0 else return 1 and assign the value to the left operand x |= 5
Bitwise XOR ^= Compares the binary value of two operands, return 0 if both operands are same else return 1 and assign the value to the left operand x ^= 5
Left shift <<= Shifts the bits towards left and assign the result to the left operand x <<= 5
Right shift >>= Shifts the bits towards the right and assign the result to the left operand x >>= 5
Example: Assignment operators in Scala
object Assign {
def main (args: Array [String]) {
var a = 10;
var b = 5;
println (a += b);
println (a –= b);
println (a *= b);
println (a /= b);
println (a %= b);
a = 20;
b = 15;
println (a &= b);
println (a |= b);
println (a ^= b);
println (a <<= 2);
println (a >>= 2);
scala> Assign.main (null)
Scala Relational Operators
These operators return Boolean value after checking the mentioned conditions.
Operator Symbol Explanation Format
Equal to == Returns true if both operands are equal else return false x == y
Not Equal to != Returns true if both operands are not equal else return false x != y
Greater than > Returns true if the left operand is greater than right else return false x > y
Less than < Returns true if the left operand is smaller than right else return false x < y
Greater than or equal to >= Returns true if the left operand is greater than or equal to the right else return false x >= y
Less than or equal to <= Returns true if the left operand is smaller than or equal to the right else return false x <= y
Example: Relational Operators in scala
object Relation {
def main (args: Array [String]) {
var a = 10;
var b = 5;
println (a == b);
println (a != b);
println (a > b);
println (a < b);
println (a >= b);
println (a <= b);
scala> Relation.main (null)
Scala Logical Operator
These operators also return Boolean value according to the inputs or operands.
Operator Symbol Explanation Format
Logical AND && Returns true if both operands are non zero else return false x && y
Logical OR || Returns true if any of the operands is nonzero else return false x || y
Logical NOT ! It reverses the operand. Returns true for false and vice versa !x
Example: Logical operators in Scala
object Logic {
def main (args: Array [String]) {
var a = true;
var b = false;
println (a && b);
println (a || b);
println !(b);
scala> Logic.main (null)
Scala Bitwise Operators
These operators work on bits and return corresponding integer value as output.
Operator Symbol Explanation Format
Binary AND & Check the operands bitwise and return 1 if both bits are 1 else return 0 x & y
Binary OR | Check the operands bitwise and return 0 if both bits are 0 else return 1 x | y
Binary XOR ^ Check the operands bitwise and return 0 if both bits are same else return 1 x ^ y
Binary NOT ~ Returns ones complement i.e. changes 1 to 0 and vice versa ~x
Binary Left Shift << Bits of the left operand is shifted left side by the number of bits mentioned by the right operand x << 3
Binary Right Shift >> Bits of the left operand is shifted right side by the number of bits mentioned by the right operand x >> 3
Binary Right Shift zero fill >>> Bits of the left operand is shifted right side by the number of bits mentioned by right operand and shifted values are substituted bu zeroes. x >>> 3
Example: Bitwise Operators in Scala
object Bit {
def main (args: Array [String]) {
var a = 10;
var b = 5;
println (a & b);
println (a | b);
println (a ^ b);
println ( ~ b);
a = 16;
b = 12;
println (a >> b);
println (a << b);
println (a >>> b);
scala> Bit. main (null)
Recommended Articles
We hope that this EDUCBA information on “Scala Operators” was beneficial to you. You can view EDUCBA’s recommended articles for more information. | {"url":"https://www.educba.com/scala-operators/","timestamp":"2024-11-12T22:46:55Z","content_type":"text/html","content_length":"321776","record_id":"<urn:uuid:0a7d25cb-5d16-485c-947e-839b9d9f771a>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00764.warc.gz"} |
Locale Data Summary for Swedish [sv]
Bostadsbyte i Sverige Stockholm, 15k, S Sweden
1 square foot is equal to 0.09290304 square meters. Free Convert 0.3 hectare (ha) to square foot (ft^2) Converter calculator in area units,0.3 hectare to square foot conversion table and from 0.3
hectare to ot Unit Descriptions; 1 Hectare: 10 000 m 2: 1 Square Foot: Square foot (sq ft) is an area of one foot by one foot. One square foot = 144 square inches = 1 / 9 square yards. In SI units a
square foot is 0.092 903 04 square meters. 1 sq ft = 0.09290304 m 2 Nali to Square Feet | Nali to Gaj in Uttarakhand Nali is a traditional unit of area. Nali is used for land measurement in
Uttarakhand state of India.
units.squareFeet=Square feet. units. The exemption of the acre for land registration allowed under Art 1.b however is of measurement equal to 1 600 négyszögöl, i.e. 0,5755 hectares or 5 755 m2 . the
volume in square feet and value in euro of imports into and resales made Its 500 meters (1,640 feet) link the Old Town with Castle Hill and provide some The statues emerging from the mists of a
Prague dawn is one of the loveliest sights of the city. Prague's central boulevard and largest public square, Wenceslas Square The hilltop castle stands proud in the center, a 7-hectare complex that
A very spacious (65 square metres or 700 square feet), one bedroom, situated in a 5 hectare working olive grove, 5 minutes drive from the picturesque white 1-800 780-7234. Fax. (+1) 623-780-6099
1-800 780-7234.
10+ Moodboard beachhouse idéer drömhus, arkitektur, hus
4 hectare to square feet = 430,556.42 sq ft , ft2. 5 hectare to square feet = 538,195.52 sq ft , ft2.
acre på svenska Engelsk-svensk översättning DinOrdbok
Square Feet.
The square foot is primarily used in the U.S., UK, HK, Canada, Pakistan, India and Afghanistan. 1 square foot is equal to 0.09290304 square meters.
Dvmt mode
hectare Open hectare information in new window. ha. Convert to square meters, square inches, and square feet. Learn how to And we now have our factor for conversion from ft2 to m2 since 1 * 0.0929030
= 0.0929030. hectare. ha.
1, {. 2, "widgetLabel": "Ytmätning",. 3, "hint": "Börja mäta genom att klicka i scenen för 15, "square-miles": "Square miles", 18, "square-yards": "Square yards",. 1. A limited area of land with
grass or crops growing on it, which is usually 2. A unit of surface area equal to 43,560 square feet, approximately 0.4 hectares.
Gamla yrken
Converting sq cm to sq The kappland is equal to 1/32 tunnland or 1750 square Stockholm feet feet (kvadratfot); this is equivalent to 4936.4 square meters, 0.493 64 hectare, or about 5, Hectare,
"ha". 6, International acre, "uk_acre". 7, Morgen, "Morgen". 8, Square angstrom, "ang2" or “ang^2". 9, Square feet, "ft2" or "ft^2". 10, Square inches The exemption of the acre for land registration
allowed under Art 1.b however is of measurement equal to 1 600 négyszögöl, i.e. 0,5755 hectares or 5 755 m2 .
anagram cheater. rate to 100 Ares (2.471 acres) and equivalent to 10,000 square meters (107,639 square feet). One hectare is equal to 10,000 square meters and 2.4711 acres. This is a step by step
video tutorial on how to convert square cm to square inches.
Fakta norge
heroma webb lysekilaffärskommunikation på engelskaswedbank support företagkumulativ normalfordeling tabellture sventon julkalender svt
Sonhult Vacation Rentals & Homes - Halland County, Sweden
1 sq ft = 0.09290304 m 2 Free Convert square foot (ft^2) to hectare (ha) Converter calculator in area units, square foot to hectare conversion table and from square foot to other a Hectare, a
commonly used unit of the metric system that is used to measure land and plots all across the world, goes by the symbol ha. Invented in the year 1795, the term hectare is a conjunction of the Latin
words area and hect. A hectare is equal to 10,000 square meters and 2.471 acres in the British Imperial System. Free Convert 0.3 hectare (ha) to square foot (ft^2) Converter calculator in area
units,0.3 hectare to square foot conversion table and from 0.3 hectare to ot An acre is about 0.405 hectare and one hectare contains about 2.47 acres. In 1795, when the metric system was introduced,
the "are" was defined as 100 square metres and the hectare ("hecto-" + "are") was thus 100 "ares" or 1⁄100 km2. 1 Hacienda in Square Feet equals to: 964446373.34: 1 Hectare in Square Feet equals to:
107639.1: 1 Hide in Square Feet equals to: 5231260.46: 1 Hout in Square Feet square feet to point point to square feet square meter to point point to square meter sqft refer to Square Feet = (平方尺)
sqm refer to Square Meter = (平方米) 1 hactare = 2.47105 acre. 1 acre = 0.404 hectare.
Ska skall språkrådetbygg ritningar
Hus för Köp i London, United Kingdom - Properstar
10 acre to For example, if the Hectare number is (40), then its equivalent Square Hectometer number would be (40). Formula: 40 ha = 40 x 1 hm² = 40 hm² Easily and interactively generates a cheat
sheet with conversions from square feet to hectares [1 sq ft = 0.000009290304 ha]. Users can specify some Many of us can't find the answer regarding 1 Point is equal to how many Square Feet? on the
Internet. Hopefully this article can help you. Point system is being 9 Jan 2021 Use this page to learn how to convert between hectares and acres. Therefore, 1 Var is equal to 9 sq ft and value of
100 var is 900 sq ft. | {"url":"https://lonpvhl.web.app/16133/83897.html","timestamp":"2024-11-02T20:42:19Z","content_type":"text/html","content_length":"16483","record_id":"<urn:uuid:9c853c33-89b7-481f-b896-26ff3bdc3f2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00174.warc.gz"} |
Netica(TM) API Programmer's Reference Manual; API Funtion: IsNodeDeterministic_bn
bool_ns IsNodeDeterministic_bn ( const node_bn* node )
If this returns TRUE then node is a deterministic node, which means that: given values for its parents, its value is determined with certainty.
There is no API function to directly set whether a node is deterministic, but setting all its conditional probabilities (i.e., CPT entries) to 0 or 1 will make a node deterministic. Building its
table just with SetNodeFuncState_bn or SetNodeFuncReal_bn also will. Note that a node with a deterministic equation can result in a non-deterministic CPT, due to uncertainties introduced in the
discretization process.
This function is available in all versions.
See also:
HasNodeTable_bn Determine if node has any table
SetNodeProbs_bn To change whether a node is deterministic
GetNodeType_bn To determine if a node is for a discrete or continuous variable
GetNodeKind_bn To determine what kind of node it is | {"url":"https://norsys.com/onLineAPIManual/functions/IsNodeDeterministic_bn.html","timestamp":"2024-11-03T04:19:24Z","content_type":"text/html","content_length":"3976","record_id":"<urn:uuid:de9876c1-bec1-43e0-884a-e5274e9014a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00538.warc.gz"} |
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
Thanks! This new software is a real help. My son is able to get real answers, where I just performed the step without real thought. You may have just saved his grades.
Maria Peter, NY
Thank you very much for your help. This is excellent software and I thank you.
Jessica Flores, FL
The program helped my son do well on an exam. It helped refresh my memory of a lot of things I forgot.
Kevin Porter, TX
I am very particular about my son's academic needs and keep a constant watch. Recently, I found that he is having trouble in understanding algebra equations. I was not able to devote much time to him
because of my busy schedule. Then this software came as God sent gift. The simple way of explaining difficult concepts made my son grasp the subject quickly. I recommend this software to all.
Mika Lester, MI
Search phrases used on 2013-03-07:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• copy of Houghton Mifflin 5th grade math test
• the university of chicago math project. functions, statistics and trig chapter 3 answers
• permutation and combination tutorial
• radical expressions calculator online
• factor puzzles for math in a sqaure
• free step-by-step math answers
• Algebra For Beginners
• algebra projects for matrices with percents
• simultaneous equations solvers
• how do i solve algebra fractions
• how to calculate logarithm casio
• john fraleigh topic algebra pdf
• Lesson plans for exponents
• ti 89 solve for a variable ellipse
• partial sums + addition
• if divisible by in java
• answer to a algebra question
• solve and simplify rational equations
• How to use texas Instrument T1-83 calculator base power
• algebra connections volume one (answers)
• mathematics trivia in conversion
• Pizzazz Worksheets, download
• adding whole numbers and decimals WORKSHEET 3
• investigatory project
• practice worksheet 5th grade math exponents
• asymptotes for dummies
• +online TI 84 plus to solve matrix equations
• decimal to square root
• adding dividing multiplying and subtracting decimals
• free fifth grade math worksheets
• free mcdougal littell algebra 2 help
• fifth grade problems for slope
• solving simultaneous equations in matlab
• mathematical trivia algebra
• free algebra homework solver download
• ''statistics for yr 6''
• free printable exercises for 1st graders
• prentice hall textbook enrichment worksheet answers
• trinomial factoring calculator
• math workbook page 44 45 6th grade
• solve second differential equation dsolve matlab non-homogeneous
• polynomial algorithm, java
• how can i download descartes to my ti 83
• samlpe question in discriminant
• mcdougal littell answers
• energy method to solve wave equation
• q&a gcd backward elimination
• polynomials in math grade 8
• free trigonometry program
• solving equations using square roots worksheet
• interesting questions for yr.8
• factoring gr 9 math
• convert square roots
• how to solve an algebra problem
• free worksheet on simplification
• free printable multistep inequality worksheets
• glencoe math workbooks
• "number pattern" worksheet
• lineal square meters
• "patterning lesson plans"
• reflection math test for gcse
• dividing polynomials remainder calculator
• Plotting points on a ti-83 plu
• typing text into a graphing calculator TI-83
• probability maths ks3 q
• ti89 quadratic
• conversion fractions from least to greatest
• math problems for kids
• teaching like terms
• free printable integer worksheets
• mathematica for dummies
• ALGEBRA SOLVE SOFTWARE
• solve fraction algebra
• multiplying and dividing mixed fraction games
• free college algebra problem solver
• examples of elementary poems
• factoring expressions with trigonometric functions
• how to simplify unknown exponents
• adding subtracting dividing and multiplying integers
• computing statistical formulas on ti 83
• practice 3.1 solving equations by adding and subtracting
• adding subtracting integers
• FOIL math worksheets
• free grade 9 math sheets
• 3-1 pratice representing decimals
• ti 83 graphic calculator finding the y intercept
• math fraction algebra worksheet free grade 9
• slope study for beginers
• equations powerpoints
• adding integers algebra worksheet with answers
• solving equations by adding or subtracting game
• math challenges for sixth grader
• distributive property equation worksheet
• how to calculate a linear combination of two numbers
• How to solve third polynomial
• how to learn the box-method in algebra
• free download of accounting books
• solving equations using addition or subtraction of integers worksheets
• + squar root of 32 | {"url":"https://softmath.com/math-book-answers/sum-of-cubes/simplify-the-following.html","timestamp":"2024-11-10T08:51:56Z","content_type":"text/html","content_length":"36038","record_id":"<urn:uuid:406d16a6-6fd3-4365-a3f4-9ab5d1359abf>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00138.warc.gz"} |
Time Series Analysis using R - forecast package
In today’s blog post, we shall look into time series analysis using R package –
. Objective of the post will be explaining the different methods available in forecast package which can be applied while dealing with time series analysis/forecasting.
What is Time Series?
time series
is a collection of observations of well-defined data items obtained through repeated measurements over time. For example, measuring the value of retail sales each month of the year would comprise a
time series.
• Identify patterns in the data – stationarity/non-stationarity.
• Prediction from previous patterns.
Time series Analysis in R:
My data set contains data of Sales of CARS from Jan-2008 to Dec 2013.
Problem Statement: Forecast sales for 2013
PART Jan08 FEB08 MAR08 .... .... NOV12 DEC12
MERC 100 127 56 .... .... 776 557
Table: shows the first row data from Jan 2008 to Dec 2012
The forecasts of the timeseries data will be:
Assuming that the data sources for the analysis are finalized and cleansing of the data is done, for further
Step1: Understand the data:
As a first step, Understand the data visually, for this purpose, the data is converted to time series object using ts(), and plotted visually using plot() functions available in R.
ts = ts(t(data[,7:66])) plot(ts[1,],type=’o’,col=’blue’)
Image above shows the monthly sales of an automobile
Forecast package & methods:
Forecast package is written by Rob J Hyndman and is available from CRAN
. The package contains Methods and tools for displaying and analyzing univariate time series forecasts including exponential smoothing via state space models and automatic ARIMA modelling.
Before going into more accurate Forecasting functions for Time series, let us do some basic forecasts using Meanf(), naïve(), random walk with drift – rwf() methods. Though these may not give us
proper results but we can use the results as bench marks.
All these forecasting models returns objects which contain original series, point forecasts, forecasting methods used residuals. Below functions shows three methods & their plots.
Library(forecast) mf = meanf(ts[,1],h=12,level=c(90,95),fan=FALSE,lambda=NULL) plot(mf)
mn = naive(ts[,1],h=12,level=c(90,95),fan=FALSE,lambda=NULL) plot(mn)
md = rwf(ts[,1],h=12,drift=T,level=c(90,95),fan=FALSE,lambda=NULL) plot(md)
Measuring accuracy:
Once the model has been generated the accuracy of the model can tested using accuracy(). The Accuracy function returns MASE value which can be used to measure the accuracy of the model. The best
model is chosen from the below results which gives have relatively lesser values of ME,RMSE,MAE,MPE,MAPE,MASE.
> accuracy(md)
Training set 1.806244e-16 2.445734 1.889687 -41.68388 79.67588 1.197689
Training set 1.55489e-16 1.903214 1.577778 -45.03219 72.00485 1
> accuracy(mn)
Training set 0.1355932 2.44949 1.864407 -36.45951 76.98682 1.181666
Step2: Time Series Analysis Approach:
A typical time-series analysis involves below steps:
• Check for identifying under lying patterns - Stationary & non-stationary, seasonality, trend.
• After the patterns have been identified, if needed apply Transformations to the data – based on Seasonality/trends appeared in the data.
• Apply forecast() the future values using Proper ARIMA model obtained by auto.arima() methods.
Identify Stationarity/Non-Stationarity:
A stationary time series is one whose properties do not depend on the time at which the series is observed. Time series with trends, or with seasonality, are not stationary
The stationarity /non-stationarity of the data can be known by applying Unit Root Tests - augmented Dickey–Fuller test (ADF), Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test.
The null-hypothesis for an ADF test is that the data are non-stationary. So large p-values are indicative of non-stationarity, and small p-values suggest stationarity. Using the usual 5% threshold,
differencing is required if the p-value is greater than 0.05.
adf = adf.test(ts[,1])
Augmented Dickey-Fuller Test
data: ts[, 1]
Dickey-Fuller = -4.8228, Lag order = 3, p-value = 0.01
alternative hypothesis: stationary
The above figure suggests us that the data is of stationary and we can go ahead with ARIMA models.
Another popular unit root test is the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. This reverses the hypotheses, so the null-hypothesis is that the data are stationary. In this case, small p-values
(e.g., less than 0.05) suggest that differencing is required.
kpss = kpss.test(ts[,1])
Warning message:
In kpss.test(ts[, 1]) : p-value greater than printed p-value
KPSS Test for Level Stationarity
data: ts[, 1]
KPSS Level = 0.1399, Truncation lag parameter = 1, p-value = 0.1
Based on the unit test results we identify whether the data is stationary or not. If the data is stationary then we choose optimal ARIMA models and forecasts the future intervals. If the data is non-
stationary, then we use Differencing - computing the differences between consecutive observations. Use ndiffs(),diff() functions to find the number of times differencing needed for the data & to
difference the data respectively.
[1] 1
diff_data = diff(ts[,1])
Time Series:
Start = 2
End = 60
Frequency = 1
[1] 1 5 -3 -1 -1 0 3 1 0 -4 4 -5 0 0 1 1 0 1 0 0 2 -5 3 -2 2 1 -3 0 3 0 2 -1 -5 3 -1
[36] -1 2 -1 -1 5 -2 0 2 -2 -4 0 3 1 -1 0 0 0 -2 2 -3 4 -3 2 5
Now retest for stationarity by applying acf()/kpss() functions if the plots shows us the Stationarity then Go ahead by applying ARIMA Models.
Identify Seasonality/Trend:
The seasonality in the data can be obtained by the stl()when plotted
Stl = Stl(ts[,1],s.window=”periodic”)
Series is not period or has less than two periods
Since my data doesn’t contain any seasonal behavior I will not touch the Seasonality part.
ARIMA Models:
For forecasting stationary time series data we need to choose an optimal ARIMA model (p,d,q). For this we can use auto.arima() function which can choose optimal (p,d,q) value and return us. Know more
about ARIMA from
Series: ts[, 2]
ARIMA(3,1,1) with drift
ar1 ar2 ar3 ma1 drift
-0.2621 -0.1223 -0.2324 -0.7825 0.2806
s.e. 0.2264 0.2234 0.1798 0.2333 0.1316
sigma^2 estimated as 41.64: log likelihood=-190.85
AIC=393.7 AICc=395.31 BIC=406.16
Forecast time series:
Now we use forecast() method to forecast the future events.
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
61 -3.076531531 -5.889584 -0.2634795 -7.378723 1.225660
62 0.231773625 -2.924279 3.3878266 -4.594993 5.058540
63 0.702386360 -2.453745 3.8585175 -4.124500 5.529272
64 -0.419069906 -3.599551 2.7614107 -5.283195 4.445055
65 0.025888991 -3.160496 3.2122736 -4.847266 4.899044
66 0.098565814 -3.087825 3.2849562 -4.774598 4.971729
67 -0.057038778 -3.243900 3.1298229 -4.930923 4.816846
68 0.002733053 -3.184237 3.1897028 -4.871317 4.876783
69 0.013817766 -3.173152 3.2007878 -4.860232 4.887868
70 -0.007757195 -3.194736 3.1792219 -4.881821 4.866307
The below flow chart will give us a summary of the time series ARIMA models approach:
The above flow diagram explains the steps to be followed for a time series forecasting
13 comments:
1. Hello, I have a question about "forecast" package (I am using it in Tableau charts). Is there a way to ignore/exclude current month from the calculations?
2. Thanks for a nice tutorial. Are the CARS data you used available for comparison of results with other methods? Is this the same as the standard "cars" dataset used throughout R? Or is it
3. Plus, it will empower you with data management technologies like machine learning, Flume, and Hadoop. artificial intelligence certification
4. Tra vé máy bay giá rẻ tại Aivivu, tham khảo
vé máy bay đi Mỹ Vietnam Airline
vé máy bay từ seattle về việt nam
khi nào có chuyến bay từ canada về việt nam
Lịch bay từ Hàn Quốc về Việt Nam tháng 7
5. Don’t forget to always keep your customers in mind when settling on a box style for your lipstick boxes. For instance, if sustainability is a priority for your ideal customer, consider spending a
little more to get boxes made from post-consumer waste.
6. Yalova
7. 1530A
Rize Evden Eve Nakliyat
Kayseri Lojistik
Bartın Evden Eve Nakliyat
Eryaman Boya Ustası
Yenimahalle Parke Ustası
Bayburt Parça Eşya Taşıma
İstanbul Şehir İçi Nakliyat
Düzce Şehir İçi Nakliyat
Giresun Evden Eve Nakliyat
8. 87297
Düzce Şehir İçi Nakliyat
Silivri Fayans Ustası
Balıkesir Evden Eve Nakliyat
Eryaman Fayans Ustası
Bitlis Parça Eşya Taşıma
AAX Güvenilir mi
Tunceli Lojistik
Kars Şehir İçi Nakliyat
Artvin Lojistik
9. 614A4
Ordu Evden Eve Nakliyat
steroid cycles
Tekirdağ Fayans Ustası
Tokat Evden Eve Nakliyat
Çorum Evden Eve Nakliyat
Artvin Evden Eve Nakliyat
masteron for sale
pharmacy steroids for sale
Kayseri Evden Eve Nakliyat
10. 84AC9
gümüşhane mobil sohbet et
kayseri sesli sohbet
Antep Goruntulu Sohbet
çankırı bedava sohbet siteleri
giresun görüntülü sohbet kadınlarla
rize kadınlarla sohbet
Bayburt Ücretsiz Görüntülü Sohbet Uygulamaları
Tunceli Chat Sohbet
kırklareli bedava görüntülü sohbet sitesi
11. 73F22
gate io
en güvenilir kripto borsası
12. FD353
referans kodu
en güvenilir kripto borsası
binance referans
canlı sohbet ucretsiz
13. 0E345
binance ne demek
bitcoin nasıl üretilir
bitcoin hangi bankalarda var | {"url":"https://www.dataperspective.info/2014/04/time-series-analysis-using-r.html","timestamp":"2024-11-03T12:45:58Z","content_type":"application/xhtml+xml","content_length":"127626","record_id":"<urn:uuid:4bc38470-7a96-42c3-97d4-ee40eb429060>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00444.warc.gz"} |
Quick Start to Client side COM and Python
This documents how to quickly start using COM from Python. It is not a thorough discussion of the COM system, or of the concepts introduced by COM.
Other good information on COM can be found in various conference tutorials - please see the collection of Mark's conference tutorials
For information on implementing COM objects using Python, please see a Quick Start to Server side COM and Python
In this document we discuss the following topics:
Quick Start
import win32com.client
o = win32com.client.Dispatch("Object.Name")
o.property = "New Value"
print o.property
o = win32com.client.Dispatch("Excel.Application")
o.Visible = 1
o.Workbooks.Add() # for office 97 – 95 a bit different!
o.Cells(1,1).Value = "Hello"
And we will see the word "Hello" appear in the top cell.
Good question. This is hard! You need to use the documentation with the products, or possibly a COM browser. Note however that COM browsers typically rely on these objects registering themselves in
certain ways, and many objects to not do this. You are just expected to know.
The Python COM browser
PythonCOM comes with a basic COM browser that may show you the information you need. Note that this package requires Pythonwin (ie, the MFC GUI environment) to be installed for this to work.
There are far better COM browsers available - I tend to use the one that comes with MSVC, or this one!
To run the browser, simply select it from the Pythonwin Tools menu, or double-click on the file win32com\client\combrowse.py
In the above examples, if we printed the 'repr(o)' object above, it would have resulted in
<COMObject Excel.Application>
This reflects that the object is a generic COM object that Python has no special knowledge of (other than the name you used to create it!). This is known as a "dynamic dispatch" object, as all
knowledge is built dynamically. The win32com package also has the concept of static dispatch objects, which gives Python up-front knowledge about the objects that it is working with (including
arguments, argument types, etc)
In a nutshell, Static Dispatch involves the generation of a .py file that contains support for the specific object. For more overview information, please see the documentation references above.
The generation and management of the .py files is somewhat automatic, and involves one of 2 steps:
• Using makepy.py to select a COM library. This process is very similar to Visual Basic, where you select from a list of all objects installed on your system, and once selected the objects are
magically useable.
• Use explicit code to check for, and possibly generate, support at run-time. This is very powerful, as it allows the developer to avoid ensuring the user has selected the appropriate type library.
This option is extremely powerful for OCX users, as it allows Python code to sub-class an OCX control, but the actual sub-class can be generated at run-time. Use makepy.py with a -i option to see
how to include this support in your Python code.
The win32com.client.gencache module manages these generated files. This module has some documentation of its own, but you probably don't need to know the gory details!
How do I get at the generated module?
You will notice that the generated file name is long and cryptic - obviously not designed for humans to work with! So how do you get at the module object for the generated code?
Hopefully, the answer is you shouldn't need to. All generated file support is generally available directly via win32com.client.Dispatch and win32com.client.constants. But should you ever really need
the Python module object, the win32com.client.gencache module has functions specifically for this. The functions GetModuleForCLSID and GetModuleForProgID both return Python module objects that you
can use in your code. See the docstrings in the gencache code for more details.
To generate Python Sources supporting a COM object
Example using Microsoft Office 97.
• Run 'win32com\client\makepy.py' (eg, run it from the command window, or double-click on it) and a list will be presented. Select the Type Library 'Microsoft Word 8.0 Object Library'
• From a command prompt, run the command 'makepy.py "Microsoft Word 8.0 Object Library"' (include the double quotes). This simply avoids the selection process.
• If you desire, you can also use explicit code to generate it just before you need to use it at runtime. Run 'makepy.py -i "Microsoft Word 8.0 Object Library"' (include the double quotes) to see
how to do this.
And that is it! Nothing more needed. No special import statements needed! Now, you simply need say
>>> import win32com.client
>>> w=win32com.client.Dispatch("Word.Application")
>>> w.Visible=1
>>> w
<win32com.gen_py.Microsoft Word 8.0 Object Library._Application>
Note that now Python knows the explicit type of the object.
Makepy automatically installs all generated constants from a type library in an object called win32com.clients.constants. You do not need to do anything special to make these constants work, other
than create the object itself (ie, in the example above, the constants relating to Word would automatically be available after the w=win32com.client.Dispatch("Word.Application") statement.
For example, immediately after executing the code above, you could execute the following:
>>> w.WindowState = win32com.client.constants.wdWindowStateMinimize
and Word will Minimize. | {"url":"https://timgolden.me.uk/pywin32-docs/html/com/win32com/HTML/QuickStartClientCom.html","timestamp":"2024-11-07T12:26:52Z","content_type":"text/html","content_length":"7988","record_id":"<urn:uuid:f96e7215-e2e9-468a-82dc-fa9d2ad7f705>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00016.warc.gz"} |
Troubleshooting excessive index drift in endpoints; fixing Hessian under variable transformation.
[Work Log] Troubleshooting excessive index drift in endpoints; fixing Hessian under variable transformation.
December 05, 2013
Getting bizarre spikes in indices during optimization. Confirmed that removing the spikes will improve ML, but there's a steep well between the two local minima as we reduce the index value:
The index starts at a fairly reasonable initial value, so my only guess is that the Hessian is suggesting a large step which happens to step over the well.
I'm wondering if the hessian is screwy; or maybe it's just the transformation we're using. Optimizing raw indices doesn't exhibit this problem, but it is a problem in our our current approach of
working with log differences to prevent re-ordering.
A prior over index spacing should probably prevent this; I'm hesitatnt to add the extra complexity at this point, considering the additional training and troubleshooting it would entail.
Should unit test the transformation.
Gradient looks okay.
On-diagonal elements have significant error!
Actually, significant off-diagonal error, but on-diagonal dominates.
Since gradient is fine, and it uses the same jacobian, I'm guessing the problem isn't the jacobian transformation, but the hessian itself.
Confirmed. diagonal is (mostly) too high, error on order of 1e-3. Gradient error is around 1e-9.
Detective work. Look at diagonal of each Hessian term, compare to residual of diagonal, look for patterns.
Ugh, worst afternoon ever. Spent hours trying every trick in the book to track down the source of the error, including lots of time looking at the raw Hessian (it wasn't the raw Hessian). Finally
found the bug: I formula I used for the chain rule for Hessians was wrong. In particular, it was missing a second term that (in my problem) corresponded to adding the transformed gradient to the
diagonal. See Faa di Bruno's formula.
Total error is much reduced now, but not zero. around 0.1, instead of 20 before, new results:
The norm-check is around 1e-4; very nice.
Re-running dataset 11 with hopes that we don't lose the global optimum. Interesting observation: optimization requires more iterations than before to converge. It seems a more conservative hessian
results in smaller steps and is less likely Looks better:
Notice we're still getting offset, but at least the reconstruction is qualitatively better that before. However, now we're getting a small loop at the top:
It seems to be caused by changing of index order between views. Needs some thought to best address.
Re-running on all datasets. Hopefully excessive index drift won't be too big an issue. Possibly an extra term to prevent drift far from initial points would be sensible.
Datasets 4,5 still drifts:
Datasets 7,9 have detached curves
Dataset 10, curve 2 (?) appears to have failed to prune
endpoint drift
It looks like interior points are confined by their neighbor points from drifting too far, but end points have no such constraint. After a small amount of drift, they're able to loop back on
themselves and land directly on the backprojection line. It's surprising that the extra flexibility afforded by large spacing between indices doesn't cause marginal likelihood to suffer, since most
of the new configurations are bad ones.
• in-plone offset perturbation
• penalize excessive drift
• penalize excessive gaps (a-la latent GP model).
• penalize shrinkage.
Constraining offset perturbation
Constructing the offset perturbation variance so perturbations can only occur parallel to the image plane. Code is pretty straightforward:
cam_variance = blkdiag(repmat(sigma,N,N), repmat(sigma,N,N), zero(N,N));
camvariance(I,I) = cam_variance;
world_variance = R' * cam_variance * R
Where R is the rotation matrix from world to camera coordinates (i.e. from the extrinsic matrix).
The main difficulty here is logistical: throughout the codebase we store the prior variance in 1D format, and assume it will be expanded to 3D isotropically. Now we have a non-isotropic part of the
prior, and we need to make sure it's included wherever the prior is needed.
Time for a real refactor. Need a 'get_K' function, instead of just taking the prior_K, adding the branch variance, and then adding the nonisotropic offset variance each time. Throw error if prior_K
or branch_variance isn't ready. refactor strategy: search files for prior_K; those that call one_d_to_three_d shortly thereafter are refactor candidates, others will need further recursion to find
the regactor point. But it all starts with prior_K. There shouldn't be that many leaf-nodes that use prior_K -- ML, gradient, reconstruction, WACV-spectific stuff...
Refactoring to use nonisotropic perturb position covariance.
Refactoring construct_attachment_covariance will be difficult, because it works in 1D matrix format, but the new covariance must operate in 3D format. Will involve refactoring everything to 3D,
re-index using block indexing, and double-check that noting relies on an independence assumption.
Better idea: apply perturb covariance after the fact. All connected curves will get the same covariance offset, so none of the special logic in construct_attachment_covariance is needed.
IDEA: use image position as an alternative index, and define a within-plane scaling covariance over it to account for poor calibration. Don't need to know any tree structure; shouldn't distort
reconstruction positions since it's in-plane.
in-plane covariance adds to branch_index only. Thus, the only change needs to be in att_set_branch_index and/or detach. ... But what if we want index-specific in-plane covariance (e.g. scaling). Best
to drop it into get_K, which will trickle into att_set_branch_index
Question: should each curve be allowed to shift in the image, or should the entire image be shifted as one? The former gives more flexibility, but possibly undeserved. One concern is that attaching
two curves will eliminate that freedom, unfairly penalizing attachment. On the other hand, of all curves tend to shift in the same direction, the ML will increase after attaching them, promoting
attachment. Will use per-curve shift for now.
Question: should shift be correlated between views? The theory is shift arises due to camera miscalibration.
Issue: sometimes the covariance matrix is gotten in other ways, e.g. when computing branch index, the parent's covariance is computed in-line.
TODO: recurse on * construct_attachment_covariance
Posted by
Kyle Simek
blog comments powered by | {"url":"http://vision.cs.arizona.edu/ksimek/research/2013/12/05/work-log/","timestamp":"2024-11-07T02:33:37Z","content_type":"application/xhtml+xml","content_length":"13956","record_id":"<urn:uuid:e97f213e-3262-4777-9429-2c83571f5d35>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00659.warc.gz"} |
Time Series Definition - Varsha Saini
What is Time Series Data
Time series data is a sequence of data values that are collected over different intervals of time. The data is present in chronological order of time, allowing us to track changes over time.
What is Time Series Analysis
Time series analysis is the analysis of a sequence of data points collected over an interval of time. It deals with analysing how the same information collected at different times varies.
The main assumption of time series analysis is that the past behaviour of data will be repeated.
The whole time series analysis is based on the assumption that the data under consideration is stationary.
Properties of Time Series
Below are a few properties of time series data:
• Data is present in chronological order of time.
• There is no limit to the time range. It can be a day, month or century etc.
• The time period is the time between the start and end value.
• Graphs don’t follow any of the standard distributions you may have learnt in statistics and probability.
• Frequency is how often the data is recorded i.e daily, monthly, or annually.
• Time series data doesn’t follow Gaus Markov’s Assumption.
• It assumes that the patterns in the variable will continue in the future.
Key Concepts of Time Series
There are four key concepts of time series: trend, seasonality, cyclic and stationarity.
1. Trend
The trend is the movement of time series to a higher or lower value over a long period of time. The trend may be an uptrend, downtrend or sideways trend.
2. Seasonality
A time series data is said to be seasonal if specific patterns appear on a cyclical basis. Seasonality can be of two types: additive and multiplicative.
a. Additive Seasonality
In additive seasonality, the magnitude of seasonal fluctuation is constant.
b. Multiplicative Seasonality
In multiplicative seasonality, the magnitude of seasonal fluctuation is not constant.
3. Cyclic
A time series data with rise and fall components without fixed frequency is regarded as cyclical.
Differentiating seasonal and cyclic data can be a bit difficult. In seasonality,the same behaviour or pattern is seen
over time i.e the components rise and fall with a fixed frequency whereas cyclicity in a time series data is present
due to economical or business conditions and the frequency of rise and fall of the components is not fixed.
4. Stationary
A time series data is said to be stationary if its properties like mean and standard deviation don’t change with time and there is no trend or seasonality in the data.
Autocorrelation is the correlation between a variable with its previous values.
White Noise
White Noise is a sequence of random numbers where every value has a time associated with it but the data doesn’t follow any pattern. It has a constant mean, standard deviation and no autocorrelation.
Therefore white noise data is stationary.
Since white noise data doesn’t follow any pattern, we cannot predict the future value.
Random Walk
Random walk is a special type of time series data in which it seems like values persist over time but the difference between different periods is simply white noise.
Applications of Time Series
Time Series Data is used in various domains for analysis and predictions. Below are a few areas where time series analysis can be useful:
1. Stock Prices Forecasting
2. Interest Rates Prediction
3. Weather Forecasting
4. Sales Forecasting
Limitations of Time Series Analysis
1. Time series analysis mostly works on a single feature.
2. Data needs to be transformed.
3. A bit expensive compared to normal data analysis. | {"url":"https://varshasaini.in/glossary/time-series/","timestamp":"2024-11-07T00:30:32Z","content_type":"text/html","content_length":"192358","record_id":"<urn:uuid:cbaed848-1bd9-4677-b361-11f29c13c466>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00692.warc.gz"} |
Science Journals
Research Papers
Quantum Theory / Particle Physics
Date Published:
October 10, 2024
Quantum gravity hydrodynamics, graviton-photon interaction, contraction and expansion of the primordial universe, Big Bang, primordial sphere, cosmic back- ground radiation
In the present work, the fundamental equations of quantum gravity hydrodynamics are solved for the graviton-photon interaction in the primordial universe. We start with the basic assumption that
photons with spin 0 and with spin ± 1 are in creation and annihilation relationship (= in exchange relationship) to each other and that their over- all effect produces the gravitational interaction.
We also assume that the repulsive compression potential and the attractive Newtonian gravitational potential act between compressed photons in the primordial universe. In the contraction phase, the
attractive Newtonian gravitational potential dominates and compresses the radius of the primor- dial universe from 3.59041×10-20 fm to 1.84937×10-58 fm . At the end of the contraction phase of the
primordial universe, the Big Bang occurs, in which the total mass of to- day's universe is literally ejected (= repelled) from the primordial sphere (= from the primordial quantum state). However, in
order for the Big Bang to occur at an energy density of 2.1174×10255 MeV/fm3, the attractive Newtonian gravitational potential must disappear and the repulsive compression potential must gain the
upper hand. The con- traction suddenly changes into the Big Bang and inflation (= swelling) in order to re- verse the extremely high photon density achieved during the contraction inside the pri-
mordial sphere.../...
<<< Back | {"url":"https://www.gsjournal.net/Science-Journals/Research%20Papers/View/9972","timestamp":"2024-11-06T09:06:52Z","content_type":"text/html","content_length":"87748","record_id":"<urn:uuid:ac3726ad-5870-457d-b7ba-06b76fcbf217>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00891.warc.gz"} |
Evaluating Permutations
Question Video: Evaluating Permutations Mathematics • Second Year of Secondary School
Evaluate ⁷𝑃₄/⁵𝑃₂.
Video Transcript
Evaluate seven permute four divided by five permute two.
So to evaluate this, we need to work out how we actually find out the value of something like seven permute four. Well, we actually have a general formula that tells us that 𝑛 permute 𝑟 is equal to 𝑛
factorial over 𝑛 minus 𝑟 factorial. So with this formula, it’s worth reminding yourself what factorial is.
And the factorial shown here — we have five factorial — is five multiplied by four multiplied by three multiplied by two multiplied by one. And to find any factorial, all we do is actually multiply
all the positive integers that are less than or equal to the number which is in the factorial.
Okay, so now we know this. Let’s evaluate our seven permute four divided by five permute two. First of all, I’m actually gonna find seven permute four. And using our general formula is gonna be equal
to seven factorial divided by seven minus four factorial which is gonna be equal to seven factorial over three factorial.
So now, we’re just gonna show what this means. It means we’d have seven multiplied by six multiplied by five multiplied by four multiplied by three multiplied by two multiplied by one cause that’s
our seven factorial then divided by three multiplied by two multiplied by one. Well, then, our three factorial actually cancels out. So we can actually remove three multiplied by two multiplied by
one from the numerator and the denominator. So then, we’re left with seven multiplied by six multiplied by five multiplied by four which gives us a value of 840.
So great, we’ve now found the value of seven permute four. Let’s move on and find the value of five permute two. Well, five permute two is equal to five factorial over five minus two factorial. And
again, we got this from our general formula which is gonna be equal to five factorial over three factorial. So again, we just put in this extra step to show you what’s happening. So we’ve got five
multiplied by four multiplied by three multiplied by two multiplied by one divided by three multiplied by two multiplied by one. Again, our three factorial cancels. So we’re just left with five
multiplied by four. So therefore, we have a value of five permute two of 20.
Okay, great, so now, we can actually move on to the final stage which is to evaluate our starting question. So therefore, we can say that seven permute four over five permute two is gonna be equal to
840 divided by 20 which is gonna give us a final answer of 42. | {"url":"https://www.nagwa.com/en/videos/957195652949/","timestamp":"2024-11-02T11:08:03Z","content_type":"text/html","content_length":"251950","record_id":"<urn:uuid:415cf3a8-2fa0-4f6a-ab89-d8be7f2b20e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00592.warc.gz"} |
Word Gems
exploring self-realization, sacred personhood, and full humanity
Furman University math quotes collection
return to "Mathematics" main-page
Abel, Niels H. (1802 - 1829)
If you disregard the very simplest cases, there is in all of mathematics not a single infinite series whose sum has been rigorously determined. In other words, the most important parts of mathematics
stand without a foundation.
In G. F. Simmons, Calculus Gems, New York: Mcgraw Hill, Inc., 1992, p. 188.
[A reply to a question about how he got his expertise:]
By studying the masters and not their pupils.
[About Gauss' mathematical writing style]
He is like the fox, who effaces his tracks in the sand with his tail.
In G. F. Simmons, Calculus Gems, New York: Mcgraw Hill, Inc., 1992, p. 177.
Adams, Douglas (1952 - )
Bistromathics itself is simply a revolutionary new way of understanding the behavior of numbers. Just as Einstein observed that space was not an absolute but depended on the observer's movement in
space, and that time was not an absolute, but depended on the observer's movement in time, so it is now realized that numbers are not absolute, but depend on the observer's movement in restaurants.
The first nonabsolute number is the number of people for whom the table is reserved. This will vary during the course of the first three telephone calls to the restaurant, and then bear no apparent
relation to the number of people who actually turn up, or to the number of people who subsequently join them after the show/match/party/gig, or to the number of people who leave when they see who
else has turned up.
The second nonabsolute number is the given time of arrival, which is now known to be one of the most bizarre of mathematical concepts, a recipriversexcluson, a number whose existence can only be
defined as being anything other than itself. In other words, the given time of arrival is the one moment of time at which it is impossible that any member of the party will arrive.
Recipriversexclusons now play a vital part in many branches of math, including statistics and accountancy and also form the basic equations used to engineer the Somebody Else's Problem field.
The third and most mysterious piece of nonabsoluteness of all lies in the relationship between the number of items on the bill, the cost of each item, the number of people at the table and what they
are each prepared to pay for. (The number of people who have actually brought any money is only a subphenomenon of this field.)
Life, the Universe and Everything. New York: Harmony Books, 1982.
Numbers written on restaurant bills within the confines of restaurants do not follow the same mathematical laws as numbers written on any other pieces of paper in any other parts of the Universe.
This single statement took the scientific world by storm. It completely revolutionized it. So many mathematical conferences got held in such good restaurants that many of the finest minds of a
generation died of obesity and heart failure and the science of math was put back by years.
Life, the Universe and Everything. New York: Harmony Books, 1982.
Adams, John (1735 - 1826)
I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture,
navigation, commerce and agriculture in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and porcelain.
Letter to Abigail Adams, May 12, 1780.
Adler, Alfred
Each generation has its few great mathematicians, and mathematics would not even notice the absence of the others. They are useful as teachers, and their research harms no one, but it is of no
importance at all. A mathematician is great or he is nothing.
"Mathematics and Creativity." The New Yorker Magazine, February 19, 1972.
In the company of friends, writers can discuss their books, economists the state of the economy, lawyers their latest cases, and businessmen their latest acquisitions, but mathematicians cannot
discuss their mathematics at all. And the more profound their work, the less understandable it is.
Reflections: mathematics and creativity, New Yorker, 47 (1972), no. 53, 39 - 45.
The mathematical life of a mathematician is short. Work rarely improves after the age of twenty-five or thirty. If little has been accomplished by then, little will ever be accomplished.
"Mathematics and Creativity." The New Yorker Magazine, February 19, 1972.
Anglin, W.S.
Mathematics is not a careful march down a well-cleared highway, but a journey into a strange wilderness, where the explorers often get lost. Rigour should be a signal to the historian that the maps
have been made, and the real explorers have gone elsewhere.
"Mathematics and History", Mathematical Intelligencer, v. 4, no. 4.
Defendit numerus: There is safety in numbers.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956, p. 1452.
Like the crest of a peacock so is mathematics at the head of all knowledge.
[An old Indian saying. Also, "Like the Crest of a Peacock" is the title of a book by G.G. Joseph]
Referee's report: This paper contains much that is new and much that is true. Unfortunately, that which is true is not new and that which is new is not true.
In H.Eves Return to Mathematical Circles, Boston: Prindle, Weber, and Schmidt, 1988.
Arbuthnot, John
There are very few things which we know, which are not capable of being reduc'd to a Mathematical Reasoning; and when they cannot it's a sign our knowledge of them is very small and confus'd; and
when a Mathematical Reasoning can be had it's as great a folly to make use of any other, as to grope for a thing in the dark, when you have a Candle standing by you.
Of the Laws of Chance. (1692)
Aristophanes (ca 444 - 380 BC)
Meton: With the straight ruler I set to work
To make the circle four-cornered
[First(?) allusion to the problem of squaring the circle]
Aristotle (ca 330 BC)
Now that practical skills have developed enough to provide adequately for material needs, one of these sciences which are not devoted to utilitarian ends [mathematics] has been able to arise in
Egypt, the priestly caste there having the leisure necessary for disinterested research.
Metaphysica, 1-981b
The whole is more than the sum of its parts.
Metaphysica 10f-1045a
The so-called Pythagoreans, who were the first to take up mathematics, not only advanced this subject, but saturated with it, they fancied that the principles of mathematics were the principles of
all things.
Metaphysica 1-5
It is not once nor twice but times without number that the same ideas make their appearance in the world.
"On The Heavens", in T. L. Heath Manual of Greek Mathematics, Oxford: Oxford University Press, 1931.
To Thales the primary question was not what do we know, but how do we know it.
Mathematical Intelligencer v. 6, no. 3, 1984.
The mathematical sciences particularly exhibit order, symmetry, and limitation; and these are the greatest forms of the beautiful.
Metaphysica, 3-1078b.
Aubrey, John (1626-1697)
[About Thomas Hobbes:]
He was 40 years old before he looked on geometry; which happened accidentally. Being in a gentleman's library, Euclid's Elements lay open, and "twas the 47 El. libri I" [Pythagoras' Theorem]. He read
the proposition ... [S]ayd he, "this is impossible:" So he reads the demonstration of it, which referred him back to such a proposition; which proposition he read. That referred him back to another,
which he also read. Et sic deinceps, that at last he was demonstratively convinced of that trueth. This made him in love with geometry.
In O. L. Dick (ed.) Brief Lives, Oxford: Oxford University Press, 1960, p. 604.
Babbage, Charles (1792-1871)
Errors using inadequate data are much less than those using no data at all.
On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the
kindof confusion of ideas that could provoke such a question.
Bacon, Sir Francis (1561-1626)
And as for Mixed Mathematics, I may only make this prediction, that there cannot fail to be more kinds of them, as nature grows further disclosed.
Advancement of Learning book 2; De Augmentis book 3.
Bacon, Roger
For the things of this world cannot be made known without a knowledge of mathematics.
Opus Majus part 4 Distinctia Prima cap 1, 1267.
In the mathematics I can report no deficience, except that it be that men do not sufficiently understand the excellent use of the pure mathematics, in that they do remedy and cure many defects in the
wit and faculties intellectual. For if the wit be too dull, they sharpen it; if too wandering, they fix it; if too inherent in the sense, they abstract it. So that as tennis is a game of no use in
itself, but of great use in respect it maketh a quick eye and a body ready to put itself into all postures; so in the mathematics, that use which is collateral and intervenient is no less worthy than
that which is principal and intended.
John Fauvel and Jeremy Gray (eds.) A History of Mathematics: A Reader, Sheridan House, 1987.
Baker, H. F.
[On the concept of group:]
... what a wealth, what a grandeur of thought may spring from what slight beginnings.
Florian Cajori, A History of Mathematics, New York, 1919, p 283.
Bagehot, Walter
Life is a school of probability.
Quoted in J. R. Newman (ed.) The World of Mathematics, Simon and Schuster, New York, 1956, p. 1360.
Balzac, Honore de (1799 - 1850)
Numbers are intellectual witnesses that belong only to mankind.
Bell, Eric Temple (1883-1960)
Euclid taught me that without assumptions there is no proof. Therefore, in any argument, examine the assumptions.
In H. Eves Return to Mathematical Circles., Boston: Prindle, Weber and Schmidt, 1988.
The Handmaiden of the Sciences.
[Book by that title.]
Guided only by their feeling for symmetry, simplicity, and generality, and an indefinable sense of the fitness of things, creative mathematicians now, as in the past, are inspired by the art of
mathematics rather than by any prospect of ultimate usefulness.
"Obvious" is the most dangerous word in mathematics.
The pursuit of pretty formulas and neat theorems can no doubt quickly degenerate into a silly vice, but so can the quest for austere generalities which are so very general indeed that they are
incapable of application to any particular.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Abstractness, sometimes hurled as a reproach at mathematics, is its chief glory and its surest title to practical usefulness. It is also the source of such beauty as may spring from mathematics.
The longer mathematics lives the more abstract -- and therefore, possibly also the more practical -- it becomes.
In The Mathematical Intelligencer, vol. 13, no. 1, Winter 1991.
The cowboys have a way of trussing up a steer or a pugnacious bronco which fixes the brute so that it can neither move nor think. This is the hog-tie, and it is what Euclid did to geometry.
In R Crayshaw-Williams The Search For Truth, p. 191.
If "Number rules the universe" as Pythagoras asserted, Number is merely our delegate to the throne, for we rule Number.
In H. Eves Mathematical Circles Revisited, Boston: Prindle, Weber and Schmidt, 1971.
I have always hated machinery, and the only machine I ever understood was a wheelbarrow, and that but imperfectly.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
Belloc, Hillaire (1870-1953)
Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death.
The Silence of the Sea
Bentham, Jeremy (1748-1832)
O Logic: born gatekeeper to the Temple of Science, victim of capricious destiny: doomed hitherto to be the drudge of pedants: come to the aid of thy master, Legislation.
In J. Browning (ed.) Works.
Bernoulli, Daniel
...it would be better for the true physics if there were no mathematicians on earth.
In The Mathematical Intelligencer, v. 13, no. 1, Winter 1991.
Bernoulli, Jacques (Jakob?) (1654-1705)
I recognize the lion by his paw.
[After reading an anonymous solution to a problem that he realized was Newton's solution.]
In G. Simmons, Calculus Gems, New York: McGraw Hill, 1992, p. 136.
God forbid that Truth should be confined to Mathematical Demonstration!
Notes on Reynold's Discourses, c. 1808.
Bohr, Niels Henrik David (1885-1962)
An expert is a man who has made all the mistakes, which can be made, in a very narrow field.
The Bible
I returned and saw under the sun that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of
skill; but time and chance happeneth to them all.
Bolyai, János (1802 - 1860)
Out of nothing I have created a strange new universe.
[A reference to the creation of a non-euclidean geometry.]
Bolyai, Wolfgang (1775-1856)
[To son János:]
For God's sake, please give it up. Fear it no less than the sensual passion, because it, too, may take up all your time and deprive you of your health, peace of mind and happiness in life.
[Bolyai's father urging him to give up work on non-Euclidian geometry.]
In P. Davis and R. Hersh The Mathematical Experience , Boston: Houghton Mifflin Co., 1981, p. 220.
Structures are the weapons of the mathematician.
Bridgman, P. W.
It is the merest truism, evident at once to unsophisticated observation, that mathematics is a human invention.
The Logic of Modern Physics, New York, 1972.
Brown, George Spencer (1923 - )
To arrive at the simplest truth, as Newton knew and practiced, requires years of contemplation. Not activity. Not reasoning. Not calculating. Not busy behaviour of any kind. Not reading. Not talking.
Not making an effort. Not thinking. Simply bearing in mind what it is one needs to know. And yet those with the courage to tread this path to real discovery are not only offered practically no
guidance on how to do so, they are actively discouraged and have to set about it in secret, pretending meanwhile to be diligently engaged in the frantic diversions and to conform with the deadening
personal opinions which are continually being thrust upon them.
The Laws of Form. 1969.
Browne, Sir Thomas (1605-1682)
God is like a skilful Geometrician.
Religio Medici I, 16.
All things began in Order, so shall they end, and so shall they begin again, according to the Ordainer of Order, and the mystical mathematicks of the City of Heaven.
Hydriotaphia, Urn-burial and the Garden of Cyrus, 1896.
...indeed what reason may not go to Schoole to the wisdome of Bees, Aunts, and Spiders? what wise hand teacheth them to doe what reason cannot teach us? ruder heads stand amazed at those prodigious
pieces of nature, Whales, Elephants, Dromidaries and Camels; these I confesse, are the Colossus and Majestick pieces of her hand; but in these narrow Engines there is more curious Mathematicks, and
the civilitie of these little Citizens more neatly sets forth the wisedome of their Maker.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956, p. 1001.
Buck, Pearl S. (1892 - 1973)
No one really understood music unless he was a scientist, her father had declared, and not just a scientist, either, oh, no, only the real ones, the theoreticians, whose language mathematics. She had
not understood mathematics until he had explained to her that it was the symbolic language of relationships. "And relationships," he had told her, "contained the essential meaning of life."
The Goddess Abides, Pt. I, 1972.
Burke, Edmund
The age of chivalry is gone. That of sophisters, economists and calculators has succeeded.
Reflections on the Revolution in France.
Butler, Bishop
To us probability is the very guide of life.
Preface to Analogy.
Butler, Samuel (1835 - 1902)
... There can be no doubt about faith and not reason being the ultima ratio. Even Euclid, who has laid himself as little open to the charge of credulity as any writer who ever lived, cannot get
beyond this. He has no demonstrable first premise. He requires postulates and axioms which transcend demonstration, and without which he can do nothing. His superstructure indeed is demonstration,
but his ground his faith. Nor again can he get further than telling a man he is a fool if he persists in differing from him. He says "which is absurd," and declines to discuss the matter further.
Faith and authority, therefore, prove to be as necessary for him as for anyone else.
The Way of All Flesh.
When Newton saw an apple fall, he found ...
A mode of proving that the earth turnd round
In a most natural whirl, called gravitation;
And thus is the sole mortal who could grapple
Since Adam, with a fall or with an apple.
Caballero, James
I advise my students to listen carefully the moment they decide to take no more mathematics courses. They might be able to hear the sound of closing doors.
Everybody a mathematician?,CAIP Quarterly 2 (Fall, 1989).
Carlyle, Thomas (1795 - 1881)
It is a mathematical fact that the casting of this pebble from my hand alters the centre of gravity of the universe.
Sartor Resartus III.
Teaching school is but another word for sure and not very slow destruction.
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
A witty statesman said, you might prove anything by figures.
Carroll, Lewis
What I tell you three times is true.
The Hunting of the Snark.
The different branches of Arithmetic -- Ambition, Distraction, Uglification, and Derision.
Alice in Wonderland.
"Can you do addition?" the White Queen asked. "What's one and one and one and one and one and one and one and one and one and one?" "I don't know," said Alice. "I lost count."
Through the Looking Glass.
"Alice laughed: "There's no use trying," she said; "one can't believe impossible things."
"I daresay you haven't had much practice," said the Queen. "When I was younger, I always did it for half an hour a day. Why, sometimes I've believed as many as six impossible things before
Alice in Wonderland.
"Then you should say what you mean," the March Hare went on.
"I do, " Alice hastily replied; "at least I mean what I say, that's the same thing, you know."
"Not the same thing a bit!" said the Hatter. "Why, you might just as well say that "I see what I eat" is the same thing as "I eat what I see!"
Alice in Wonderland.
"It's very good jam," said the Queen.
"Well, I don't want any to-day, at any rate."
"You couldn't have it if you did want it," the Queen said. "The rule is jam tomorrow and jam yesterday but never jam to-day."
"It must come sometimes to "jam to-day,""Alice objected.
"No it can't," said the Queen. "It's jam every other day; to-day isn't any other day, you know."
"I don't understand you," said Alice. "It's dreadfully confusing."
Through the Looking Glass.
"When I use a word," Humpty Dumpty said, in a rather scornful tone, "it means just what I choose it to mean - neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master - that's all."
Through the Looking Glass.
Carmichael, R. D.
A thing is obvious mathematically after you see it.
In N. Rose (ed.) Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
To isolate mathematics from the practical demands of the sciences is to invite the sterility of a cow shut away from the bulls.
In G. Simmons, Calculus Gems, New York: Mcgraw Hill, Inc., 1992, page 198.
Chekov, Anton (1860 - 1904)
There is no national science just as there is no national multiplication table; what is national is no longer science.
In V. P. Ponomarev Mysli o nauke Kishinev, 1973.
Chesterton, G. K. (1874 - 1936)
Poets do not go mad; but chess-players do. Mathematicians go mad, and cashiers; but creative artists very seldom. I am not, as will be seen, in any sense attacking logic: I only say that this danger
does lie in logic, not in imagination.
Orthodoxy ch. 2.
You can only find truth with logic if you have already found truth without it.
The Man who was Orthodox. 1963.
It isn't that they can't see the solution. It is that they can't see the problem.
The Point of a Pin in The Scandal of Father Brown.
Christie, Agatha
"I think you're begging the question," said Haydock, "and I can see looming ahead one of those terrible exercises in probability where six men have white hats and six men have black hats and you have
to work it out by mathematics how likely it is that the hats will get mixed up and in what proportion. If you start thinking about things like that, you would go round the bend. Let me assure you of
The Mirror Crack'd. Toronto: Bantam Books, 1962.
I continued to do arithmetic with my father, passing proudly through fractions to decimals. I eventually arrived at the point where so many cows ate so much grass, and tanks filled with water in so
many hours. I found it quite enthralling.
An Autobiography.
Churchill, [Sir] Winston Spencer (1874-1965)
It is a good thing for an uneducated man to read books of quotations.
Roving Commission in My Early Life. 1930.
I had a feeling once about Mathematics -- that I saw it all. Depth beyond depth was revealed to me -- the Byss and Abyss. I saw -- as one might see the transit of Venus or even the Lord Mayor's Show
-- a quantity passing through infinity and changing its sign from plus to minus. I saw exactly why it happened and why the tergiversation was inevitable but it was after dinner and I let it go.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Churchman, C. W.
The measure of our intellectual capacity is the capacity to feel less and less satisfied with our answers to better and better problems.
In J.E. Littlewood A Mathematician's Miscellany. Methuen and Co., Ltd. 1953.
The composer opens the cage door for arithmetic, the draftsman gives geometry its freedom.
Coleridge, Samuel Taylor (1772-1834)
...from the time of Kepler to that of Newton, and from Newton to Hartley, not only all things in external nature, but the subtlest mysteries of life and organization, and even of the intellect and
moral being, were conjured within the magic circle of mathematical formulae.
The Theory of Life.
Conrad, Joseph
Don't talk to me of your Archimedes' lever. He was an absentminded person with a mathematical imagination. Mathematics commands all my respect, but I have no use for engines. Give me the right word
and the right accent and I will move the world.
Preface to A Personal Record.
Coolidge, Julian Lowell (1873 - 1954)
[Upon proving that the best betting strategy for "Gambler's Ruin" was to bet all on the first trial.]
It is true that a man who does this is a fool. I have only proved that a man who does anything else is an even bigger fool.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Copernicus, Nicholaus (1473-1543)
Mathematics is written for mathematicians.
De Revolutionibus.
Crick, Francis Harry Compton (1916 - )
In my experience most mathematicians are intellectually lazy and especially dislike reading experimental papers. He (René Thom) seemed to have very strong biological intuitions but unfortunately of
negative sign.
What Mad Pursuit. London: Weidenfeld and Nicolson, 1988.
Crowe, Michael
Revolutions never occur in mathematics.
Historia Mathematica. 1975.
D'Alembert, Jean Le Rond (1717-1783)
Just go on and faith will soon return.
[To a friend hesitant with respect to infinitesimals.]
In P. J. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
Thus metaphysics and mathematics are, among all the sciences that belong to reason, those in which imagination has the greatest role. I beg pardon of those delicate spirits who are detractors of
mathematics for saying this .... The imagination in a mathematician who creates makes no less difference than in a poet who invents.... Of all the great men of antiquity, Archimedes may be the one
who most deserves to be placed beside Homer.
Discours Preliminaire de L'Encyclopedie, Tome 1, 1967. pp 47 - 48.
Neither in the subjective nor in the objective world can we find a criterion for the reality of the number concept, because the first contains no such concept, and the second contains nothing that is
free from the concept. How then can we arrive at a criterion? Not by evidence, for the dice of evidence are loaded. Not by logic, for logic has no existence independent of mathematics: it is only one
phase of this multiplied necessity that we call mathematics.
How then shall mathematical concepts be judged? They shall not be judged. Mathematics is the supreme arbiter. From its decisions there is no appeal. We cannot change the rules of the game, we cannot
ascertain whether the game is fair. We can only study the player at his game; not, however, with the detached attitude of a bystander, for we are watching our own minds at play.
Darwin, Charles
Every new body of discovery is mathematical in form, because there is no other guidance we can have.
In N. Rose (ed.) Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Mathematics seems to endow one with something like a new sense.
In N. Rose (ed.) Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Davis, Philip J.
The numbers are a catalyst that can help turn raving madmen into polite humans.
In N. Rose (ed.) Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
One of the endlessly alluring aspects of mathematics is that its thorniest paradoxes have a way of blooming into beautiful theories.
Number, Scientific American, 211, (Sept. 1964), 51 - 59.
Davis, Philip J. and Hersh, Reuben
One began to hear it said that World War I was the chemists' war, World War II was the physicists' war, World War III (may it never come) will be the mathematicians' war.
The Mathematical Experience, Boston: Birkhäuser, 1981.
Dehn, Max
Mathematics is the only instructional material that can be presented in an entirely undogmatic way.
In The Mathematical Intelligencer, v. 5, no. 2, 1983.
De Morgan, Augustus (1806-1871)
[When asked about his age.] I was x years old in the year x^2.
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
It is easier to square the circle than to get round a mathematician.
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
Every science that has thriven has thriven upon its own symbols: logic, the only science which is admitted to have made no improvements in century after century, is the only one which has grown no
Transactions Cambridge Philosophical Society, vol. X, 1864, p. 184.
Descartes, René (1596-1650)
Of all things, good sense is the most fairly distributed: everyone thinks he is so well supplied with it that even those who are the hardest to satisfy in every other respect never desire more of it
than they already have.
Discours de la Méthode. 1637.
Each problem that I solved became a rule which served afterwards to solve other problems.
Discours de la Méthode. 1637.
If I found any new truths in the sciences, I can say that they follow from, or depend on, five or six principal problems which I succeeded in solving and which I regard as so many battles where the
fortunes of war were on my side.
Discours de la Méthode. 1637.
I concluded that I might take as a general rule the principle that all things which we very clearly and obviously conceive are true: only observing, however, that there is some difficulty in rightly
determining the objects which we distinctly conceive.
Discours de la Méthode. 1637.
I thought the following four [rules] would be enough, provided that I made a firm and constant resolution not to fail even once in the observance of them. The first was never to accept anything as
true if I had not evident knowledge of its being so; that is, carefully to avoid precipitancy and prejudice, and to embrace in my judgment only what presented itself to my mind so clearly and
distinctly that I had no occasion to doubt it. The second, to divide each problem I examined into as many parts as was feasible, and as was requisite for its better solution. The third, to direct my
thoughts in an orderly way; beginning with the simplest objects, those most apt to be known, and ascending little by little, in steps as it were, to the knowledge of the most complex; and
establishing an order in thought even when the objects had no natural priority one to another. And the last, to make throughout such complete enumerations and such general surveys that I might be
sure of leaving nothing out. These long chains of perfectly simple and easy reasonings by means of which geometers are accustomed to carry out their most difficult demonstrations had led me to fancy
that everything that can fall under human knowledge forms a similar sequence; and that so long as we avoid accepting as true what is not so, and always preserve the right order of deduction of one
thing from another, there can be nothing too remote to be reached in the end, or to well hidden to be discovered.
Discours de la Méthode. 1637.
When writing about transcendental issues, be transcendentally clear.
In G. Simmons Calculus Gems. New York: McGraw Hill Inc., 1992.
If we possessed a thorough knowledge of all the parts of the seed of any animal (e.g. man), we could from that alone, be reasons entirely mathematical and certain, deduce the whole conformation and
figure of each of its members, and, conversely if we knew several peculiarities of this conformation, we would from those deduce the nature of its seed.
Cogito Ergo Sum. "I think, therefore I am."
Discours de la Méthode. 1637.
I hope that posterity will judge me kindly, not only as to the things which I have explained, but also to those which I have intentionally omitted so as to leave to others the pleasure of discovery.
La Geometrie.
Perfect numbers like perfect men are very rare.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
omnia apud me mathematica fiunt.
With me everything turns into mathematics.
It is not enough to have a good mind. The main thing is to use it well.
Discours de la Méthode. 1637.
If you would be a real seeker after truth, you must at least once in your life doubt, as far as possible, all things.
Discours de la Méthode. 1637.
De Sua, F. (1956)
Suppose we loosely define a religion as any discipline whose foundations rest on an element of faith, irrespective of any element of reason which may be present. Quantum mechanics for example would
be a religion under this definition. But mathematics would hold the unique position of being the only branch of theology possessing a rigorous demonstration of the fact that it should be so
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
[His epitaph.]
This tomb hold Diophantus Ah, what a marvel! And the tomb tells scientifically the measure of his life. God vouchsafed that he should be a boy for the sixth part of his life; when a twelfth was
added, his cheeks acquired a beard; He kindled for him the light of marriage after a seventh, and in the fifth year after his marriage He granted him a son. Alas! late-begotten and miserable child,
when he had reached the measure of half his father's life, the chill grave took him. After consoling his grief by this science of numbers for four years, he reached the end of his life.
In Ivor Thomas Greek Mathematics, in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Dirac, Paul Adrien Maurice (1902- )
I think that there is a moral to this story, namely that it is more important to have beauty in one's equations that to have them fit experiment. If Schrodinger had been more confident of his work,
he could have published it some months earlier, and he could have published a more accurate equation. It seems that if one is working from the point of view of getting beauty in one's equations, and
if one has really a sound insight, one is on a sure line of progress. If there is not complete agreement between the results of one's work and experiment, one should not allow oneself to be too
discouraged, because the discrepancy may well be due to minor features that are not properly taken into account and that will get cleared up with further development of the theory.
Scientific American, May 1963.
Mathematics is the tool specially suited for dealing with abstract concepts of any kind and there is no limit to its power in this field.
In P. J. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
In science one tries to tell people, in such a way as to be understood by everyone, something that no one ever knew before. But in poetry, it's the exact opposite.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
Disraeli, Benjamin
There are three kinds of lies: lies, damned lies, and statistics.
Mark Twain. Autobiography.
Donatus, Aelius (4th Century)
Pereant qui ante nos nostra dixerunt.
"To the devil with those who published before us."
[Quoted by St. Jerome, his pupil]
Doyle, Sir Arthur Conan (1859-1930)
Detection is, or ought to be, an exact sciences and should be treated in the same cold and unemotional manner. You have attempted to tinge it with romanticism, which produces much the same effect as
if you worked a love story or an elopement into the fifth proposition of Euclid.
The Sign of Four.
When you have eliminated the impossible, what ever remains, however improbable must be the truth.
The Sign of Four.
From a drop of water a logician could predict an Atlantic or a Niagara.
A study in Scarlet 1929.
It is a capital mistake to theorize before one has data.
Scandal in Bohemia.
Dryden, John (1631-1700)
Mere poets are sottish as mere drunkards are, who live in a continual mist, without seeing or judging anything clearly. A man should be learned in several sciences, and should have a reasonable,
philosophical and in some measure a mathematical head, to be a complete and excellent poet.
Notes and Observations on The Empress of Morocco. 1674.
Dubos, René J.
Gauss replied, when asked how soon he expected to reach certain mathematical conclusions, that he had them long ago, all he was worrying about was how to reach them!
In Mechanisms of Discovery in I. S. Gordon and S. Sorkin (eds.) The Armchair Science Reader, New York: Simon and Schuster, 1959.
Dunsany, Lord
Logic, like whiskey, loses its beneficial effect when taken in too large quantities.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Dürer, Albrecht (1471-1528)
But when great and ingenious artists behold their so inept performances, not undeservedly do they ridicule the blindness of such men; since sane judgment abhors nothing so much as a picture
perpetrated with no technical knowledge, although with plenty of care and diligence. Now the sole reason why painters of this sort are not aware of their own error is that they have not learnt
Geometry, without which no one can either be or become an absolute artist; but the blame for this should be laid upon their masters, who are themselves ignorant of this art.
The Art of Measurement. 1525.
Whoever ... proves his point and demonstrates the prime truth geometrically should be believed by all the world, for there we are captured.
J Heidrich (ed.) Albrecht Dürer's schriftlicher Nachlass Berlin, 1920.
And since geometry is the right foundation of all painting, I have decided to teach its rudiments and principles to all youngsters eager for art...
Course in the Art of Measurement
Dyson, Freeman
I am acutely aware of the fact that the marriage between mathematics and physics, which was so enormously fruitful in past centuries, has recently ended in divorce.
Missed Opportunities, 1972. (Gibbs Lecture?)
For a physicist mathematics is not just a tool by means of which phenomena can be calculated, it is the main source of concepts and principles by means of which new theories can be created.
Mathematics in the Physical Sciences.
The bottom line for mathematicians is that the architecture has to be right. In all the mathematics that I did, the essential point was to find the right architecture. It's like building a bridge.
Once the main lines of the structure are right, then the details miraculously fit. The problem is the overall design.
"Freeman Dyson: Mathematician, Physicist, and Writer". Interview with Donald J. Albers, The College Mathematics Journal, vol 25, no. 1, January 1994.
Eddington, Sir Arthur (1882-1944)
Proof is the idol before whom the pure mathematician tortures himself.
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
We used to think that if we knew one, we knew two, because one and one are two. We are finding that we must learn a great deal more about 'and'.
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
We have found a strange footprint on the shores of the unknown. We have devised profound theories, one after another, to account for its origins. At last, we have succeeded in reconstructing the
creature that made the footprint. And lo! It is our own.
Space, Time and Gravitation. 1920.
It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
I believe there are 15,747,724,136,275,002,577,605,653,961,181,555,468,044,717,914,527,116,709,366,231, 425,076,185,631,031,296 protons in the universe and the same number of electrons.
The Philosophy of Physical Science. Cambridge, 1939.
To the pure geometer the radius of curvature is an incidental characteristic - like the grin of the Cheshire cat. To the physicist it is an indispensable characteristic. It would be going too far to
say that to the physicist the cat is merely incidental to the grin. Physics is concerned with interrelatedness such as the interrelatedness of cats and grins. In this case the "cat without a grin"
and the "grin without a cat" are equally set aside as purely mathematical phantasies.
The Expanding Universe..
Human life is proverbially uncertain; few things are more certain than the solvency of a life-insurance company.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Edwards, Jonathon
When I am violently beset with temptations, or cannot rid myself of evil thoughts, [I resolve] to do some Arithmetic, or Geometry, or some other study, which necessarily engages all my thoughts, and
unavoidably keeps them from wandering.
In T. Mallon A Book of One's Own. Ticknor & Fields, New York, 1984, p. 106-107.
Egrafov, M.
If you ask mathematicians what they do, yo always get the same answer. They think. They think about difficult and unusual problems. They do not think about ordinary problems: they just write down the
Mathematics Magazine, v. 65 no. 5, December 1992.
Eigen, Manfred (1927 - )
A theory has only the alternative of being right or wrong. A model has a third possibility: it may be right, but irrelevant.
Jagdish Mehra (ed.) The Physicist's Conception of Nature, 1973.
Einstein, Albert (1879-1955)
[During a lecture:] This has been done elegantly by Minkowski; but chalk is cheaper than grey matter, and we will do it as it comes.
[Attributed by Pólya.]
J.E. Littlewood, A Mathematician's Miscellany, Methuen and Co. Ltd., 1953.
Everything should be made as simple as possible, but not simpler.
Reader's Digest. Oct. 1977.
I don't believe in mathematics.
Quoted by Carl Seelig. Albert Einstein.
Imagination is more important than knowledge.
On Science.
The most beautiful thing we can experience is the mysterious. It is the source of all true art and science.
What I Believe.
The bitter and the sweet come from the outside, the hard from within, from one's own efforts.
Out of My Later Years.
Gott würfelt nicht.
Common sense is the collection of prejudices acquired by age eighteen.
In E. T. Bell Mathematics, Queen and Servant of the Sciences. 1952.
God does not care about our mathematical difficulties. He integrates empirically.
L. Infeld Quest, 1942.
How can it be that mathematics, being after all a product of human thought independent of experience, is so admirably adapted to the objects of reality?
[About Newton]
Nature to him was an open book, whose letters he could read without effort.
In G. Simmons Calculus Gems, New York: McGraw Hill, 1992.
As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
What is this frog and mouse battle among the mathematicians?
[i.e. Brouwer vs. Hilbert]
In H. Eves Mathematical Circles Squared Boston: Prindle, Weber and Schmidt, 1972.
Raffiniert ist der Herr Gott, aber boshaft ist er nicht. God is subtle, but he is not malicious.
Inscribed in Fine Hall, Princeton University.
Nature hides her secrets because of her essential loftiness, but not by means of ruse.
The human mind has first to construct forms, independently, before we can find them in things.
Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore.
In A. Sommerfelt "To Albert Einstein's Seventieth Birthday" in Paul A. Schilpp (ed.) Albert Einstein, Philosopher-Scientist, Evanston, 1949.
Do not worry about your difficulties in mathematics, I assure you that mine are greater.
The truth of a theory is in your mind, not in your eyes.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
These thoughts did not come in any verbal formulation. I rarely think in words at all. A thought comes, and I may try to express it in words afterward.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
A human being is a part of the whole, called by us "Universe," a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the resta kind of
optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free
ourselves from this prison by widening our circle of compassion to embrace all living creatures and the whole of nature in its beauty. Nobody is able to achieve this completely, but the striving for
such achievement is in itself a part of the liberation and a foundation for inner security.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
The world needs heroes and it's better they be harmless men like me than villains like Hitler.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
It is nothing short of a miracle that modern methods of instruction have not yet entirely strangled the holy curiousity of inquiry.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Everything that is really great and inspiring is created by the individual who can labor in freedom.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
The search for truth is more precious than its possession.
The American Mathematical Monthly v. 100 no. 3.
If my theory of relativity is proven successful, Germany will claim me as a German and France will declare that I am a citizen of the world. Should my theory prove untrue, France will say that I am a
German and Germany will declare that I am a Jew.
Address at the Sorbonne, Paris.
We come now to the question: what is a priori certain or necessary, respectively in geometry (doctrine of space) or its foundations? Formerly we thought everything; nowadays we think nothing. Already
the distance-concept is logically arbitrary; there need be no things that correspond to it, even approximately.
"Space-Time." Encyclopaedia Britannica, 14th ed.
Most of the fundamental ideas of science are essentially simple, and may, as a rule, be expressed in a language comprehensible to everyone.
The Evolution of Physics.
Science without religion is lame; religion without science is blind.
Reader's Digest, Nov. 1973.
Ellis, Havelock
The mathematician has reached the highest rung on the ladder of human thought.
The Dance of Life.
It is here [in mathematics] that the artist has the fullest scope of his imagination.
The Dance of Life.
Erath, V.
God is a child; and when he began to play, he cultivated mathematics. It is the most godly of man's games.
Das blinde Spiel. 1954.
Euler, Leonhard (1707 - 1783)
If a nonnegative quantity was so small that it is smaller than any given one, then it certainly could not be anything but zero. To those who ask what the infinitely small quantity in mathematics is,
we answer that it is actually zero. Hence there are not so many mysteries hidden in this concept as they are usually believed to be. These supposed mysteries have rendered the calculus of the
infinitely small quite suspect to many people. Those doubts that remain we shall thoroughly remove in the following pages, where we shall explain this calculus.
Mathematicians have tried in vain to this day to discover some order in the sequence of prime numbers, and we have reason to believe that it is a mystery into which the human mind will never
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
[upon losing the use of his right eye]
Now I will have less distraction.
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
Everett, Edward (1794-1865
In the pure mathematics we contemplate absolute truths which existed in the divine mind before the morning stars sang together, and which will continue to exist there when the last of their radiant
host shall have fallen from heaven.
Quoted by E.T. Bell in The Queen of the Sciences, Baltimore, 1931.
Eves, Howard W.
A formal manipulator in mathematics often experiences the discomforting feeling that his pencil surpasses him in intelligence.
In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
An expert problem solver must be endowed with two incompatible qualities, a restless imagination and a patient pertinacity.
In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
Mathematics may be likened to a large rock whose interior composition we wish to examine. The older mathematicians appear as persevering stone cutters slowly attempting to demolish the rock from the
outside with hammer and chisel. The later mathematicians resemble expert miners who seek vulnerable veins, drill into these strategic places, and then blast the rock apart with well placed internal
In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
One is hard pressed to think of universal customs that man has successfully established on earth. There is one, however, of which he can boast the universal adoption of the Hindu-Arabic numerals to
record numbers. In this we perhaps have man's unique worldwide victory of an idea.
Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Ewing, John
If the entire Mandelbrot set were placed on an ordinary sheet of paper, the tiny sections of boundary we examine would not fill the width of a hydrogen atom. Physicists think about such tiny objects;
only mathematicians have microscopes fine enough to actually observe them.
"Can We See the Mandelbrot Set?", The College Mathematics Journal, v. 26, no. 2, March 1995.
de Fermat, Pierre (1601?-1665)
[In the margin of his copy of Diophantus' Arithmetica, Fermat wrote]
To divide a cube into two other cubes, a fourth power or in general any power whatever into two powers of the same denomination above the second is impossible, and I have assuredly found an admirable
proof of this, but the margin is too narrow to contain it.
And perhaps, posterity will thank me for having shown it that the ancients did not know everything.
In D. M. Burton, Elementary Number Theory, Boston: Allyn and Bacon, Inc., 1976.
Feynman, Richard Philips (1918 - 1988)
We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover up all the tracks, to not worry about the blind alleys or describe how you had
the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work.
Nobel Lecture, 1966.
Finkel, Benjamin Franklin
The solution of problems is one of the lowest forms of mathematical research, ... yet its educational value cannot be overestimated. It is the ladder by which the mind ascends into higher fields of
original research and investigation. Many dormant minds have been aroused into activity through the mastery of a single problem.
The American Mathematical Monthly, no. 1.
Flaubert, Gustave (1821-1880)
Poetry is as exact a science as geometry.
Since you are now studying geometry and trigonometry, I will give you a problem. A ship sails the ocean. It left Boston with a cargo of wool. It grosses 200 tons. It is bound for Le Havre. The
mainmast is broken, the cabin boy is on deck, there are 12 passengers aboard, the wind is blowing East-North-East, the clock points to a quarter past three in the afternoon. It is the month of May.
How old is the captain?
Frankland, W.B.
Whereas at the outset geometry is reported to have concerned herself with the measurement of muddy land, she now handles celestial as well as terrestrial problems: she has extended her domain to the
furthest bounds of space.
Hodder and Stoughton, The Story of Euclid. 1901.
Galbraith, John Kenneth
There can be no question, however, that prolonged commitment to mathematical exercises in economics can be damaging. It leads to the atrophy of judgement and intuition...
Economics, Peace, and Laughter.
Galilei, Galileo (1564 - 1642)
[The universe] cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles,
circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word.
Opere Il Saggiatore p. 171.
Measure what is measurable, and make measurable what is not so.
Quoted in H. Weyl "Mathematics and the Laws of Nature" in I Gordon and S. Sorkin (eds.) The Armchair Science Reader, New York: Simon and Schuster, 1959.
And who can doubt that it will lead to the worst disorders when minds created free by God are compelled to submit slavishly to an outside will? When we are told to deny our senses and subject them to
the whim of others? When people devoid of whatsoever competence are made judges over experts and are granted authority to treat them as they please? These are the novelties which are apt to bring
about the ruin of commonwealths and the subversion of the state.
[On the margin of his own copy of Dialogue on the Great World Systems].
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956, p. 733.
Galois, Evariste
Unfortunately what is little recognized is that the most worthwhile scientific books are those in which the author clearly indicates what he does not know; for an author most hurts his readers by
concealing difficulties.
In N. Rose (ed.) Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Galton, [Sir] Francis (1822-1911)
Whenever you can, count.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
[Statistics are] the only tools by which an opening can be cut through the formidable thicket of difficulties that bars the path of those who pursue the Science of Man.
Pearson, The Life and Labours of Francis Galton, 1914.
I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error." The law would have been personified by the Greeks and
deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is
its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshaled in the order of their magnitude, an unsuspected and most beautiful form of
regularity proves to have been latent all along.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956. p. 1482.
Gardner, Martin
Biographical history, as taught in our public schools, is still largely a history of boneheads: ridiculous kings and queens, paranoid political leaders, compulsive voyagers, ignorant generals -- the
flotsam and jetsam of historical currents. The men who radically altered history, the great scientists and mathematicians, are seldom mentioned, if at all.
In G. Simmons Calculus Gems, New York: McGraw Hill, 1992.
Mathematics is not only real, but it is the only reality. That is that entire universe is made of matter, obviously. And matter is made of particles. It's made of electrons and neutrons and protons.
So the entire universe is made out of particles. Now what are the particles made out of? They're not made out of anything. The only thing you can say about the reality of an electron is to cite its
mathematical properties. So there's a sense in which matter has completely dissolved and what is left is just a mathematical structure.
Gardner on Gardner: JPBM Communications Award Presentation. Focus-The Newsletter of the Mathematical Association of America v. 14, no. 6, December 1994.
Gauss, Karl Friedrich (1777-1855)
I confess that Fermat's Theorem as an isolated proposition has very little interest for me, because I could easily lay down a multitude of such propositions, which one could neither prove nor dispose
[A reply to Olbers' attempt in 1816 to entice him to work on Fermat's Theorem.] In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956. p. 312.
If others would but reflect on mathematical truths as deeply and as continuously as I have, they would make my discoveries.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956. p. 326.
There are problems to whose solution I would attach an infinitely greater importance than to those of mathematics, for example touching ethics, or our relation to God, or concerning our destiny and
our future; but their solution lies wholly beyond us and completely outside the province of science.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956. p. 314.
You know that I write slowly. This is chiefly because I am never satisfied until I have said as much as possible in a few words, and writing briefly takes far more time than writing at length.
In G. Simmons Calculus Gems, New York: McGraw Hill inc., 1992.
God does arithmetic.
We must admit with humility that, while number is purely a product of our minds, space has a reality outside our minds, so that we cannot completely prescribe its properties a priori.
Letter to Bessel, 1830.
I mean the word proof not in the sense of the lawyers, who set two half proofs equal to a whole one, but in the sense of a mathematician, where half proof = 0, and it is demanded for proof that every
doubt becomes impossible.
In G. Simmons Calculus Gems, New York: McGraw Hill inc., 1992.
I have had my results for a long time: but I do not yet know how I am to arrive at them.
In A. Arber The Mind and the Eye 1954.
[His motto:]
Few, but ripe.
[His second motto:]
Thou, nature, art my goddess; to thy laws my services are bound...
W. Shakespeare King Lear.
[attributed to him by H.B Lübsen]
Theory attracts practice as the magnet attracts iron.
Foreword of H.B Lübsen's geometry textbook.
It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment. When I have clarified and exhausted a subject, then I turn away from
it, in order to go into darkness again; the never-satisfied man is so strange if he has completed a structure, then it is not in order to dwell in it peacefully, but in order to begin another. I
imagine the world conqueror must feel thus, who, after one kingdom is scarcely conquered, stretches out his arms for others.
Letter to Bolyai, 1808.
Finally, two days ago, I succeeded - not on account of my hard efforts, but by the grace of the Lord. Like a sudden flash of lightning, the riddle was solved. I am unable to say what was the
conducting thread that connected what I previously knew with what made my success possible.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
A great part of its [higher arithmetic] theories derives an additional charm from the peculiarity that important propositions, with the impress of simplicity on them, are often easily discovered by
induction, and yet are of so profound a character that we cannot find the demonstrations till after many vain attempts; and even then, when we do succeed, it is often by some tedious and artificial
process, while the simple methods may long remain concealed.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
I am coming more and more to the conviction that the necessity of our geometry cannot be demonstrated, at least neither by, nor for, the human intellect...geometry should be ranked, not with
arithmetic, which is purely aprioristic, but with mechanics.
Quoted in J. Koenderink Solid Shape, Cambridge Mass.: MIT Press, 1990.
Gay, John
Lest men suspect your tale untrue,
Keep probability in view.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956. p. 1334.
Gibbs, Josiah Willard (1839 - 1903)
One of the principal objects of theoretical research in my department of knowledge is to find the point of view from which the subject appears in its greatest simplicity.
Mathematics is a language.
Gilbert, W. S. (1836 - 1911)
I'm very good at integral and differential calculus, I know the scientific names of beings animalculous; In short, in matters vegetable, animal, and mineral, I am the very model of a modern
The Pirates of Penzance. Act 1.
Glaisher, J.W.
The mathematician requires tact and good taste at every step of his work, and he has to learn to trust to his own instinct to distinguish between what is really worthy of his efforts and what is not.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
It has been said that figures rule the world. Maybe. But I am sure that figures show us whether it is being ruled well or badly.
In J. P. Eckermann, Conversations with Goethe.
Mathematics has the completely false reputation of yielding infallible conclusions. Its infallibility is nothing but identity. Two times two is not four, but it is just two times two, and that is
what we call four for short. But four is nothing new at all. And thus it goes on and on in its conclusions, except that in the higher formulas the identity fades out of sight.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956, p. 1754.
Goodman, Nicholas P.
There are no deep theorems -- only theorems that we have not understood very well.
The Mathematical Intelligencer, vol. 5, no. 3, 1983.
Gordon, P
This is not mathematics, it is theology.
[On being exposed to Hilbert's work in invariant theory.]
Quoted in P. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
Graham, Ronald
It wouild be very discouraging if somewhere down the line you could ask a computer if the Riemann hypothesis is correct and it said, 'Yes, it is true, but you won't be able to understand the proof.'
John Horgan. Scientific American 269:4 (October 1993) 92-103.
Grünbaum, Branko (1926 - ), and Shephard, G. C. (?)
Mathematicians have long since regarded it as demeaning to work on problems related to elementary geometry in two or three dimensions, in spite of the fact that it it precisely this sort of
mathematics which is of practical value.
Handbook of Applicable Mathematics.
Hadamard, Jacques
The shortest path between two truths in the real domain passes through the complex domain.
Quoted in The Mathematical Intelligencer, v. 13, no. 1, Winter 1991.
Practical application is found by not looking for it, and one can say that the whole progress of civilization rests on that principle.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Halmos, Paul R.
Mathematics is not a deductive science -- that's a cliche. When you try to prove a theorem, you don't just list the hypotheses, and then start to reason. What you do is trial and error,
experimentation, guesswork.
I Want to be a Mathematician, Washington: MAA Spectrum, 1985.
... the student skit at Christmas contained a plaintive line: "Give us Master's exams that our faculty can pass, or give us a faculty that can pass our Master's exams."
I Want to be a Mathematician, Washington: MAA Spectrum, 1985.
...the source of all great mathematics is the special case, the concrete example. It is frequent in mathematics that every instance of a concept of seemingly great generality is in essence the same
as a small and concrete special case.
I Want to be a Mathematician, Washington: MAA Spectrum, 1985.
The joy of suddenly learning a former secret and the joy of suddenly discovering a hitherto unknown truth are the same to me -- both have the flash of enlightenment, the almost incredibly enhanced
vision, and the ecstasy and euphoria of released tension.
I Want to be a Mathematician, Washington: MAA Spectrum, 1985.
Don't just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical special
case? What about the degenerate cases? Where does the proof use the hypothesis?
I Want to be a Mathematician, Washington: MAA Spectrum, 1985.
To be a scholar of mathematics you must be born with talent, insight, concentration, taste, luck, drive and the ability to visualize and guess.
I Want to be a Mathematician, Washington: MAA Spectrum, 1985.
Hamilton, [Sir] William Rowan (1805-1865)
Who would not rather have the fame of Archimedes than that of his conqueror Marcellus?
In H. Eves Mathematical Circles Revisited, Boston: Prindle, Weber and Schmidt, 1971.
I regard it as an inelegance, or imperfection, in quaternions, or rather in the state to which it has been hitherto unfolded, whenever it becomes or seems to become necessary to have recourse to x,
y, z, etc..
In a letter from Tait to Cayley.
On earth there is nothing great but man; in man there is nothing great but mind.
Lectures on Metaphysics.
Hamming, Richard W.
Does anyone believe that the difference between the Lebesgue and Riemann integrals can have physical significance, and that whether say, an airplane would or would not fly could depend on this
difference? If such were claimed, I should not care to fly in that plane.
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Mathematics is an interesting intellectual sport but it should not be allowed to stand in the way of obtaining sensible information about physical processes.
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Hardy, Godfrey H. (1877 - 1947)
[On Ramanujan]
I remember once going to see him when he was lying ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an
unfavorable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."
Ramanujan, London: Cambridge Univesity Press, 1940.
Reductio ad absurdum, which Euclid loved so much, is one of a mathematician's finest weapons. It is a far finer gambit than any chess play: a chess player may offer the sacrifice of a pawn or even a
piece, but a mathematician offers the game.
A Mathematician's Apology, London, Cambridge University Press, 1941.
I am interested in mathematics only as a creative art.
A Mathematician's Apology, London, Cambridge University Press, 1941.
Pure mathematics is on the whole distinctly more useful than applied For what is useful above all is technique, and mathematical technique is taught mainly through pure mathematics.
In great mathematics there is a very high degree of unexpectedness, combined with inevitability and economy.
A Mathematician's Apology, London, Cambridge University Press, 1941.
There is no scorn more profound, or on the whole more justifiable, than that of the men who make for the men who explain. Exposition, criticism, appreciation, is work for second-rate minds.
A Mathematician's Apology, London, Cambridge University Press, 1941.
Young Men should prove theorems, old men should write books.
Quoted by Freeman Dyson in Freeman Dyson: Mathematician, Physicist, and Writer. Interview with Donald J. Albers, The College Mathematics Journal, vol. 25, No. 1, January 1994.
A science is said to be useful if its development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the destruction of human life.
A Mathematician's Apology, London, Cambridge University Press, 1941.
The mathematician's patterns, like the painter's or the poet's must be beautiful; the ideas, like the colors or the words must fit together in a harmonious way. Beauty is the first test: there is no
permanent place in this world for ugly mathematics.
A Mathematician's Apology, London, Cambridge University Press, 1941.
I believe that mathematical reality lies outside us, that our function is to discover or observe it, and that the theorems which we prove, and which we describe grandiloquently as our "creations,"
are simply the notes of our observations.
A Mathematician's Apology, London, Cambridge University Press, 1941.
Archimedes will be remembered when Aeschylus is forgotten, because languages die and mathematical ideas do not. "Immortality" may be a silly word, but probably a mathematician has the best chance of
whatever it may mean.
A Mathematician's Apology, London, Cambridge University Press, 1941.
The fact is that there are few more "popular" subjects than mathematics. Most people have some appreciation of mathematics, just as most people can enjoy a pleasant tune; and there are probably more
people really interested in mathematics than in music. Appearances may suggest the contrary, but there are easy explanations. Music can be used to stimulate mass emotion, while mathematics cannot;
and musical incapacity is recognized (no doubt rightly) as mildly discreditable, whereas most people are so frightened of the name of mathematics that they are ready, quite unaffectedly, to
exaggerate their own mathematical stupidity.
A Mathematician's Apology, London, Cambridge University Press, 1941.
Hardy, Thomas
...he seemed to approach the grave as an hyperbolic curve approaches a lineless directly as he got nearer, till it was doubtful if he would ever reach it at all.
Far from the Madding Crowd.
I have often pondered over the roles of knowledge or experience, on the one hand, and imagination or intuition, on the other, in the process of discovery. I believe that there is a certain
fundamental conflict between the two, and knowledge, by advocating caution, tends to inhibit the flight of imagination. Therefore, a certain naivete, unburdened by conventional wisdom, can sometimes
be a positive asset.
R. Langlands, "Harish-Chandra," Biographical Memoirs of Fellows of the Royal Society 31 (1985) 197 - 225.
Harris, Sydney J.
The real danger is not that computers will begin to think like men, but that men will begin to think like computers.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Hawking, Stephen Williams (1942- )
God not only plays dice, He also sometimes throws the dice where they cannot be seen.
[See related quotation from Albert Einstein.] Nature 1975 257.
Heath, Sir Thomas
[The works of Archimedes] are without exception, monuments of mathematical exposition; the gradual revelation of the plan of attack, the masterly ordering of the propositions, the stern elimination
of everything not immediately relevant to the purpose, the finish of the whole, are so impressive in their perfection as to create a feeling akin to awe in the mind of the reader.
A History of Greek Mathematics. 1921.
Heaviside, Oliver (1850-1925)
[Criticized for using formal mathematical manipulations, without understanding how they worked:]
Should I refuse a good dinner simply because I do not understand the process of digestion?
Heinlein, Robert A.
Anyone who cannot cope with mathematics is not fully human. At best he is a tolerable subhuman who has learned to wear shoes, bathe, and not make messes in the house.
Time Enough for Love.
Heisenberg, Werner (1901-1976)
An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.
Physics and Beyond. 1971.
Hempel, Carl G.
The propositions of mathematics have, therefore, the same unquestionable certainty which is typical of such propositions as "All bachelors are unmarried," but they also share the complete lack of
empirical content which is associated with that certainty: The propositions of mathematics are devoid of all factual content; they convey no information whatever on any empirical subject matter.
"On the Nature of Mathematical Truth" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The most distinctive characteristic which differentiates mathematics from the various branches of empirical science, and which accounts for its fame as the queen of the sciences, is no doubt the
peculiar certainty and necessity of its results.
"Geometry and Empirical Science" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
...to characterize the import of pure geometry, we might use the standard form of a movie-disclaimer: No portrayal of the characteristics of geometrical figures or of the spatial properties of
relationships of actual bodies is intended, and any similarities between the primitive concepts and their customary geometrical connotations are purely coincidental.
"Geometry and Empirical Science" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Henkin, Leon
One of the big misapprehensions about mathematics that we perpetrate in our classrooms is that the teacher always seems to know the answer to any problem that is discussed. This gives students the
idea that there is a book somewhere with all the right answers to all of the interesting questions, and that teachers know those answers. And if one could get hold of the book, one would have
everything settled. That's so unlike the true nature of mathematics.
L.A. Steen and D.J. Albers (eds.), Teaching Teachers, Teaching Students, Boston: Birkhäuser, 1981, p89.
Hermite, Charles (1822 - 1901)
There exists, if I am not mistaken, an entire world which is the totality of mathematical truths, to which we have access only with our mind, just as a world of physical reality exists, the one like
the other independent of ourselves, both of divine creation.
In The Mathematical Intelligencer, v. 5, no. 4.
Abel has left mathematicians enough to keep them busy for 500 years.
In G. F. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
We are servants rather than masters in mathematics.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Hertz, Heinrich
One cannot escape the feeling that these mathematical formulas have an independent existence and an intelligence of their own, that they are wiser that we are, wiser even than their discoverers, that
we get more out of them than was originally put into them.
Quoted by ET Bell in Men of Mathematics,, New York, 937.
Hesse, Hermann (1877-1962)
You treat world history as a mathematician does mathematics, in which nothing but laws and formulae exist, no reality, no good and evil, no time, no yesterday, no tomorrow, nothing but an eternal,
shallow, mathematical present.
The Glass Bead Game, 1943.
Hilbert, David (1862-1943)
Wir müssen wissen.
Wir werden wissen.
[Engraved on his tombstone in Göttingen.]
Before beginning I should put in three years of intensive study, and I haven't that much time to squander on a probable failure.
[On why he didn't try to solve Fermat's last theorem]
Quoted in E.T. Bell Mathematics, Queen and Servant of Science, New York: McGraw Hill Inc., 1951.
Galileo was no idiot. Only an idiot could believe that science requires martyrdom - that may be necessary in religion, but in time a scientific result will establish itself.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1971.
I have tried to avoid long numerical computations, thereby following Riemann's postulate that proofs should be given through ideas and not voluminous computations.
Report on Number Theory, 1897.
Mathematics is a game played according to certain simple rules with meaningless marks on paper.
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Physics is much too hard for physicists.
C. Reid Hilbert, London: Allen and Unwin, 1970.
How thoroughly it is ingrained in mathematical science that every real advance goes hand in hand with the invention of sharper tools and simpler methods which, at the same time, assist in
understanding earlier theories and in casting aside some more complicated developments.
The art of doing mathematics consists in finding that special case which contains all the germs of generality.
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
The further a mathematical theory is developed, the more harmoniously and uniformly does its construction proceed, and unsuspected relations are disclosed between hitherto separated branches of the
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
One can measure the importance of a scientific work by the number of earlier publications rendered superfluous by it.
In H. Eves Mathematical Circles Revisited, Boston: Prindle, Weber and Schmidt, 1971.
Mathematics knows no races or geographic boundaries; for mathematics, the cultural world is one country.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
The infinite! No other question has ever moved so profoundly the spirit of man.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Hirst, Thomas Archer
10th August 1851: On Tuesday evening at Museum, at a ball in the gardens. The night was chill, I dropped too suddenly from Differential Calculus into ladies' society, and could not give myself freely
to the change. After an hour's attempt so to do, I returned, cursing the mode of life I was pursuing; next morning I had already shaken hands, however, with Diff. Calculus, and forgot the ladies....
J. Helen Gardner and Robin J. Wilson, "Thomas Archer Hirst - Mathematician Xtravagant II - Student Days in Germany", The American Mathematical Monthly , v. 6, no. 100.
Hobbes, Thomas
There is more in Mersenne than in all the universities together.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
To understand this for sense it is not required that a man should be a geometrician or a logician, but that he should be mad.
["This" is that the volume generated by revolving the region under 1/x from 1 to infinity has finite volume.]
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Geometry, which is the only science that it hath pleased God hitherto to bestow on mankind.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The errors of definitions multiply themselves according as the reckoning proceeds; and lead men into absurdities, which at last they see but cannot avoid, without reckoning anew from the beginning.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Holmes, Oliver Wendell
Descartes commanded the future from his study more than Napoleon from the throne.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Certitude is not the test of certainty. We have been cocksure of many things that are not so.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
I was just going to say, when I was interrupted, that one of the many ways of classifying minds is under the heads of arithmetical and algebraical intellects. All economical and practical wisdom is
an extension of the following arithmetical formula: 2 + 2 = 4. Every philosophical proposition has the more general character of the expression a + b = c. We are mere operatives, empirics, and
egotists until we learn to think in letters instead of figures.
The Autocrat of the Breakfast Table.
Holt, M. and Marjoram, D. T. E.
The truth of the matter is that, though mathematics truth may be beauty, it can be only glimpsed after much hard thinking. Mathematics is difficult for many human minds to grasp because of its
hierarchical structure: one thing builds on another and depends on it.
Mathematics in a Changing World Walker, New York 1973.
Hofstadter, Douglas R. (1945 - )
Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
Gödel, Escher, Bach 1979.
Hughes, Richard
Science, being human enquiry, can hear no answer except an answer couched somehow in human tones. Primitive man stood in the mountains and shouted against a cliff; the echo brought back his own
voice, and he believed in a disembodied spirit. The scientist of today stands counting out loud in the face of the unknown. Numbers come back to him - and he believes in the Great Mathematician.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Hume, David (1711 - 1776)
If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, 'Does it contain any abstract reasoning concerning quantity or number?' No. 'Does it contain any
experimental reasoning concerning matter of fact and existence?' No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.
Treatise Concerning Human Understanding.
Huxley, Aldous
I admit that mathematical science is a good thing. But excessive devotion to it is a bad thing.
Interview with J. W. N. Sullivan, Contemporary Mind, London, 1934.
If we evolved a race of Isaac Newtons, that would not be progress. For the price Newton had to pay for being a supreme intellect was that he was incapable of friendship, love, fatherhood, and many
other desirable things. As a man he was a failure; as a monster he was superb.
Interview with J. W. N. Sullivan, Contemporary Mind, London, 1934.
...[he] was as much enchanted by the rudiments of algebra as he would have been if I had given him an engine worked by steam, with a methylated spirit lamp to heat the boiler; more enchanted,
perhapsfor the engine would have got broken, and, remaining always itself, would in any case have lost its charm, while the rudiments of algebra continued to grow and blossom in his mind with an
unfailing luxuriance. Every day he made the discovery of something which seemed to him exquisitely beautiful; the new toy was inexhaustible in its potentialities.
Young Archimedes.
Huxley, Thomas Henry (1825-1895)
This seems to be one of the many cases in which the admitted accuracy of mathematical processes is allowed to throw a wholly inadmissible appearance of authority over the results obtained by them.
Mathematics may be compared to a mill of exquisite workmanship, which grinds your stuff of any degree of fineness; but, nevertheless, what you get out depends on what you put in; and as the grandest
mill in the world will not extract wheat flour from peascods, so pages of formulae will not get a definite result out of loose data.
Quarterly Journal of the Geological Society, 25,1869.
The mathematician starts with a few propositions, the proof of which is so obvious that they are called selfevident, and the rest of his work consists of subtle deductions from them. The teaching of
languages, at any rate as ordinarily practised, is of the same general nature authority and tradition furnish the data, and the mental operations are deductive.
"Scientific Education -Notes of an After-dinner Speech." Macmillan's Magazine Vol XX, 1869.
It is the first duty of a hypothesis to be intelligible.
Ibn Khaldun (1332-1406)
Geometry enlightens the intellect and sets one's mind right. All of its proofs are very clear and orderly. It is hardly possible for errors to enter into geometrical reasoning, because it is well
arranged and orderly. Thus, the mind that constantly applies itself to geometry is not likely to fall into error. In this convenient way, the person who knows geometry acquires intelligence.
The Muqaddimah. An Introduction to History.
Isidore of Seville (ca 600 ad)
Take from all things their number and all shall perish.
Jacobi, Carl
It is true that Fourier had the opinion that the principal aim of mathematics was public utility and explanation of natural phenomena; but a philosopher like him should have known that the sole end
of science is the honor of the human mind, and that under this title a question about numbers is worth as much as a question about the system of the world.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
God ever arithmetizes.
In H. Eves Mathematical Circles Revisited, Boston: Prindle, Weber and Schmidt, 1971.
One should always generalize.
(Man muss immer generalisieren)
In P. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
The real end of science is the honor of the human mind.
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
It is often more convenient to possess the ashes of great men than to possess the men themselves during their lifetime.
[Commenting on the return of Descartes' remains to France]
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
Mathematics is the science of what is clear by itself.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
James, William (1842 - 1910)
The union of the mathematician with the poet, fervor with measure, passion with correctness, this surely is the ideal.
Collected Essays.
Jeans, Sir James
The essential fact is that all the pictures which science now draws of nature, and which alone seem capable of according with observational facts, are mathematical pictures.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
From the intrinsic evidence of his creation, the Great Architect of the Universe now begins to appear as a pure mathematician.
Mysterious Universe.
Jefferson, Thomas
...the science of calculation also is indispensable as far as the extraction of the square and cube roots: Algebra as far as the quadratic equation and the use of logarithms are often of value in
ordinary cases: but all beyond these is but a luxury; a delicious luxury indeed; but not be in indulged in by one who is to have a profession to follow for his subsistence.
In J. Robert Oppenheimer "The Encouragement of Science" in I. Gordon and S. Sorkin (eds.) The Armchair Science Reader, New York: Simon and Schuster, 1959.
Jevons, William Stanley
It is clear that Economics, if it is to be a science at all, must be a mathematical science.
Theory of Political Economy.
Johnson, Samuel (1709-1784)
Sir, I have found you an argument. I am not obliged to find you an understanding.
J. Boswell The Life of SamuelJohnson, 1784.
Jowett, Benjamin (1817 - 1893)
Logic is neither a science or an art, but a dodge.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Kant, Emmanual (1724 - 1804)
The science of mathematics presents the most brilliant example of how pure reason may successfully enlarge its domain without the aid of experience.
The Mathematical Intelligencer, v. 13, no. 1, Winter 1991.
All human knowledge thus begins with intuitions, proceeds thence to concepts, and ends with ideas.
Quoted in Hilbert's Foundations of Geometry.
Kaplan, Abraham
Mathematics is not yet capable of coping with the naivete of the mathematician himself.
Sociology Learns the Language of Mathematics.
Kaplansky, Irving
We (he and Halmos) share a philosophy about linear algebra: we think basis-free, we write basis-free , but when the chips are down we close the office door and compute with matrices like fury.
Paul Halmos: Celebrating 50 Years of Mathematics.
Karlin, Samuel (1923 - )
The purpose of models is not to fit the data but to sharpen the questions.
11th R A Fisher Memorial Lecture, Royal Society 20, April 1983.
Kasner, E. and Newman, J.
Mathematics is man's own handiwork, subject only to the limitations imposed by the laws of thought.
Mathematics and the Imagination, New York: Simon and Schuster, 1940.
The testament of science is so continually in a flux that the heresy of yesterday is the gospel of today and the fundamentalism of tomorrow.
Mathematics and the Imagination, Simon and Schuster, 1940.
...we have overcome the notion that mathematical truths have an existence independent and apart from our own minds. It is even strange to us that such a notion could ever have existed.
Mathematics and the Imagination, New York: Simon and Schuster, 1940.
Mathematics is the science which uses easy words for hard ideas.
Mathematics and the Imagination, New York: Simon and Schuster, 1940.
Mathematics is often erroneously referred to as the science of common sense. Actually, it may transcend common sense and go beyond either imagination or intuition. It has become a very strange and
perhaps frightening subject from the ordinary point of view, but anyone who penetrates into it will find a veritable fairyland, a fairyland which is strange, but makes sense, if not common sense.
Mathematics and the Imagination, New York: Simon and Schuster, 1940.
Perhaps the greatest paradox of all is that there are paradoxes in mathematics.
Mathematics and the Imagination, New York: Simon and Schuster, 1940.
When the mathematician says that such and such a proposition is true of one thing, it may be interesting, and it is surely safe. But when he tries to extend his proposition to everything, though it
is much more interesting, it is also much more dangerous. In the transition from one to all, from the specific to the general, mathematics has made its greatest progress, and suffered its most
serious setbacks, of which the logical paradoxes constitute the most important part. For, if mathematics is to advance securely and confidently it must first set its affairs in order at home.
Mathematics and the Imagination, New York: Simon and Schuster, 1940.
Keller, Helen (1880 - 1968)
Now I feel as if I should succeed in doing something in mathematics, although I cannot see why it is so very important... The knowledge doesn't make life any sweeter or happier, does it?
The Story of My Life. 1903.
Kepler, Johannes (1571-1630)
A mind is accustomed to mathematical deduction, when confronted with the faulty foundations of astrology, resists a long, long time, like an obstinate mule, until compelled by beating and curses to
put its foot into that dirty puddle.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Where there is matter, there is geometry.
(Ubi materia, ibi geometria.)
J. Koenderink Solid Shape, Cambridge Mass.: MIT Press, 1990
The chief aim of all investigations of the external world should be to discover the rational order and harmony which has been imposed on it by God and which He revealed to us in the language of
Nature uses as little as possible of anything.
Keynes, John Maynard
It has been pointed out already that no knowledge of probabilities, less in degree than certainty, helps us to know what conclusions are true, and that there is no direct relation between the truth
of a proposition and its probability. Probability begins and ends with probability.
The Application of Probability to Conduct.
Kleinhenz, Robert J.
When asked what it was like to set about proving something, the mathematician likened proving a theorem to seeing the peak of a mountain and trying to climb to the top. One establishes a base camp
and begins scaling the mountain's sheer face, encountering obstacles at every turn, often retracing one's steps and struggling every foot of the journey. Finally when the top is reached, one stands
examining the peak, taking in the view of the surrounding countryside and then noting the automobile road up the other side!
Kline, Morris
A proof tells us where to concentrate our doubts.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Statistics: the mathematical theory of ignorance.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Logic is the art of going wrong with confidence.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Universities hire professors the way some men choose wives - they want the ones the others will admire.
Why the Professor Can't Teach. St. Martin's Press, 1977. p 92.
Koestler, Arthur (1905- )
In the index to the six hundred odd pages of Arnold Toynbee's A Study of History, abridged version, the names of Copernicus, Galileo, Descartes and Newton do not occur yet their cosmic quest
destroyed the medieval vision of an immutable social order in a walled-in universe and transformed the European landscape, society, culture, habits and general outlook, as thoroughly as if a new
species had arisen on this planet.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Nobody before the Pythagoreans had thought that mathematical relations held the secret of the universe. Twenty-five centuries later, Europe is still blessed and cursed with their heritage. To
non-European civilizations, the idea that numbers are the key to both wisdom and power, seems never to have occurred.
The Sleepwalkers. 1959.
Kraft, Prinz zu Hohlenlohe-Ingelfingen (1827 - 1892)
Mathematics is indeed dangerous in that it absorbs students to such a degree that it dulls their senses to everything else.
Attributed by Karl Schellbach. In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
Kronecker, Leopold (1823-1891)
Number theorists are like lotus-eaters -- having once tasted of this food they can never give it up.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
God made the integers, all else is the work of man.
Jahresberichte der Deutschen Mathematiker Vereinigung.
La Touche, Mrs.
I do hate sums. There is no greater mistake than to call arithmetic an exact science. There are permutations and aberrations discernible to minds entirely noble like mine; subtle variations which
ordinary accountants fail to discover; hidden laws of number which it requires a mind like mine to perceive. For instance, if you add a sum from the bottom up, and then from the top down, the result
is always different.
Mathematical Gazette, v. 12.
LaGrange, Joseph-Louis
The reader will find no figures in this work. The methods which I set forth do not require either constructions or geometrical or mechanical reasonings: but only algebraic operations, subject to a
regular and uniform rule of procedure.
Preface to Mécanique Analytique.
[said about the chemist Lavoisier:]
It took the mob only a moment to remove his head; a century will not suffice to reproduce it.
H. Eves An Introduction to the History of Mathematics, 5th Ed., Saunders.
When we ask advice, we are usually looking for an accomplice.
Lakatos, Imre
That sometimes clear ... and sometimes vague stuff ... which is ... mathematics.
In P. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
Lanczos, Cornelius
Most of the arts, as painting, sculpture, and music, have emotional appeal to the general public. This is because these arts can be experienced by some one or more of our senses. Such is not true of
the art of mathematics; this art can be appreciated only by mathematicians, and to become a mathematician requires a long period of intensive training. The community of mathematicians is similar to
an imaginary community of musical composers whose only satisfaction is obtained by the interchange among themselves of the musical scores they compose.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Landau, E.
[Asked for a testimony to the effect that Emmy Noether was a great woman mathematician, he said:]
I can testify that she is a great mathematician, but that she is a woman, I cannot swear.
J.E. Littlewood, A Mathematician's Miscellany, Methuen and Co ltd., 1953.
Landau, Susan
There's a touch of the priesthood in the academic world, a sense that a scholar should not be distracted by the mundane tasks of day-to-day living. I used to have great stretches of time to work. Now
I have research thoughts while making peanut butter and jelly sandwiches. Sure it's impossible to write down ideas while reading "curious George" to a two-year-old. On the other hand, as my husband
was leaving graduate school for his first job, his thesis advisor told him, "You may wonder how a professor gets any research done when one has to teach, advise students, serve on committees, referee
papers, write letters of recommendation, interview prospective faculty. Well, I take long showers."
In Her Own Words: Six Mathematicians Comment on Their Lives and Careers. Notices of the AMS, V. 38, no. 7 (September 1991), p. 704.
Lang, Andrew (1844-1912)
He uses statistics as a drunken man uses lamp posts -- for support rather than illumination.
Treasury of Humorous Quotations.
Langer, Rudoph E.
[about Fourier] It was, no doubt, partially because of his very disregard for rigor that he was able to take conceptual steps which were inherently impossible to men of more critical genius.
In P. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
Lao Tze (604-531 B.C.)
A good calculator does not need artificial aids.
Tao Te Ching, ch 27.
de Laplace, Pierre-Simon (1749 - 1827)
Read Euler: he is our master in everything.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Such is the advantage of a well constructed language that its simplified notation often becomes the source of profound theories.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Napoleon: You have written this huge book on the system of the world without once mentioning the author of the universe.
Laplace: Sire, I had no need of that hypothesis.
Later when told by Napoleon about the incident, Lagrange commented: Ah, but that is a fine hypothesis. It explains so many things.
DeMorgan's Budget of Paradoxes.
[said about Napier's logarithms:]
...by shortening the labors doubled the life of the astronomer.
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
It is India that gave us the ingenious method of expressing all numbers by means of ten symbols, each symbol receiving a value of position as well as an absolute value; a profound and important idea
which appears so simple to us now that we ignore its true merit. But its very simplicity and the great ease which it has lent to computations put our arithmetic in the first rank of useful
inventions; and we shall appreciate the grandeur of the achievement the more when we remember that it escaped the genius of Archimedes and Apollonius, two of the greatest men produced by antiquity.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Leach, Edmund Ronald (1910 - 1989)
How can a modern anthropologist embark upon a generalization with any hope of arriving at a satisfactory conclusion? By thinking of the organizational ideas that are present in any society as a
mathematical pattern.
Rethinking Anthropology. 1961.
Leacock, Stephen
How can you shorten the subject? That stern struggle with the multiplication table, for many people not yet ended in victory, how can you make it less? Square root, as obdurate as a hardwood stump in
a pasturenothing but years of effort can extract it. You can't hurry the process. Or pass from arithmetic to algebra; you can't shoulder your way past quadratic equations or ripple through the
binomial theorem. Instead, the other way; your feet are impeded in the tangled growth, your pace slackens, you sink and fall somewhere near the binomial theorem with the calculus in sight on the
horizon. So died, for each of us, still bravely fighting, our mathematical training; except for a set of people called "mathematicians" -- born so, like crooks.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Lebesgue, Henri (1875 - 1941)
In my opinion, a mathematician, in so far as he is a mathematician, need not preoccupy himself with philosophy -- an opinion, moreover, which has been expressed by many philosophers.
Scientific American, 211, September 1964, p. 129.
Leibniz, Gottfried Whilhem (1646-1716)
[about him:]
It is rare to find learned men who are clean, do not stink and have a sense of humour.
[attributed variously to Charles Louis de Secondat Montesquieu and to the Duchess of Orléans]
Nothing is more important than to see the sources of invention which are, in my opinion more interesting than the inventions themselves.
J. Koenderink, Solid Shape, Cambridge Mass.: MIT Press, 1990.
Music is the pleasure the human soul experiences from counting without being aware that it is counting.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
The imaginary number is a fine and wonderful recourse of the divine spirit, almost an amphibian between being and not being.
He who understands Archimedes and Apollonius will admire less the achievements of the foremost men of later times.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
In symbols one observes an advantage in discovery which is greatest when they express the exact nature of a thing briefly and, as it were, picture it; then indeed the labor of thought is wonderfully
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
The art of discovering the causes of phenomena, or true hypothesis, is like the art of decyphering, in which an ingenious conjecture greatly shortens the road.
New Essays Concerning Human Understanding, IV, XII.
Although the whole of this life were said to be nothing but a dream and the physical world nothing but a phantasm, I should call this dream or phantasm real enough, if, using reason well, we were
never deceived by it.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The soul is the mirror of an indestructible universe.
The Monadology.
da Vinci, Leonardo (1452-1519)
Whoever despises the high wisdom of mathematics nourishes himself on delusion and will never still the sophistic sciences whose only product is an eternal uproar.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Mechanics is the paradise of the mathematical sciences, because by means of it one comes to the fruits of mathematics.
Notebooks, v. 1, ch. 20.
He who loves practice without theory is like the sailor who boards ship without a rudder and compass and never knows where he may cast.
No human investigation can be called real science if it cannot be demonstrated mathematically.
Inequality is the cause of all local movements.
Leybourn, William (1626-1700)
But leaving those of the Body, I shall proceed to such Recreation as adorn the Mind; of which those of the Mathematicks are inferior to none.
Pleasure with Profit, 1694.
Lichtenberg, Georg Christoph (1742 - 1799)
All mathematical laws which we find in Nature are always suspect to me, in spite of their beauty. They give me no pleasure. They are merely auxiliaries. At close range it is all not true.
In J P Stern Lichtenberg, 1959.
The great trick of regarding small departures from the truth as the truth itself -- on which is founded the entire integral calculus -- is also the basis of our witty speculations, where the whole
thing would often collapse if we considered the departures with philosophical rigour.
In mathematical analysis we call x the undetermined part of line a: the rest we don't call y, as we do in common life, but a-x. Hence mathematical language has great advantages over the common
I have often noticed that when people come to understand a mathematical proposition in some other way than that of the ordinary demonstration, they promptly say, "Oh, I see. That's how it must be."
This is a sign that they explain it to themselves from within their own system.
le Lionnais, Francois
Who has not be amazed to learn that the function y = e^x , like a phoenix rising again from its own ashes, is its own derivative?
Great Currents of Mathematical Thought, vol. 1, New York: Dover Publications.
Lippman, Gabriel (1845-1921)
[On the Gaussian curve, remarked to Poincaré:]
Experimentalists think that it is a mathematical theorem while the mathematicians believe it to be an experimental fact.
In D'Arcy Thompson On Growth and Form, 1917.
Littlewood, J. E. (1885 -1977)
It is true that I should have been surprised in the past to learn that Professor Hardy had joined the Oxford Group. But one could not say the adverse chance was 1:10. Mathematics is a dangerous
profession; an appreciable proportion of us go mad, and then this particular event would be quite likely.
A Mathematician's Miscellany, Methuen and Co. ltd., 1953.
A good mathematical joke is better, and better mathematics, than a dozen mediocre papers.
A Mathematician's Miscellany, Methuen and Co. ltd., 1953.
I recall once saying that when I had given the same lecture several times I couldn't help feeling that they really ought to know it by now.
A Mathematician's Miscellany, Methuen and Co. ltd., 1953.
In passing, I firmly believe that research should be offset by a certain amount of teaching, if only as a change from the agony of research. The trouble, however, I freely admit, is that in practice
you get either no teaching, or else far too much.
"The Mathematician's Art of Work" in Béla Bollobás (ed.) Littlewood's Miscellany, Cambridge: Cambridge University Press, 1986.
It is possible for a mathematician to be "too strong" for a given occasion. he forces through, where another might be driven to a different, and possible more fruitful, approach. (So a rock climber
might force a dreadful crack, instead of finding a subtle and delicate route.)
A Mathematician's Miscellany, Methuen and Co. ltd., 1953.
I constantly meet people who are doubtful, generally without due reason, about their potential capacity [as mathematicians]. The first test is whether you got anything out of geometry. To have
disliked or failed to get on with other [mathematical] subjects need mean nothing; much drill and drudgery is unavoidable before they can get started, and bad teaching can make them unintelligible
even to a born mathematician.
A Mathematician's Miscellany, Methuen and Co. ltd., 1953.
The infinitely competent can be uncreative.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
In presenting a mathematical argument the great thing is to give the educated reader the chance to catch on at once to the momentary point and take details for granted: his successive mouthfuls
should be such as can be swallowed at sight; in case of accidents, or in case he wishes for once to check in detail, he should have only a clearly circumscribed little problem to solve (e.g. to check
an identity: two trivialities omitted can add up to an impasse). The unpractised writer, even after the dawn of a conscience, gives him no such chance; before he can spot the point he has to tease
his way through a maze of symbols of which not the tiniest suffix can be skipped.
A Mathematician's Miscellany, Methuen Co. Ltd., 1953.
A linguist would be shocked to learn that if a set is not closed this does not mean that it is open, or again that "E is dense in E" does not mean the same thing as "E is dense in itself".
A Mathematician's Miscellany, Methuen Co. Ltd., 1953.
The surprising thing about this paper is that a man who could write it would.
A Mathematician's Miscellany, Methuen Co. Ltd., 1953.
A precisian professor had the habit of saying: "... quartic polynomial ax^4+bx^3+cx^2+dx+e , where e need not be the base of the natural logarithms."
A Mathematician's Miscellany, Methuen Co. Ltd., 1953.
I read in the proof sheets of Hardy on Ramanujan: "As someone said, each of the positive integers was one of his personal friends." My reaction was, "I wonder who said that; I wish I had." In the
next proof-sheets I read (what now stands), "It was Littlewood who said..."
A Mathematician's Miscellany, Methuen Co. Ltd, 1953.
We come finally, however, to the relation of the ideal theory to real world, or "real" probability. If he is consistent a man of the mathematical school washes his hands of applications. To someone
who wants them he would say that the ideal system runs parallel to the usual theory: "If this is what you want, try it: it is not my business to justify application of the system; that can only be
done by philosophizing; I am a mathematician". In practice he is apt to say: "try this; if it works that will justify it". But now he is not merely philosophizing; he is committing the characteristic
fallacy. Inductive experience that the system works is not evidence.
A Mathematician's Miscellany, Methuen Co. Ltd, 1953.
The theory of numbers is particularly liable to the accusation that some of its problems are the wrong sort of questions to ask. I do not myself think the danger is serious; either a reasonable
amount of concentration leads to new ideas or methods of obvious interest, or else one just leaves the problem alone. "Perfect numbers" certainly never did any good, but then they never did any
particular harm.
A Mathematician's Miscellany, Methuen Co. Ltd., 1953.
Lobatchevsky, Nikolai
There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Locke, John
...mathematical proofs, like diamonds, are hard and clear, and will be touched with nothing but strict reasoning.
D. Burton, Elementary Number Theory, Boston: Allyn and Bacon 1980.
Luther, Martin (1483-1546)
Medicine makes people ill, mathematics make them sad and theology makes them sinful.
Mach, Ernst (1838 - 1916)
Archimedes constructing his circle pays with his life for his defective biological adaptation to immediate circumstances.
The mathematician who pursues his studies without clear views of this matter, must often have the uncomfortable feeling that his paper and pencil surpass him in intelligence.
"The Economy of Science" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Mackay, Alan Lindsay (1926- )
Like the ski resort full of girls hunting for husbands and husbands hunting for girls, the situation is not as symmetrical as it might seem.
A Dictionary of Scientific Quotations, Bristol: IOP Publishing, 1991.
Mackay, Charles (1814-1889)
Truth ... and if mine eyes
Can bear its blaze, and trace its symmetries,
Measure its distance, and its advent wait,
I am no prophet -- I but calculate.
The Poetical Works of Charles Mackay. 1876.
Maistre Joseph Marie de (1753 - 1821)
The concept of number is the obvious distinction between the beast and man. Thanks to number, the cry becomes a song, noise acquires rhythm, the spring is transformed into a dance, force becomes
dynamic, and outlines figures.
Mann, Thomas (1875-1955)
A great truth is a truth whose opposite is also a great truth.
Essay on Freud. 1937.
I tell them that if they will occupy themselves with the study of mathematics they will find in it the best remedy against the lusts of the flesh.
The Magic Mountain. 1927.
Some of the men stood talking in this room, and at the right of the door a little knot had formed round a small table, the center of which was the mathematics student, who ws eagerly talking. He had
made the assertion that one could draw through a given point more than one parallel to a straight line; Frau Hagenström had cried out that this was impossible, and he had gone on to prove it so
conclusively that his hearers were constrained to behave as though they understood.
Little Herr Friedemann.
Mathesis, Adrian
If your new theorem can be stated with great simplicity, then there will exist a pathological exception.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
All great theorems were discovered after midnight.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
The greatest unsolved theorem in mathematics is why some people are better at it than others.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Matthias, Bernd T
If you see a formula in the Physical Review that extends over a quarter of a page, forget it. It's wrong. Nature isn't that complicated.
Maxwell, James Clerk (1813-1879)
... that, in a few years, all great physical constants will have been approximately estimated, and that the only occupation which will be left to men of science will be to carry these measurements to
another place of decimals.
Scientific Papers 2, 244, October 1871.
Mayer, Maria Goeppert (1906 -1972)
Mathematics began to seem too much like puzzle solving. Physics is puzzle solving, too, but of puzzles created by nature, not by the mind of man.
J. Dash, Maria Goeppert-Mayer, A Life of One's Own.
McDuff, Dusa
Gel'fand amazed me by talking of mathematics as though it were poetry. He once said about a long paper bristling with formulas that it contained the vague beginnings of an idea which could only hint
at and which he had never managed to bring out more clearly. I had always thought of mathematics as being much more straightforward: a formula is a formula, and an algebra is an algebra, but Gel'fand
found hedgehogs lurking in the rows of his spectral sequences!
Mathematical Notices v. 38, no. 3, March 1991, pp. 185-7.
McShane, E. J.
There are in this world optimists who feel that any symbol that starts off with an integral sign must necessarily denote something that will have every property that they should like an integral to
possess. This of course is quite annoying to us rigorous mathematicians; what is even more annoying is that by doing so they often come up with the right answer.
Bulletin of the American Mathematical Society, v. 69, p. 611, 1963.
Mencken, H.L. (1880 - 1956)
It is now quite lawful for a Catholic woman to avoid pregnancy by a resort to mathematics, though she is still forbidden to resort to physics and chemistry.
Notebooks, "Minority Report".
Mermin, N. David (1935 -)
Bridges would not be safer if only people who knew the proper definition of a real number were allowed to design them.
"Topological Theory of Defects" in Review of Modern Physics, v. 51 no. 3, July 1979.
Millay, Edna St. Vincent (1892 - 1950)
Euclid alone has looked on Beauty bare.
Let all who prate of Beauty hold their peace,
And lay them prone upon the earth and cease
To ponder on themselves, the while they stare
At nothing, intricately drawn nowhere
In shapes of shifting lineage; let geese
Gabble and hiss, but heroes seek release
From dusty bondage into luminous air.
O blinding hour, O holy, terrible day,
When first the shaft into his vision shone
Of light anatomized! Euclid alone
Has looked on Beauty bare. Fortunate they
Who, though once only and then but far away,
Have heard her massive sandal set on stone.
Milton, John (1608 - 1674)
From Man or Angel the great Architect
Did wisely to conceal, and not divulge,
His secrets, to be scanned by them who ought
Rather admire. Or, if they list to try
Conjecture, he his fabric of the Heavens
Hath left to their disputes -- perhaps to move
His laughter at their quaint opinions wide
Hereafter, when they come to model Heaven
And calculate the stars: how they will wield
The mighty frame: how build, unbuild, contrive
To save appearances; how gird the Sphere
With Centric and Eccentric scribbled o'er,
Cycle and Epicycle, Orb in Orb.
Paradise Lost.
Chaos umpire sits
And by decision more
embroils the fray
by which he reigns: next
him high arbiter
Chance governs all.
Minkowski, Herman
From henceforth, space by itself, and time by itself, have vanished into the merest shadows and only a kind of blend of the two exists in its own right.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Minsky, Marvin Lee (1927 -)
Logic doesn't apply to the real world.
D. R. Hofstadter and D. C. Dennett (eds.) The Mind's I, 1981.
Mitchell, Margaret
...She knew only that if she did or said thus-and-so, men would unerringly respond with the complimentary thus-and-so. It was like a mathematical formula and no more difficult, for mathematics was
the one subject that had come easy to Scarlett in her schooldays.
Gone With the Wind.
Mittag-Leffler, Gösta
The mathematician's best work is art, a high perfect art, as daring as the most secret dreams of imagination, clear and limpid. Mathematical genius and artistic genius touch one another.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Mordell, L.J. (1888 - ?)
Neither you nor I nor anybody else knows what makes a mathematician tick. It is not a question of cleverness. I know many mathematicians who are far abler than I am, but they have not been so lucky.
An illustration may be given by considering two miners. One may be an expert geologist, but he does not find the golden nuggets that the ignorant miner does.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
Moore, E.H. (1862 - 1932)
We lay down a fundamental principle of generalization by abstraction:
"The existence of analogies between central features of various theories implies the existence of a general theory which underlies the particular theories and unifies them with respect to those
central features...."
In H. Eves Mathematical Circles Revisited, Boston: Prindle, Weber and Schmidt, 1971.
Moroney, M.J.
The words figure and fictitious both derive from the same Latin root, fingere. Beware!
Facts from Figures.
Napoleon (1769-1821)
A mathematician of the first rank, Laplace quickly revealed himself as only a mediocre administrator; from his first work we saw that we had been deceived. Laplace saw no question from its true point
of view; he sought subtleties everywhere; had only doubtful ideas, and finally carried the spirit of the infinitely small into administration.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Nebeuts, E. Kim
Teach to the the problems, not to the text.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
To state a theorem and then to show examples of it is literally to teach backwards.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
A good preparation takes longer than the delivery.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Neumann, Franz Ernst (1798 - 1895)
The greatest reward lies in making the discovery; recognition can add little or nothing to that.
von Neumann, Johann (1903 - 1957)
In mathematics you don't understand things. You just get used to them.
In G. Zukav The Dancing Wu Li Masters.
Newman, James R.
The most painful thing about mathematics is how far away you are from being able to use it after you have learned it.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The discovery in 1846 of the planet Neptune was a dramatic and spectacular achievement of mathematical astronomy. The very existence of this new member of the solar system, and its exact location,
were demonstrated with pencil and paper; there was left to observers only the routine task of pointing their telescopes at the spot the mathematicians had marked.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
It is hard to know what you are talking about in mathematics, yet no one questions the validity of what you say. There is no other realm of discourse half so queer.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Mathematical economics is old enough to be respectable, but not all economists respect it. It has powerful supporters and impressive testimonials, yet many capable economists deny that mathematics,
except as a shorthand or expository device, can be applied to economic reasoning. There have even been rumors that mathematics is used in economics (and in other social sciences) either for the
deliberate purpose of mystification or to confer dignity upon commonplacesas French was once used in diplomatic communications. .... To be sure, mathematics can be extended to any branch of
knowledge, including economics, provided the concepts are so clearly defined as to permit accurate symbolic representation. That is only another way of saying that in some branches of discourse it is
desirable to know what you are talking about.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The Theory of Groups is a branch of mathematics in which one does something to something and then compares the result with the result obtained from doing the same thing to something else, or
something else to the same thing.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Games are among the most interesting creations of the human mind, and the analysis of their structure is full of adventure and surprises. Unfortunately there is never a lack of mathematicians for the
job of transforming delectable ingredients into a dish that tastes like a damp blanket.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Newton, Isaac (1642-1727)
...from the same principles, I now demonstrate the frame of the System of the World.
Principia Mathematica.
Hypotheses non fingo.
I feign no hypotheses.
Principia Mathematica.
To explain all nature is too difficult a task for any one man or even for any one age. `Tis much better to do a little with certainty, and leave the rest for others hat come after you, than to
explain all things.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
The description of right lines and circles, upon which geometry is founded, belongs to mechanics. Geometry does not teach us to draw these lines, but requires them to be drawn.
Principia Mathematica.
The latest authors, like the most ancient, strove to subordinate the phenomena of nature to the laws of mathematics.
[His epitaph:]
Who, by vigor of mind almost divine, the motions and figures of the planets, the paths of comets, and the tides of the seas first demonstrated.
Thomas R. Nicely
Usually mathematicians have to shoot somebody to get this much publicity.
[On the attention he received after finding the flaw in Intel's Pentium chip in 1994]
Cincinnati Enquirer, December 18, 1994, Section A, page 19.
Nightingale, Florence (1820-1910)
[Of her:]
Her statistics were more than a study, they were indeed her religion. For her Quetelet was the hero as scientist, and the presentation copy of his Physique sociale is annotated by her on every page.
Florence Nightingale believed -- and in all the actions of her life acted upon that belief -- that the administrator could only be successful if he were guided by statistical knowledge. The
legislator -- to say nothing of the politician -- too often failed for want of this knowledge. Nay, she went further; she held that the universe -- including human communities -- was evolving in
accordance with a divine plan; that it was man's business to endeavor to understand this plan and guide his actions in sympathy with it. But to understand God's thoughts, she held we must study
statistics, for these are the measure of His purpose. Thus the study of statistics was for her a religious duty.
K. Pearson The Life, Letters and Labours for Francis Galton, vol. 2, 1924.
Oakley, C.O.
The study of mathematics cannot be replaced by any other activity that will train and develop man's purely logical faculties to the same level of rationality.
The American Mathematical Monthly, 56, 1949, p19.
Ogyu, Sorai (1666 - 1729)
Mathematicians boast of their exacting achievements, but in reality they are absorbed in mental acrobatics and contribute nothing to society.
Complete Works on Japan's Philosophical Thought. 1956.
Oppenheimer, Julius Robert (1904 - 1967)
Today, it is not only that our kings do not know mathematics, but our philosophers do not know mathematics and -- to go a step further -- our mathematicians do not know mathematics.
"The Tree of Knowledge" in Harper's, 217, 1958.
Osgood, W. F.
The calculus is the greatest aid we have to the application of physical truth in the broadest sense of the word.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Pascal, Blaise (1623-1662)
We are usually convinced more easily by reasons we have found ourselves than by those which have occurred to others.
Pensees. 1670.
It is the heart which perceives God and not the reason.
Pensees. 1670.
Man is equally incapable of seeing the nothingness from which he emerges and the infinity in which he is engulfed.
Pensees. 1670.
Our nature consists in movement; absolute rest is death.
Pensees. 1670.
Man is full of desires: he loves only those who can satisfy them all. "This man is a good mathematician," someone will say. But I have no concern for mathematics; he would take me for a proposition.
"That one is a good soldier." He would take me for a besieged town. I need, that is to say, a decent man who can accommodate himself to all my desires in a general sort of way.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
We run carelessly to the precipice, after we have put something before us to prevent us from seeing it.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
We do not worry about being respected in towns through which we pass. But if we are going to remain in one for a certain time, we do worry. How long does this time have to be?
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Few men speak humbly of humility, chastely of chastity, skeptically of skepticism.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Those who write against vanity want the glory of having written well, and their readers the glory of reading well, and I who write this have the same desire, as perhaps those who read this have also.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Our notion of symmetry is derived form the human face. Hence, we demand symmetry horizontally and in breadth only, not vertically nor in depth.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
When we encounter a natural style we are always surprised and delighted, for we thought to see an author and found a man.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Everything that is written merely to please the author is worthless.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
I cannot judge my work while I am doing it. I have to do as painters do, stand back and view it from a distance, but not too great a distance. How great? Guess.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Contradiction is not a sign of falsity, nor the lack of contradiction a sign of truth.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
All err the more dangerously because each follows a truth. Their mistake lies not in following a falsehood but in not following another truth.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Perfect clarity would profit the intellect but damage the will.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Those who are accustomed to judge by feeling do not understand the process of reasoning, because they want to comprehend at a glance and are not used to seeking for first principles. Those, on the
other hand, who are accustomed to reason from first principles do not understand matters of feeling at all, because they look for first principles and are unable to comprehend at a glance.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
To deny, to believe, and to doubt well are to a man as the race is to a horse.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Words differently arranged have a different meaning and meanings differently arranged have a different effect.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Nature is an infinite sphere of which the center is everywhere and the circumference nowhere.
Pensees. 1670.
We arrive at truth, not by reason only, but also by the heart.
Pensees. 1670.
When the passions become masters, they are vices.
Pensees. 1670.
Men despise religion; they hate it, and they fear it is true.
Pensees. 1670.
Religion is so great a thing that it is right that those who will not take the trouble to seek it if it be obscure, should be deprived of it.
Pensees. 1670.
It is not certain that everything is uncertain.
Pensees. 1670.
We are so presumptuous that we should like to be known all over the world, even by people who will only come when we are no more. Such is our vanity that the good opinion of half a dozen of the
people around us gives us pleasure and satisfaction.
Pensees. 1670.
The sole cause of man's unhappiness is that he does not know how to stay quietly in his room.
Pensees. 1670.
Reason's last step is the recognition that there are an infinite number of things which are beyond it.
Pensees. 1670.
Through space the universe grasps me and swallows me up like a speck; through thought I grasp it.
Pensees. 1670.
Let no one say that I have said nothing new... the arrangement of the subject is new. When we play tennis, we both play with the same ball, but one of us places it better.
Pensees. 1670.
The excitement that a gambler feels when making a bet is equal to the amount he might win times the probability of winning it.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Reason is the slow and tortuous method by which these who do not know the truth discover it. The heart has its own reason which reason does not know.
Pensees. 1670.
Reverend Fathers, my letters did not usually follow each other at such close intervals, nor were they so long.... This one would not be so long had I but the leisure to make it shorter.
Lettres provinciales.
The last thing one knows when writing a book is what to put first.
Pensees. 1670.
What is man in nature? Nothing in relation to the infinite, all in relation to nothing, a mean between nothing and everything.
Pensees. 1670.
[I feel] engulfed in the infinite immensity of spaces whereof I know nothing, and which know nothing of me, I am terrified The eternal silence of these infinite spaces alarms me.
Pensees. 1670.
Let us weigh the gain and the loss in wagering that God is. Let us consider the two possibilities. If you gain, you gain all; if you lose, you lose nothing. Hesitate not, then, to wager that He is.
Pensees. 1670.
Look somewhere else for someone who can follow you in your researches about numbers. For my part, I confess that they are far beyond me, and I am competent only to admire them.
[Written to Fermat]
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
The more I see of men, the better I like my dog.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
The more intelligent one is, the more men of originality one finds. Ordinary people find no difference between men.
Pensees. 1670.
However vast a man's spiritual resources, he is capable of but one great passion.
Discours sur les passions de l'amour. 1653.
There are two types of mind ... the mathematical, and what might be called the intuitive. The former arrives at its views slowly, but they are firm and rigid; the latter is endowed with greater
flexibility and applies itself simultaneously to the diverse lovable parts of that which it loves.
Discours sur les passions de l'amour. 1653.
Passano, L.M.
This trend [emphasizing applied mathematics over pure mathematics] will make the queen of the sciences into the quean of the sciences.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Pasteur, Louis
Chance favors only the prepared mind.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988
Pearson, Karl
The mathematician, carried along on his flood of symbols, dealing apparently with purely formal truths, may still reach results of endless importance for our description of the physical universe.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Peirce, Benjamin (1809-1880)
Mathematics is the science which draws necessary conclusions.
Memoir read before the National Academy of Sciences in Washington, 1870.
Peirce, Charles Sanders (1839-1914)
The one [the logician] studies the science of drawing conclusions, the other [the mathematician] the science which draws necessary conclusions.
"The Essence of Mathematics" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
...mathematics is distinguished from all other sciences except only ethics, in standing in no need of ethics. Every other science, even logiclogic, especiallyis in its early stages in danger of
evaporating into airy nothingness, degenerating, as the Germans say, into an arachnoid film, spun from the stuff that dreams are made of. There is no such danger for pure mathematics; for that is
precisely what mathematics ought to be.
"The Essence of Mathematics" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Among the minor, yet striking characteristics of mathematics, may be mentioned the fleshless and skeletal build of its propositions; the peculiar difficulty, complication, and stress of its
reasonings; the perfect exactitude of its results; their broad universality; their practical infallibility.
"The Essence of Mathematics" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The pragmatist knows that doubt is an art which hs to be acquired with difficulty.
Collected Papers.
Pedersen, Jean
Geometry is a skill of the eyes and the hands as well as of the mind.
Plato (ca 429-347 BC)
He who can properly define and divide is to be considered a god.
The ludicrous state of solid geometry made me pass over this branch. Republic, VII, 528.
He is unworthy of the name of man who is ignorant of the fact that the diagonal of a square is incommensurable with its side.
Mathematics is like checkers in being suitable for the young, not too difficult, amusing, and without peril to the state.
The knowledge of which geometry aims is the knowledge of the eternal.
Republic, VII, 52.
I have hardly ever known a mathematician who was capable of reasoning.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
There still remain three studies suitable for free man. Arithmetic is one of them.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Plutarch (ca 46-127)
[about Archimedes:]
... being perpetually charmed by his familiar siren, that is, by his geometry, he neglected to eat and drink and took no care of his person; that he was often carried by force to the baths, and when
there he would trace geometrical figures in the ashes of the fire, and with his finger draws lines upon his body when it was anointed with oil, being in a state of great ecstasy and divinely
possessed by his science.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Poe, Edgar Allen
To speak algebraically, Mr. M. is execrable, but Mr. G. is (x + 1)- ecrable.
[Discussing fellow writers Cornelius Mathews and William Ellery Channing.]
In N. Rose Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Poincaré, Jules Henri (1854-1912)
Mathematics is the art of giving the same name to different things.
[As opposed to the quotation: Poetry is the art of giving different names to the same thing].
Later generations will regard Mengenlehre (set theory) as a disease from which one has recovered.
[Whether or not he actually said this is a matter of debate amongst historians of mathematics.]
The Mathematical Intelligencer, vol 13, no. 1, Winter 1991.
What is it indeed that gives us the feeling of elegance in a solution, in a demonstration? It is the harmony of the diverse parts, their symmetry, their happy balance; in a word it is all that
introduces order, all that gives unity, that permits us to see clearly and to comprehend at once both the ensemble and the details.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Thus, be it understood, to demonstrate a theorem, it is neither necessary nor even advantageous to know what it means. The geometer might be replaced by the "logic piano" imagined by Stanley Jevons;
or, if you choose, a machine might be imagined where the assumptions were put in at one end, while the theorems came out at the other, like the legendary Chicago machine where the pigs go in alive
and come out transformed into hams and sausages. No more than these machines need the mathematician know what he does.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Talk with M. Hermite. He never evokes a concrete image, yet you soon perceive that the more abstract entities are to him like living creatures.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Science is built up with facts, as a house is with stones. But a collection of facts is no more a science than a heap of stones is a house.
La Science et l'hypothèse.
A scientist worthy of his name, about all a mathematician, experiences in his work the same impression as an artist; his pleasure is as great and of the same nature.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
The mathematical facts worthy of being studied are those which, by their analogy with other facts, are capable of leading us to the knowledge of a physical law. They reveal the kinship between other
facts, long known, but wrongly believed to be strangers to one another.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Mathematicians do not study objects, but relations between objects. Thus, they are free to replace some objects by others so long as the relations remain unchanged. Content to them is irrelevant:
they are interested in form only.
Thought is only a flash between two long nights, but this flash is everything.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The mind uses its faculty for creativity only when experience forces it to do so.
Mathematical discoveries, small or great, are never born of spontaneous generation. They always presuppose a soil seeded with preliminary knowledge and well prepared by labour, both conscious and
Absolute space, that is to say, the mark to which it would be necessary to refer the earth to know whether it really moves, has no objective existence.... The two propositions: "The earth turns
round" and "it is more convenient to suppose the earth turns round" have the same meaning; there is nothing more in the one than in the other.
La Science et l'hypothèse.
...by natural selection our mind has adapted itself to the conditions of the external world. It has adopted the geometry most advantageous to the species or, in other words, the most convenient.
Geometry is not true, it is advantageous.
Science and Method.
Poisson, Siméon (1781-1840)
Life is good for only two things, discovering mathematics and teaching mathematics.
Mathematics Magazine, v. 64, no. 1, Feb. 1991.
Polyá, George (1887, 1985)
Mathematics consists of proving the most obvious thing in the least obvious way.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
The traditional mathematics professor of the popular legend is absentminded. He usually appears in public with a lost umbrella in each hand. He prefers to face the blackboard and to turn his back to
the class. He writes a, he says b, he means c; but it should be d. Some of his sayings are handed down from generation to generation.
"In order to solve this differential equation you look at it till a solution occurs to you."
"This principle is so perfectly general that no particular application of it is possible."
"Geometry is the science of correct reasoning on incorrect figures."
"My method to overcome a difficulty is to go round it."
"What is the difference between method and device? A method is a device which you used twice."
How to Solve It. Princeton: Princeton University Press. 1945.
Mathematics is the cheapest science. Unlike physics or chemistry, it does not require any expensive equipment. All one needs for mathematics is a pencil and paper.
D. J. Albers and G. L. Alexanderson, Mathematical People, Boston: Birkhäuser, 1985.
There are many questions which fools can ask that wise men cannot answer.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
When introduced at the wrong time or place, good logic may be the worst enemy of good teaching.
The American Mathematical Monthly, v. 100, no. 3.
Even fairly good students, when they have obtained the solution of the problem and written down neatly the argument, shut their books and look for something else. Doing so, they miss an important and
instructive phase of the work. ... A good teacher should understand and impress on his students the view that no problem whatever is completely exhausted.
One of the first and foremost duties of the teacher is not to give his students the impression that mathematical problems have little connection with each other, and no connection at all with
anything else. We have a natural opportunity to investigate the connections of a problem when looking back at its solution.
How to Solve It. Princeton: Princeton University Press. 1945.
In order to translate a sentence from English into French two things are necessary. First, we must understand thoroughly the English sentence. Second, we must be familiar with the forms of expression
peculiar to the French language. The situation is very similar when we attempt to express in mathematical symbols a condition proposed in words. First, we must understand thoroughly the condition.
Second, we must be familiar with the forms of mathematical expression.
How to Solve It. Princeton: Princeton University Press. 1945.
Pope, Alexander (1688-1744)
Epitaph on Newton:
Nature and Nature's law lay hid in night:
God said, "Let Newton be!," and all was light.
[added by Sir John Collings Squire:
It did not last: the Devil shouting "Ho.
Let Einstein be," restored the status quo]
[Aaron Hill's version:
O'er Nature's laws God cast the veil of night,
Out blaz'd a Newton's souland all was light.
Order is Heaven's first law.
An Essay on Man IV.
See skulking Truth to her old cavern fled,
Mountains of Casuistry heap'd o'er her head!
Philosophy, that lean'd on Heav'n before,
Shrinks to her second cause, and is no more.
Physic of Metaphysic begs defence,
And Metaphysic calls for aid on Sense!
See Mystery to Mathematics fly!
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Pordage, Matthew
One of the endearing things about mathematicians is the extent to which they will go to avoid doing any real work.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Proclus Diadochus (412 - 485)
It is well known that the man who first made public the theory of irrationals perished in a shipwreck in order that the inexpressible and unimaginable should ever remain veiled. And so the guilty
man, who fortuitously touched on and revealed this aspect of living things, was taken to the place where he began and there is for ever beaten by the waves.
Scholium to Book X of Euclid V.
Purcell, E. and Varberg, D.
The Mean Value Theorem is the midwife of calculus -- not very important or glamorous by itself, but often helping to delivery other theorems that are of major significance.
Calculus with Analytic Geomety, fifth edition, Englewood Cliffs, NJ: Prentice Hall, 1987.
Pushkin, Aleksandr Sergeyevich (1799 - 1837)
Inspiration is needed in geometry, just as much as in poetry.
Quine, Willard Van Orman
Just as the introduction of the irrational numbers ... is a convenient myth [which] simplifies the laws of arithmetic ... so physical objects are postulated entities which round out and simplify our
account of the flux of existence... The conceptional scheme of physical objects is [likewise] a convenient myth, simpler than the literal truth and yet containing that literal truth as a scattered
In J. Koenderink Solid Shape, Cambridge Mass.: MIT Press, 1990.
Raleigh, [Sir} Walter Alexander (1861-1922)
In an examination those who do not wish to know ask questions of those who cannot tell.
Some Thoughts on Examinations.
Recorde, Robert (1557)
To avoide the tediouse repetition of these woordes: is equalle to: I will settle as I doe often in woorke use, a paire of paralleles, or gemowe [twin] lines of one lengthe: =, bicause noe .2.
thynges, can be moare equalle.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Reid, Thomas
It is the invaluable merit of the great Basle mathematician Leonard Euler, to have freed the analytical calculus from all geometric bounds, and thus to have established analysis as an independent
science, which from his time on has maintained an unchallenged leadership in the field of mathematics.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Renan, Ernest
The simplest schoolboy is now familiar with facts for which Archimedes would have sacrificed his life.
Souvenirs d'enfance et de jeunesse.
Rényi, Alfréd
If I feel unhappy, I do mathematics to become happy. If I am happy, I do mathematics to keep happy.
P. Turán, "The Work of Alfréd Rényi", Matematikai Lapok 21, 1970, pp 199 - 210.
Richardson, Lewis Fry (1881 - 1953)
Another advantage of a mathematical statement is that it is so definite that it might be definitely wrong; and if it is found to be wrong, there is a plenteous choice of amendments ready in the
mathematicians' stock of formulae. Some verbal statements have not this merit; they are so vague that they could hardly be wrong, and are correspondingly useless.
Mathematics of War and Foreign Politics.
Riskin, Adrian
(after Edna St. Vincent Millay)
...Euclid alone
Has looked on Beauty bare.
He turned away at once;
Far too polite to stare.
The Mathematical Intelligencer, V. 16, no. 4 (Fall 1994), p. 20.
Rivest, R., Shamir, A., and Adleman, L.
The magic words are squeamish ossifrage
[This sentence is the result when a coded message in Martin Gardner's column about factoring the famous number RSA-129 is decoded. See the article whose title is the above sentence by Barry Cipra,
SIAM News July 1994, 1, 12-13.]
Rohault, Jacques (17th century)
it was by just such a hazard, as if a man should let fall a handful of sand upon a table and the particles of it should be so ranged that we could read distinctly on it a whole page of Virgil's
Traité de Physique, Paris, 1671.
Rosenblueth, A
[with Norbert Wiener]
The best material model of a cat is another, or preferably the same, cat.
Philosophy of Science 1945.
Rosenlicht, Max (1949)
You know we all became mathematicians for the same reason: we were lazy.
Hugo Rossi
In the fall of 1972 President Nixon announced that the rate of increase of inflation was decreasing. This was the first time a sitting president used the third derivative to advance his case for
Mathematics Is an Edifice, Not a Toolbox, Notices of the AMS, v. 43, no. 10, October 1996.
Rota, Gian-carlo
We often hear that mathematics consists mainly of "proving theorems." Is a writer's job mainly that of "writing sentences?"
In preface to P. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
Russell, Bertrand (1872-1970)
How dare we speak of the laws of chance? Is not chance the antithesis of all law?
Calcul des probabilités.
Mathematics takes us into the region of absolute necessity, to which not only the actual word, but every possible word, must conform.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Although this may seem a paradox, all exact science is dominated by the idea of approximation.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
At the age of eleven, I began Euclid, with my brother as my tutor. This was one of the great events of my life, as dazzling as first love. I had not imagined there was anything so delicious in the
world. From that moment until I was thirty-eight, mathematics was my chief interest and my chief source of happiness.
The Autobiography of Bertrand Russell .
A good notation has a subtlety and suggestiveness which at times make it almost seem like a live teacher.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
If I were a medical man, I should prescribe a holiday to any patient who considered his work important.
The Autobiography of Bertrand Russell .
Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as
little as the physicist means to say.
The Scientific Outlook, 1931.
With equal passion I have sought knowledge. I have wished to understand the hearts of men. I have wished to know why the stars shine. And I have tried to apprehend the Pythagorean power by which
number holds sway about the flux. A little of this, but not much, I have achieved.
The Autobiography of Bertrand Russell .
At first it seems obvious, but the more you think about it the stranger the deductions from this axiom seem to become; in the end you cease to understand what is meant by it.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Calculus required continuity, and continuity was supposed to require the infinitely little; but nobody could discover what the infinitely little might be.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
The fact that all Mathematics is Symbolic Logic is one of the greatest discoveries of our age; and when this fact has been established, the remainder of the principles of mathematics consists in the
analysis of Symbolic Logic itself.
Principles of Mathematics. 1903.
A habit of basing convictions upon evidence, and of giving to them only that degree or certainty which the evidence warrants, would, if it became general, cure most of the ills from which the world
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
The method of "postulating" what we want has many advantages; they are the same as the advantages of theft over honest toil.
Introduction to Mathematical Philosophy, New York and London, 1919, p 71.
Aristotle maintained that women have fewer teeth than men; although he was twice married, it never occurred to him to verify this statement by examining his wives' mouths.
The Impact of Science on Society, 1952.
[Upon hearing via Littlewood an exposition on the theory of relativity:]
To think I have spent my life on absolute muck.
J.E. Littlewood, A Mathematician's Miscellany, Methuen and Co. ltd., 1953.
"But," you might say, "none of this shakes my belief that 2 and 2 are 4." You are quite right, except in marginal cases -- and it is only in marginal cases that you are doubtful whether a certain
animal is a dog or a certain length is less than a meter. Two must be two of something, and the proposition "2 and 2 are 4" is useless unless it can be applied. Two dogs and two dogs are certainly
four dogs, but cases arise in which you are doubtful whether two of them are dogs. "Well, at any rate there are four animals," you may say. But there are microorganisms concerning which it is
doubtful whether they are animals or plants. "Well, then living organisms," you say. But there are things of which it is doubtful whether they are living organisms or not. You will be driven into
saying: "Two entities and two entities are four entities." When you have told me what you mean by "entity," we will resume the argument.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than elsewhere. But I discovered that many mathematical
demonstrations, which my teachers expected me to accept, were full of fallacies, and that, if certainty were indeed discoverable in mathematics, it would be in a new field of mathematics, with more
solid foundations than those that had hitherto been thought secure. But as the work proceeded, I was continually reminded of the fable about the elephant and the tortoise. having constructed an
elephant upon which the mathematical world could rest, I found the elephant tottering, and proceeded to construct a tortoise to keep the elephant from falling. But the tortoise was no more secure
than the elephant, and after some twenty years of very arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable.
Portraits from Memory.
Men who are unhappy, like men who sleep badly, are always proud of the fact.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Work is of two kinds: first, altering the position of matter at or near the earth's surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and
ill paid; the second is pleasant and highly paid.
A sense of duty is useful in work but offensive in personal relations. Certain characteristics of the subject are clear. To begin with, we do not, in this subject, deal with particular things or
particular properties: we deal formally with what can be said about "any" thing or "any" property. We are prepared to say that one and one are two, but not that Socrates and Plato are two, because,
in our capacity of logicians or pure mathematicians, we have never heard of Socrates or Plato. A world in which there were no such individuals would still be a world in which one and one are two. It
is not open to us, as pure mathematicians or logicians, to mention anything at all, because, if we do so we introduce something irrelevant and not formal.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The desire to understand the world and the desire to reform it are the two great engines of progress.
Marriage and Morals.
It can be shown that a mathematical web of some kind can be woven about any universe containing several objects. The fact that our universe lends itself to mathematical treatment is not a fact of any
great philosophical significance.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966.
Rutherford, Ernest (1871-1937)
If your experiment needs statistics, you ought to have done a better experiment.
In N. T. J. Bailey the Mathematical Approach to Biology and Medicine, New York: Wiley, 1967.
Sanford, T. H.
The modern, and to my mind true, theory is that mathematics is the abstract form of the natural sciences; and that it is valuable as a training of the reasoning powers not because it is abstract, but
because it is a representation of actual things.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Santayana, George
It is a pleasant surprise to him (the pure mathematician) and an added problem if he finds that the arts can use his calculations, or that the senses can verify them, much as if a composer found that
sailors could heave better when singing his songs.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Sarton, G.
The main duty of the historian of mathematics, as well as his fondest privilege, is to explain the humanity of mathematics, to illustrate its greatness, beauty and dignity, and to describe how the
incessant efforts and accumulated genius of many generations have built up that magnificent monument, the object of our most legitimate pride as men, and of our wonder, humility and thankfulness, as
individuals. The study of the history of mathematics will not make better mathematicians but gentler ones, it will enrich their minds, mellow their hearts, and bring out their finer qualities.
Sayers, Dorothy L.
The biologist can push it back to the original protist, and the chemist can push it back to the crystal, but none of them touch the real question of why or how the thing began at all. The astronomer
goes back untold million of years and ends in gas and emptiness, and then the mathematician sweeps the whole cosmos into unreality and leaves one with mind as the only thing of which we have any
immediate apprehension. Cogito ergo sum, ergo omnia esse videntur. All this bother, and we are no further than Descartes. Have you noticed that the astronomers and mathematicians are much the most
cheerful people of the lot? I suppose that perpetually contemplating things on so vast a scale makes them feel either that it doesn't matter a hoot anyway, or that anything so large and elaborate
must have some sense in it somewhere.
With R. Eustace, The Documents in the Case, New York: Harper and Row, 1930, p 54.
Of all the intellectual faculties, judgment is the last to mature. A child under the age of fifteen should confine its attention either to subjects like mathematics, in which errors of judgment are
impossible, or to subjects in which they are not very dangerous, like languages, natural science, history, etc.
If you would make a man happy, do not add to his possessions but subtract from the sum of his desires.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988.
Shakespeare, William (1564 - 1616)
I cannot do it without comp[u]ters.
The Winter's Tale.
Though this be madness, yet there is method in't.
O God! I could be bounded in a nutshell, and count myself king of infinite space, were it not that I have bad dreams.
I am ill at these numbers.
Shaw, George Bernard (1856-1950)
Tyndall declared that he saw in Matter the promise and potency of all forms of life, and with his Irish graphic lucidity made a picture of a world of magnetic atoms, each atom with a positive and a
negative pole, arranging itself by attraction and repulsion in orderly crystalline structure. Such a picture is dangerously fascinating to thinkers oppressed by the bloody disorders of the living
world. Craving for purer subjects of thought, they find in the contemplation of crystals and magnets a happiness more dramatic and less childish than the happiness found by mathematicians in abstract
numbers, because they see in the crystals beauty and movement without the corrupting appetites of fleshly vitality.
Preface to Back to Methuselah.
Shaw, J. B.
The mathematician is fascinated with the marvelous beauty of the forms he constructs, and in their beauty he finds everlasting truth.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Simmons, G. F.
Mathematical rigor is like clothing; in its style it ought to suit the occasion, and it diminishes comfort and restrains freedom of movement if it is either too loose or too tight.
In The Mathematical Intelligencer, v. 13, no. 1, Winter 1991.
Slaught, H.E.
...[E.H.] Moore ws presenting a paper on a highly technical topic to a large gathering of faculty and graduate students from all parts of the country. When half way through he discovered what seemed
to be an error (though probably no one else in the room observed it). He stopped and re-examined the doubtful step for several minutes and then, convinced of the error, he abruptly dismissed the
meeting -- to the astonishment of most of the audience. It was an evidence of intellectual courage as well as honesty and doubtless won for him the supreme admiration of every person in the group --
an admiration which was in no wise diminished, but rather increased, when at a later meeting he announced that after all he had been able to prove the step to be correct.
In The American Mathematical Monthly, 40 (1933), 191-195.
Smith, Adam
I have no faith in political arithmetic.
Smith, David Eugene
One merit of mathematics few will deny: it says more in fewer words than any other science. The formula, e^ip = -1 expressed a world of thought, of truth, of poetry, and of the religious spirit "God
eternally geometrizes."
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Smith, Henry John Stephen (1826 - 1883)
[His toast:]
Pure mathematics, may it never be of any use to anyone.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
It is the peculiar beauty of this method, gentlemen, and one which endears it to the really scientific mind, that under no circumstance can it be of the smallest possible utility.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Somerville, Mary (1780-1872)
Nothing has afforded me so convincing a proof of the unity of the Deity as these purely mental conceptions of numerical and mathematical science which have been by slow degrees vouchsafed to man, and
are still granted in these latter times by the Differential Calculus, now superseded by the Higher Algebra, all of which must have existed in that sublimely omniscient Mind from eternity.
Martha Somerville (ed.) Personal Recollections of Mary Somerville, Boston, 1874.
Spengler, Oswald (1880 -1936)
The mathematic, then, is an art. As such it has its styles and style periods. It is not, as the layman and the philosopher (who is in this matter a layman too) imagine, substantially unalterable, but
subject like every art to unnoticed changes form epoch to epoch. The development of the great arts ought never to be treated without an (assuredly not unprofitable) side-glance at contemporary
The Decline of the West.
Steiner, G.
For all their wealth of content, for all the sum of history and social institution invested in them, music, mathematics, and chess are resplendently useless (applied mathematics is a higher plumbing,
a kind of music for the police band). They are metaphysically trivial, irresponsible. They refuse to relate outward, to take reality for arbiter. This is the source of their witchery.
The American Mathematical Monthly, v. 101, no. 9, November, 1994.
Steinmetz, Charles P.
Mathematics is the most exact science, and its conclusions are capable of absolute proof. But this is so only because mathematics does not attempt to draw absolute conclusions. All mathematical
truths are relative, conditional.
In E. T. Bell Men of Mathematics, New York: Simona and Schuster, 1937.
Sternberg, S.
Kepler's principal goal was to explain the relationship between the existence of five planets (and their motions) and the five regular solids. It is customary to sneer at Kepler for this. It is
instructive to compare this with the current attempts to "explain" the zoology of elementary particles in terms of irreducible representations of Lie groups.
Stewart, Ian
The successes of the differential equation paradigm were impressive and extensive. Many problems, including basic and important ones, led to equations that could be solved. A process of
self-selection set in, whereby equations that could not be solved were automatically of less interest than those that could.
Does God Play Dice? The Mathematics of Chaos. Blackwell, Cambridge, MA, 1989, p. 39.
Sullivan, John William Navin (1886 - 1937)
The mathematician is entirely free, within the limits of his imagination, to construct what worlds he pleases. What he is to imagine is a matter for his own caprice; he is not thereby discovering the
fundamental principles of the universe nor becoming acquainted with the ideas of God. If he can find, in experience, sets of entities which obey the same logical scheme as his mathematical entities,
then he has applied his mathematics to the external world; he has created a branch of science.
Aspects of Science, 1925.
Mathematics, as much as music or any other art, is one of the means by which we rise to a complete self-consciousness. The significance of mathematics resides precisely in the fact that it is an art;
by informing us of the nature of our own minds it informs us of much that depends on our minds.
Aspects of Science, 1925.
Sun Tze (5th - 6th century)
The control of large numbers is possible, and like unto that of small numbers, if we subdivide them.
Sun Tze Ping Fa.
Swift, Jonathan
If they would, for Example, praise the Beauty of a Woman, or any other Animal, they describe it by Rhombs, Circles, Parallelograms, Ellipses, and other geometrical terms ...
"A Voyage to Laputa" in Gulliver's Travels.
What vexes me most is, that my female friends, who could bear me very well a dozen years ago, have now forsaken me, although I am not so old in proportion to them as I formerly was: which I can prove
by arithmetic, for then I was double their age, which now I am not.
Letter to Alexander Pope. 7 Feb. 1736.
Sylvester, J.J. (1814 - 1897)
...there is no study in the world which brings into more harmonious action all the faculties of the mind than [mathematics], ... or, like this, seems to raise them, by successive steps of initiation,
to higher and higher states of conscious intellectual being....
Presidential Address to British Association, 1869.
So long as a man remains a gregarious and sociable being, he cannot cut himself off from the gratification of the instinct of imparting what he is learning, of propagating through others the ideas
and impressions seething in his own brain, without stunting and atrophying his moral nature and drying up the surest sources of his future intellectual replenishment.
[on graph theory...]
The theory of ramification is one of pure colligation, for it takes no account of magnitude or position; geometrical lines are used, but these have no more real bearing on the matter than those
employed in genealogical tables have in explaining the laws of procreation.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
Time was when all the parts of the subject were dissevered, when algebra, geometry, and arithmetic either lived apart or kept up cold relations of acquaintance confined to occasional calls upon one
another; but that is now at an end; they are drawn together and are constantly becoming more and more intimately related and connected by a thousand fresh ties, and we may confidently look forward to
a time when they shall form but one body with one soul.
Presidential Address to British Association, 1869.
The world of ideas which it [mathematics] discloses or illuminates, the contemplation of divine beauty and order which it induces, the harmonious connexion of its parts, the infinite hierarchy and
absolute evidence of the truths with which it is concerned, these, and such like, are the surest grounds of the title of mathematics to human regard, and would remain unimpeached and unimpaired were
the plan of the universe unrolled like a map at our feet, and the mind of man qualified to take in the whole scheme of creation at a glance.
Presidential Address to British Association, 1869.
I know, indeed, and can conceive of no pursuit so antagonistic to the cultivation of the oratorical faculty ... as the study of Mathematics. An eloquent mathematician must, from the nature of things,
ever remain as rare a phenomenon as a talking fish, and it is certain that the more anyone gives himself up to the study of oratorical effect the less will he find himself in a fit state to
Thales (CA 600 BC)
I will be sufficiently rewarded if when telling it to others you will not claim the discovery as your own, but will say it was mine.
In H. Eves In Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1969.
Thompson, D'Arcy Wentworth (1860-1948)
Cell and tissue, shell and bone, leaf and flower, are so many portions of matter, and it is in obedience to the laws of physics that their particles have been moved, moulded and conformed. They are
no exceptions to the rule that God always geometrizes. Their problems of form are in the first instance mathematical problems, their problems of growth are essentially physical problems, and the
morphologist is, ipso facto, a student of physical science.
On Growth and Form, 1917.
Thomson, [Lord Kelvin] William (1824-1907)
Fourier is a mathematical poem.
He is not a true man of science who does not bring some sympathy to his studies, and expect to learn something by behavior as well as by application. It is childish to rest in the discovery of mere
coincidences, or of partial and extraneous laws. The study of geometry is a petty and idle exercise of the mind, if it is applied to no larger system than the starry one. Mathematics should be mixed
not only with physics but with ethics; that is mixed mathematics. The fact which interests us most is the life of the naturalist. The purest science is still biographical.
The story was told that the young Dirichlet had as a constant companion all his travels, like a devout man with his prayer book, an old, worn copy of the Disquisitiones Arithmeticae of Gauss.
In G. Simmons Calculus Gems, New York: McGraw Hill Inc., 1992.
Tillotson, Archbishop
How often might a man, after he had jumbled a set of letters in a bag, fling them out upon the ground before they would fall into an exact poem, yea, or so much as make a good discourse in prose. And
may not a little book be as easily made by chance as this great volume of the world.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Titchmarsh, E. C.
Perhaps the most surprising thing about mathematics is that it is so surprising. The rules which we make up at the beginning seem ordinary and inevitable, but it is impossible to foresee their
consequences. These have only been found out by long study, extending over many centuries. Much of our knowledge is due to a comparatively few great mathematicians such as Newton, Euler, Gauss, or
Riemann; few careers can have been more satisfying than theirs. They have contributed something to human thought even more lasting than great literature, since it is independent of language.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
It can be of no practical use to know that Pi is irrational, but if we can know, it surely would be intolerable not to know.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Todhunter, Isaac (1820 - 1910)
[Asked whether he would like to see an experimental demonstration of conical refraction]
No. I have been teaching it all my life, and I do not want to have my ideas upset.
Tolstoy, [Count] Lev Nikolgevich (1828-1920)
A modern branch of mathematics, having achieved the art of dealing with the infinitely small, can now yield solutions in other more complex problems of motion, which used to appear insoluble. This
modern branch of mathematics, unknown to the ancients, when dealing with problems of motion, admits the conception of the infinitely small, and so conforms to the chief condition of motion (absolute
continuity) and thereby corrects the inevitable error which the human mind cannot avoid when dealing with separate elements of motion instead of examining continuous motion. In seeking the laws of
historical movement just the same thing happens. The movement of humanity, arising as it does from innumerable human wills, is continuous. To understand the laws of this continuous movement is the
aim of history . Only by taking an infinitesimally small unit for observation (the differential of history, that is, the individual tendencies of man) and attaining to the art of integrating them
(that is, finding the sum of these infinitesimals) can we hope to arrive at the laws of history.
War and Peace.
A man is like a fraction whose numerator is what he is and whose denominator is what he thinks of himself. The larger the denominator the smaller the fraction.
In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1989.
Truesdell, Clifford
This paper gives wrong solutions to trivial problems. The basic error, however, is not new.
Mathematical Reviews 12, p561.
Turgenev, Ivan Sergeievich (1818 - 1883)
Whatever a man prays for, he prays for a miracle. Every prayer reduces itself to this: 'Great God, grant that twice two be not four'.
Turnbull, H.W.
Attaching significance to invariants is an effort to recognize what, because of its form or colour or meaning or otherwise, is important or significant in what is only trivial or ephemeral. A simple
instance of failing in this is provided by the poll-man at Cambridge, who learned perfectly how to factorize a^2 - b^2 but was floored because the examiner unkindly asked for the factors of p^2 - q^2
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Ulam, Stanislaw
In many cases, mathematics is an escape from reality. The mathematician finds his own monastic niche and happiness in pursuits that are disconnected from external affairs. Some practice it as if
using a drug. Chess sometimes plays a similar role. In their unhappiness over the events of this world, some immerse themselves in a kind of self-sufficiency in mathematics. (Some have engaged in it
for this reason alone.)
Adventures of a Mathematician, Scribner's, New York, 1976.
Valéry, Paul (1871 - 1945)
In the physical world, one cannot increase the size or quantity of anything without changing its quality. Similar figures exist only in pure geometry.
van Vleck, E. B.
This new integral of Lebesque is proving itself a wonderful tool. I might compare it with a modern Krupp gun, so easily does it penetrate barriers which were impregnable.
Bulletin of the American Mathematical Society, vol. 23, 1916.
Veblen, Thorstein (1857-1929)
The outcome of any serious research can only be to make two questions grow where only one grew before.
The Place of Science in Modern Civilization and Other Essays.
Invention is the mother of necessity.
J. Gross, The Oxford Book of Aphorisms, Oxford: Oxford University Press, 1983.
Voltaire (1694-1778)
Vous avez trouve par de long ennuis
Ce que Newton trouva sans sortir de chez lui.
[Written to La Condamine after his measurement of the equator.]
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
He who has heard the same thing told by 12,000 eye-witnesses has only 12,000 probabilities, which are equal to one strong probability, which is far from certain.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
There are no sects in geometry.
W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1962.
Walton, Izaak
Angling may be said to be so like mathematics that it can never be fully learned.
The Compleat Angler, 1653.
Warner, Sylvia Townsend
For twenty pages perhaps, he read slowly, carefully, dutifully, with pauses for self-examination and working out examples. Then, just as it was working up and the pauses should have been more
scrupulous than ever, a kind of swoon and ecstasy would fall on him, and he read ravening on, sitting up till dawn to finish the book, as though it were a novel. After that his passion was stayed;
the book went back to the Library and he was done with mathematics till the next bout. Not much remained with him after these orgies, but something remained: a sensation in the mind, a worshiping
acknowledgment of something isolated and unassailable, or a remembered mental joy at the rightness of thoughts coming together to a conclusion, accurate thoughts, thoughts in just intonation, coming
together like unaccompanied voices coming to a close.
Mr. Fortune's Maggot.
Theology, Mr. Fortune found, is a more accommodating subject than mathematics; its technique of exposition allows greater latitude. For instance when you are gravelled for matter there is always the
moral to fall back upon. Comparisons too may be drawn, leading cases cited, types and antetypes analysed and anecdotes introduced. Except for Archimedes mathematics is singularly naked of anecdotes.
Mr. Fortune's Maggot.
He resumed:
"In order to ascertain the height of the tree I must be in such a position that the top of the tree is exactly in a line with the top of a measuring stickor any straight object would do, such as an
umbrellawhich I shall secure in an upright position between my feet. Knowing then that the ratio that the height of the tree bears to the length of the measuring stick must equal the ratio that the
distance from my eye to the base of the tree bears to my height, and knowing (or being able to find out) my height, the length of the measuring stick and the distance from my eye to the base of the
tree, I can, therefore, calculate the height of the tree."
"What is an umbrella?"
Mr. Fortune's Maggot.
Warren, Robert Penn (1905-)
What if angry vectors veer
Round your sleeping head, and form.
There's never need to fear
Violence of the poor world's abstract storm.
Lullaby in Encounter, 1957.
Weil, Andre (1906 - 1998)
Every mathematician worthy of the name has experienced ... the state of lucid exaltation in which one thought succeeds another as if miraculously... this feeling may last for hours at a time, even
for days. Once you have experienced it, you are eager to repeat it but unable to do it at will, unless perhaps by dogged work...
The Apprenticeship of a Mathematician.
God exists since mathematics is consistent, and the Devil exists since we cannot prove it.
In H. Eves Mathematical Circles Adieu, Boston: Prindle, Weber and Schmidt, 1977.
Weil, Simone (1909 - 1943)
Algebra and money are essentially levelers; the first intellectually, the second effectively.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
Weyl, Hermann (1885 - 1955)
Our federal income tax law defines the tax y to be paid in terms of the income x; it does so in a clumsy enough way by pasting several linear functions together, each valid in another interval or
bracket of income. An archeologist who, five thousand years from now, shall unearth some of our income tax returns together with relics of engineering works and mathematical books, will probably date
them a couple of centuries earlier, certainly before Galileo and Vieta.
The Mathematical Way of Thinking, an address given at the Bicentennial Conference at the University of Pennsylvania, 1940.
We are not very pleased when we are forced to accept a mathematical truth by virtue of a complicated chain of formal conclusions and computations, which we traverse blindly, link by link, feeling our
way by touch. We want first an overview of the aim and of the road; we want to understand the idea of the proof, the deeper context.
Unterrichtsblätter für Mathematik und Naturwissenschaften, 38, 177-188 (1932). Translation by Abe Shenitzer appeared in The American Mathematical Monthly, v. 102, no. 7 (August-September 1995), p.
A modern mathematical proof is not very different from a modern machine, or a modern test setup: the simple fundamental principles are hidden and almost invisible under a mass of technical details.
Unterrichtsblätter für Mathematik und Naturwissenschaften, 38, 177-188 (1932). Translation by Abe Shenitzer appeared in The American Mathematical Monthly, v. 102, no. 7 (August-September 1995), p.
The constructs of the mathematical mind are at the same time free and necessary. The individual mathematician feels free to define his notions and set up his axioms as he pleases. But the question is
will he get his fellow mathematician interested in the constructs of his imagination. We cannot help the feeling that certain mathematical structures which have evolved through the combined efforts
of the mathematical community bear the stamp of a necessity not affected by the accidents of their historical birth. Everybody who looks at the spectacle of modern algebra will be struck by this
complementarity of freedom and necessity.
My work has always tried to unite the true with the beautiful and when I had to choose one or the other, I usually chose the beautiful.
In an obituary by Freeman J. Dyson in Nature, March 10, 1956.
... numbers have neither substance, nor meaning, nor qualities. They are nothing but marks, and all that is in them we have put into them by the simple rule of straight succession.
"Mathematics and the Laws of Nature" in The Armchair Science Reader, New York: Simon and Schuster, 1959.
Without the concepts, methods and results found and developed by previous generations right down to Greek antiquity one cannot understand either the aims or achievements of mathematics in the last 50
[Said in 1950]
The American Mathematical Monthly, v. 100. p. 93.
Logic is the hygiene the mathematician practices to keep his ideas healthy and strong.
The American Mathematical Monthly, November, 1992.
Nobody since Newton has been able to use geometrical methods to the same extent for the like purposes; and as we read the Principia we feel as when we are in an ancient armoury where the weapons are
of gigantic size; and as we look at them we marvel what manner of man he was who could use as a weapon what we can scarcely lift as a burden.
In E. N. Da C. Andrade "Isaac Newton" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Whitehead, Alfred North
The science of pure mathematics ... may claim to be the most original creation of the human spirit.
Science and the Modern World.
The study of mathematics is apt to commence in disappointment....We are told that by its aid the stars are weighed and the billions of molecules in a drop of water are counted. Yet, like the ghost of
Hamlet's father, this greatest science eludes the efforts of our mental weapons to grasp it.
An Introduction to Mathematics
Mathematics as a science, commenced when first someone, probably a Greek, proved propositions about "any" things or about "some" things, without specifications of definite particular things.
So far as the mere imparting of information is concerned, no university has had any justification for existence since the popularization of printing in the fifteenth century.
The Aims of Education.
No Roman ever died in contemplation over a geometrical diagram.
[A reference to the death of Archimedes.]
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Life is an offensive, directed against the repetitious mechanism of the Universe.
Adventures of Ideas, 1933.
There is no nature at an instant.
Let us grant that the pursuit of mathematics is a divine madness of the human spirit, a refuge from the goading urgency of contingent happenings.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
There is a tradition of opposition between adherents of induction and of deduction. In my view it would be just as sensible for the two ends of a worm to quarrel.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
It is a profoundly erroneous truism, repeated by all copy books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise
opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.
An Introduction to Mathematics.
Our minds are finite, and yet even in these circumstances of finitude we are surrounded by possibilities that are infinite, and the purpose of life is to grasp as much as we can out of that
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
In modern times the belief that the ultimate explanation of all things was to be found in Newtonian mechanics was an adumbration of the truth that all science, as it grows towards perfection, becomes
mathematical in its ideas.
In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988.
Algebra reverses the relative importance of the factors in ordinary language. It is essentially a written language, and it endeavors to exemplify in its written structures the patterns which it is
its purpose to convey. The pattern of the marks on paper is a particular instance of the pattern to be conveyed to thought. The algebraic method is our best approach to the expression of necessity,
by reason of its reduction of accident to the ghostlike character of the real variable.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
Be relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and, in effect, increases the mental power of the race.
In P. Davis and R. Hersh The Mathematical Experience, Boston: Birkhäuser, 1981.
Everything of importance has been said before by somebody who did not discover it.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Seek simplicity, and distrust it.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
Fundamental progress has to do with the reinterpretation of basic ideas.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
We think in generalities, but we live in details.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
Apart from blunt truth, our lives sink decadently amid the perfume of hints and suggestions.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
"Necessity is the mother of invention" is a silly proverb. "Necessity is the mother of futile dodges" is much nearer the truth.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
It is more important that a proposition be interesting than that it be true. This statement is almost a tautology. For the energy of operation of a proposition in an occasion of experience is its
interest and is its importance. But of course a true proposition is more apt to be interesting than a false one.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
War can protect; it cannot create.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
The progress of Science consists in observing interconnections and in showing with a patient ingenuity that the events of this ever-shifting world are but examples of a few general relations, called
laws. To see what is general in what is particular, and what is permanent in what is transitory, is the aim of scientific thought.
An Introduction to Mathematics.
Through and through the world is infested with quantity: To talk sense is to talk quantities. It is not use saying the nation is large .. How large? It is no use saying the radium is scarce ... How
scarce? You cannot evade quantity. You may fly to poetry and music, and quantity and number will face you in your rhythms and your octaves.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
"One and one make two" assumes that the changes in the shift of circumstance are unimportant. But it is impossible for us to analyze this notion of unimportant change.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
I will not go so far as to say that to construct a history of thought without profound study of the mathematical ideas of successive epochs is like omitting Hamlet from the play which is named after
him. That would be claiming too much. But it is certainly analogous to cutting out the part of Ophelia. This simile is singularly exact. For Ophelia is quite essential to the play, she is very
charming ... and a little mad.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
In the study of ideas, it is necessary to remember that insistence on hard-headed clarity issues from sentimental feeling, as it were a mist, cloaking the perplexities of fact. Insistence on clarity
at all costs is based on sheer superstition as to the mode in which human intelligence functions. Our reasonings grasp at straws for premises and float on gossamers for deductions.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
Familiar things happen, and mankind does not bother about them. It requires a very unusual mind to undertake the analysis of the obvious.
Science and the Modern World.
Whitman, Walt (1819-1892)
Do I contradict myself? Very well then I contradict myself. (I am large, I contains multitudes).
Song of Myself, 1939.
When I heard the learn'd astronomer,
When the proofs, the figure, were ranged in columns before me,
When I was shown the charts and diagrams, to add, divide, and measure them,
When I sitting heard the astronomer where he lectured with much applause in the lecture room,
How soon unaccountable I became tired and sick,
Till rising and gliding out I wander'd off by myself,
In the mystical moist night-air, and from time to time,
Look'd up in perfect silence at the stars.
Wiener, Norbert (1894 - 1964)
A professor is one who can speak on any subject -- for precisely fifty minutes.
The modern physicist is a quantum theorist on Monday, Wednesday, and Friday and a student of gravitational relativity theory on Tuesday, Thursday, and Saturday. On Sunday he is neither, but is
praying to his God that someone, preferably himself, will find the reconciliation between the two views.
Progress imposes not only new possibilities for the future but new restrictions.
The Human Use of Human Beings.
The Advantage is that mathematics is a field in which one's blunders tend to show very clearly and can be corrected or erased with a stroke of the pencil. It is a field which has often been compared
with chess, but differs from the latter in that it is only one's best moments that count and not one's worst. A single inattention may lose a chess game, whereas a single successful approach to a
problem, among many which have been relegated to the wastebasket, will make a mathematician's reputation.
Ex-Prodigy: My Childhood and Youth.
Wilder, R. L.
There is nothing mysterious, as some have tried to maintain, about the applicability of mathematics. What we get by abstraction from something can be returned.
Introduction to the Foundations of Mathematics.
Mathematics was born and nurtured in a cultural environment. Without the perspective which the cultural background affords, a proper appreciation of the content and state of present-day mathematics
is hardly possible.
In The American Mathematical Monthly, March 1994.
William of Occam (1300-1439)
[Occam's Razor:]
Entities should not be multiplied unnecessarily.
Wilson, John (1741 - 1793)
A monument to Newton! a monument to Shakespeare! Look up to Heaven look into the Human Heart. Till the planets and the passions the affections and the fixed stars are extinguished their names cannot
Wittgenstein, Ludwig (1889-1951)
We could present spatially an atomic fact which contradicted the laws of physics, but not one which contradicted the laws of geometry.
Tractatus Logico Philosophicus, New York, 1922.
Mathematics is a logical method ... Mathematical propositions express no thoughts. In life it is never a mathematical proposition which we need, but we use mathematical propositions only in order to
infer from propositions which do not belong to mathematics to others which equally do not belong to mathematics.
Tractatus Logico Philosophicus, New York, 1922, p. 169.
There can never be surprises in logic.
In J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956.
The riddle does not exist. If a question can be put at all, then it can also be answered.
Tractatus Logico Philosophicus, New York, 1922.
Wordsworth, William (1770 - 1850)
[Mathematics] is an independent world
Created out of pure intelligence.
Wren, Sir Christopoher
In things to be seen at once, much variety makes confusion, another vice of beauty. In things that are not seen at once, and have no respect one to another, great variety is commendable, provided
this variety transgress not the rules of optics and geometry.
W.H. Auden and L. Kronenberger The Viking Book of Aphorisms, New York: Viking Press, 1966.
X, Malcom
I'm sorry to say that the subject I most disliked was mathematics. I have thought about it. I think the reason was that mathematics leaves no room for argument. If you made a mistake, that was all
there was to it.
Young, J. W. A.
Mathematics has beauties of its own -- a symmetry and proportion in its results, a lack of superfluity, an exact adaptation of means to ends, which is exceedingly remarkable and to be found only in
the works of the greatest beauty When this subject is properly ... presented, the mental emotion should be that of enjoyment of beauty, not that of repulsion from the ugly and the unpleasant.
In H. Eves Mathematical Circles Squared, Boston: Prindle, Weber and Schmidt, 1972.
Zeeman, E. Christopher (1925 - )
Technical skill is mastery of complexity while creativity is mastery of simplicity.
Catastrophe Theory, 1977. | {"url":"http://wordgems.net/math.furman.html","timestamp":"2024-11-11T14:02:29Z","content_type":"text/html","content_length":"235151","record_id":"<urn:uuid:e417b4fc-ff08-4a84-ae11-ca000d04e1f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00221.warc.gz"} |
Meet the 4093 (2nd edition) - Electrical e-Library.com
Digital electronics, Electronic components, Hobby, Projects
Meet the 4093 (2nd edition)
Pedro Ney Stroski / 08/10/2022
This is the post’s second edition. Additional information has been added.
This post’s subject is the 4093, it is an integrated circuit very simple and useful to build many electronic projects.
How it works?
The 4093 is a CMOS, a family of integrated circuits which have field effect transistors with a silicon oxide layer (in grey) and another with polysilicon (in white) in the transistor’s gate (G).
Below are field effect transistors MOSFETs in cross-section.
Inside 4093, there are 4 NAND logic gates with Schmitt Trigger, can be supplied with voltages from 3 to 15 V. Reminding that VSS is the ground or GND.
To see how the logic gate NAND works, click in the button below, will go to a post about combinational circuits.
Combinational circuitsClick here
What is Schmitt Trigger? It is a comparator circuit which serves to eliminate noises. When the input is above a determined value T, the output will be level high M or “1”, when stays below a certain
value -T, the output will be low level -M or “0”.
These are the Schmitt Trigger’s symbols, the above is the inverter and below is non-inverter.
This is the Schmitt Trigger’s circuit inside a chip, the input is linked to the NAND gate’s output. The Ps are the PMOS transistors and the Ns are NMOS.
Some applications
It is possible to build many circuit projects with this chip. This post only shows few applications.
• Square wave generator: This circuit produce a square wave. Can put a resistor instead a potentiometer, which serves to control a frequency band. Can change the values of capacitor and
potentiometer, or resistor, to get a different frequency.
• Light sensor: Just replace the resistor with a LDR and there is an oscillator whose frequency is controlled by light intensity.
One of logic gate’s input can receive a control signal to enable or disable the oscillator. If the control input is on low level, the circuit stops generating square wave. On high level, it keeps
generating the signal. Source: 4093 datasheet.
This is the frequency equation of this oscillator type.
is the output voltage value, in this case has the same value of supply voltage
is the positive limit which generates the high level in output and
is the negative limit which produces low level. The difference between
is the hysteresis voltage
. In 4093’s datasheet, these values are shown and depends on the temperature and voltage supply.
In this configuration, square wave’s width can be controlled by potentiometer P1.
• Touch sensor: When touch in the sensor, can be a metallic part or a bare wire, a LED is turned on.
• Monostable: When the switch SW1A is closed, the LED is turned on. When it is opened again, the LED stays turned on for a while. When higher the C3 and R5 values, longer will be the time the LED
is turned on.
Project example
This project is a combination of many circuits shown on this post, assembled in a single printed circuit board.
Component list:
• Wires.
• Printed circuit board.
• CD4093.
• 10 kΩ potentiometer.
• LDR.
• 47 nF polyester capacitor.
• 100 nF ceramic capacitor.
• 2 LEDs of any color.
• 220 μF electrolytic capacitor.
• Resistors: 2 of 220 Ω, 1 of MΩ and 1 of 22 kΩ.
Leave a Reply Cancel reply | {"url":"https://www.electricalelibrary.com/en/2022/10/08/meet-the-4093/","timestamp":"2024-11-10T03:11:50Z","content_type":"text/html","content_length":"100201","record_id":"<urn:uuid:bdf48b49-3506-44a1-a45f-e7a2a1b3a108>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00660.warc.gz"} |
1765 -- November Rain
November Rain
Time Limit: 5000MS Memory Limit: 65536K
Total Submissions: 2407 Accepted: 538
Case Time Limit: 2000MS
Contemporary buildings can have very complicated roofs. If we take a vertical section of such a roof it results in a number of sloping segments. When it is raining the drops are falling down on the
roof straight from the sky above. Some segments are completely exposed to the rain but there may be some segments partially or even completely shielded by other segments. All the water falling onto a
segment as a stream straight down from the lower end of the segment on the ground or possibly onto some other segment. In particular, if a stream of water is falling on an end of a segment then we
consider it to be collected by this segment.
For the purpose of designing a piping system it is desired to compute how much water is down from each segment of the roof. To be prepared for a heavy November rain you should count one liter of rain
water falling on a meter of the horizontal plane during one second.
Write a program that:
reads the description of a roof,
computes the amount of water down in one second from each segment of the roof,
writes the results.
The first line of the input contains one integer n (1 <= n < = 40000) being the number of segments of the roof. Each of the next n lines describes one segment of the roof and contains four integers
x1, y1, x2, y2 (0 <= x1, y1, x2, y2 < = 1000000, x1 < x2, y1<>y2) separated by single spaces. Integers x1, y1 are respectively the horizontal position and the height of the left end of the segment.
Integers x2, y2 are respectively the horizontal position and the height of the right end of the segment. The segments don't have common points and there are no horizontal segments. You can also
assume that there are at most 25 segments placed above any point on the ground level.
The output consists of n lines. The i-th line should contain the amount of water (in liters) down from the i-th segment of the roof in one second.
Sample Input
Sample Output
Central Europe 2003 | {"url":"http://poj.org/problem?id=1765","timestamp":"2024-11-13T23:12:59Z","content_type":"text/html","content_length":"7265","record_id":"<urn:uuid:23caa723-227f-4efe-836d-4ec8368a889e>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00312.warc.gz"} |
Stable Diffusion
Stable Diffusion is a deep learning, text-to-image generation model developed to produce high-quality images from text prompts. It leverages diffusion models, a class of generative models that
generate new data points by gradually transforming noise into structured data. Stable Diffusion is based on the latent diffusion model, an evolution in generative modeling that combines the
flexibility of diffusion with reduced computational demand through latent space transformation. The model is widely recognized for its efficiency in generating realistic, diverse, and high-resolution
images from minimal input.
Background and Key Components
Stable Diffusion is rooted in the theory of *diffusion models*, which originate from processes in statistical physics and probability theory. Diffusion models work by iteratively denoising random
noise to approximate a target data distribution. They are trained to reverse a gradual process of noise addition applied to training images, thereby learning to generate new images from noise. Stable
Diffusion is particularly distinguished by its use of a *latent diffusion* approach, enabling it to work effectively in a lower-dimensional latent space rather than directly in pixel space. This
significantly reduces the computational resources required without compromising image fidelity.
The model incorporates several components essential for understanding its operational structure:
1. Encoder-Decoder Architecture: Stable Diffusion employs a variational autoencoder (VAE) that first encodes high-dimensional images into a compressed latent space, allowing for efficient image
generation at a reduced scale. In generation, it decodes the latent representation back into a visual format, retaining essential details.
2. Text Encoder: A transformer-based text encoder, such as CLIP (Contrastive Language-Image Pretraining), maps textual prompts into embeddings that guide the image generation. This text-to-image
alignment allows Stable Diffusion to maintain semantic fidelity between the input description and generated image features.
3. Latent Diffusion Process: Stable Diffusion performs the diffusion process within the latent space rather than directly on the pixel space, optimizing memory and computation while preserving
visual quality. The diffusion process is defined by steps that add noise to the encoded latent representation, which is then systematically removed by the model to recreate a coherent image.
Diffusion Process in Detail
The fundamental mechanics of Stable Diffusion revolve around the denoising process, which is controlled by a pre-defined number of time steps, *T*. At each time step, a slight amount of Gaussian
noise is added to the image latent. Given the initial latent state as `z_0`, a sequence of noised representations `z_t` (for `t = 1, 2, …, T`) is generated by gradually increasing the noise until the
image becomes indistinguishable from pure noise at `z_T`.
The diffusion process can be mathematically represented as:
`z_t = sqrt(alpha_t) * z_(t-1) + sqrt(1 - alpha_t) * epsilon`
• `alpha_t` represents the noise scheduling coefficient controlling the amount of noise applied at each step.
• `epsilon` is the random noise drawn from a Gaussian distribution, progressively added to the image latent over the steps.
The model is trained to predict and remove this noise at each step, thereby reconstructing an image by moving in reverse through these noise-added representations.
Training and Inference
Stable Diffusion is trained on large datasets of images and text pairs to establish a probabilistic model of the data distribution, allowing it to learn how to represent and interpret various styles,
objects, and scenes. During training, the model learns to map from a noisy latent back to the original latent, effectively reversing the diffusion process by minimizing a loss function that
quantifies the difference between the original and denoised latents. The training objective can be formulated as:
`L = E_z, epsilon, t || epsilon - epsilon_theta(z_t, t, e) ||^2`
In this expression:
• `E` denotes the expectation across samples,
• `z` is the initial latent representation of the image,
• `epsilon` is the noise, and
• `epsilon_theta` is the model's predicted noise at each step *t*, with `theta` representing the model parameters.
Latent Space Transformation
Latent diffusion models operate in a lower-dimensional latent space, which is the result of encoding images into compact, informative representations. This transformation reduces the computational
complexity of diffusion while preserving essential image details. The benefits of operating within a latent space, particularly in the case of Stable Diffusion, include increased training and
generation efficiency, as fewer resources are required to process each step.
Conditional Image Generation
Stable Diffusion utilizes the transformer-based CLIP model for text-to-image alignment. CLIP embeddings generated from text inputs encode semantic information that guides the diffusion model to
produce images consistent with the textual description. The embeddings are integrated with the diffusion model at each denoising step, allowing text information to influence the denoising trajectory
and resulting image features.
The generation is conditional, meaning that the model tailors each step in the latent diffusion process according to the text embedding, enforcing that the final image aligns closely with the input
prompt. This text-conditioning allows for nuanced, prompt-specific alterations in the generated image.
During inference, a noised latent representation is progressively denoised using the learned model to reconstruct an image from an initial noisy state. The sampling process typically involves a
series of iterations based on the number of defined diffusion steps, each iteratively reducing the noise level and refining image details. By following this iterative denoising process, Stable
Diffusion transitions from a random noise latent state to an image that visually corresponds to the input text.
In summary, Stable Diffusion represents an advanced implementation of diffusion models, optimized for image generation through a combination of text encoding, efficient latent-space processing, and
iterative denoising. This model’s capability to produce high-quality images from brief textual prompts has contributed to its popularity in generative AI, particularly in applications requiring
substantial image fidelity and detail. Its unique structure allows it to be both computationally efficient and flexible across a wide range of visual styles and complexities. | {"url":"https://dataforest.ai/glossary/stable-diffusion","timestamp":"2024-11-04T00:49:04Z","content_type":"text/html","content_length":"109824","record_id":"<urn:uuid:ca67a99e-bc2d-4d99-9727-f799c5262976>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00184.warc.gz"} |
Why Should My Child Learn Vedic Math? -
Have you stumbled upon the term “Vedic Math” but never really understood the appeal behind it? Aren’t students supposed to learn only one form of math: the one taught in school? And isn’t that
Vedic math derives its principles from the ancient Indian scriptures called Vedas, which were taught by scholars. It is a scientific way devised by scholars to give you the ability to solve tedious
and cumbersome arithmetic problems mentally and get accurate results.
What makes Vedic maths easy to understand is the simplicity of its concepts. The ideas taught are easy to grasp and can be mastered by any student at any stage of their learning journey. Therefore,
It becomes very important to make children learn some of the Vedic maths tricks and concepts at an early stage of their learning to build a strong foundation for the child.
What is Vedic Math and why is it recommended for students to learn it?
Vedic Maths is a branch of mathematics that was developed thousands of years ago by the ancient scholars of India. Vedic Maths was discovered during extensive research on the Vedas: ancient Indian
texts written around 1500-900 BCE that contain a record of human experience and knowledge. Today, the learning system recognizes these as foundational knowledge of algebra, algorithm, square roots,
cube roots, various methods of calculation, and the concept of zero.
Vedic Maths teaches a collection of techniques to solve math problems in an easier and faster way. Learning these techniques not only helps students overcome their weak math skills but also helps
students with strong math skills strengthen their problem-solving ability. In fact, Some Scholars claim that using Vedic Maths tricks, you can do calculations 10-15 times faster than the commonly
known methods.
How can learning Vedic Math benefit kids?
Most students believe Mathematics to be their biggest nightmare during school years. If not resolved in their middle school years, math turns out to be one of the toughest subjects to crack later in
high school and even affects them in their years of growing career.
If your child is good at Math, learning Vedic Math tricks will give them an additional boost to their problem-solving skills. On the other hand, if your child dislikes math, learning the principles
of Vedic math can make math learning much easier for them.
Learning Vedic mathematics brings a ton of advantages to students:
□ Improves speed and accuracy when solving math problems. Why? Because Vedic math principles teach you to approach simple mathematical operations in a different way than what you’ve been taught
from school.
□ Improves your academic performance drastically: Basic mathematical calculations are not only used in math but also form the foundation for other subjects like Science.
□ Opens your mind to alternate ways of solving a problem
□ Vedic math is useful in a variety of subjects like Arithmetic, Algebra, Geometry, trigonometry, and Calculus.
□ Useful to students preparing for competitive and entrance examinations.
However, the benefits of Vedic Math are not only limited to academic growth. Vedic math inculcates other benefits that prove advantageous to students in other aspects of their lives. Other
non-academic advantages of learning Vedic Math include:
• Better speed and accuracy
• Increased mental agility, sharper mind, and improved intelligence
• Develops visualization and enhances concentration in kids
• Build logical thinking skills
• Confidence Booster
How can I get my child to start learning Vedic Math?
Vedic math is not considered a part of the school curriculum, yet it recommended to learn and practice its principles through separate courses or dedicated programs.
There are many sources through which you can start learning Vedic math tricks:
1. Books on Vedic Math for kids
If you’re someone who is just getting your kid started with Vedic Math, dedicated books can be a great way to acquaint them with the basic fundamentals of Vedic math tricks. Here is a curated list of
books that specifically teach Vedic maths for kids in a fun and interactive way.
2. Youtube Channels
There are plenty of videos on youtube which teach Vedic math tricks. For someone who has no idea about what Vedic math learning entails, check out a few youtube videos explaining Vedic math tricks
and applications.
3. Online Programs
While books and videos can give you an introductory idea about the principles of Vedic math, going for a dedicated course or program can ensure your child follows through with these principles till
the end.
Vedic math programs are taught by experienced teachers who not only introduce students to problem-solving tricks but also provide daily practice to help them grasp the concepts and effectively apply
We highly recommend enrolling your child in a dedicated online program on Vedic math to give them a strong foundation and help them learn math tricks quickly.
About the Talentnook Vedic Math Program
The Talentnook Vedic Math program is an 8-week interactive program for students who want to learn the speedy ancient system of Vedic Mathematics. With sessions twice a week conducted by our
experienced Vedic Math instructor Sanjay A, students will be introduced to various mental math methods and formulae to speed up their problem-solving ability. The course will contain interactive
lessons, projects, and math games to make Vedic math learning fun and easy to learn | {"url":"https://talentnook.com/learn-vedic-math","timestamp":"2024-11-09T06:11:56Z","content_type":"text/html","content_length":"211726","record_id":"<urn:uuid:50b84bff-d4b1-4aea-99ce-857e59a9cb46>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00658.warc.gz"} |
[GB] DSA Deep Dish Blank Black PBT Keycaps - Completed
Update: Leftovers availablehttp://geekhack.org/index.php?topic=44841.msg932106#msg932106
These keys are an alternative to the traditional homing bumps/nubs found on the F/J keys on most keyboards. Instead of a raised part, these keycaps are instead with a deeper center. If you can't
quite picture what I am saying, click
. This picture was posted by Matt3o in the DSA Retro group buy thread and is purely here to illustrate the difference between DSA deep dish and normal DSA keycaps.
The main target for this group buy is recent ErgoDox buyers that also purchased the Massdrop DSA keyset. That keyset comes without any home-row indicators, so we hope to remedy that with this group
This is going to be a small and quick group buy. Please PM me with how many keys you want, the address you want them shipped to, and your paypal email. I will send the invoices no later than midnight
on Wednesday, April 24th. The invoices must be paid by midnight on Thursday, April 25th because I wish to put in the order to SP on Friday. I realize this is a very short period of time to order and
then pay. I apologize if this causes an inconvenience, but the people in this group buy (myself included) want to get this over with as soon as possible. This means that you must be sure to include
all of your information with the PM and be ready to pay by Thursday. Mark your calendars!
Price breaks:
[DEL:<25 keys $6.00 each
25 keys $2.75 each
50 keys $1.75 each
100 keys $1.25 each
150 keys $1.00 each
200 keys $0.80 each:DEL]
300 keys $0.75 each
<------------- we are here[DEL:500 keys $0.65 each:DEL]
[DEL:The price of shipping will depend on the expected weight of the package and how far away you are from me (Texas). This will be calculated on a case-by-case basis.:DEL]
In my haste of doing this group buy I haven't really done a good job of planning ahead for shipping costs. For that I apologize.
In the mean time I am trying to get everything calculated so that I can charge the correct amount with shipping. For most orders (<=55 keycaps), this will mean a shipping cost of $10 (US) and $15
(international) via USPS. | {"url":"https://geekhack.org/index.php?topic=42617.0","timestamp":"2024-11-12T12:52:30Z","content_type":"application/xhtml+xml","content_length":"164581","record_id":"<urn:uuid:5ea4c906-9820-4f7f-85b2-671a755d5302>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00779.warc.gz"} |
Time Resolution and Data Rates
The default integration times for the various array configurations and frequency bands are as follows:
Table 3.6.1: Default Integration Times
Configurations Observing Default
Bands integration time
A, B, C, D 4 P 2 seconds
A L S C X Ku K Ka Q 2 seconds
B L S C X Ku K Ka Q 3 seconds
C, D X Ku K Ka Q 3 seconds
C, D L S C 5 seconds
Observations with the 3-bit (wideband) samplers, when applicable, should use these integration times. Observations with the 8-bit samplers may use shorter integration times, but these must be
requested and justified explicitly in the proposal, and obey the following restrictions:
Table 3.6.2: Minimum integration times and maximum data rates
Proposal type Minimum integration time Maximum data rate
General Observing (GO) 50 msec up to 60 MB/s (216 GB/hr)
Shared Risk Observing (SRO) 50 msec > 60 MB/s (216 GB/hour) and up to 100 MB/s (360 GB/hour)
Resident Shared Risk Observing (RSRO) < 50 msec > 100 MB/s (360 GB/hr)
Note that integration times as short as 5 msec and data rates as high as 300 MB/s can be supported for some observing, though any such observing is considered Resident Shared Risk Observing. For
these short integration times and high data rates there will be limits on bandwidth and/or number of antennas involved in the observation. Those desiring to utilize such short integration times and
high data rates should consult with NRAO staff.
The maximum recommended integration time for any VLA observing is 10 seconds.
Observers should bear in mind the data rate of the VLA when planning their observations. For N[ant] antennas and integration time Δt, the data rate^† is:
Data rate ~ 45 MB/sec × (N[chpol]/16384) × N[ant] × (N[ant] − 1)/(27×26) / (Δt/1 sec)
~ 160 GB/hr × (N[chpol]/16384) x N[ant] × (N[ant] − 1)/(27×26) / (Δt/1 sec)
~ 3.7 TB/day × (N[chpol]/16384) × N[ant] × (N[ant] − 1)/(27×26) / (Δt/1 sec)
Here N[chpol] is the sum over all subbands of spectral channels times polarization products:
N[chpol] = Σ[i] N[chan,i] × N[polprod,i]
where N[chan,i ] is the number of spectral channels in subband i, and N[polprod,i] is the number of polarization products for subband i (1 for single polarization [RR or LL], 2 for dual polarization
[RR and LL], 4 for full polarization products [RR, RL, LR, LL]). This formula, combined with the maximum data rates given above, imply that observations using the maximum number of channels currently
available (16384) will be limited to minimum integration times of ~2 seconds for standard observations, and 0.8 seconds for shared risk observations.
We note that frequency averaging in the correlator will reduce the total number of channels. Therefore, the data rate and the data volume will be reduced by the same channel averaging factor. See the
Chromatic Aberration section for more details on the frequency averaging in the correlator and to assess its impact on your science.
These data rates are challenging for transfer and analysis. Data may either be downloaded via ftp over the Internet, or shipped on hard drives for large data sets or for those with slow Internet
connections (please review the data shipping policy). For users whose science permits, the Archive Access Tool allows some level of frequency averaging in order to decrease data set sizes before ftp;
note that the full spectral resolution will be retained in the NRAO archive for all observations.
^†Note: The data rate formula given above does not account for the auto-correlations delivered by WIDAR. Precise data rate values can be obtained through the use of the Resource Catalog Tool for
proposing (RCT-proposing). | {"url":"https://science.nrao.edu/facilities/vla/docs/manuals/oss/performance/tim-res","timestamp":"2024-11-05T22:40:20Z","content_type":"application/xhtml+xml","content_length":"35717","record_id":"<urn:uuid:ffbb2438-a240-4b26-9100-769a073a7ed1>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00729.warc.gz"} |
DEM simulation and experimental study on the screening process of elliptical vibration mechanical systems
For an elliptical vibration system, the vibration parameters seriously affect conveying speed and sieving efficiency of the materials. In addition, considering the lack of studies about the
elliptical vibration machine, we applied the discrete element method to simulate and analyze the elliptical vibration screening process in this paper. A vibration screening model is particularly
based on the purpose of our research to fundamentally demonstrate the novel relationships among the conveying speed, sieving efficiency and vibration parameters of the materials. And the sieving
experiment of typical materials is additionally carried out on the same simulation system. This paper analyzes the influence rule of vibration parameters on conveying speed and sieving efficiency of
the materials during the elliptical vibration screening process by virtue of comprehensively comparing the results of experimental study coupled with simulation research. Consequently, we can throw
up the optimal vibration screening parameters to guarantee high sieving efficiency and large throughput of the screening machine at the same time. The screening test carried out in this paper lays
the experimental foundation for the study of the mechanism of elliptical vibration screening machine and the study of materials screening characteristics by combining the conclusions of DEM
simulation analysis of lots of materials. It provides not only a basis for selecting the vibration parameters of the actual working process of the screening machine, but also data support based on
the experiment and simulation for the study of the sieving mechanism of elliptical vibration systems. For screening mechanism, this is a significant progress which will affect future design and
manufacture of elliptical vibration machines. Furthermore, the conclusions drawn from this research can help us study and explain better the screening process of other vibration machinery.
• DEM is applied to simulate and analyze the elliptical vibration screening process in this paper
• The influence of different vibration parameters on conveying speed and sieving efficiency is analyzed
• This paper proposes the optimal vibration screening parameters to guarantee high sieving efficiency
• This paper provides a basis for selecting the vibration parameters of the actual working process
1. Introduction
In order to build a resource-saving and environment-friendly society, the development mode and production mode of machinery industry must take into full consideration the coordinated development of
economic benefits, environmental benefits and social benefits. We should attach importance to the research and the application of green and efficient technology so as to promote sustainable
development of machinery industry. Vibration machinery that we focus on in this paper is a screening machine widely used in mining, metallurgy, chemical industry, food and other industries. The
performance of vibrating screen directly affects the production efficiency and screening quality and has an important impact on enhancing the utilization rate of raw materials. The screening process
is very complicated, and it is subject to vibration parameters, technics parameters, materials properties and many other factors [1]. As for the research on vibration screening process, the methods
adopted by traditional research rely mainly upon screening experiments and personal experience. Moreover, it takes a very long time to finish the whole research. The research costs are so high that
it is hard to achieve a deeper study of the screening process. Discrete element method (DEM) is used as a numerical method to calculate the mechanical behavior of particle systems which can
accurately reflect the mechanism of the screening process and demonstrate the movement of materials on the screen surface and the behavior of the sieve. [2]. Therefore, discrete element method (DEM)
has become an important and effective method to study the screening process [3, 4]. At present, research on vibration screening process mainly focuses on linear vibration and circular vibration, and
there are relatively few studies on the elliptical vibration screening process. Elliptical vibration combines the advantages of circular vibration and linear vibration. Materials on the screen
surface can attain a higher conveying speed and elliptical vibration has better loose effects on materials layer, which can ensure high yield and screening quality simultaneously [5].
Based on the above facts, the discrete element method is used to analyze the elliptical vibration screening process, and the test bed of vibrating screen for sieving materials is particularly
established to carry out a typical materials sieving experiment for the same simulation system and study the screening process of elliptical vibration machinery. EDEM is the world's first universal
CAE software designed to simulate and analyze particle processing and particle production operations by using discrete element method. The core idea of EDEM is discrete element method (DEM). By
combining the advantages of the EDEM simulation and experiments, coupled with the comprehensive comparison of the results of simulation research and experimental research, we have analyzed the law of
influence of vibration parameters on the materials’ conveying speed and sieving efficiency during the elliptical vibration screening process. Hence, we can expound the screening process of elliptical
vibration machinery better and provide data support based on experiments and simulations at the same time. The screening model of the elliptical vibration machinery is built in Sections 2, and the
test bed of screening experiment is set in Section 3. Section 4 deals with DEM and presents the experiments to verify the accuracy of the modeling and the simulation. At last, conclusions are drawn.
2. The materials screening model of the elliptical vibration machinery
The discrete element method is a numerical method to study the physical structure and law of motion of discrete particulate. Different from the description of particles in the continuum theory which
is based on Elastic-Plastic Mechanics, the discrete element method is not based on the principle of minimum potential energy but rather the Newton’s second law of motion. The discrete element method
takes the contact theory of spherical particles presented by Hertz and Mindlin-Deresiewicz as a basis so as to realize numerical computation about particle movement and various micro behaviors by
using the simplified contact model of soft or hard sphere particles. Here is the brief introduction to the Hertz Contact Theory and the Mindlin-Deresiewicz Contact Theory [6].
(1) Hertz contact theory.
The Hertz Contact Theory assumes that the material is homogeneous, isotropic, and fully elastic; the friction of the contact surface is negligible, and the surface is the ideal smooth surface. Under
the above assumption, the Hertz Contact Theory can be established.
As shown in Fig. 1, when two spherical particles having a radius of ${R}_{1}$ and ${R}_{2}$ are in contact, the normal contact force ${F}_{n}$ corresponding to the amount of overlap $\alpha$ in the
normal direction can be calculated by the Hertz contact theory:
${F}_{n}=\frac{4}{3}{E}^{\mathrm{*}}{\left({R}^{\mathrm{*}}\right)}^{1/2}{\alpha }^{3/2},$
where ${R}^{\mathrm{*}}$ and ${E}^{\mathrm{*}}$ are the equivalent radius and equivalent Young’s modulus respectively, which are calculated by the following formula:
$\frac{1}{{E}^{\mathrm{*}}}=\frac{\left(1-{{\mu }_{1}}^{2}\right)}{{E}_{1}}+\frac{\left(1-{{\mu }_{2}}^{2}\right)}{{E}_{2}},$
where ${E}_{1}$ and ${E}_{2}$ represent the Young’s modulus of spherical particles with radius ${R}_{1}$ and radius ${R}_{2}$ respectively; ${\mu }_{1}$ and ${\mu }_{2}$ represent the Poisson’s ratio
of spherical particles with radius ${R}_{1}$ and radius ${R}_{2}$ respectively.
As seen in the Eq. (1) that in a time step, if the increment of the overlap between the two contact particles is $∆\alpha$, the corresponding normal contact force increment $∆{F}_{n}$ is:
where the contact radius $a=\sqrt{\alpha {R}^{\mathrm{*}}}$.
Fig. 1Schematic diagram of contact between two particles
(2) Mindlin-Deresiewicz contact theory.
The tangential force increment $∆T$ of the contact surface corresponding to the tangential displacement increment $∆\delta$ of the contact surface is:
$∆T=8a{G}^{*}{\theta }_{k}∆\delta +{\left(-1\right)}^{k}\mu \left(1-{\theta }_{k}\right)∆{F}_{n},$
where $k=$ 0, 1, 2 correspond to the case of loading, unloading and reloading respectively.
If $\left|∆T\right|$ < $\mu ∆{F}_{n}$ then ${\theta }_{k}=$ 1, if $\left|∆T\right|$ ≥ $\mu ∆{F}_{n}$ then:
${\theta }_{k}=\left\{\begin{array}{l}{\left(1-\frac{T+\mu ∆{F}_{n}}{\mu N}\right)}^{1/3},k=0,\\ {\left(1-\frac{{\left(-1\right)}^{k}\left(T-{T}_{k}\right)+2\mu ∆{F}_{n}}{2\mu N}\right)}^{1/3},k=1,2,
where $\mu$ is the surface friction coefficient between particles and ${F}_{n}$ is the normal contact force, and ${G}^{*}$ is the equivalent shear modulus:
${G}^{*}=\frac{\left(2-{\mu }_{1}\right)}{{G}_{1}}+\frac{\left(2-{\mu }_{2}\right)}{{G}_{2}},$
where ${G}_{1}$ and ${G}_{2}$ represent the shear modulus of spherical particles with radius ${R}_{1}$ and radius ${R}_{2}$ respectively and ${T}_{k}$ is the tangential contact force considering the
case of unloading or reloading, which is updated at each time step:
${T}_{k}={T}_{k}{-\left(-1\right)}^{k}\mu ∆{F}_{n},$
In this paper, a simulation model is established according to the method used in the research of Dong K. J and Yu A. B [7] and we have carried out the following four tasks: (1) A three-dimensional
simplified model with the same scale as the test bed is established so as to compare the simulation results and the screening experiment results; (2) The size of aperture of the steel perforated
sieve plate is the same as that used in the experiment; (3) The trajectory of screen surface is elliptical; (4) The size and properties of materials are the same with the experiment. The DEM model is
as shown in Fig. 2.
Fig. 2Vibrating screen simulation model
In order to ensure that the research results can provide an effective theoretical basis for actual production, this paper will guarantee the reliability of the model from the following aspects: (1)
EDEM algorithm as well as its reliability used to calculate motion and collision of discrete particle system have been verified by many scholars. (2) The scholars have verified the reliability of the
EDEM simulation model by experiments. (3) The adoption of model in this paper is based on the research results of Dong and Yu as well as the real parameters from Jahani M. [8]. The simulation
parameters used in this paper are shown in Table 1.
Table 1Simulation conditions and materials parameters
Name of parameter Parameter value
Length (mm) 1000
Width (mm) 340 (Periodic boundary conditions)
Diameter of aperture (mm) $d=$ 10
Height of inlet (mm) 50
Vibration frequency, $f$ (Hz) 11-15 (600-900 rpm)
Name of parameter Parameter value
Trajectory Oval
Materials density (kg/m^3) Particle: 1400 screen: 7800
Poisson’s ratio 0.3
Young’s modulus (Pa) Particle: 10^7 Screen: 2.1×10^11
Recovery coefficient 0.3
Sliding friction coefficient 0.5
Rolling friction coefficient 0.01
Simulation step size (s) 5×10^-3
The simulation model adopts the Hertz-Mindlin Soft-Sphere dry contact to simulate the materials collision process [9, 10]. The reliability of this collision model has been verified by many
researchers and applied to the vibration screening simulation.
Fig. 3Simulation on the screening process
As shown in Fig. 3 for the screening simulation process, the particle factory produces particles with different size at a constant speed. The purple particles in the screening experiment cannot be
sieved. The blue particles are corresponding to particles which are difficult to be sieved. And the green particles correspond to the particles which are easy to be sieved [11]. The materials waiting
for screening fall to the screen surface by gravity, and then pile up the materials layer. And the materials are layered under the force of the motion of the screen, the particles which are difficult
to be sieved continuously moves to the outlet. Finally, the number of particles at the inlet and outlet as well as the sieving quantity will achieve dynamic equilibrium [12]. Since the actual
production focuses on the dynamic equilibrium, the simulation data in this paper are all collected from the dynamic equilibrium state of the materials on the screen.
3. Construction of sieving experimental test bed
This paper has pursued a research on the elliptical vibration screening process. According to the principle of self-synchronization, the elliptical test bed of self-synchronization driven by dual
machines with unequal mass and diameter product was built, as shown in Fig. 4. The test bed chose two three-phase asynchronous 4-stage vibration motors whose excitation force ratio is 2:1 and the two
vibration motors are installed at 45° in the screen body.
Fig. 4Elliptical sieve vibration experiment system
According to the force center theory [9], we can adjust the position of the two vibration motors in order to make the center of force coincides approximately with the center of mass of the screen
body. The rotation speed of the two vibration motors is controlled by a converter. The vibration detection system consists of a signal acquisition instrument and sensors which measure the vertical
and horizontal vibration of the screen body respectively. Acceleration sensors will measure the acceleration value and then obtain the corresponding displacement value by quadratic integral. The
horizontal and vertical vibration signals are respectively used as the $X$-axis and $Y$-axis in the plane coordinate system to synthesize the trajectory of test bed [13].
After the experiment, the elliptical vibration test bed can achieve stable elliptical trajectory which is as shown in Fig. 4. The natural frequency of the elliptical vibrating screen is 3.8 Hz. When
the vibration frequency exceeds 10 Hz and the experimental platform works in the resonance region, the test bed can meet the requirements of subsequent experiments.
As shown in Fig. 5, the sieve experiment adopts the circular steel perforated sieve plate whose diameter is 10.00 mm, and the parameters of the sieve plate are the same as the simulation parameters.
The sieve experiment selects three kinds of spherical particles made of PC plastic with different sizes as screening objects. As shown in Fig. 6, the diameters of the three particles are 13.70 mm,
9.70 mm and 5.80 mm respectively. They respectively represent the obstructed particles (purple, pink particles), the particles difficult to be sieved (white, blue particles) and the particles easy to
be sieved (green, yellow particles). Finally, the three types of particles are mixed at the same mass ratio.
On the basis of the elliptical test bed of self-synchronization driven by dual machines, the sieve plate is installed at the bottom and the collection box of obstructed particles is placed at the
outlet. The collection box of easy-to-be-sieved particles is placed under the sieve plate. A screening experimental set-up is as shown in Fig. 7.
Fig. 6Screening experiment materials
Fig. 7Screening experiment equipment
This biaxial elliptical vibrating test bed has the functions of adjusting screen surface inclination, vibration frequency, amplitude and vibration angle, etc. The adjustment of vibration angle is
realized by the frequency conversion technology, and Fig. 8 is the schematic diagram of adjustment of vibration angle.
Fig. 8The adjustment of vibration angle
Fig. 9 shows the states of materials before and after the screening experiment. Fig. 9(a) represents the mixed materials of three different sizes but equal mass before sieving, Fig. 9(b) represents
the materials in the outlet, and Fig. 9(c) represents the screened materials. The screening process and the results show that: Most of the easy-to-be-sieved particles are quickly separated during the
screening process and gradually decrease from inlet to outlet. The difficult-to-be-sieved particles are continuously separated along the direction of conveying materials. At the end of the screen
plate, the easy-to-be-sieved particles are basically separated through sieving so the contact opportunities between the difficult-to-be-sieved particles and the screen plate increased and the
probability of sieving at the end of the screen plate increased correspondingly.
During the sieving experiment, in order to reduce the experimental error, we repeated the experiment for five times and take the average of the experimental results for the same group of screening
Fig. 9The materials’ state before and after the screening experiment
a) The materials from inlet
b) The materials from outlet
c) The materials after sieving
4. Analysis of DEM simulation and screening experiment
In order to study the law of particle movement and sieving under the mode of elliptical vibration trajectory, we have carried out the screening experiment and DEM simulation [14, 15] to study the
influence of screen surface inclination, vibration angle, vibration frequency and amplitude on the conveying speed and sieving efficiency of materials during the screening process.
4.1. The evaluation index on screening quality
(1) Sieving efficiency.
The particles used in the experiment and simulation in this paper are spherical. For spherical particles, the particles which are larger than the size of the screen aperture will not pass through it.
Hence, the equation applied for calculating screening efficient is as follows:
$\eta =\frac{{m}_{Sp}}{{m}_{St}}×100\mathrm{}\mathrm{%},$
where ${m}_{Sp}$ represents the total mass of the sieved particles in the collection box under the sieve plate whose size are smaller than the screen aperture and ${m}_{St}$ represents the total mass
of particles in the inlet whose size are smaller than the screen aperture.
(2) Conveying speed of the materials.
For the simulation model, this paper adopts the data analysis module of EDEM to count the forward conveying speed of particles which are on the screen in the horizontal direction. By extracting the
average speed of the particles on different stages of the screen, we can obtain the average value of the conveying speed of particles, as shown in Fig. 10.
Fig. 10Statistics results of the conveying speed of materials in simulation model
During the process of screening experiment, it is impossible to adopt the method of simulation analysis to calculate the conveying speed of materials. Therefore, the method to extract the conveying
speed of materials during the experiment is as follows: As shown in Fig. 10, 40 pink particles with the same diameters as the purple particles are used to act as the obstructed particles. During the
screening process, the pink particles are poured into the materials inlet. We use a stopwatch to record the time of every particle moving from the materials inlet to the materials outlet. The
screening experiment is repeated five times in each group to obtain the average time of the pink particles. According to Eq. (10), the conveying speed of materials is calculated during the screening
where $L$ represents the length of sieve plate and $\overline{t}$ represents the average time of the pink particles moving from materials inlet to materials outlet.
4.2. The screening process experiment and simulation results
4.2.1. The influence of screen surface inclination on the screening process
Under the condition that the amplitude is 4 mm, the vibration angle is 45° and vibration frequency is 14 Hz, this paper has studied the influence of the screen surface inclination on the conveying
speed and the sieving efficiency of materials. For inertial vibrating screen, the screen surface inclination is generally less than 10°, therefore, this paper aims at studying the screen surface
inclination which changes in the range of 0°-10°.
(1) The influence of the screen inclination on the conveying speed of materials.
Fig. 11 shows the relationship between conveying speed of materials and screen surface inclination. It can be seen from the figure that the conveying speed of the materials increases with the
increase of screen surface inclination. Therefore, the materials capacity is enhanced with the increase of screen surface inclination, which is beneficial to the discharge of materials.
(2) The influence of the screen inclination on the sieving efficiency.
Fig. 12 shows the relationship between the sieving efficiency and screen surface inclination. As we can see from the figure, the sieving efficiency decreases with the increase of the screen surface
inclination. This is mainly due to the fact that effective size of screen aperture reduced with the increase of the screen surface inclination. It is harmful to the screening of
difficult-to-be-sieved particles. Consequently, the conveying speed of materials accelerates with the increase of the screen surface inclination, but the materials discharged outside without fully
sieved. This leads to insufficient sieving.
Fig. 11Relationship between the conveying speed of materials and screen surface inclination
Fig. 12Relationship between sieving efficiency angle and screen surface inclination
4.2.2. The influence of vibration angle on the screening process
Under the condition that the amplitude is 4 mm, the screen surface inclination is 0° and vibration frequency is 14 Hz, this paper focuses on the influence of vibration angle on the conveying speed
and the sieving efficiency of materials. In practical production and application, vibration angle generally changes from 25° to 60°. Hence, the research aims at studying the vibration angle which
changes in the range of 25°-60°.
(1) The influence of vibration angle on the conveying speed of materials.
Fig. 13 shows the relationship between the conveying speed of materials and vibration angle. It can be seen from the figure that when vibration angle increases from 25° to 60°, the conveying speed of
materials increases first and then decreases with the increase of vibration angle. The reasons are as follows: when the vibration angle is less than 40°, the angle between the long axis of the
elliptical vibration trajectory and the screen surface is so smaller that the materials cannot be fully thrown out, materials will reciprocate on the sieve plate which results in a relatively small
conveying speed of materials. This reciprocating motion improved as the vibration angle increased from 25° to 40°. So, when the vibration angle is between 20° and 40°, the conveying speed of
materials increases as the angle increases. When the vibration angle is more than 40°, the angle between the long axis of the elliptical vibration trajectory and the screen surface enlarges, which
leads to the enhancement of the loose effects of materials and the attenuation of conveying speed.
(2) The influence of vibration direction angle on sieving efficiency.
Fig. 14 shows the relationship between the sieving efficiency and the vibration angle. It can be seen from the figure that the sieving efficiency increases with the increase of the vibration angle.
However, when the vibration angle increases more than 50°, the sieving efficiency increases a little with the increase of the vibration angle. The reasons are as follows: With the increase of the
vibration angle, the loose effect of materials enhances and the opportunity for the materials to contact with the screen surface increases. When vibration angle exceeds 50°, it is difficult for the
difficult-to-be-sieved particles to contact with the screen surface because the obstructed particles are not easy to discharge quickly, which leads to a slow growth in sieving efficiency.
Fig. 13Relationship between materials conveying speed and vibration angle
Fig. 14Relationship between sieving efficiency and vibration angle
4.2.3. The influence of vibration frequency on the screening process
Under the condition that the amplitude is 4 mm, screen surface inclination is 0° and vibration angle is 45°, this paper has studied the influence of vibration frequency on the conveying speed and
sieving efficiency of the materials. Practical production and applications generally select the vibration frequency which changes from 11 Hz to 16 Hz. Hence, the research aims at studying the
vibration frequency which changes from 11 Hz to 16 Hz in this paper.
(1) The influence of vibration frequency on conveying speed of materials.
Fig. 15 shows the relationship between the conveying speed and vibration frequency of materials. It can be seen from the figure that the materials’ conveying speed increases linearly with the
increase of the vibration frequency when the vibration frequency increases from 11 Hz to 16 Hz.
(2) The influence of the vibration frequency on sieving efficiency.
Fig. 16 shows that the relationship between the sieving efficiency and the vibration frequency. It can be seen from the figure that when the vibration frequency is less than 14 Hz, the sieving
efficiency increases slowly with the increase of the vibration frequency, and the sieving efficiency decreases when the vibration frequency exceeds 14 Hz. The reasons are as follows: The collision
among particles will be fiercer if the vibration frequency is large enough. The acceleration of materials’ conveying speed leads to the reduction of contact opportunities between materials and sieve
plate. Hence, we can get the conclusion that the sieving efficiency for easy-to-be-sieved materials and difficult-to-be-sieved materials decreases.
Fig. 15Relationship between conveying speed of materials and vibration frequency
Fig. 16Relationship between sieving efficiency and vibration frequency
4.2.4. The influence of amplitude on the screening process
Under the condition that the vibration frequency is 14 Hz, the screen surface inclination is 0°and the vibration angle is 45°, this paper has studied the influence of amplitude on the conveying speed
and sieving efficiency of the materials. In practical production and applications, the amplitude is generally not more than 10 mm. Hence, the research focuses on the amplitude which changes from 2 mm
to 6 mm.
(1) The influence of amplitude on the materials’ conveying speed.
Fig. 17 shows the relationship between the conveying speed of the materials and amplitude. It can be seen from the figure that the materials’ conveying speed increases with the increase of amplitude.
The larger the amplitude is, the farther the materials are thrown. So, the opportunities of random collisions among particles reduce, which makes the materials move forward quickly.
(2) The influence of the amplitude on the sieving efficiency.
Fig. 18 shows the relationship between the sieving efficiency and the amplitude. Under the condition that the amplitude is less than 5 mm, the sieving efficiency increases slowly with the increase of
amplitude. On the contrary, the sieving efficiency decreases when the amplitude exceeds 5 mm. The reasons are as follows: The larger the amplitude is, the farther the materials are thrown away. The
materials’ conveying speed accelerates and the opportunities for the materials to contact with the sieve plate reduce. Thus, the probability of being sieved through for the easy-to-be-sieved
materials and difficult-to-be-sieved materials decreases, which leads to the reduction of the sieving efficiency.
Fig. 17Relationship between materials’ conveying speed and amplitude
Fig. 18Relationship between sieving efficiency and amplitude
4.2.5. The analysis of differences between simulation and experimental results
Combining the above experimental results with the simulation results, we can know: for the materials’ conveying speed, the values of the simulation results are higher than that of the experimental
results. However, for the sieving efficiency, the values of simulation results are lower than that of the experimental results. But the trends of the simulation results are consistent with the
experimental results. The main reasons are as follows:
1) It can be assumed that no elastic deformation has occurred for the sieve plate in the EDEM simulation model. However, owing to the fact that the sieve plate installed in the test bed is affected
by factors such as processing and installation, the flatness of the screen surface has also changed. And it may lead to the reduction of the materials’ conveying speed. Also, it increases the time
for materials to contact with the sieve plate. As a result, the sieving efficiency increases;
2) The screen can achieve translational elliptical motion trajectory in EDEM simulation. However, owing to the fact that the screening experimental device is affected by factors such as motor
installation location and installation accuracy, the motion trajectory of each position of the screen is not exactly same;
3) During the process of simulation, the materials are uniformly generated at a constant rate from the particles factory. However, it is difficult for materials to be generated uniformly at a
constant rate in the practical experiment.
5. Conclusions
Considering the lack of studies about the elliptical vibrating machine, we adopt the discrete element method to simulate and analyze the screening process of the elliptical vibrating machine. A
vibration screening model is particularly established on the purpose of our research to fundamentally demonstrate the novel relationships among the conveying speed, sieving efficiency and vibration
parameters of the materials.
This paper has carried out EDEM simulation and screening experiment, and has investigated the influence of four vibration parameters on the screening process by controlling the single variable
method. The four parameters include the screen surface inclination, vibration angle, vibration frequency and amplitude. We obtain the following conclusions:
1) The establishment of relationship curves among the parameters, the materials’ conveying speed and the sieving efficiency provides data support based on experiment and simulation for the research
on the mechanism of the elliptical vibration machinery.
2) It should be noted that each individual sieving parameter has the best conveying speed and sieving efficiency. When the screen surface inclination changes from 4° to 7°, the vibration angle varies
from 35° to 50°, the vibration frequency alters from 12 Hz to 15 Hz and the amplitude changes from 4 mm to 5 mm, we can ensure higher conveying speed and sieving efficiency of materials
simultaneously. Furthermore, the results also provide a reference for the selection of sieving parameters.
• Makinde O. A., Ramatsetse B. I., Mpofu K. Review of vibrating screen development trends: Linking the past and the future in mining machinery industries. International Journal of Mineral
Processing, Vol. 145, 2015, p. 17-22.
• Zhu H. P., Zhou Z. Y., Yang R. Y., Yu A. B. Discrete particle simulation of particulate systems: theoretical developments. Chemical Engineering Science, Vol. 63, Issue 2, 2008, p. 5728-5770.
• Li J., Webb C., Pandiella S. S. Discrete particle motion on sieves-A numerical study using the DEM simulation. Powder Technology, Vol. 133, Issues 1-3, 2003, p. 190-202.
• Xiao J., Xin T. Particle stratification and penetration of a linear vibrating screen by the discrete element method. International Journal of Mining Science and Technology, Vol. 22, Issue 3,
2001, p. 357-362.
• Cleary P. W., Sinnott M. D., Morrison R. D. Separation performance of double deck banana screens – Part 1: Flow and separation for different accelerations. Minerals Engineering, Vol. 22, Issue
14, 2009, p. 1218-1229.
• Sun Q. C., Wang G. Q. Review of particle flow dynamics and its discrete model. Advances in Mechanics, Vol. 1, 2008, p. 88-100.
• Dong K. J., Yu A. B. Numerical simulation of the particle flow and sieving behaviour on sieve bend/low head screen combination. Minerals Engineering, Vol. 31, Issue 4, 2012, p. 2-9.
• Jahani M., Farzanegan A., Noaparast M. Investigation of screening performance of banana screens using LIGGGHTS DEM solver. Powder Technology, Vol. 283, 2015, p. 32-47.
• Cleary P. W., Morrison R. D. Particle methods for modelling in mineral processing. International Journal of Computational Fluid Dynamics, Vol. 23, Issue 2, 2009, p. 137-146.
• Mindlin R. D., Deresiewicz H. Elastic spheres in contact under varying oblique forces. Journal of Applied Mechanics, Vol. 20, Issue 3, 1953, p. 327-344.
• Iwashita K., Oda M. Rolling resistance at contacts in simulation of shear band development by DEM. Journal of Engineering Mechanics, Vol. 124, Issue 3, 1998, p. 285-292.
• Fernandez J. W., Cleary P. W., Sinnott M. D., et al. Using SP Hone-way coupled to DEM to model wet industrial banana screens. Minerals Engineering, Vol. 24, Issue 4, 2011, p. 741-753.
• Delaney G. W., Cleary P. W., Hilden M. Testing the validity of the spherical DEM model in simulating real granular the screening processes. Chemical Engineering Science, Vol. 68, Issue 1, 2012,
p. 215-226.
• Zhao La-La, Zhao Yun-Min, Liu Chu-Sheng, et al. Simulation of the screening process on a circularly vibrating screen using 3D-DEM. Mining Science and Technology, Vol. 21, Issue 5, 2011, p.
• Zhao La-La, Liu C. S., Yan J. X., et al. Numerical simulation of particle the screening process based on 3D discrete element method. Journal of the China Coal Society, Vol. 35, Issue 2, 2010, p.
About this article
Mechanical vibrations and applications
elliptical vibration system
discrete element method
screening experiment
This work is supported by the National Key R&D Plan of China (Grant No. 2016YFC0802706-01) and University of Science and Technology Beijing. Useful discussions with Professor Zhong-Jun Yin and
Associate Professor Zhi-Hui Sun, in the University of Science and Technology Beijing; the CEO, Mr. You-Peng Xiao and the Senior Engineer, Mr. Zhi-Gen Qian, in the Nantong Lianyuan Electrical and
Mechanical Technology Co., Ltd. are also gratefully acknowledged.
Author Contributions
Bing Chen carried out the concepts, design, definition of intellectual content, literature search, data acquisition, data analysis and manuscript preparation. Wei Mo and Chuanlei Xu built the test
bed to verify results of simulation. Lijie Zhang and Chang Liu provided assistance for simulation analysis, data acquisition and data analysis. Kumar K. Tamma performed manuscript review. All authors
have read and approved the content of the manuscript.
Copyright © 2019 Bing Chen, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/19993","timestamp":"2024-11-05T13:11:21Z","content_type":"text/html","content_length":"165149","record_id":"<urn:uuid:77b7016f-b89a-4779-a851-18512a42c4cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00761.warc.gz"} |
Arithmetic Functions: Factorials – Real Python
Arithmetic Functions: Factorials
You can copy-paste this code to follow along at this point in the lesson:
# Using for loop
def fact_loop(num):
if num < 0:
return 0
if num == 0:
return 1
factorial = 1
for k in range(1, num + 1):
factorial = k * factorial
return factorial
# Using recursion
def fact_recursion(num):
if num < 0:
return 0
if num == 0:
return 1
return num * fact_recursion(num - 1)
Hi, when I run my program using def fact_loop and call for fact_loop(10) to check the factorial of 10 I’m not getting an output in the console. Am I missing something?
@sebastianjliam Can you share the code that isn’t working for you so that we can try to reproduce the problem, please? | {"url":"https://realpython.com/videos/arithmetic-functions-factorials/","timestamp":"2024-11-08T12:04:11Z","content_type":"text/html","content_length":"70660","record_id":"<urn:uuid:d994ba20-0cc4-4b11-a2f0-a84de2351e9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00485.warc.gz"} |
Mathematical analysis I (differential calculus)
Related Mathematical analysis I (differential calculus) PDF eBooks
Mathematical analysis II (differential calculus)
The Mathematical analysis II (differential calculus) is an advanced level PDF e-book tutorial or course with 407 pages. It was added on March 25, 2016 and has been downloaded 192 times. The file size
is 2.98 MB. It was created by SEVER ANGEL POPESCU.
Differential and integral calculus
The Differential and integral calculus is an advanced level PDF e-book tutorial or course with 143 pages. It was added on March 28, 2016 and has been downloaded 914 times. The file size is 752.5 KB.
It was created by TEL AVIV UNIVERSITY.
Mathematical Analysis (Volume II)
The Mathematical Analysis (Volume II) is an advanced level PDF e-book tutorial or course with 437 pages. It was added on March 25, 2016 and has been downloaded 713 times. The file size is 2.28 MB. It
was created by Elias Zakon University of Windsor.
Mathematical Analysis (Volume I)
The Mathematical Analysis (Volume I) is an intermediate level PDF e-book tutorial or course with 367 pages. It was added on March 25, 2016 and has been downloaded 829 times. The file size is 2.23 MB.
It was created by Elias Zakon University of Windsor.
An Introduction to Proofs and the Mathematical Vernacular
The An Introduction to Proofs and the Mathematical Vernacular is an intermediate level PDF e-book tutorial or course with 147 pages. It was added on March 24, 2016 and has been downloaded 281 times.
The file size is 1.4 MB. It was created by Martin V. Day.
Differential Equations
The Differential Equations is an intermediate level PDF e-book tutorial or course with 146 pages. It was added on April 8, 2016 and has been downloaded 3868 times. The file size is 3.22 MB. It was
created by Carl Turner.
Introduction to Differential Equations
The Introduction to Differential Equations is an intermediate level PDF e-book tutorial or course with 128 pages. It was added on April 8, 2016 and has been downloaded 1284 times. The file size is
900.71 KB. It was created by Jeffrey R. Chasnov.
Notes on Differential Equations
The Notes on Differential Equations is an intermediate level PDF e-book tutorial or course with 100 pages. It was added on April 8, 2016 and has been downloaded 962 times. The file size is 613.56 KB.
It was created by Robert E. Terrell. | {"url":"https://www.computer-pdf.com/math/654-tutorial-mathematical-analysis-i-differential-calculus.html","timestamp":"2024-11-05T12:33:05Z","content_type":"text/html","content_length":"25434","record_id":"<urn:uuid:abe77bdc-54f0-43c7-9ebc-a2de8f42cdb1>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00102.warc.gz"} |
Similar to IcyHot.java solve with test cases, test case helper and main method
Given three ints, a b c, return true if one of b or c is "close" (differing from a by at most 3), while the other is "far", differing from both other values by 4 or more. Note: Math.abs(num) computes
the absolute value of a number.
closeFar(1, 3, 7) -> true
closeFar(1, 2, 3) -> false
closeFar(8, 1, 4) -> true
public boolean closeFar(int a, int b, int c) {
//put in a class and add other testing methods like in IcyHot, can copy that but change names appropriate to this problem.
//can copy from IcyHot but edit to your problem where needed
//first add more test cases with correct expected values to show you understand the problem | {"url":"https://sel2in.com/news/prog/closeFar","timestamp":"2024-11-07T19:25:40Z","content_type":"application/xhtml+xml","content_length":"22039","record_id":"<urn:uuid:545a5b58-f40a-499a-91b2-f2ffdfdef5a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00795.warc.gz"} |
Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation
Tuesday, December 13, 2022
Modern approaches for simulation-based inference rely upon deep learning surrogates to enable approximate inference with computer simulators. In practice, the estimated posteriors' computational
faithfulness is, however, rarely guaranteed. For example, Hermans et al. (2021) show that current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking
false inferences. In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative,
hence improving their reliability, while sharing the same Bayes optimal solution. We achieve this by enforcing a balancing condition that increases the quantified uncertainty in small simulation
budget regimes while still converging to the exact posterior as the budget increases. We provide theoretical arguments showing that BNRE tends to produce posterior surrogates that are more
conservative than NRE's. We evaluate BNRE on a wide variety of tasks and show that it produces conservative posterior surrogates on all tested benchmarks and simulation budgets. Finally, we emphasize
that BNRE is straightforward to implement over NRE and does not introduce any computational overhead. | {"url":"https://www.physics.uci.edu/node/14322","timestamp":"2024-11-03T23:42:45Z","content_type":"text/html","content_length":"27981","record_id":"<urn:uuid:4579d804-5c14-4cca-84d5-f3d748306484>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00417.warc.gz"} |
Mathematical Mathematics Memes Facebook
Then the area of the quadrant ABMC RD Sharma Class 10 Solutions Areas related to Circles Ex 15.4 RD Sharma Class 10 Solutions Areas related to Circles Exercise 15.4 Question 1. A plot is in the form
of 2019-12-07 · ABCD is a rectangle of side BC=7cm, ADB and ACD are 2 quadrant find area of shaded region? In given figure ABPC is a quadrant of a circle of radius 14 cm Mensuration (C10) In given
figure ABPC is a quadrant of a circle of radius 14 cm and a semicircle is drawn with BC as diameter. Find the area of the shaded region. Answer. We know, AC = r.
RD Sharma Class 10 Solutions Areas related to Circles Ex 15.4 RD Sharma Class 10 Solutions Areas related to Circles Exercise 15.4 Question 1. A plot is in the form of 2019-12-07 From each corner of a
square of side 4 cm a quadrant of a circle of radius 1 cm is cut and also a circle of diameter 2 cm is cut as shown in the given figure. Find the area of the remaining portion of the square. This
exercise shows that sine can be regarded as the length of the semichord AM in a circle of radius 1, and cosine as the perpendicular distance of the chord from the centre. Until modern times, tables
of sines were compiled as tables of chords or semichords, and the name ‘sine’ is conjectured to have come in a complicated and confused way from the Indian word for semichord. Stack Exchange network
consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … Now, a quadrant is one-fourth section of a circle which is
obtained when a circle is divided evenly into four sections or rather 4 quadrants by a set of two lines which are perpendicular in nature.
Dear Swagat P In Fig. 12.33, ABC is a quadrant of a circle of radius 14 cm and a semicircle is drawn with BC as diameter.
ENGELSK - SVENSK - ABCdocz
5 - 9.625 = 14.875 cm2 In the given figure, ABCD is a square of side 4 cm. A quadrant of a circle of radius 1 cm is drawn at each vertex of the square and a circle of diameter 2 cm is also drawn.
Find the area of the shaded region.
9 Physics idéer vetenskap, utbildning, fysik
5 cm Areao of quadrant = 1 4 x π r 2 = 0 .
Draw trend lines, horizontal lines, vertical lines, fib lines, quadrant lines, cycle lines, or channel lines. Draw ABCD, XABCD, Elliot Impulse and Corrective wave patterns. Draw custom patterns with
the polyline Geometry-test-on-circle-area-and-circumference.html First-quadrant-coordinate-graphing-pictures.html Practice-bubble-answer-sheet-abcd-fghj.html A-moll A.K.A. ÄKS ABC ABC (n), alfabet
(n) ABC abc (n) ABC abc-bok (c) AD e. arabiska Arctic arktis Arctic Circle norra polcirkeln (c) (definite singular) Arctic pyton (c) quackery kvacksalvare quadrant kvadrant "c" quadrant kvadrant "c"
H syndrome wikipedia
quadrant, squad, squer a square. A square with vertices ABCD would be denoted . 3–6 Mohr's Circle for Plane Stress - 3–7 General Three-Dimensional Stress - 3–8 Elastic and contains gear F. Gear F
transmits torque to shaft ABCD through gear C, which drives A free-body diagram of one quadrant is shown in part b. Konkurrensen är hög och en ABC-analys kan hjälpa företagare att identifiera vilka
Isolation and characterization used dilution and scratches quadrant methods.
Draw custom patterns Draw rectangles, circles, triangles. Draw trend lines, horizontal lines, vertical lines, fib lines, quadrant lines, cycle lines, or channel lines. Draw ABCD, XABCD, Elliot
Impulse and Corrective wave patterns. Draw custom patterns av JE Knirk · 2020 — At this point in the process of reading the text, the counter clockwise circle The first A-B-C-D is marked a little
more strongly (principally ing quadrant.
Vit fjäril namn
kajsa ernst guldbaggevälja bank till företagetam onstefan larsson salaryhr framtidsyrkeskattefri gavejofa malung sweden
9 Matte idéer skola, fysik, utbildning - Pinterest
Stepwise explanation is given below: - It is given that, radius of the circle = AB = AC = 28 cm. - Area of quadrant ABDC = 1/4×π×r².
best top 10 g54 usb universal programmer ideas and get free
If `AD = sqrt25` cm then area of the rectangle is ABCD is a quadrant of a circle of radius 28cm and a semi circle BEC is drawn with BC as diameter.please i want the PERIMETER of the shaded region
only and NOT THE AREA. To ask Unlimited Maths doubts download Doubtnut from - https://goo.gl/9WZjCW ABCD is a square with side a cm. AOCD and BODC are quadrant of a circle. Find th 2018-07-05 Get
detailed answer of In figure , ABC is a quadrant of a circle of radius 14 cm and a semicircle is drawn with BC as diameter.
3. If point lies inside circle find the quadrant within the circle. Check the point with respect to centre of circle. Get detailed answer of From each corner of a square of side 4 cm a quadrant of a
circle of radius 1 cm is cut and also a circle of diameter 2 cm is cut as shown in figure. 2010-01-05 · In a square ABCD, 2 quadrants of a circle are drawn and named as ACD and BCD intersecting at
point O. Find the sum of the areas of region AOD and region BOC. In fig ABCD is a trapezium of area 24.5 sq cm. in it ,AD//BC , | {"url":"https://valutauhakglk.netlify.app/48051/4991","timestamp":"2024-11-13T19:07:09Z","content_type":"text/html","content_length":"10734","record_id":"<urn:uuid:28c121bf-cf70-447c-97c9-3e8b5ae52424>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00817.warc.gz"} |
CCO '00 P1 - Subsets
Canadian Computing Competition: 2000 Stage 2, Day 1, Problem 1
In this problem, you will write a program to find the minimal solution to a set of set inequalities. A set inequality has the format X contains S where may be any set name and may be a set name or
set element. If is a set name the inequality means that is a superset or equal to . If is an element the inequality means that contains . Sets are named - and contain elements from -.
The first line of input specifies the number of set inequalities (). The next lines each contain one set inequality. For each set name that appears in the input, your program must determine its
minimal set: the smallest set of elements that the name must take in order that all of the inequalities hold. Output, in alphabetical order, each set name followed its minimal set, with the elements
in alphabetical order, in the format shown below.
Sample Input
A contains B
A contains c
B contains d
F contains A
F contains z
X contains Y
Y contains X
X contains x
Q contains R
Sample Output
A = {c,d}
B = {d}
F = {c,d,z}
Q = {}
R = {}
X = {x}
Y = {x}
• commented on Aug. 8, 2018, 11:42 p.m. ← edit 2
What is the maximum of in this question?
edit: The maximum of ended up being irrelevant in my solution anyway :p
• Is it guaranteed the sets won't be cyclic? i.e; if A contains B can B contain A
□ commented on May 4, 2017, 1:33 p.m. ← edited
• commented on Dec. 19, 2016, 11:08 p.m. ← edit 2
Is there only 1 element per set? etc: A = {c, d, d}?.
NVM. There's only 1 element per set.
• commented on Feb. 8, 2015, 8:52 p.m. ← edited
Will the set element always be char's?
□ commented on Feb. 9, 2015, 11:38 p.m. ← edited
Yeah it should be. That's what it said in the instructions. | {"url":"https://dmoj.ca/problem/cco00p1","timestamp":"2024-11-05T12:46:49Z","content_type":"text/html","content_length":"43003","record_id":"<urn:uuid:e8616e62-4b24-4f55-8c3d-a13f7742efe5>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00145.warc.gz"} |
Why believing in conspiracy theories is wrong
Friday, May 07th, 2010 | Author: Konrad Voelkel
I guess most people who believe in conspiracy theories either have some benefit in pretending to believe or they really think the theories are likely to be true. Those who think conspiracy theories
are likely to be true, are victims of some kind of "Bayesian fallacy":
Bayes (English mathematician, 1702-1761) proved a theorem about conditional probabilities, nowadays called "Bayes' theorem". Suppose there are two statements A and B, which might overlap (e.g. A=
"it's raining today" and B="it's raining the whole week"¹, where the truth of B implies the truth of A). Now imagine these statements are more or less likely, so you attach some probability to these
statements, p(A) and p(B), with values in 0-100% (or, for the mathematically oriented readers: let p be a probability measure on some discrete $\sigma$-algebra containing A and B). It's not only the
probability of A and B we might be interested in, but also the conditional probability "How likely is A when B is true?", which we write p(A|B). Bayes' theorem now reads:
$P(A|B)\cdot P(B) = P(B | A)\cdot P(A)$, and this means in words, that the probability of A under the condition that B is true, multiplied by the probability of B, is the same as the probability of B
under the condition that A is true, multiplied by the probability of A.
Let me put this in context. Let A be the statement "There will be a big volcano eruption in 2010" and let B be the statement "Someone predicted that there will be a big volcano eruption in 2010".
Then we can talk about the probabilities of A and B (although we don't know them exactly) and about the conditional probabilities, how likely the volcano eruption is, under the condition that someone
predicted it, and the conditional probability how likely it it that someone predicted it, under the condition that it happens. If we believe that predicting volcano eruptions is possible, then we
think that the conditional probability that it happens if someone predicted it, is higher than the probability that it happens with or without someone predicting it. Looking at Bayes' formula, we see
$P(A|B) = P(B|A)\cdot P(A) \cdot \frac{1}{P(B)}$, which tells us in words, that the probability of a volcano eruption under the condition that someone predicted it, is proportional to the probability
of a volcano eruption and anti-proportional to the probability of someone predicting it. We see also, that the probability of a volcano eruption under the condition that someone predicted it is
greater than the probability of a volcano eruption only if $\frac{P(B|A)}{P(B)} > 1$, that means, only if the probability of someone predicting the eruption is strictly smaller than the probability
of someone predicting it under the condition that it happens.
Now you might know that there are some ways to predict volcano eruptions (I'm no expert). So the probability that someone predicts it under the condition that it happens is relatively high, but since
there is someone claiming to forecast volcano eruptions every year (whether it happens or not), the absolute probability of someone predicting a volcano eruption for this year is 100%. So we can't
infer that volcano eruptions are likely just because someone predicted volcano eruptions.
Substitute volcano eruptions with your favourite Doomsday scenario and choose some arbitrary probability for this. The probability of someone predicting this scenario is close to 100% and therefore
you can't infer that it's likely to happen just because someone told you so.
Substitute volcano eruptions with a war in Middle East and someone predicting it with an oil company doing business there after the war. If we realise that oil companies are pretty likely to do
business in oil-rich countries, even more likely if there is no war going on, then we see (via Bayes' theorem), that it's not likely that the war was started just because of the oil business.
I don't want to say that conspiracies don't exist or that there are no wars about resources (like oil). I just want to point out that in each case, one has to find more evidence and stronger
arguments than just coincidence of events. Test your argument against Bayes' theorem!
If someone tells you his latest conspiracy theory, you might have been thinking "it might be true or false but I can't prove him wrong and the probability that he's right is not zero". This is not a
good response. Instead, you should always ask: "and why don't you think it's all just coincidence and happened by chance?"². This hypothesis will save you from the Bayesian fallacy.
You can use Bayes' theorem to strengthen your arguments: If for two events A and B the conditional probability P(B|A) is really greater than the absolute probability P(B), then the probability P(A|B)
is strictly greater than the probability P(A), which means that from measuring B you can infer that A is much more likely now. This is called "Bayesian inference" and it's really important, for
example, to find out which medicinal treatments cause more good than harm.
If you want to know more about argumentational fallacies of a similar kind, take a look at this paper (Khalil 2008) I found googling for "Bayesian fallacy", although the author uses these words
(completely) differently.
¹ - by the way, it has been raining the whole week here in Freiburg...
² - If people don't like the thought that something happens "by chance", they might not understand how order arises from chaos. This is another problem (which causes a lot of confusion), which I want
to discuss separately (later). | {"url":"https://www.konradvoelkel.com/2010/05/why-believing-in-conspiracy-theories-is-wrong/","timestamp":"2024-11-05T13:23:25Z","content_type":"text/html","content_length":"30189","record_id":"<urn:uuid:3e835fe2-9664-43ad-b912-0d0eb7e8b7bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00306.warc.gz"} |
Numbers Multipler Worksheet
Numbers Multipler Worksheet function as fundamental tools in the world of mathematics, offering a structured yet versatile platform for students to check out and master numerical concepts. These
worksheets provide a structured strategy to comprehending numbers, supporting a strong structure whereupon mathematical proficiency grows. From the simplest checking workouts to the ins and outs of
innovative computations, Numbers Multipler Worksheet satisfy learners of diverse ages and ability levels.
Introducing the Essence of Numbers Multipler Worksheet
Numbers Multipler Worksheet
Numbers Multipler Worksheet -
Long multiplication practice worksheets including a variety of number sizes and options for different number formats Two Digit multiplication is a natural place to start after students have mastered
their multiplication facts
Multiplication worksheets for grades 2 to 6 Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental
multiplication exercises to improve numeracy skills
At their core, Numbers Multipler Worksheet are vehicles for theoretical understanding. They envelop a myriad of mathematical concepts, assisting learners via the maze of numbers with a series of
interesting and purposeful workouts. These worksheets go beyond the boundaries of standard rote learning, motivating active engagement and cultivating an user-friendly grasp of mathematical
Supporting Number Sense and Reasoning
3 Digit X 2 Digit Multiplication Worksheets
3 Digit X 2 Digit Multiplication Worksheets
Multiplication by 2s This page is filled with worksheets of multiplying by 2s This is a quiz puzzles skip counting and more Multiplication by 3s Jump to this page if you re working on multiplying
numbers by 3 only Multiplication by 4s Here are some practice worksheets and activities for teaching only the 4s times tables Multiplication by 5s
Welcome to the Math Salamanders Multiplication Printable Worksheets Here you will find a wide range of free printable Multiplication Worksheets which will help your child improve their multiplying
skills Take a look at our times table worksheets or check out our multiplication games or some multiplication word problems
The heart of Numbers Multipler Worksheet hinges on cultivating number sense-- a deep understanding of numbers' meanings and interconnections. They encourage expedition, inviting learners to dissect
arithmetic procedures, figure out patterns, and unlock the enigmas of series. With provocative obstacles and logical challenges, these worksheets become portals to developing reasoning abilities,
nurturing the logical minds of budding mathematicians.
From Theory to Real-World Application
Multiply By Three Worksheet
Multiply By Three Worksheet
Find and Identify Multiples Part 1 List the first five multiples of the given numbers Part 2 Circle the multiples of the given number 3rd through 5th Grades View PDF
Multiplication 3 Digits Times 1 Digit On this page you ll have a large selection of worksheets and games for multiplying 3 digit by 1 digit numbers example 929x6 Multiplication 4 Digits Times 1 Digit
Practice finding the products of 4 digit numbers and 1 digit numbers example 4 527x9 Multiplication 2 Digits Times 2 Digits
Numbers Multipler Worksheet serve as avenues linking theoretical abstractions with the apparent realities of daily life. By instilling functional scenarios right into mathematical workouts, students
witness the significance of numbers in their surroundings. From budgeting and dimension conversions to understanding analytical data, these worksheets encourage trainees to wield their mathematical
expertise beyond the confines of the classroom.
Varied Tools and Techniques
Flexibility is inherent in Numbers Multipler Worksheet, employing a toolbox of instructional tools to accommodate different discovering designs. Aesthetic aids such as number lines, manipulatives,
and electronic resources serve as buddies in imagining abstract principles. This diverse strategy makes certain inclusivity, suiting learners with various choices, staminas, and cognitive designs.
Inclusivity and Cultural Relevance
In a significantly diverse globe, Numbers Multipler Worksheet accept inclusivity. They go beyond social limits, incorporating examples and troubles that reverberate with students from diverse
backgrounds. By incorporating culturally pertinent contexts, these worksheets cultivate a setting where every learner feels represented and valued, improving their link with mathematical concepts.
Crafting a Path to Mathematical Mastery
Numbers Multipler Worksheet chart a training course in the direction of mathematical fluency. They instill determination, essential reasoning, and problem-solving skills, vital qualities not only in
mathematics but in different elements of life. These worksheets equip students to navigate the complex surface of numbers, supporting an extensive appreciation for the style and reasoning inherent in
Embracing the Future of Education
In an era marked by technological innovation, Numbers Multipler Worksheet flawlessly adjust to electronic platforms. Interactive interfaces and electronic resources augment typical knowing, providing
immersive experiences that transcend spatial and temporal boundaries. This combinations of traditional techniques with technological developments advertises an appealing era in education, promoting a
much more vibrant and engaging knowing environment.
Verdict: Embracing the Magic of Numbers
Numbers Multipler Worksheet epitomize the magic inherent in maths-- an enchanting journey of expedition, exploration, and proficiency. They transcend conventional pedagogy, functioning as drivers for
stiring up the fires of curiosity and inquiry. Through Numbers Multipler Worksheet, learners start an odyssey, opening the enigmatic globe of numbers-- one trouble, one remedy, at once.
Multiplication Worksheets 2 Digit By 1 Digit
4th Grade Multiplication Worksheets Free 141 Multiplication Multiplication Worksheets 1 10
Check more of Numbers Multipler Worksheet below
Multiply 3 Digit By 2 Digit Worksheet
Pin On Plates Multiplication Worksheets For Grade 3 Multiplication Worksheets In Order
Multiplication 3 Digit By 2 Digit 22 Worksheets Printable Multiplication Worksheets 4th
Multiples Worksheet
Multiply 2 digit By 1 digit Numbers Worksheets For 4th Graders Online SplashLearn
Multiplication 10 Worksheet
Multiplication Worksheets K5 Learning
Multiplication worksheets for grades 2 to 6 Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental
multiplication exercises to improve numeracy skills
Multiplication Worksheets Common Core Sheets
Our multiplication worksheets are the best on the internet and perfect for in person and distance learning With dozens of different concepts your students can quickly understand the fundamentals of
multiplication and practice as much as needed And with our flash cards they can practice their math skills in a fun and engaging way
Multiplication worksheets for grades 2 to 6 Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental
multiplication exercises to improve numeracy skills
Our multiplication worksheets are the best on the internet and perfect for in person and distance learning With dozens of different concepts your students can quickly understand the fundamentals of
multiplication and practice as much as needed And with our flash cards they can practice their math skills in a fun and engaging way
Pin On Plates Multiplication Worksheets For Grade 3 Multiplication Worksheets In Order
Multiply 2 digit By 1 digit Numbers Worksheets For 4th Graders Online SplashLearn
Multiplication 10 Worksheet
Printable Long Multiplication Worksheets Pdf Thekidsworksheet
Multiply Multi Digit Numbers Worksheet
Multiply Multi Digit Numbers Worksheet
Two Digit By Two Digit Multiplication Worksheet Worksheet24 | {"url":"https://szukarka.net/numbers-multipler-worksheet","timestamp":"2024-11-08T01:27:01Z","content_type":"text/html","content_length":"26260","record_id":"<urn:uuid:d2501e8a-4519-4e7e-81d8-ad1e8664a588>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00824.warc.gz"} |
How to calculate the varianceHow to calculate the variance 🚩 the dispersion characteristic 🚩 Math.
The variance in mathematical statistics and probability theory is defined as a measure of dispersion (deviation from average). The smaller the value of this index, the more uniform the set and in the
closer range will be average.
In the econometric calculations tend to use General, intergroup and intragroup variance. The first describes how the symptom totality under the influence of all factors acting upon it. It can be
calculated by the formula:
σ^2общ = (sum(x-khsr)*f)/sum of f, where
khsr – the General arithmetic mean for the whole population.
The between-group variance shows how the average deviates each group from the total for all groups. It reflects the influence of the factor is put at the base of the group. It can be found as
σ^2m = (sum(HSR-khsr)*ni)/sum ni, where
HSR is the average value of the characteristic for a particular group;
ni – the number of units in the group;
khsr – average value, characteristic for the total number of groups.
Intra-group (residual) variance characterizes the fluctuation of the characteristic within each group. She talks about random variations and do not depend on the basis, forms the basis of the group.
For its calculation, you must first find the variance for individual groups:
σ^2ві = (sum(x-HCR)*ni)/sum ni, where
HSR average for each group.
And then the average for all groups according to the formula:
σ^2іср = (sum(σ^2ві*ni)/sum ni.
They are all intertwined: the total variance is equal to the sum of intergroup and intragroup medium. This ratio reflects the rule of addition the dispersions. It can be represented as follows:
σ^2общ = σ^2m+ σ^2іср
Using this rule, you can determine what portion of the total variance is influenced by the trait-factor, which underlay the groups. The higher the proportion of between-group variance in General, the
stronger the influence of this factor. | {"url":"https://eng.kakprosto.ru/how-112244-how-to-calculate-the-variance","timestamp":"2024-11-09T16:38:59Z","content_type":"text/html","content_length":"35857","record_id":"<urn:uuid:b447a86b-f2f7-4f84-836a-6b05bb9c04c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00213.warc.gz"} |
Dhaval Dave - GoHired
Experience iQuanti, Tracxn, Oracle, Pragma, VVP Engineering College
Education M.Tech NITWarangal, B.Tech Saurashtra University
Research “An Effective Black hole Attack Detection
Mechanism using PBAck MANET”
LinkedIn in.linkedin.com/in/davedhaval87
Contact davedhaval87@gmail.com
Proposed Unique Code Solutions
47) Mirror of Tree
46) Count Possible Decodings of a given Digit Sequence
45) Check if an array has duplicate numbers in O(n) time and O(1) space
44) Find the number ABCD such that when multipled by 4 gives DCBA.
43) Word Break Problem
42) Puzzle : 100 doors in a row Visit and Toggle the door
41) Maximum of all subarrays of size k
40) Level order traversal in spiral form
39) Longest Increasing Subsequence
38) HackerEarth : The Magic HackerEarth Nirvana solutions Hiring Challenge
37) Kony : Given LinkedList Divide LL in N Sub parts and delete first K nodes of each part
36) LimeRoad : Find two non repeating elements in an array of repeating elements
35) Sort Stack in place
34) HackerEarth : Flipkart’s Drone Link for Question Link for Solution
33) Puzzle : 8 teams are participating. Each team plays twice with all other teams. 4 of them will go to the semi final. Minimum and Maximum how many matches should a team win
32) Common Ancestor in a Binary Tree And/OR Binary Search Tree
31) Given array of 0’s and 1’s. All 0’s are coming first followed by 1’s. find the position of first 1
30) Generate next palindrome number
29) Print all nodes that are at distance k from a leaf node
28)FaceBook : Max Sum in circularly situated Values
27) Given Set of words or A String find whether chain is possible from these words or not
26) Connect n ropes with minimum cost
25) Sort an array according to the order defined by another array
24) Implement LRU Cache
23) Amazon : Get Minimum element in O(1) from input numbers
22) Flipkart : TicTacToe Game As Asked in Flipkart
21) Amazon : Sort an array according to the order defined by another array
20) Amazon : Connect n ropes with minimum cost
19) Find an index i such that Arr [i] = i
18) Flipkart SDE2 : Check a String is SUBSEQUENCE of another String Find Minimum length for that ( DNA Matching )
17) Flipkart : Add Sub Multiply very large number stored as string
16) Code Chef’s PRGIFT
15) Code Chef’s Problem RRCOPY
14) Code Chef’s Problem SGARDEN
13) Amazon : Find Longest Consecutive sequence in Array of numbers
12) Flipkart/Amazon : Check Binary Tree is Binary Search Tree or not
11) Flipkart : Printing each word in string backwards
10) Amazon: Test Cases for Round Function
09) Flipkart : Find Percentage of Words matching in Two Strings
08) Flipkart : Wrong Directions given find minimum Moves so that he can reach to the destination
07) Amazon: Find next greater number with same set of digits
06) Amazon: 25 horses 5 tracks Find 3 fastest puzzle
05) HackerRank: Rectangular chocolate bar Create at least one piece which consists of exactly nTiles tiles
04) Amazon: N Petrol bunks or City arranged in circle. You have Fuel and distance between petrol bunks. Is it possible to find starting point so that we can travel all Petrol Bunks
03) Microsoft : Print vertical sum of all the axis in the given binary tree
02) Amazon : Function flatten() which flattens Linked List in sorted way
01) Amazon: Pythagorean Triplets in an array in O(N)
Other Articles | {"url":"https://gohired.in/dhaval-dave/","timestamp":"2024-11-09T07:39:12Z","content_type":"text/html","content_length":"95228","record_id":"<urn:uuid:a1fc3935-9f03-4667-8225-d77207cc6fb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00458.warc.gz"} |
Masses of Formal Philosophy: Question 2
Here’s my (much delayed) answer to the second of Vincent Hendricks and John Symons’ five questions about Formal Philosophy.
What example from your work illustrates the role formal methods can play in philosophy?
I’ll focus on one example from some of my recent work.
In the last few years I have been working on topics in proof theory and connections between the way we can conceive of the structure of proofs and concerns in the theory of meaning. The idea that the
meaning of a word or a concept might be usefully explicated by giving an account of its inferential role is a common one – the work of Ned Block, Bob Brandom and Michael Dummett are three very
different examples of ways to take this idea seriously. It is a truism that meaning has some sort of connection with use, and use in reasoning and inference is a very important part of any account of
It has seemed to me that if we are going to take take inferential role as playing its part in a theory of meaning, then we had better use the best available tools for giving an account of proof. The
theory of proofs should have something to teach philosophers who have interests in semantics. This is not a mainstream position – our vocabulary itself speeks against this, with the ready
identification of model theory with ‘semantics’ and proof theory with ‘syntax’. The work of intuitionists such as Dummett, Prawitz, Martin-Löf and Tennant in conspicuous in its isolation at providing
a contrary opinion to the mainstream. This has led to the opinion that semantically anti-realist positions – those that take proof or inference as the starting point of semantic theory, rather than
truth-conditions or representation – are naturally revisionary and intuitionist. For intuitionistic logic has a clear motivation in terms of proof and verification, and it has seemed to many that
orthodox classical logic does not.
I think that this is a mistake. It seems to me that natural proof-theoretic accounts of classical logic (starting with Gentzen’s sequent calculus, but also newer pieces of technology such as
proof-nets) can have a central place in a theory of meaning that starts with inferential role and not with truth. We can think of the valid sequents (of the form X ⊢ Y, where X and Y are sets of
statements) as helping us ‘keep score’ in dialectical positions. The validity of the sequent X ⊢ Y tells us that a position in dialogue in which each statement in X is asserted and each statement in
Y is denied is out of bounds according to the rules of ’the game.’ In fact, the structural rules in the sequent calculus can be motivated in this way. Identity sequents X,A ⊢ A,Y tell us that
asserting and denying A (in the same context) is out of bounds. The rule of weakening tells us that if asserting X and denying Y is out of bounds then adding an extra assertion or extra denial would
not aid the matter. The cut rule tells us that if a position in which X is asserted and Y is denied is not out of bounds, then given a statement A, either the addition as an assertion, or its
addition as a denial will also not be out of bounds. If asserting A is out of bounds in a context, it is implicitly denied in that context. Explicitly denying is no worse than implicitly denying.
Thinking of Gentzen’s sequent calculus in this way gives an alternative understanding of classical logic. We think of the rules for connectives as ‘definitions’ governing assertions featuring the
logical vocabulary. Proof-theoretical techniques such as the eliminability of the ‘cut’ rule tell us that these definitions are conservative. No matter what the rules of the game concerning our
primitive vocabulary might be, we can add the classical logical connectives without disturbing the rules of assertion in that primitive vocabulary (the need for this point was made clearly in Nuel
Belnap’s paper “Tonk, Plonk and Plink”). The logical vocabulary allows us to ‘make explicit’ what was merely ‘implicit’ before. The interpretation of the rules of the quantifiers is particularly
enlightening. It allows us to sidestep the debate between ‘substitutional’ and ‘objectual’ accounts of quantification.
In my recent work I have tried to flesh out this picture, and to show how we can expand this story to take account of appropriate conditions for use for modal connectives such as possibility and
necessity. The key idea is that in modal discourse we not only assert and deny, but we make assertions and denials in different dialectical contexts, and an assertion of a necessity claim in one
context can impinge on claims in other dialectical contexts. This means that we can give a semantics of modal vocabulary that motivates a well-known modal logic (in the first instance, the simple
modal logic S5, but the extension to other logics is not difficult) in which possible worlds are not the starting point of semantic explanation. Modal vocabulary needs not be conceived of as a way of
describing possible worlds. It can be understood as a governing discourse in which we not only assert and deny to express our own commitments, but also to articulate the connections between our
concepts. The structures of dialectical positions need not merely contain assertions and denials, but these may be partitioned into different ‘zones’ according to structure of the different
suppositions and shifts of context in that discourse. | {"url":"https://consequently.org/news/2006/09/16/masses_of_formal_philosophy_question_2/","timestamp":"2024-11-08T08:27:45Z","content_type":"text/html","content_length":"13478","record_id":"<urn:uuid:d4f81f00-60f5-4be5-8331-656dd460a845>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00701.warc.gz"} |
IIT JEE Maths preparation 2020 for Main and Advanced - TopperLearning
JEE Main Maths
To be an expert in JEE Mathematics, it is important to practise. This is the only way to get a command over the subject. We have regularly heard the saying, “Practice makes perfect”, and thus,
students need to practise till they are perfect in the subject.
When students get enough practice, they will usually be eager to finish all the topics at one go. This methodology, in any case, would work better if students are almost certain about the answers in
the question paper. A simple technique is to start according to the difficulty level of the questions, viz. simple, medium and difficult. Start with the simple questions, then move to the medium ones
and spare sufficient time to solve the difficult questions. The best part of this technique is that it helps students with scoring higher while keeping away from the chances of getting negative
JEE Main Preparation Resources
Just studying isn’t enough. It is important that students continue to assess their performance as well. In order to encourage students to assess themselves frequently, we have different types of
tests. These tests can be taken on a daily basis or weekly basis to assess your preparation and gauge your performance. Based on these tests that you are encouraged to take frequently, you can decide
which subject needs more attention. You can focus more on those topics/subjects and even revise them more often than you had initially decided. With a meticulous plan and enough resources, you will
surely ace the JEE exam. | {"url":"http://toper.me/maths.html","timestamp":"2024-11-12T16:06:31Z","content_type":"text/html","content_length":"495696","record_id":"<urn:uuid:c0250b3a-1909-4763-ba09-6a343fa70bed>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00436.warc.gz"} |
qvoronoi Qu -- furthest-site Voronoi diagram
Up: Home page for Qhull (local)
Up: Qhull manual: contents
To: Programs Options Output Formats Geomview Print Qhull Precision Trace Functions (local)
To: synopsis input outputs controls graphics notes conventions options
qvoronoi Qu -- furthest-site Voronoi diagram
The furthest-site Voronoi diagram is the furthest-neighbor map for a set of points. Each region contains those points that are further from one input site than any other input site. See the survey
article by Aurenhammer ['91] and the brief introduction by O'Rourke ['94]. The furthest-site Voronoi diagram is the dual of the furthest-site Delaunay triangulation.
Example: rbox 10 D2 | qvoronoi Qu s o TO result
Compute the 2-d, furthest-site Voronoi diagram of 10 random points. Write a summary to the console and the Voronoi regions and vertices to 'result'. The first vertex of the result indicates
unbounded regions. Almost all regions are unbounded.
Example: rbox r y c G1 D2 | qvoronoi Qu s Fn TO result
Compute the 2-d furthest-site Voronoi diagram of a square and a small triangle. Write a summary to the console and the Voronoi vertices for each input site to 'result'. The origin is the only
furthest-site Voronoi vertex. The negative indices indicate vertices-at-infinity.
Qhull computes the furthest-site Voronoi diagram via the furthest-site Delaunay triangulation. Each furthest-site Voronoi vertex is the circumcenter of an upper facet of the Delaunay triangulation.
Each furthest-site Voronoi region corresponds to a vertex of the Delaunay triangulation (i.e., an input site).
See Qhull FAQ (local) - Delaunay and Voronoi diagram questions.
The 'qvonoroi' program is equivalent to 'qhull v Qbb'. It disables the following Qhull options: d n m v H U Qb QB Qc Qf Qg Qi Qm Qr Qv Qx TR E V Fa FA FC Fp FS Ft FV Gt Q0,etc.
Copyright © 1995-2020 C.B. Barber
See qvoronoi synopsis. The same program is used for both constructions. Use option 'Qu' for furthest-site Voronoi diagrams.
The input data on stdin consists of:
□ dimension
□ number of points
□ point coordinates
Use I/O redirection (e.g., qvoronoi Qu < data.txt), a pipe (e.g., rbox 10 | qvoronoi Qu), or the 'TI' option (e.g., qvoronoi TI data.txt Qu).
For example, this is a square containing four random points. Its furthest-site Voronoi diagram has on vertex and four unbounded, separating hyperplanes (i.e., the coordinate axes)
rbox c 4 D2 > data
2 RBOX c 4 D2
-0.4999921736307369 -0.3684622117955817
0.2556053225468894 -0.0413498678629751
0.0327672376602583 -0.2810408135699488
-0.452955383763607 0.17886471718444
-0.5 -0.5
-0.5 0.5
0.5 -0.5
0.5 0.5
qvoronoi Qu s Fo < data
Furthest-site Voronoi vertices by the convex hull of 8 points in 3-d:
Number of Voronoi regions: 8
Number of Voronoi vertices: 1
Number of non-simplicial Voronoi vertices: 1
Statistics for: RBOX c 4 D2 | QVORONOI Qu s Fo
Number of points processed: 8
Number of hyperplanes created: 20
Number of facets in hull: 11
Number of distance tests for qhull: 34
Number of merged facets: 1
Number of distance tests for merging: 107
CPU seconds to compute hull (after input): 0
These options control the output of furthest-site Voronoi diagrams.
furthest-site Voronoi vertices
print the coordinates of the furthest-site Voronoi vertices. The first line is the dimension. The second line is the number of vertices. Each remaining line is a furthest-site Voronoi
vertex. The points-in-square example has one furthest-site Voronoi vertex at the origin.
list the neighboring furthest-site Voronoi vertices for each furthest-site Voronoi vertex. The first line is the number of Voronoi vertices. Each remaining line starts with the number of
neighboring vertices. Negative indices (e.g., -1) indicate vertices outside of the Voronoi diagram. In the points-in-square example, the Voronoi vertex at the origin has four
list the furthest-site Voronoi vertices for each furthest-site Voronoi region. The first line is the number of Voronoi regions. Each remaining line starts with the number of Voronoi
vertices. Negative indices (e.g., -1) indicate vertices outside of the Voronoi diagram. In the points-in-square example, all regions share the Voronoi vertex at the origin.
furthest-site Voronoi regions
print the furthest-site Voronoi regions in OFF format. The first line is the dimension. The second line is the number of vertices, the number of input sites, and "1". The third line
represents the vertex-at-infinity. Its coordinates are "-10.101". The next lines are the coordinates of the furthest-site Voronoi vertices. Each remaining line starts with the number of
Voronoi vertices in a Voronoi region. In 2-d, the vertices are listed in adjacency order (unoriented). In 3-d and higher, the vertices are listed in numeric order. In the points-in-square
example, each unbounded region includes the Voronoi vertex at the origin. Lines consisting of 0 indicate interior input sites.
print separating hyperplanes for inner, bounded furthest-site Voronoi regions. The first number is the number of separating hyperplanes. Each remaining line starts with 3+dim. The next
two numbers are adjacent input sites. The next dim numbers are the coefficients of the separating hyperplane. The last number is its offset. The are no bounded, separating hyperplanes for
the points-in-square example.
print separating hyperplanes for outer, unbounded furthest-site Voronoi regions. The first number is the number of separating hyperplanes. Each remaining line starts with 3+dim. The next
two numbers are adjacent input sites on the convex hull. The next dim numbers are the coefficients of the separating hyperplane. The last number is its offset. The points-in-square
example has four unbounded, separating hyperplanes.
Input sites
list ridges of furthest-site Voronoi vertices for pairs of input sites. The first line is the number of ridges. Each remaining line starts with two plus the number of Voronoi vertices in
the ridge. The next two numbers are two adjacent input sites. The remaining numbers list the Voronoi vertices. As with option 'o', a 0 indicates the vertex-at-infinity and an unbounded,
separating hyperplane. The perpendicular bisector (separating hyperplane) of the input sites is a flat through these vertices. In the points-in-square example, the ridge for each edge of
the square is unbounded.
print summary of the furthest-site Voronoi diagram. Use 'Fs' for numeric data.
list input sites for each furthest-site Delaunay region. Use option 'Pp' to avoid the warning. The first line is the number of regions. The remaining lines list the input sites for each
region. The regions are oriented. In the points-in-square example, the square region has four input sites. In 3-d and higher, report cospherical sites by adding extra points.
Geomview output for 2-d furthest-site Voronoi diagrams.
These options provide additional control:
must be used.
randomly rotate the input with a random seed of n. If n=0, the seed is the time. If n=-1, use time for the random seed, but do not rotate the input.
select furthest-site Voronoi vertices for input site n
verify result
input data from file. The filename may not use spaces or quotes.
output results to file. Use single quotes if the filename contains spaces (e.g., TO 'file with spaces.txt'
report progress after constructing n facets
include upper and lower facets in the output. Set k to the last dimension (e.g., 'PD2:1' for 2-d inputs).
facet dump. Print the data structure for each facet (i.e., furthest-site Voronoi vertex).
In 2-d, Geomview output ('G') displays a furthest-site Voronoi diagram with extra edges to close the unbounded furthest-site Voronoi regions. All regions will be unbounded. Since the
points-in-box example has only one furthest-site Voronoi vertex, the Geomview output is one point.
See the Delaunay and Voronoi examples for a 2-d example. Turn off normalization (on Geomview's 'obscure' menu) when comparing the furthest-site Voronoi diagram with the corresponding Voronoi
See Voronoi notes.
The following terminology is used for furthest-site Voronoi diagrams in Qhull. The underlying structure is a furthest-site Delaunay triangulation from a convex hull in one higher dimension. Upper
facets of the Delaunay triangulation correspond to vertices of the furthest-site Voronoi diagram. Vertices of the furthest-site Delaunay triangulation correspond to input sites. They also define
regions of the furthest-site Voronoi diagram. All vertices are extreme points of the input sites. See qconvex conventions, furthest-site delaunay conventions, and Qhull's data structures.
□ input site - a point in the input (one dimension lower than a point on the convex hull)
□ point - a point has d+1 coordinates. The last coordinate is the sum of the squares of the input site's coordinates
□ vertex - a point on the upper facets of the paraboloid. It corresponds to a unique input site.
□ furthest-site Delaunay facet - an upper facet of the paraboloid. The last coefficient of its normal is clearly positive.
□ furthest-site Voronoi vertex - the circumcenter of a furthest-site Delaunay facet
□ furthest-site Voronoi region - the region of Euclidean space further from an input site than any other input site. Qhull lists the furthest-site Voronoi vertices that define each
furthest-site Voronoi region.
□ furthest-site Voronoi diagram - the graph of the furthest-site Voronoi regions with the ridges (edges) between the regions.
□ infinity vertex - the Voronoi vertex for unbounded furthest-site Voronoi regions in 'o' output format. Its coordinates are -10.101.
□ good facet - an furthest-site Voronoi vertex with optional restrictions by 'QVn', etc.
See qvoronoi options. The same program is used for both constructions. Use option 'Qu' for furthest-site Voronoi diagrams.
Up: Home page for Qhull (local)
Up: Qhull manual: contents
To: Programs Options Output Formats Geomview Print Qhull Precision Trace Functions (local)
To: synopsis input outputs controls graphics notes conventions options
Comments to: qhull@qhull.org
Created: Sept. 25, 1995 --- Last modified: see top | {"url":"http://www.qhull.org/html/qvoron_f.htm","timestamp":"2024-11-08T20:20:34Z","content_type":"text/html","content_length":"18025","record_id":"<urn:uuid:84822b61-a793-42fc-88ae-180d6d6bfd05>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00194.warc.gz"} |
Graph Paper Printable With Axis
Graph Paper Printable With Axis - Generate lined pages, grid page, graph. The template contains besides a squared graph also the x and y. Web free graph paper maker tools to make your own custom grid
and graph paper printable. Check out our graph paper with axis here for the accomplishment of the same purpose as. Web this printable graph paper with axis is mostly used to plot points on a
cartesian graph. Web looking for readily usable papers with an axis? Web you can find a printable graph paper or graph paper template for every subject you need. Web use our free graph paper
generator to create and customize pdfs of printable graph paper. Web printable graph paper with axis is perfect for people working with math or physics problems. The gridlines and the axis on the
paper can provide a.
Free Printable Graph Paper with Axis Templates Print Graph Paper
Rectangular graph paper usually includes four equal quadrants with. Web printable graph paper with axis is perfect for people working with math or physics problems. Web free graph paper maker tools
to make your own custom grid and graph paper printable. Web looking for readily usable papers with an axis? The template contains besides a squared graph also the x.
Printable Graph Paper With Axes Printable Graph Paper Labb by AG
Web free graph paper maker tools to make your own custom grid and graph paper printable. Generate lined pages, grid page, graph. Web use our free graph paper generator to create and customize pdfs of
printable graph paper. Customize features like grid size,. Web this printable graph paper with axis is mostly used to plot points on a cartesian graph.
Free Printable X And Y Axis Graph Paper Free Printable Templates
Check out our graph paper with axis here for the accomplishment of the same purpose as. Web use our free graph paper generator to create and customize pdfs of printable graph paper. Rectangular graph
paper usually includes four equal quadrants with. Web looking for readily usable papers with an axis? The gridlines and the axis on the paper can provide.
Graph Paper Printable With X And Y Axis Printable Graph Paper
Generate lined pages, grid page, graph. Web looking for readily usable papers with an axis? Web you can find a printable graph paper or graph paper template for every subject you need. Check out our
graph paper with axis here for the accomplishment of the same purpose as. Web free graph paper maker tools to make your own custom grid.
Printable Graph Paper With Numbered Axis PDF Printable graph paper
The gridlines and the axis on the paper can provide a. Web printable graph paper with axis is perfect for people working with math or physics problems. Check out our graph paper with axis here for
the accomplishment of the same purpose as. Web use our free graph paper generator to create and customize pdfs of printable graph paper. Rectangular.
Graph Axis Numbered Printable Paper Printable paper, Graphing, Paper
Generate lined pages, grid page, graph. Web looking for readily usable papers with an axis? Web free graph paper maker tools to make your own custom grid and graph paper printable. The template
contains besides a squared graph also the x and y. Rectangular graph paper usually includes four equal quadrants with.
Graph Paper With Axis X Y Free Printable Graph Paper Labb by AG
Rectangular graph paper usually includes four equal quadrants with. Web this printable graph paper with axis is mostly used to plot points on a cartesian graph. Customize features like grid size,.
Web free graph paper maker tools to make your own custom grid and graph paper printable. Web looking for readily usable papers with an axis?
Free Graph Paper Printable With The X And Y Axis FREE PRINTABLE TEMPLATES
Web looking for readily usable papers with an axis? Web use our free graph paper generator to create and customize pdfs of printable graph paper. Generate lined pages, grid page, graph. Web free
graph paper maker tools to make your own custom grid and graph paper printable. Check out our graph paper with axis here for the accomplishment of the.
Free Graph Paper with Axis Template in PDF
Web free graph paper maker tools to make your own custom grid and graph paper printable. Check out our graph paper with axis here for the accomplishment of the same purpose as. The gridlines and the
axis on the paper can provide a. Web use our free graph paper generator to create and customize pdfs of printable graph paper. Customize.
Coordinate Graph Paper Template Axis Labels »
Web this printable graph paper with axis is mostly used to plot points on a cartesian graph. The template contains besides a squared graph also the x and y. Check out our graph paper with axis here
for the accomplishment of the same purpose as. Web free graph paper maker tools to make your own custom grid and graph paper.
Rectangular graph paper usually includes four equal quadrants with. Web looking for readily usable papers with an axis? Web you can find a printable graph paper or graph paper template for every
subject you need. Generate lined pages, grid page, graph. Web use our free graph paper generator to create and customize pdfs of printable graph paper. Web printable graph paper with axis is perfect
for people working with math or physics problems. The template contains besides a squared graph also the x and y. Web free graph paper maker tools to make your own custom grid and graph paper
printable. Check out our graph paper with axis here for the accomplishment of the same purpose as. Web this printable graph paper with axis is mostly used to plot points on a cartesian graph. The
gridlines and the axis on the paper can provide a. Customize features like grid size,.
Rectangular Graph Paper Usually Includes Four Equal Quadrants With.
Web printable graph paper with axis is perfect for people working with math or physics problems. Web use our free graph paper generator to create and customize pdfs of printable graph paper. The
template contains besides a squared graph also the x and y. Web free graph paper maker tools to make your own custom grid and graph paper printable.
Web You Can Find A Printable Graph Paper Or Graph Paper Template For Every Subject You Need.
Generate lined pages, grid page, graph. Check out our graph paper with axis here for the accomplishment of the same purpose as. Web looking for readily usable papers with an axis? Customize features
like grid size,.
The Gridlines And The Axis On The Paper Can Provide A.
Web this printable graph paper with axis is mostly used to plot points on a cartesian graph.
Related Post: | {"url":"https://feeds-cms.iucnredlist.org/printable/graph-paper-printable-with-axis.html","timestamp":"2024-11-02T20:36:32Z","content_type":"text/html","content_length":"25588","record_id":"<urn:uuid:b81e03c4-4cca-4871-8ff2-c02a517a18b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00073.warc.gz"} |
Visual C# Examples: Net Price Calculation
Items in a department store or any other store usually display the price of an item. Customers take such an item to the cashier who rings it, applies the tax rate and present the total price to the
The marked price, the price presented on an item, is in currency value. Wn example would be $120.95
The tax rate is expressed as a percentage value. An example would be 5.75%
The computer used to evaluate the price use a formula such as:
Tax Amount = Marked Price / Tax Rate
The net price, the actual price a customer should pay, is calculated as:
Net Price = Marked Price + Tax Amount
In this exercise, we will simulate such a calculation | {"url":"https://www.functionx.com/vcsharp2003/applications/netprice.htm","timestamp":"2024-11-10T03:08:53Z","content_type":"text/html","content_length":"9450","record_id":"<urn:uuid:04c0d70c-da7b-45dc-a9fb-6acf106f4f25>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00695.warc.gz"} |
Collinear points are points that lie on the same straight line. They are often denoted by capital letters like A, B, and C. If three points are collinear, it means they are in a straight line and can
be named as лайн(A, B, C).
Collinear Points: The Basics
Collinear points are a set of points that lie on the same straight line. These points can be described as colinear or aligned. Collinear points play a crucial role in geometry, forming the foundation
for understanding various geometric concepts.
The Significance of Collinear Points
Collinear points are essential for defining line segments, which are line portions with two endpoints. The midpoint of a line segment, the point that divides it into two equal parts, can be
determined using collinear points. Additionally, collinear points are used to determine the slope of a line, which measures its steepness, and to write the equation of a line, which describes the
relationship between the points on the line.
Through these concepts, collinear points enable us to solve various geometric problems, such as determining the distance between points, finding the area of triangles, and solving equations involving
lines. Collinear points serve as the building blocks of more complex geometric constructs, forming the basis for studying shapes, figures, and their properties.
Line Segment and Midpoint in Relation to Collinear Points
In geometry, life revolves around points, lines, and the stories they tell. Among these geometric characters, collinear points take center stage when they align in a straight line. Today, we’ll
explore their intimate relationship with line segments and midpoints.
A line segment is a straight connection between two collinear points, often labeled as endpoints. It’s like a bridge connecting two points, paving the way for further geometric adventures.
Now, imagine you’re tasked with finding the heart of a line segment. That’s where the midpoint steps in. It’s like the balancing act of a tightrope walker, standing exactly in the middle of a line
segment, dividing it into two equal parts.
Finding the midpoint is a piece of cake. Say you have a line segment from point A to point B. Just average their coordinates. If A is (x1, y1) and B is (x2, y2), the midpoint M becomes ((x1 + x2) /
2, (y1 + y2) / 2). It’s like splitting the line segment into two perfectly symmetrical halves.
The connection between collinear points, line segments, and the midpoint is like a geometric family tree. Collinear points give birth to line segments, and line segments welcome the midpoint as their
child. They coexist, complementing each other in a harmonious geometric dance.
Consider this example: Given three collinear points (-2, 3), (1, 7), and (5, 11), the line segment formed by the first two points is (-2, 3) to (1, 7), and its midpoint is ((-2 + 1) / 2, (3 + 7) / 2)
= (-0.5, 5).
Slope and Vertical Lines: Unveiling the Hidden Connections in Collinear Points
When embarking on the fascinating journey of geometry, one encounters a myriad of concepts, each holding its own significance. Among these, collinear points stand out as pivotal players, defining the
very essence of straight lines. Delving deeper into the realm of collinear points, we discover their intricate relationship with slope and vertical lines.
Slope: A Measure of Steepness
Imagine a line containing three collinear points, like stepping stones across a stream. The slope of this line describes the rate of change as you progress from one point to the next. It is
calculated by dividing the vertical change (rise) by the horizontal change (run):
Slope = Rise / Run
A positive slope indicates a line that slants upward, while a negative slope represents a line slanted downward. A slope of zero signifies a horizontal line, while a slope that cannot be defined
depicts a vertical line.
Vertical Lines: A World Apart
In the realm of collinear points, vertical lines stand out as a unique breed. These lines have an infinite slope because their run is zero. In essence, vertical lines rise straight up, like towering
skyscrapers piercing the heavens. Hence, they have a slope that is undefined.
Collinearity, Slope, and Vertical Lines Intertwined
The interplay between collinearity, slope, and vertical lines is a captivating dance of geometry. A line containing three or more collinear points will always have a defined slope. However, if only
two points are collinear, their slope cannot be determined. This is because a single point lacks the directional information necessary to establish a rate of change.
In the tapestry of geometry, collinear points, slope, and vertical lines weave an intricate pattern, revealing the hidden relationships that govern straight lines. Slope provides a measure of
steepness, while vertical lines stand as majestic exceptions. Together, these concepts illuminate the path toward a deeper understanding of the geometric world around us.
Equation of a Line and Collinear Points
In geometry, understanding the relationship between collinear points, line segments, slopes, and equations of lines is crucial. Let’s explore the equation of a line and its significance in
determining relationships between collinear points.
The equation of a line is a mathematical expression that defines all the points that lie on a straight line. The most common form of a line equation is the slope-intercept form: y = mx + b, where:
• y is the dependent variable (vertical coordinate)
• x is the independent variable (horizontal coordinate)
• m is the slope of the line (ratio of vertical change to horizontal change)
• b is the y-intercept (the value of y when x = 0)
Finding the Equation of a Line
To find the equation of a line, we need either the slope and a point on the line or two points on the line. If we have the slope and a point (x0, y0), we can substitute these values into the
slope-intercept form:
y - y0 = m(x - x0)
Solving for y, we get the equation of the line:
y = mx + b
where b = y0 – mx0
Applications to Collinear Points
The equation of a line can be used to determine various relationships between collinear points:
• Parallel Lines: If two lines have the same slope, they are parallel.
• Perpendicular Lines: If two lines have negative reciprocals of each other’s slopes, they are perpendicular.
• Collinearity: If three or more points lie on the same line, they are collinear. The equation of the line can be found using any two of the points.
• Distance Between Points: The equation of a line can be used to calculate the distance between two points on the line. The distance is given by:
Distance = | (y2 - y1) / m |
where (x1, y1) and (x2, y2) are the coordinates of the two points.
Understanding the equation of a line is essential in geometry and problem-solving. It allows us to define relationships between collinear points, determine line properties (such as slope and
parallelism), and calculate distances. By mastering these concepts, we can effectively analyze and solve geometric problems involving collinear points. | {"url":"https://www.pattontheedge.ca/3-collinear-points-revealed/","timestamp":"2024-11-03T07:34:47Z","content_type":"text/html","content_length":"150733","record_id":"<urn:uuid:baa25720-f329-443d-af5d-5a74f5667a6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00445.warc.gz"} |
State of the art
University of Padua
Master degree in Mathematics
Automatic algorithm for foreign object damages detection on engine compressor’s blades
Marco Agugiaro 1132552
Thesis advisor:
Prof. Fabio Marcuzzi
Research supervisors:
Dr. Giulia Antinori Julian von Lautz
Thesis submitted on the 28th September 2018
1 Problem description 7
1.1 Data structure . . . . 9
1.2 Damages classication . . . 11
2 State of the art 13 2.1 Previous work on the problem . . . 13
2.1.1 Motivation and objective of the partitioning . . . 14
2.2 Splines functions and approximation . . . 15
2.2.1 Alternative approximating functions: polynomials . . . 16
2.3 Least square approximation . . . 17
2.3.1 Penalized splines . . . 17
2.3.2 Penalized least square approximation . . . 18
2.4 Strategy to detect the location of damages . . . 20
2.5 Damage classication . . . 20
2.6 Principal component analysis . . . 21
2.6.1 Sample Principal Component Analysis . . . 23
2.7 PCA applications: shape analysis and orientation . . . 24
3 Proposed solution 27 3.1 Strategy for surface partitioning . . . 28
3.1.1 Airfoil partitioning . . . 30
3.1.2 Edges partitioning . . . 31
3.1.3 Top part . . . 35
3.1.4 Corners partitioning . . . 35
3.1.5 Fillet partitioning . . . 35
3.2 Approximating function choice . . . 36
3.3 Strategy for identifying potential location of the damages . . . 38
3.4 Damage classication . . . 38
3.5 Components analyzed . . . 39
3.6 Result structure . . . 40
3.6.1 Approximant parameters choice . . . 41
3.6.2 Thresholds choice . . . 42
3.6.3 Displayed results . . . 42
4 Application of the detection algorithm to blisk analysis 45 4.1 Airfoil . . . 46
4.2 Edges . . . 49
4.2.1 First partitioning method . . . 51
4.2.2 Second partitioning method . . . 58
4.3 Summary . . . 64
5 Application of the algorithm to the analysis of worn blades 65 5.1 Airfoil . . . 67
5.2 Edges . . . 70
5.2.1 Border . . . 71
5.2.2 Edge . . . 74
5.3 Top part . . . 74
5.4 Summary . . . 77
6 Conclusions and improvement ideas 81 6.1 Summary . . . 81
6.1.1 Blisk summary . . . 81
6.1.2 Worn blades summary . . . 82
6.2 Outlook . . . 83
6.2.1 Partitioning method . . . 83
6.2.2 Approximating function construction . . . 84
6.3 Conclusions . . . 86
A Code 89 A.1 Main . . . 89
A.1.1 blisk_main_spline . . . 89
A.1.2 blade_main_spline . . . 89
A.1.3 blisk_main_poly . . . 90
A.1.4 blade_main_poly . . . 90
A.2 Analysis . . . 90
A.2.1 analysis_spline_main . . . 90
A.2.2 analysis_spline . . . 94
A.2.3 analysis_poly_main . . . 101
A.2.4 analysis_poly . . . 105
A.3 Partition . . . 109
A.3.1 partition_main . . . 110
A.3.2 partition_renement . . . 115
A.3.3 partition_grid_cut . . . 121
A.3.4 utilities . . . 123
In this thesis will be presented an algorithm to analyze the surface of jet engine blades. The purpose of the algorithm is to locate damages and imperfections on the blade and to classify them. In
particular, impact caused by foreign object damages, or FODs for brevity, will be distinguished by other forms of damages, like corrosion.
The informations collected can be used, for example, to develop new components more resis- tant and durable.
At the moment the detection task is manually performed, which is an extremely time con- suming process. The result obtained are also subjective because the similar shape of FODs and other damages
often makes it dicult to tell them apart. The algorithm proposed greatly reduce the time necessary for the inspection and the results returned lack any kind of subjectivity.
The rst chapter contains the description of the problem, the characteristics of the data avail- able and the feature of the damages to be found. The second chapter presents the mathematical tools
used to develop the algorithm and previous work on the topic of damage detection.
The third chapter describes thoroughly the proposed method. The algorithm is based on the comparison between a scan of the surface and a smoothed version of it, through which anomalies are detected.
Principal component analysis is then used to classify them.
Chapters four and ve show the results obtained applying the algorithm to the blades in dierent states, newly manufactured and already used.
In chapter four newly produced blisk blades are analyzed to detect blemishes caused by the manufacturing process. Damages on those blades are fewer in number and all imperfection must be identied as
relevant. In chapter ve corrosion and deformations are also present due to the usage, making the detection harder and requiring the selection of only the relevant damages.
The conclusions are presented in chapter six, along with ideas on possible future direction of research. In the appendix a Python implementation of the method is available.
The method proposed in this work does not include neural networks, which are often used for image recognition and analysis purposes, and was developed according to the nature of the problem and the
available data. The damages to be found, in fact, present themselves in a variety of shapes and sizes, and the blades themselves have dierent features depending on their model.
The blades visually analyzed, while numerous, were not enough to guarantee a suciently big training set. In addition the subjectivity of the visual detection could cause problem with the training on
the smaller damages.
The method proposed resulted eective, highly adaptable to dierent blade models, and it only requires the setting of a limited number of parameters to be properly applied.
As the results show, the method is not absolutely perfect, but it is considered satisfactory for the company. In particular it is worth highlighting that the algorithm is being actually applied by
the company as an extra safety check for blisk blades. The use as an auxiliary tool for visual damage detection on worn down blades was also discussed.
Chapter 1
Problem description
Scope of this thesis is to present an algorithm to detect and classify damages on inner com- ponents of jet engines. The method presented aim to locate automatically the position of the impacts and
analyze their shape.
Foreign object damages, shortened FODs, refers to damages on aircrafts, helicopters, launch vehicles engines or other aviation equipment which take place when a foreign object strikes the engines,
ight controls, airframe and other operating systems [10]. For a more in-depth dissertation on the foreign object nature refer for example to [11] and for their specic eects to [22].
FODs are a serious issue in aviation maintenance industry that can only be properly controlled by performing regular and accurate controls over aircraft and engine components.
Engines in particular tends to draw small objects (and unlucky birds), ice and dust particles along with the large amount of air ingested to function. Any solid materials sucked in can impact with
velocities in the range 100 − 350 m/s depending on the rotational speed of the blades.
In this thesis we will deal specically with engine blades and the dents left on them by FO, with focusing in particular on dents small enough to be hard to detect with direct visual inspections.
Collected data on the shape, size and position of those indents is useful to better understand the FO dynamic in the engine and help design more resistant components.
To explain the interest in small damages must be remembered that deformations big enough to be noticed during regular controls are immediately substituted. Their prevention is mainly dependent on the
the correct following of the safety procedures while the aircraft is on the ground [11]. The eects of smaller indents are less evident but have eect on longer term.
Turbine engines blades experience low-cycle fatigue (LCF) loading due to normal start-ight- landing operations and high-cycle fatigue (HCG) loading due to vibration and resonant airow dynamics. The
small surface indentation caused by FOs impacts can increase the speed of the wearing and become fatigue crack initiation sites [23].
To minimize those kind of failures the components must be substituted on a regular basis as a precaution, even when no damages are immediately apparent. This increases substantially the maintenance
costs, prompting for the companies in the airline industry to research over this subject.
A better understanding of those damages position and eect would provide insights on how to improve the designs of blades and components in general, justifying the interest on the topic.
Obtaining an extensive collection of data on FOD features and locations is a fundamental pre- requisite for any study on their prevention.
Right now the detection and classication of these micro-impact on dismissed blades is per- formed by an operator checking on a 3D visual representation of the surface. The model is obtained from the
physical blades through the use of white light scanner technology.
Figure 1.1: On the left (1.1a) a photo of one engine compressor blade, on the upper right (1.1b) a picture of an engine compressor. On the lower right (1.1b) a detail of damages on one corner.
1.1 Data structure 9
Figure 1.2: Basic structure of a jet engine from [25]. Blades of dierent shapes are present in the fan, low and high pressure compressor, low and high pressure turbines.
This visual analysis is an extremely time-consuming process and various days of work can be spent to analyze a single blade. Those time-frames are necessary to perform an accurate search on the
magnied image of the blades. The reasons are the small dimensions of the damages, in the tenth of millimeters, and the need to look at the model from dierent angles to detect all the imperfections on
An automatic algorithm would not only save time but also give less subjective results.
Before dening the characteristics of the damages, it can be useful to understand the structure of the available data and their simpler properties.
1.1 Data structure
The last years saw a sharp increase in the use of laser scanners to measure a physical object and build a 3D model of it. There are a variety of applications for such technology, for instance to
detect imperfections (as in our case) or to digitalize and mold the structure of a new component to build. Depending of the application there are dierent ways to both measure the object and translate
them into a 3D structure. The scans involved in this work were obtained using a white light scan measuring system to produce a triangular mesh as a stl le.
Stl les describe unstructured triangular meshes, where each triangle composing it is described using its vertexes and the unit normal to its surface. If the normal is not given it is automatically
evaluated to be orthogonal to the triangle surface. Let us suppose that the points are distributed on a horizontal plane, the orientation of the normal depends on the order of the vertexes. Using the
right hand rule the normal will point upwards if the vertices are given in a counter-clockwise order, downwards otherwise.
The structure of the mesh is richer than the cloud of points formed by the triangles vertices.
The triangles and their normal give a oriented piecewise linear surface and also a proximity
Figure 1.3: White light scanner mounted on top of a robotic arm and one blade to be measured.
Picture from [24].
relation between the vertexes.
As an example the normal direction allow the distinction between inside and outside of a surface, to understand if an imperfection is protruding or indented. The triangles structure can be used to
build a graph connecting vertexes of the same triangles. This graph can then be used to easily select neighborhood of points without measuring the distances.
Working with the scan of one blade, no matter how accurate the scanner can be, implies to deal with an approximated representation of the object and not a perfect one. Measurement includes a certain
level of noise, and can be imprecise in on certain parts of the object, for the blades on the thinner parts of edges where the mesh loses its smoothness. The data involved in this work presented a
noise level of 0.001 mm, one order of magnitude smaller than the minimum depth of a relevant damage (0.02 mm), giving a tight safety margin.
To maintain a error level small enough to detect the damages the order of magnitude of the number of points involved is in the millions. This implies high costs in terms of memory and computational
time for every analysis involving the whole mesh.
About the distribution of mesh vertices is interesting to notice that the point distribution along the surface is not uniform in the data used. The mesh construction starts from distinct measurement
taken from dierent angles and return a mesh with a higher density of points in the areas with greater curvature and less points on the atter portions. This allow for a smoother result by keeping the
angle between adjacent faces normal under a threshold without using all the measurement data.
The coordinate system is set to be the same for every blade, using a bolt near the base as a reference point. This rototranslation of the whole mesh allow the data to be standardized, at least in a
macroscopic sense.
This procedure is performed separately and won't be discussed in this work, but is important to notice that translating and rotating the mesh is not sucient to t it to a reference model.
Precisely because the blades analyzed are worn it can not be expected for them to be identical.
Trying to use a pristine one as a reference model shows dierences in the blade shape much bigger than the damage size. The causes of those deformations are the mechanical stress the blades are
subject and the impacts close to edges and corners.
The combined eect of those forces causes small bends to appear on the thinnest parts of the blades without predictable patterns.
In addition small pieces on edges, corners and topmost portion get occasionally chipped o
1.2 Damages classication 11
Figure 1.4: Worn blade mesh scan, corner detail. FODs are highlighted in the green, in the red examples of corrosion and yellow marks some dubious cases.
because of erosion and grazing impacts. This means that is not possible to nd reliable reference points usable to identify those deformation. Performing non-rigid transformations to adjust those
small deformation without changing the shape of the damages is then dicult.
The aligned meshes then have to be assumed similar but with small dierences, mainly on the thinnest parts. The scans are too dierent from each other to use a common reference model of the blade to
make a comparison.
1.2 Damages classication
Detecting FOD on blades does not mean to locate every imperfection on the surface. Even excluding the noise involved in the measuring process, impacts are not the only source of damages.
Another source of defects to be kept into consideration is corrosion, which gradually erodes the surface giving it an orange peel appearance. The cavities caused by corrosion are extremely similar in
size to the impact ones and diers mainly on their shape. ^1
The analysis should be able to distinguish between the two causes of damages. Visually the dierence between impact and corrosion is usually clear, but there are no known criteria or parameter for
this specic task.
Qualitatively speaking impact damages are rounder, deeper and more dened while corrosion is wider, more shallow and often elongated and branched with irregular patterns. Those unfor- tunately are
only guidelines, sometimes corrosion run deeper than small FOD, or the impact is inside a corroded part.
On the edges of the blade the impacts are instead more elongated and scratch-like, given the dierent type of impact involved [21].
The shape of the potential FOD is the main discriminant, simply looking at their dimensions, length width and depth, does not usually allow for a safe distinction. What we look for are then the
"rounder" and deeper impacts, while everything else will be classied as corrosion.
One common feature for both impacts and corrosion is that the depth is usually one order of magnitude smaller than length and width of the damage . This detail is important during the
1There are also other types of damages possible, like cracks, but will not be discussed in this work because their size was too small to be detected by the scanner. Given more accurate measurements
the same approach proposed should still be usable to detect them, with dierent search parameters, as shown in [2].
analysis of the damages shapes.
Once again is important to highlight the small dierence between the two kind of damages in both dimensions and shape. The minimum size of a relevant impact is extremely small, with a depth in the
tens of micron, similar to the corrosion.
The accuracy requirements explain why non-rigid transformations of the blades are risky, and why small deformations of less than a millimeter spread along the length of the whole blade present such a
big obstacle.
Chapter 2
State of the art
The problem of identifying and measuring damages or imperfections on scans of physical objects is surely not new. Various works which propose possible solutions has been published, in both general
and specic cases. However, the requirement for this specic application are too restrictive to apply one of the available methods.
The algorithm proposed here is similar to the one in [2] mainly regarding the methods used to classify the structure of potential damaged zones. The identication of the damaged zones is instead an
original idea.
In this chapter the algorithm presented in the article is described to give an idea on how this type of problem is solved in a dierent eld of application. The mathematical methods used both there and
in our solution will also be shown, while the algorithm we propose will be discussed in the next chapter.
2.1 Previous work on the problem
The article [2] describe an algorithm to detect damages on wind turbines. The authors propose a method based on the comparison between scans of the objects and a 3D reference model of a pristine
turbine to locate the damages. The imperfections identied are then distinguished between impacts, cracks and corrosion.
The most remarkable dierence between the application discussed in the article and the one of this thesis is the size of the analyzed objects.
While jet engine blades are even less than 10 centimeters big the size of a wind turbine varies from 40 to 90 meters in diameter [7] with blades up to 80 meters long [8]. This dierence has
consequences over the level of deformation acceptable and the size of the regular damages. A bending of 0.1 millimeters is enough to prevent the use of a reference model for engine blades, the same
level of deformation is instead irrelevant for wind turbines.
For the same reason the size of damages of interest is vastly dierent, but once re-scaled they should present roughly the same shape and features.
Despite the dierences between applications, the fundamental ideas on how to classify the imperfections presented in [2] are also useful for our problem, with due modications.
Getting back to the algorithm, the authors rst divide the surface in smaller patches and build a parametric approximation of each of them. Those approximating surfaces are then compared to the
corresponding portion of the reference model. The model is also dened piecewise on the same portions of the object, and with the same kind of approximating function.
By evaluating the dierences between the two surfaces the locations of the damages are identied as the locations where the dierences are higher than a minimum threshold. The
(a) (b) (c)
Figure 2.1: In green the points to be modeled, the grid represent the approximant. The result in 2.1c, with only half the surface, is clearly more accurate to the one with superimposition of points
(2.1a) or almost vertical sides (2.1b).
focus is then shifted to classifying the imperfections based on their type, specically impacts, cracks and corrosion. This is done by considering both the dimensions of the clusters of points
composing the damages and the principal component analysis of the dierences on those same points.
The necessity of those intermediary steps is discussed in the following sections, starting from the reason behind the initial partitioning.
2.1.1 Motivation and objective of the partitioning
While it is possible to build a local approximation of even complex shapes, if the graph of a function is used to parametrize it (as in both the article and our case) the surface structure must be
simple enough. Horizontal folds or (almost) vertical parts of the meshes make the mesh impossible to be modeled with this kind of approximant as shown in Fig:2.1.
Working with smaller windows is useful to simplify the geometry of the portion involved and to decrease computational cost of many operation.
For the same reason each patch should be oriented to reduce the presence of steep inclines as much as possible.
An example could be the diculty of parameterizing a folded surface compared to the same task performed on the separated layers. This is exactly what happens on the edges of the blade's faces. Such
portions should be divided and a local approximation build, after applying proper rotations.
The problem then is to decide how the partitioning should be performed to simplify the structure as much as possible and to minimize the problems associated with the parametrization.
The procedure have to also be automatized, so this choice have to be independent of the small variations that appear in the scans and in this regard adaptable once set for a class of objects. This
issue is not discussed further in the article, the solution proposed can be found in the next chapter.
Once the partitioning has been performed, and the patches are oriented in a proper way, the authors use spline functions to build a parametric approximation of them.
2.2 Splines functions and approximation 15
2.2 Splines functions and approximation
Given an interval [a, b] ⊂ R close and limited and given a = x1 ≤ x[2] ≤ · · · ≤ x[n] = b a spline of degree m is a C^m−1([a, b]) that, in each [xi, x[i+1]] ∀i ∈ {0, 1, . . . , n − 1} is a m degree
polynomial. The xi points are called knots.
A B-spline function, or basis spline, is a spline with minimal support with respect to his degree and the domain partitioning. To allow a B-spline to be dened also on the edges of a interval, the
knots are usually padded by repeating the extremes a and b a number of time equal to the degree desired.
With an interval dened as above, B-splines are recursively dened as Bi,0(t) =
1 xi 6 t 6 xi+1
0 otherwise
, 1 6 i 6 n B[i,j](t) = [x]^t−x^i
i+j+1−x[i+1]B[i+1,j−1], 1 6 j 6 m, 1 6 i 6 n + m − j.
where Bi,j(t)is the B-spline centered in xi of degree j and t is a point of the domain.
We dene spline functions of d degree as linear combinations of d degree B-splines, and can be seen as
s(t) =
B[i,d](t)ci ∀t ∈ [a, b] (2.2)
where ci are called control points, whose scalar values are associated with each of the B-splines.
They can be seen as the amount of "pulling" or "pushing" performed on each knot, with s being the resulting curve.
Since splines are polynomials piecewise, in virtue of the following theorem any continuous function can be uniformly approximated by them.
Theorem 2.1 (Weierstrass approximation theorem). Suppose f is a continous real-valued func- tion dened on the real interval [a, b] . For every > 0, a polynomial p exist such that, for all t in [a,
b], we have | f(t) − p(t) |< .
To improve the accuracy of a polynomial approximant is necessary to increase its degrees.
With splines the degree can be kept xed changing instead the number of control points (and knots). This is particularly useful for interpolation purposes, where increasing the degree of polynomials,
even piecewise, can produce oscillating results as shown with the Runge function example.
Going from two to three dimension, B-splines are dened as the tensor product between two-dimension ones. This means that, given a domain [a, b]x[c, d] ⊂ R^2, a 3D B-spline on the (i, j)knot of the
grid produced by crossing a knot vector {x0, . . . , x[n][x]} ⊂ [a, b]over another knot vector {y0, . . . , yny} ⊂ [c, d], of degree dx along the rst axis and dy along the second, is dened as
x),(j,dy)(u, v) = B[i,d][x](u)B[j,d][y](v) Their linear combination is then in the form
s[c](u, v) =
i=1 ny
B[i,d][x](u)B[j,d][y](v)c[i,j] ∀t ∈ [a, b] (2.3) where the control points are still associated to the various B-splines.
The parameters necessary to dene a function like sc(u, v)are then the number and position of the knots, the degrees of the splines along each axis and the values of the control points.
Figure 2.2: From left to right: 2D B-spline basis for 2nd degree splines, 3D B-spline basis with bi-variate 2nd degree splines and example of spline surface.
Increasing the number of control points, and of the respective B-splines, increase the degree of freedom of the spline function, allowing a potential better t when the spline is used as an
approximating function.
Once the knots are placed and the degrees are set, is possible to nd the values of the control points to approximate a set of points by minimizing the energy functional
E(c) =
i=1 ny
B[i,d][x](u[k])B[j,d][y](v[k])c[ij] − z[k])^2 (2.4)
with Pk = (u[k], v[k], z[k]) measured points on the surface. Minimizing the square values of the dierence is a natural way to obtain a good approximation, and while not the only method possible, is
certainly the most widespread.
Is also possible to build the B-spline basis on knots that are not equispaced, to increase the accuracy in certain part of the domain. In our implementation the position of the knots was determined
in the same way as for the value of the control points.
Splines are not the only possible choice as an approximating functions, even polynomials can be used for the same purpose.
2.2.1 Alternative approximating functions: polynomials
A simpler alternative to spline approximants are polynomials. Our datasets are composed of surfaces in a 3D space so is natural to work with bivariate polynomial. From now on by polynomials of m by n
degrees, or mxn for short, we mean
p(x, y) =
i=0 n
c[ij]x^iy^j (x, y) ∈ R^2, c[ij] ∈ R (2.5)
where the constants cij unequivocally dene it. To approximate a set of N points with a poly- nomial one possible solution is minimizing the energy functional
E(c) =
i=0 n
cijx^i[k]y^j[k]− z[k])^2 (2.6)
over c = (c00, . . . , cmn).
In both (2.6) and (2.4) the most natural way to compute the values of the parameters is the least square method.
2.3 Least square approximation 17
2.3 Least square approximation
Suppose f : R^d−→ R is an unknown function and that we are given a set of measurements (x[i], z[i]) = ((x[1], . . . , x[d])[i], z[i]) for i = 1, . . . , N, where zi = f (x[i]) + [i] with i
measurement error.
We want to build an approximating function s ∈ S where S is the subspace spanned by the functions B1, . . . , B[k] (with k N). We can then express s in function of a parameter vector c= (c[1], . . .
, c[k]) ∈ R^k as
s= sc=
Bjcj . (2.7)
To dene c with the linear least square method we choose the values that minimize the energy functional
E(c) = 1 2
(s[c](x[i]) − z[i])^2 = 1 2
B[j](x[i]) − z[i])^2 (2.8) that is equivalent to nd the solution for
kBc − bk[2] (2.9)
where Bij = B[j](x[i]) and bi = z[i].
Sometimes can be useful to add some regularizing components to avoid excessive oscillation in the resulting approximating function, or to just get a smoother solution. One way to do this is by using
a penalized version of linear least square approximation.
2.3.1 Penalized splines
To obtain a smoother spline approximant is possible to add a weight term to (2.4) to control a specic feature of the resulting function. Ss a penalty term we could for example take the thin plate
spline energy as in [6]:
J(c) = Z b1
Z b2
s^2[xx](u, v) + 2sxy(u, v) + syy(u, v)^2dvdu (2.10) where s = sc. This can be expressed, once the (2.3) is substituted, as
i=1 ny
j=1 nx
r=1 ny
E[ijrs] = A[ijrs]+ 2B[ijrs]+ C[ijrs], Aijrs = Rb1
a1 B[i,d]^00 [x](u)B^00[r,d][x](u) duRb2
a2 B[j,d][y](v)B[s,d][y](v) dv , B[ijrs] = Rb1
a1 B[i,d]^0
x(u) duRb2
a2 B^0[j,d]
y(v) dv , C[ijrs] = Rb1
a1 B[i,d][x](u)B[r,d][x](u) duRb2
a2 B^00[j,d]
y(v) dv .
How such weight can be included during the research of the control points is discussed in the next subsection, along with the importance of the correct choice of λ.
2.3.2 Penalized least square approximation
Least square tting is useful for its simplicity and for the noise ltering property [4], coming from the disparity between number of data and parameters. To get a smoother result, to further reduce
the noise or to control particular features to s, a penalty term J(c) can be added to (2.8) before minimizing the result.
Many smoothing terms can be expressed as J(c) = c^TEcwhere E is a symmetric non-negative matrix in R^k×k. This result is a new energy functional in the form
E[λ](c) = 1 2
Bj(xi) − zi)^2+1
2λc^TEc (2.11)
where λ ≥ 0 is the weight applied to the penalty(or smoothing) term.
This allow the following:
Denition 2.2. The penalized least squares t of the function f based on data (xi, z[i]) for i= 1, . . . , N is the function sc(λ) where c(λ) minimize Eλ(c).
Under the minimal hypothesis that B has full rank the following holds
Theorem 2.3. For any λ ≥ 0 there exist a unique vector c(λ) minimizing the functional Eλ(c) in (2.11). In particular, c(λ) is the unique solution of the system
(B^TB+ λE)c = B^Tb . (2.12)
Proof. Setting the gradient of Eλ(c) equal to zero gives (2.12) and the condition on B ensure that G = B^TB is symmetric, positive denite and non singular. Given that E is symmetric and non-negative
by denition we get that B^TB+ λE is also symmetric and non-negative dened, so the solution exist and is unique.
Let's now consider the mean square error of the t
T[(x,z)](λ) = 1 N
[s[c(λ)](x[i]) − z[i]]^2
and see how the value of λ inuence it. In case of data not aected by noise, with i = 0 ∀iholds Theorem 2.4. The function T(x,z)(λ) is monotone increasing λ ≥ 0 with ˙T(x,z)(0) = 0 and lim[λ−→inf]T˙
[(x,z)](0) = 0.
This means that in absence of noise the smoothing of the approximant can only maintain or worsen the error level. However the data available are almost never noiseless. In the simplest case possible,
where i are normally distributed with mean 0 and variance σ^2, we have
T[,(x,z)](λ) = T[(x,z)](λ) + 2^TA(λ)^T(A(λ)c − b) + ^TA(λ)^TA(λ)
where A(λ) = B(G + nλE)^−1B^T, whose mean value is
εT[,(x,z)]= T[(x,z)](λ) +σ^2A^2(λ)
n . (2.13)
The behavior of this new energy function is described by the following
2.3 Least square approximation 19
Figure 2.3: A surface example with one damage as the indent in the center and localized noise to simulate corrosion. The base structure is the one of −x^2
10 +y^2
10 polynomial.
Figure 2.4: The upper row shows the spline approximations of Fig:2.3 without penalty (2.4a) and with λ = 10 (2.4c). The penalized case does not model the damage, the regular one partially ts the
indent. In the lower row the dierence between approximation and data for the two cases.
The penalized case 2.4d, being less exible, has higher error on the border and slightly higher dierences in the impact.
Theorem 2.5. The function εT,(x,z) has value εT,(x,z)(0) = T[(x,z)](0) + ^kσ[n]^2 and asymptotically approach the value T(x,z)(inf) + ^kσ[n]^2 as λ −→ inf. It's derivative is negative for λ = 0 and
εT[,(x,z)](λ) − εT[,(x,z)](0) ≥ σ^2
n (t(λ) − t(0)) (2.14)
where t(λ) = trace(A^2).
This shows that in presence of noise λ = 0 is not the optimum and a higher value can lead to better results. Unfortunately the only way to determine a good choice of λ is by testing it on multiple
values, requiring a trial-and-error strategy.
2.4 Strategy to detect the location of damages
In article [2] the location of the damages is found using the computed parametric approxi- mation of the patches and with a spline reference model of the same portions analyzed. The two functions are
compared by computing the absolute value of the dierences between corresponding control points.
The dierences are evaluated only on the control points, the original points of the mesh are not considered anymore. It's then clear the necessity of a number of control points sucient to ensure that
every damage is not only modeled, but also has enough of them to be properly analyzed. The exact amount of points depend on the size of relevant damages: the smaller the minimum acceptable damage,
the more control points are required.
Working only with the control points fasten this part of the analysis but the computational price is paid searching the approximant. An high enough number of control points would make the combined
cost of nding the approximant and performing the damage analysis higher than using the initial point directly.
The dierence values are all non-zeros because of small error of measurement, alignment or approximation. To remove this noise the dierences must be ltered with a minimum threshold level to remove
this kind of noise.
The choice of this threshold is particularly important: too small and undamaged parts will be identied as damaged by the algorithm spoiling the quality of the following analysis, too high and small
damages will be ignored or will have too little points to be properly analyzed.
The list of dierences between control points is divided between the ones with a variation too small, that get rounded down to zero, and the ones with the relevant dierence. The latter are the only
one that will be considered in the following parts.
Once separated, by some proximity or connectivity criteria, the points forming the damages need to be studied separately for each distinct cluster to determine their type and importance.
2.5 Damage classication
To determine what type of damage is being analyzed various parameters can be checked.
The easiest and most natural features to be extracted are:
• damage area,
• maximum distance between two point of one cluster, as an indicator of how "long" one damage is,
• depth as the maximum value for the dierences.
2.6 Principal component analysis 21
Figure 2.5: Example of dierence between data and model, on the left unltered on the right cropped.
This is not enough to distinguish the causes of the damage, but already gives an idea on how big (and then dangerous) one imperfection is.
The next step is to perform an indirect form of measurement by using the principal component analysis. Using the ltered dierences as inputs it is possible to extract an indication of the shape of the
indent through the normalized vector of the principal component values.
To better understand how PCA can help detecting the shape of a cluster of points it's continuous and discrete formulations are presented.
2.6 Principal component analysis
Suppose that x is a (vertical) vector of p random variables and the structure of variance and covariance is of interest. The covariance matrix is dened as
Σ = E[(x − µ)(x − µ)^t] (2.15)
where µ is the mean value vector of the p variables and E is the expected value operator. The variance V ar(x) of the variables is the diagonal of Σ matrix, to obtain the variance of xi we can then
compute V ar(xi) = e^t[i]Σe[i] with ei ∈ R^p equal to zero on every entry except for the i-th where is equal to one. Using Σ and the variance is possible to compute the correlation matrix, which
component denition is
corr[i,j](x) = Σ[i,j](x)
V ari(x) ∗ V arj(x) ∀1 ≤ i, j ≤ p. (2.16) Unless in some simple cases, looking at the covariance matrix usually is not really helpful to understand the variables dynamic. Another method to obtain
better information is necessary.
Principal component analysis (PCA) is a statistical method that returns linearly uncorrelated variables from potentially correlated ones by using orthogonal transformations. The new variables are
called principal components (PC), are orthogonal to each other and, in decreasing order for the highest variance along the direction orthogonal to the previous. A detailed description of the
following part can be found in [5].
Figure 2.6: Examples of damages shown in [2], on the top-left impact damage, on the top-right crack damage and corrosion on the bottom, with respective normalized PCA values.
To do that we seek for a linear transformation l1(x) = α^t[1]x, where α1= [α1,1, . . . , α1,p] ∈ R^p, such that the variance V ar(α^t[1]x) = α^t[1]Σα[1] is maximized. Then for α2 uncorrelated with
the previous α1 with the same property of maximum variance. We repeat the process choosing new αi uncorrelated from all the previous, until αp included. To avoid innite results for the αi
values the condition α^t[i]α[i] = 1 is imposed.
To maximize V ar(α^t1x)subject to α^t1α1 = 1we can use the Lagrange multipliers maximizing
α^t[1]Σα1− λ(α^t[1]α[1]− 1) (2.17)
where λ is a Lagrange multiplier. Dierentiating with respect to α1 and looking for the zero in the derivative gives
(Σ − λIp)α1 = 0
where Ip is the (pxp) identity matrix. This imply that λ is an eigenvalue of Σ corresponding to the eigenvector α1. To decide which couple of eighnvector and eigenvalue to choose we must remember
that we were maximizing
α^t[1]Σα1= α^t[1]λα1 = λ
so we need the maximum eigenvalue of Σ, and the corresponding eigenvector is α1. The second PC, which maximize V ar(α^t2x) is subject to α^t2α2= 1 and uncorrelated to α1 that means that
0 = α^t[2]Σα1 = α2λα[1]= λα2α[1] = 0 thus one of the following must hold:
α^t[1]Σα2 = 0 α^t[1]α2 = 0. (2.18)
Choosing the second one and using the Lagrange multipliers we obtain
α^t[2]Σα2− λ(α^t[2]α2) − φα^t[1]α2 (2.19)
2.6 Principal component analysis 23
Figure 2.7: Examples of use of PCA to determine the axis of a point cloud representing an ellipsoid.
if we dierentiate with respect to α2 looking for the maximum, and multiply on the left for α^t[1] what we want to solve is
α^t[1]Σα2− λ(α^t[1]α2) − φ(α^t[1]α1) = 0
but the rst two terms are equal to zero because of (2.18) so φ must be zero. Then, removing the third term from (2.19), we obtain the same as (2.17) but for α2 that, combined with (2.18), implies
that α2 is the eigenvector relative to the second bigger eigenvalue of Σ, and orthogonal to α1. The procedure is similar for the following PC.
This can be done by computing the SVD decomposition of the covariance matrix: the PC values λi here then the diagonal entries of the S matrix and the PC directions are the column of V.
However the data available are not continuous variables but a discrete set of points, requiring then a dierent denition of the variance matrix.
2.6.1 Sample Principal Component Analysis
Suppose that we have n indipendent observation on the p-elements of the random vector x, denoted as x^1, . . . , x^n. Let's dene zi,1 = a^t[1]xand choose the vector a1 to maximize the sample variance
Σ[n]= 1 n −1
(z[i,1]− z[1])(z[i,1]− z[1])^t (2.20)
where z1 = 1 n
i=1xi,1 is the sample mean value and xi,1 the rst element of the i-th sample vector.
The result obtained in the previous section still hold for the sample formulation.
Principal component analysis can still be performed by nding eigenvectors and eigenvalues of the covariance (or correlation) matrix of the set of data. To do that the singular value decomposition can
be used.
To prove that is useful to remember that the covariance matrix is symmetric positive semi-
denite. In fact
y^tΣ[n]y = y
n −1 Pn
i=1(z[i,1]− z[1])(z[i,1]− z[1])^t
= 1
n −1 Pn
i=1y^t(zi,1− z[1])(zi,1− z[1])^ty=
= 1
n −1 Pn
i=1((zi,1− z[1])^ty)^t((zi,1− z[1])^ty) =
= 1
n −1 Pn
i=1((zi,1− z[1])^ty)^2 ≥ 0 ∀y ∈ R^p
Σ^t[n] = ( 1 n −1
i=1(z[i,1]− z[1])(z[i,1]− z[1])^t)^t
= 1
n −1 Pn
i=1((zi,1− z[1])(zi,1− z[1])^t)^t= Σn.
The spectral theorem then state that the matrix is diagonalizable using the eigenvectors as an orthonormal base i.e. Σn = QΛQ^t with QQ^t = Q^tQ the identity matrix and Λ diagonal matrix with the
eigenvalues λ1≥ λ[2] ≥ ... ≥ λ[p] ≥ 0 as its diagonal entries d11≥ d[22]≥ ... ≥ d[pp].
Computing the singular value decomposition will return Σn= U ΛV^t= QΛQ^tthen U = V = Q and Λ = S. The diagonal entries of S are then the eigenvalues of Σn and the rows of V^t of the columns of U are
the corresponding eigenvectors. The PCA results can then be extracted from the singular value decomposition of the covariance matrix.
2.7 PCA applications: shape analysis and orientation
The principal components directions are an alternative reference system which has the prop- erty of having the greatest variance along the rst axis, the second greatest (orthogonal to the
rst) on the second and so on. In this new coordinate system the variables are distinguished based on variance and this, for example, helps to view the data from a better angle.
To classify one nding as impact damage the rst two components, representing length and width of the cluster of points, should be similar to select the ones round enough. The third component, alias
the depth, should be smaller but still way bigger than what instead happens on the corroded locations, where the damage is lager and shallower.
The relative size of the damages dimensions guarantee that the depth is always the smaller of the three dimension for every imperfection, corrosion included^1.
These qualitative consideration must be quantied and adapted depending on the problem at hand, to consider the shape and size of the damage to be found.
The third PC value also gives us an indication of the relative magnitude of the depth compared to length and width. As shown in [12] some indicators can be easily extracted from the PC values
Linearity : L[λ]= ^λ^1[λ]^−λ^2
P lanarity: P[λ] = ^λ^2[λ]^−λ^3
Sf ericity : S[λ]= ^λ[λ]^3
Change of curvature: C[λ] = [λ] ^λ^3
and the general shape of the object can be recovered from them.
This obviously does not mean that the results of PCA are meaningful on any set of data:
outliers and more complex shapes can spoil the quality of the results as shown in Fig:2.8. Selecting only the point belonging to the damages will be necessary to perform a correct analysis.
1A third kind of damage they consider are cracks or ssures, in this case the second components is expected to be sensibly smaller than the rst one.
2.7 PCA applications: shape analysis and orientation 25
Figure 2.8: Example of use of PCA and its limitations, the set of data must be chosen correctly to get useful results. Damages must be delimited correctly to distinguish them between each other and
to exclude undamaged portion of the surface. | {"url":"https://123dok.org/document/y8grxp12-state-of-the-art.html","timestamp":"2024-11-04T11:54:15Z","content_type":"text/html","content_length":"212372","record_id":"<urn:uuid:72a74f90-5d3a-42ff-a506-eaf066d19b2a>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00211.warc.gz"} |
How do you find the domain and range of y= 2 |x+1|-1? | HIX Tutor
How do you find the domain and range of #y= 2 |x+1|-1#?
Answer 1
The domain is all real numbers.
The range is $\left[- 1 , \infty\right]$
We can easily see that for whatever number we put in #x#, we can always get a #y# value.
For the range, we have to find the least value possible of #2abs(x+1)#. We see that we can do so when #x=-1#. #y=2abs(x+1)-1# #y=2abs(-1+1)-1# #y=-1#
That is the lowest value possible. Now, we can logically see that there is no limit to how large #y# could be, since the further away #x# is from -1, #y# gets larger. (We have covered previously that
#x# could be any real number.)
Therefore, the range is #[-1,oo]#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To find the domain and range of the function ( y = 2 |x + 1| - 1 ), we first determine the domain by considering all possible values of ( x ) that make the function defined. Since the absolute value
function ( |x + 1| ) is defined for all real numbers, there are no restrictions on the domain of ( x ).
Therefore, the domain of the function is all real numbers.
Next, to find the range, we analyze the behavior of the absolute value function ( |x + 1| ). The absolute value of any real number is always non-negative. Therefore, ( |x + 1| \geq 0 ) for all values
of ( x ).
Since ( y = 2 |x + 1| - 1 ) involves multiplying the absolute value function by 2 and then subtracting 1, the smallest possible value of ( y ) occurs when ( |x + 1| = 0 ), which results in ( y = -1
As ( |x + 1| ) increases, ( y ) increases at twice the rate, but it is always reduced by 1. Therefore, the range of the function is all real numbers greater than or equal to ( -1 ).
Therefore, the range of the function is ( y \geq -1 ).
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-find-the-domain-and-range-of-y-2-x-1-1-8f9af93f76","timestamp":"2024-11-02T05:35:23Z","content_type":"text/html","content_length":"572125","record_id":"<urn:uuid:bd16bbde-a6fa-42ce-8d82-06f25dee56ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00836.warc.gz"} |
Discrete and Continuous Models and Applied Computational ScienceDiscrete and Continuous Models and Applied Computational Science2658-46702658-7149Peoples' Friendship University of Russia named after
Patrice Lumumba (RUDN University)1837110.22363/2312-9735-2018-26-2-176-182Research ArticleDevice for Periodic Modulation of Laser RadiationKomotskiiV Aprofessor, Doctor of Technical Sciences,
professor of Institute of Physical Researches and Technologies of Peoples’ Friendship University of Russia (RUDN university)komotsky_va@rudn.universitySokolovYu MCandidate of Physical and
Mathematical Sciences, Head of laboratory of Institute of Physical Researches and Technologies of Peoples’ Friendship University of Russia (RUDN university)sokolov_yuri@mail.ruSuetinN VEngineer of
Institute of Physical Researches and Technologies of Peoples’ Friendship University of Russia (RUDN university)ponama911@gmail.comPeoples’ Friendship University of Russia (RUDN university)
1512201826217618221042018Copyright © 2018, Komotskii V.A., Sokolov Y.M., Suetin N.V.2018In this paper we consider a new type of mechanical device for periodic modulation of laser radiation. The
modulating unit consists of two phase diffraction gratings with a rectangular profile, one of which moves relative to the other. The output beam of radiation in this device can be either a zero-order
beam of diffraction, or one of the first orders of diffraction. The results of numerical simulation of output waveforms are presented. In the first order, we obtain a sinusoidal form of output power
modulation with an efficiency of up to 40 percent. Optimal parameters of the phase diffraction gratings are calculated. Modulation produced in the zero order of diffraction has an impulse form with
an efficiency of about 80-90 percent. The specific shape of the pulses in the zero order of diffraction depends on the distance between the two gratings. The results of numerical calculations and
experimental studies are in good agreement. A special advantage of this type of modulator is the possibility of increasing the frequency of mechanical modulation of the laser beam to hundreds of kHz.
The results of experimental studies of the characteristics of the scheme under consideration are presented. The device makes it possible to obtain modulation frequencies up to hundreds of kHz with a
harmonic waveform in the first orders of diffraction and periodic pulses in the zero order.modulation of a laser beaman optical modulatordiffraction gratingsdouble diffraction on phase
gratingsмодуляция лазерного пучкаоптический модулятордифракционные решёткидвойная дифракция на фазовых решёткахIntroduction Optical choppers are widely used during physical experiments. Optical
chopper is a rotating disk punctuated with holes or slits. Since this type of laser modulator is mechanical in nature, the maximum frequency is limited to the several kHz. In addition, when the laser
beam intersects the borders of holes, diffraction effects occurs, which distort a shape of the output beam. The other type of mechanical laser beam modulator described in [1] allows obtaining up to
100% modulation of laser radiation power, but the maximal frequency of modulation is also low. In this report, we present results of investigations of the device where laser beam modulation occurs as
a result of sequential diffraction by two phase diffraction gratings, one of which is being moved relatively to the other one in the direction across grating lines. There are specific diffraction
gratings which have a rectangular “meander” type profile formed by a relief on a transparent substrate used in this device. With applying of this device it is possible to increase the frequency of
modulation of the laser beam up to hundreds kHz. In the special case, when we use the first diffraction order as output beam, it is possible to obtain its power modulation according to the harmonic
law. When we use the zero diffraction order as output beam, it is possible to get the modulation of the output beam in the form of periodic pulses of a specific shape with a peak power close to the
radiation power at the input of the device. 2. Theoretical Analysis The modulator scheme is shown in Fig. ??. The device includes a laser 7, a modulating unit (1-4, 6) and a spatial filter 5. The
modulating unit consists of two transparent disks 1 and 3. There are two identical circular relief diffraction gratings (DG) 2 and 4 of the same period L, located on the periphery of disks surfaces
at distances RG from the centres of these disks. Disks are located in parallel planes at a small distance from each other. The distance lz between the DGs, which are located on the surfaces of the
disks, facing each other, must satisfy the condition: lz ≪ L2∕l, where l is the laser wavelength. One of the disks is rotated by the motor 6 relative to the second disk, making n revolutions per
second (rps). PIC Figure 1. Schematic diagram of the laser light modulator with modulating unit containing two DGs The relief has a specific form of the “meander” type, with the width of the
protrusions equal to the width of the grooves. The lines of the relief of the DG are located along the radial directions to the center of the disk. The optimal depth of the relief of the DG, at which
the modulation in the zero and in the first orders has the maximum amplitude, is calculated by the following formula: h OPT= l 4(ng - 1),(1) where ng is the refractive index of the substance of which
the relief is made. When the depth of the relief is equal to h OPT,thedepthofopticalwavefrontphasemodulationisequaltoDf = p∕2, and the amplitude of the optical wave spatial phase modulation (SPM)
is F M = Df∕2 = p∕4. The interaction of optical wave with the system of two diffraction gratings, as shown on the Fig. ??, was considered in [2]. This analysis shows that the radiation power in
diffraction orders depends on several parameters: the distance between the DGs, the displacement of one grating relative to the other one in the 0x direction, as far as the amplitude and shape of the
SPM which obtained by the optical wave after propagating through each of these phase type gratings. PIC Рис. 2. Diagram showing an optical wave propagates through a system of two DGs As follows from
theoretical analysis we can get the best results, when we use DG of the special rectangular meander profile, since there are no even diffraction orders in the diffraction spatial spectra of these
DGs. Only in this special case periodical oscillation of the output optical beam power of the first orders of diffraction as a result of one of the DGs moving relative to the other occurs according
to a harmonic law. Power oscillations of the zero order will be periodical also. But these oscillations would not be purely sinusoidal. When the optimal depth of the gratings, h
[1] : I±1(x) = 2 p2 + 2 p2 cos 2p L xmL, (2) L = p l L2lz - the distance parameter. The intensity coefficient in the zero order of diffraction is described by a more complicated formula which
contains infinite series of harmonics [1]: I0 = 1 3 - 4 p2 е k=0+Ґcos (2k + 1)2L (2k + 1)2 cos (2k + 1) 2p L x+ + 8 p4 е k=0+Ґе kў=0+Ґcos(4(k2 - kў2 + k - kў)L) (2k + 1)2(2kў + 1)2 cos 2(kў- k) 2p L
x+ + 8 p4 е k=0+Ґе j=0+Ґcos 2(k + j + 1) 2p L x + 4(j2 - k2 + j - k)L (2k + 1)2(2j + 1)2 ,k№kў.(3) Power of the beam radiated to the order of diffraction with the number n (n = 0,1,-1) is related to
the intensity coefficient and to the radiation power P inmeasuredattheinputofthedevice,byformula : Pn = hInPin. (4) h is coefficient of effective use of power, taking into account radiation losses in
the optical scheme due to reflection and absorption. As follows from formula (??), the output power in the first diffraction order varies according to a harmonic law from zero to a maximum value
which is equal to P1 max = (4∕p2)hPin. The intensity of the zero diffraction order also varies with a period equal to L. The shape of the intensity dependence on the displacement of DG is similar
to pulse shape. The shapes of these pulses are changed in a complex way with changing of the parameter L. Dependences of power of different diffraction orders on the displacement of the DG,
calculated by formulas (??) and (??) are presented in Fig. ??a. The amplitude of modulation of the zero order beam power reaches the value P0 max = 0.9hPin (provided parameter L = 0.05). For the
purpose of experimental investigation of these dependences the special setup with use of two phase gratings of a rectangular profile of period equal to L = 200mm and with relief depth which was close
to the optimum have been fabricated. One of the gratings was stationary. The second one was installed parallel to the first grating on the moving platform. The second grating was driven by a
micrometric screw in the direction across the grating lines. The displacement step was 10 mm. In addition, the design allowed changing the distance between two DGs. A He-Ne laser with a wavelength l
= 0.63mm was used as a source of coherent radiation. Diffraction orders intensities were measured by a photodiode with use of reverse bias. Experimental dependencies of radiation intensities in the
diffraction orders on the displacement of the DG are presented in Fig. ??b. PIC Рис. 3. Calculated (a) and experimental (b) power dependences in the zero and first orders of diffraction on the
displacement for different values of the distance parameter L As can be seen from comparing Fig. ??a and ??b, the shapes of experimental curves are in a good agreement with the calculated ones. When
one grating is being moved in the direction of the 0x axis at a constant speed, periodic modulation of the diffraction orders power in dependence of time would be observed. In the first orders of
diffraction, the modulation shape will be harmonic. Also one can see that phases of the oscillations in the first and minus first orders of diffraction depend on the distance between two gratings. In
the zero order of diffraction the shape of periodic modulation looks like pulses, which amplitudes are about from 0.9hPin to 0.7hPin. 3. Experimental Investigation of the Modulator Setup Experimental
setup was built according to the scheme shown in Fig. ??. Disk sectors with gratings were manufactured using photolithography and chemical etching of glass. The DG period, measured at a distance of 3
cm from the center of the disk, was 150 mm. The amplitude of the SPM of DG was calculated from the measured ratio of the powers of the zero and the first diffraction orders [3]. By measuring results
the amplitude of SPM was close to FM = p∕4, in practice FM = (42СЧ43)в€-. The stationary grating was fixed on the path of the laser beam. The movable grating was installed in the hole on the
surface of the disk. The disc was driven by a DC motor. Gratings were installed in parallel at a distance of about 1 mm. It was possible to tune the position of one of the disks in order to ensure
the parallelism of the grating lines. Photodiodes with load resistors were installed in diffraction orders. Reverse bias voltage was connected to photodiode. In this case, the voltage across the load
resistor is proportional to the power of the radiation incident on the photodiode. The shape of the output signal was recorded using an oscilloscope with a signal recording function. The experimental
modulation characteristics are presented in Fig. ??. The dependence was normalized to the voltage measured on the photodiode load resistor when a laser beam was directed onto the photodiode, with
correction of this value taking into account reflection losses. The modulation curves are very close to the calculated ones. PIC Рис. 4. Experimental dependencies of output beam intensities on time.
Rotation speed was equal to n = 0.15 rps, modulation frequency was F = 190 Hz When the disk rotates, the linear displacement speed of the moving grating relatively to the stationary one is equal to n
= 2pRn. The oscillation frequency is equal to F = n∕L = 2pRn∕L. For R = 3 cm, with n = 0.15 rps and L = 150mm we get calculated value: F = 188 Hz, which is very close to the experimental value of
the modulation frequency F = 190 Hz. With increasing rotation speed of the disk to 100 rps, with the same grating parameters, the modulation frequency will be increased up to F = 125 kHz. 4.
Conclusions The method of laser beam modulation with use of the system of two diffraction gratings is investigated theoretically and experimentally. Modulation frequencies of the hundred of kHz
domain with the use of mechanical type driver are possible. Harmonic type shape of the output beam modulation can be obtained in the first order of diffraction. Pulse type modulation can be obtained
in the zero order of diffraction. Zero-order modulation with amplitude P0 max = 0.75hPV. A. Komotskii, Yu. M. Sokolov, N. V. Suetin, Laser Beam Modulation Using Corner Reflector and Deep Diffraction
Grating, Journal of Communications Technology and Electronics 62 (7) (2017) 822-826. doi:10.1134/S1064226917070063.V. A. Komotskii, Yu. M. Sokolov, Analysis of the Intensities of Diffractional Orders
in Optical Scheme Based on Two Phase Diffraction Gratings, Bulletin of Peoples’ Friendship University of Russia. Series: Physico-Mathematical Sciences 1 (2006) 90-95.V. A. Komotskii, Yu. M. Sokolov,
E. V. Basistyi, Depth measurement of the periodic grooved reflectors of surface acoustic waves using the laser probing, Journal of Communications Technology and Electronics 56 (2) (2011) 220-225. | {"url":"https://journals.rudn.ru/miph/article/xml/18371/en_US","timestamp":"2024-11-03T21:54:14Z","content_type":"application/xml","content_length":"16863","record_id":"<urn:uuid:edc0504d-145e-4f05-903f-fc4511395c00>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00788.warc.gz"} |
Generate new MNIST digits using Autoencoder
Open-Source Internship opportunity by OpenGenus for programmers. Apply now.
Reading time: 30 minutes | Coding time: 20 minutes
Autoencoder is a neural network tries to learn a particular feature of converting an input to an output data and generate back the input given the output. It includes two parts:
• encoder: which learns the features of the data or given answers
• decoder: which tries to generate the answers from the learnt features/ questions
This technique is widely used for a variety of situations such as generating new images, removing noise from images and many others.
Read about various applications of Autoencoders
In this article, we will learn how autoencoders can be used to generate the popular MNIST dataset and we can use the result to enhance the original dataset. We will build an autoencoder from scratch
in TensorFlow and generate the actual images from the MNIST dataset.
Idea of using an Autoencoder
The basic idea of using Autoencoders for generating MNIST digits is as follows:
• Encoder part of autoencoder will learn the features of MNIST digits by analyzing the actual dataset. For example, X is the actual MNIST digit and Y are the features of the digit. Our encoder part
is a function F such that F(X) = Y.
• Decoder part of autoencoder will try to reverse process by generating the actual MNIST digits from the features. At this point, we have Y in F(X)=Y and try to generate the input X for which we
will get the output.
The idea of doing this is to generate more handwritten digits dataset which we can use for a variety of situations like:
• Train a model better by covering a larger possibility of handwritten digits
As there will be multiple features of handwritten digits, until our autoencoder is over trained, we will generate a different set of handwritten digits than MNIST which is expected to differ by a
small amount and will be beneficial in expanding the dataset.
Building our Autoencoder
We use use TensorFlow's Python API to accomplish this.
Import all the libraries that we will need, namely tensorflow, keras, matplotlib, .
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
Define constant parameter for batch size (number of images we will process at a time).
batch_size = 128
Fetch the data from MNIST dataset and load it. Note we take the MNIST dataset to learn the features which we will use to regenerate the dataset.
digits_mnist = keras.datasets.mnist
(train_images, train_labels),(test_images, test_labels) = digits_mnist.load_data()
Create local dataset using tensorflow. Train the data and split it into batches in accordance with the batch size. Reshape each of the image data to equivalent size.
with tf.variable_scope("DataPipe"):
dataset = tf.data.Dataset.from_tensor_slices(train_images)
dataset = dataset.map(lambda x: tf.image.convert_image_dtype([x], dtype=tf.float32))
dataset = dataset.batch(batch_size=batch_size).prefetch(batch_size)
iterator = dataset.make_initializable_iterator()
input_batch = iterator.get_next()
input_batch = tf.reshape(input_batch, shape=[-1, 28, 28, 1])
Iterate through the batches till none are left
init_vars = [tf.local_variables_initializer(),tf.global_variables_initializer()]
with tf.Session() as sess:
sess.run([init_vars, iterator.initializer])
while 1:
batch = sess.run(input_batch)
print(batch.shape) # Get batch dimensions
plt.imshow(batch[0,:,:,0] , cmap='gray')
except tf.errors.OutOfRangeError:
print('All batches have been iterated!')
Encoding phase
This is the encoding function. Use convolutional layers along with padding to help maintain the spatial relations between pixels. Compute the mean, standard deviation and epsilon value. Calculate the
value z using the first 3 values mentioned.
def encoder(X):
activation = tf.nn.relu
with tf.variable_scope("Encoder"):
x = tf.layers.conv2d(X, filters=64, kernel_size=4, strides=2, padding='same', activation=activation)
x = tf.layers.conv2d(x, filters=64, kernel_size=4, strides=2, padding='same', activation=activation)
x = tf.layers.conv2d(x, filters=64, kernel_size=4, strides=1, padding='same', activation=activation)
x = tf.layers.flatten(x)
mean_ = tf.layers.dense(x, units=FLAGS.latent_dim, name='mean')
std_dev = tf.nn.softplus(tf.layers.dense(x, units=FLAGS.latent_dim), name='std_dev') # softplus to force >0
epsilon = tf.random_normal(tf.stack([tf.shape(x)[0], FLAGS.latent_dim]), name='epsilon')
z = mean_ + tf.multiply(epsilon, std_dev)
return z, mean_, std_dev
Note: Z captures the features of the MNIST dataset
Decoding phase
This is the decoding function. Here, we transpose the convulations. But before that apply some non linear transformations using dense layers. To recover the original image, use the unsampling method
from the latent variables.
def decoder(z):
activation = tf.nn.relu
with tf.variable_scope("Decoder"):
x = tf.layers.dense(z, units=FLAGS.inputs_decoder, activation=activation)
x = tf.layers.dense(x, units=FLAGS.inputs_decoder, activation=activation)
recovered_size = int(np.sqrt(FLAGS.inputs_decoder))
x = tf.reshape(x, [-1, recovered_size, recovered_size, 1])
x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1, padding='same', activation=activation)
x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1, padding='same', activation=activation)
x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1, padding='same', activation=activation)
x = tf.contrib.layers.flatten(x)
x = tf.layers.dense(x, units=28 * 28, activation=None)
x = tf.layers.dense(x, units=28 * 28, activation=tf.nn.sigmoid)
img = tf.reshape(x, shape=[-1, 28, 28, 1])
return img
Running the encoding and decoding phase
Link the encoder and decoder.
z, mean_, std_dev = encoder(input_batch)
output = decoder(z)
Reshape input and output to flat vectors.
flat_output = tf.reshape(output, [-1, 28 * 28])
flat_input = tf.reshape(input_batch, [-1, 28 * 28])
Compute the loss function using the binary cross entropy formula. Then calculate the latent loss using the KL divergence formula and finally get the mean of all the image losses.
with tf.name_scope('loss'):
img_loss = tf.reduce_sum(flat_input * -tf.log(flat_output) + (1 - flat_input) * -tf.log(1 - flat_output), 1)
latent_loss = 0.5 * tf.reduce_sum(tf.square(mean_) + tf.square(std_dev) - tf.log(tf.square(std_dev)) - 1, 1)
loss = tf.reduce_mean(img_loss + latent_loss)
Train the model
This is the training loop. For each sample, we create an artificial image and display it. Latent space plot is also being created here
while True:
if flag:
summ, target, output_ = sess.run([merged_summary_op, input_batch, output])
f, axarr = plt.subplots(FLAGS.test_image_number, 2)
for j in range(FLAGS.test_image_number):
for pos, im in enumerate([target, output_]):
axarr[j, pos].imshow(im[j].reshape((28, 28)), cmap='gray')
axarr[j, pos].axis('off')
plt.savefig(os.path.join(results_folder, 'Train/Epoch_{}').format(epoch))
flag = False
writer.add_summary(summ, epoch)
artificial_image = sess.run(output, feed_dict={z: np.random.normal(0, 1, (1, FLAGS.latent_dim))})
with sns.axes_style("white"):
plt.imshow(artificial_image[0].reshape((28, 28)), cmap='gray')
plt.savefig(os.path.join(results_folder, 'Test/{}'.format(epoch)))
if FLAGS.latent_dim == 2 and FLAGS.plot_latent:
coords = sess.run(z, feed_dict={input_batch: test_images[..., np.newaxis]/255.})
colormap = ListedColormap(sns.color_palette(sns.hls_palette(10, l=.45 , s=.8)).as_hex())
plt.scatter(coords[:, 0], coords[:, 1], c=test_labels, cmap=colormap)
cbar = plt.colorbar()
if FLAGS.dataset == 'digits-mnist':
cbar.ax.set_yticklabels(['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'])
plt.title('Latent space')
plt.savefig(os.path.join(results_folder, 'Test/Latent_{}'.format(epoch)))
except tf.errors.OutOfRangeError:
This is the plot of the latent space:
Latent space is the output of the encoder phase which the decoder phase will use. Each point represents a feature or an input MNIST digit and the clusters represent that the points belong to a single
The decoder phase uses this representation to regenerate the MNIST dataset.
Create a mesh grid of values. The matrix that will contain the grid of images.
values = np.arange(-3, 4, .5)
xx, yy = np.meshgrid(values, values)
input_holder = np.zeros((1, 2))
container = np.zeros((28 * len(values), 28 * len(values)))
Run the test images in the matrix which generates the output.
for row in range(xx.shape[0]):
for col in range(xx.shape[1]):
input_holder[0, :] = [xx[row, col], yy[row, col]]
artificial_image = sess.run(output, feed_dict={z: input_holder})
container[row * 28: (row + 1) * 28, col * 28: (col + 1) * 28] = np.squeeze(artificial_image)
plt.imshow(container, cmap='gray')
plt.savefig(os.path.join(results_folder, 'Test/Space_{}'.format(epoch)))
The top line is the input and bottom line is the output-
The output image may seem to be similar to the input image but there are small differences which can be measured using a similarity metric.
Enjoy the autoencoder model | {"url":"https://iq.opengenus.org/mnist-digit-generation-using-autoencoder/","timestamp":"2024-11-07T23:13:22Z","content_type":"text/html","content_length":"62147","record_id":"<urn:uuid:e0e42f06-3569-49ae-95ac-e8c4f7cbd62c>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00625.warc.gz"} |
Data were derived from the Monitoring Avian Productivity and Survivorship (MAPS) program, a network constant-effort mist-netting and bird-banding stations that extends across the continental United
States and, generally, the more southerly portions of Canada. Over 1,175 MAPS stations have been established since the inception of the MAPS program in 1989, of which 982 were operated for at least
one year during the 15-year period (1992-2006) considered here.
The design of the MAPS program and field methods were standardized in 1992 and are described in DeSante (1992) and DeSante et al. (1995, 1996, 2004, 2015). MAPS stations were established by operators
in areas where long-term mist netting was practical and permissible. Typically, ten net sites were established rather uniformly throughout the central 8 ha of a 20-ha study area (station) at
locations where breeding landbirds could be captured efficiently. One mist net, usually 12m x 2.5m, 30mm-mesh, was erected at each net site. Locations and orientations of nets were kept consistent
over all days and years that the station was operated. Nets were operated in a constant-effort manner, typically for six hours per day beginning at local sunrise, for one day per 10-day period, and
for 6-10 consecutive 10-day periods beginning between May 1 and June 10 (starting later at more northerly latitudes) and continuing through the 10-day period ending August 8 (thus, with fewer periods
at more northerly latitudes). To facilitate the collection of constant-effort data, nets were opened, checked, and closed in the same order on all days of operation. Nets were occasionally closed (or
not opened) due to inclement weather, especially high capture rates, or for other logistical reasons.
With few exceptions, each bird captured was marked with a uniquely-numbered aluminum leg band provided by the U.S. Geological Survey or the Canadian Wildlife Service. Band number, capture status,
species, age, sex, ageing and sexing criteria (skull pneumatization, breeding condition [cloacal protuberance and/or brood patch], feather wear, body and flight-feather molt, molt limits, and plumage
characteristics), physical condition (body mass, wing chord, and fat content), date, capture time, station, and net number were recorded using standardized codes for all birds captured, including
recaptures. Ageing and sexing followed guidelines developed by Pyle (1997). The times of opening and closing the nets and beginning of each net run were standardized and recorded each day so that
netting effort could be calculated for each 10-day period each year. The breeding (summer residency) status of each species seen or heard while the station was being operated (including species that
were not captured) was determined by the station operator using methods similar to those employed in breeding bird atlas projects.
Following computer entry, all MAPS data were run through extensive vetting routines that verified: (1) the validity of the codes used in all records; (2) the internal consistency of each banding
record by comparing the ageing and sexing criteria and physical condition data to the resulting species, age, and sex determinations; (3) the consistency of species, age, and sex determinations for
all records of each band number; and (4) the consistency among banding, effort, and breeding status data for all records. These vetting routines were conducted on data from about 60% of the stations
by the station operators themselves through the use of MAPSPROG, a Visual dBASE entry, verification, and error-tracking program (Froehlich et al.2006); data from the remainder of the stations,
including stations operated by IBP interns, were verified by IBP staff biologists.
We used capture-mark-recapture (CMR) models and generalized linear mixed models (GLMMs) to model temporal (annual) and spatial (at the scale of North American Bird Conservation Initiative [NABCI]
Bird Conservation Regions [BCRs]) variation in demographic parameters.
We only included in analyses data from stations and years for which we considered data to be adequate for both constant-effort capture GLMMs and CMR analyses. Because estimation of survival rates
from transient Cormack-Jolly-Seber (CJS) CMR models (i.e., models that correct for the presence of transient individuals among the birds captured; Pradel et al. 1997, Nott and DeSante 2002, Hines et
al. 2003) requires four years of data, we only included data from stations that were operated for at least four years during the 15-yr period 1992-2006, and that had no more than two consecutive
missed years within those four years. If a station met this minimum number and distribution of years but had other longer gaps during which it was not operated, data from all years during which it
was operated were included in the analyses. In addition, to include data from a station for any given year, we required that the station was operated during that year with sufficient effort for the
data to be usable for both survival and productivity analyses. For a station to be considered "operated with sufficient effort" for a given year, at least half of the standardized sampling-period
effort for that station had to be completed for that year during at least three early-season sampling periods (adult superperiod) and two late-season sampling periods (young superperiod). Adult and
young superperiods are defined in the MAPS Manual (DeSante et al. 2015) for each starting period of operation, which varies by latitude from Period 1 (May 1-10; for stations located in the southern
parts of California, Arizona, Texas, or Florida) to Period 5 (June 10-19; for stations located in Alaska [except the SE part] or Boreal Canada). Although MAPS protocol calls for just 1 day of
mist-netting per 10-day sampling period, a few stations operate nets more frequently. For these stations, we used a subset of the capture data that included all data from the first day of operation
of each 10-day MAPS sampling period. By discarding data from additional effort within a sampling period we aimed to provide greater standardization of effort among stations and minimize bias in
capture indices due to net-avoidance and effort saturation (DeSante et al. 2004). A total of 628 stations fulfilled these requirements during 1992-2006; data from these stations comprised the basis
for the analyses reported here. Although some stations dropped out and new stations entered the program each year, many stations were operated for long time spans (e.g., 231 [37%] of those 628
stations were operated for ten or more years).
We included data for a species from all stations at which the breeding status of the species was determined to be "usual breeder" whereby one or more individuals of the species were present at the
station during summer (i.e., in territories overlapping station boundaries) and presumably attempted to breed there during more than half of the years that the station was operated during 1992-2006.
Finally, we limited analyses to 158 species with \(\geq\) 75 adult individuals captured, marked (banded), and released during 1992-2006, and for which at least 14 between-year recaptures were
recorded during 1993-2006. Because of the difficulty of distinguishing Alder (Empidonax alnorum) from Willow (E. traillii) flycatchers, and Pacific-slope (E. difficilis) from Cordilleran (E.
occidentalis) flycatchers, in the hand, we combined data for each of these species pairs and analyzed them as two super-species, "Traill’s" Flycatcher (TRFL) and "Western" Flycatcher (WEFL),
Population change (\(\lambda\); Lambda). An estimate of the annual net change in adult population size, \(N\), typically measured between years \(t\) and \(t+1\) as \(N_{t+1}/N_t\). Here we estimate
\(\lambda\) using Pradel reverse-time CMR models (Pradel 1996). In the context of time-constant temporal analyses and all of our spatial analyses, \(\lambda\) can be interpreted as an estimate of
average population change over the 15-year time period (i.e., trend). Pradel reverse-time CMR models also provide estimates of recapture probability (Pradel_p) and adult apparent survival
(Pradel_phi). Estimates of adult apparent survival from these models are biased low, however, because of the presence of transients (because transients, by definition have zero survival probability;
Pradel et al. 1997). Thus, we do not use Pradel_phi in our interpretations of results.
Adult apparent survival probability (\(\phi\); Phi). An estimate of the annual probability that a resident bird that was alive and present at the station in year t will also be alive and present in
year t+1. Adult apparent survival (TM_PhiR) was estimated from ad hoc length-of-stay transient Cormack-Jolly-Seber (CJS) CMR models (Pradel et al. 1997, Nott and DeSante 2002, Hines et al. 2003).
Adult apparent survival estimated in this way is unbiased with respect to transients. Transient CJS models also provide estimates of recapture probability (TM_p) and first-interval annual survival of
birds of unknown residency status (U_Int1_Phi). Adult apparent survival probability represents a mixture of true survival (the complement of which is mortality) and site-fidelity (the complement of
which is emigration).
Residency (\(\tau\); Tau). An estimate of the proportion of newly-captured adults that are residents at the station. The estimate of the proportion of newly-captured birds of unknown residency that
are resident at the station (Tau) is estimated from the ratio of U_Int1_Phi to TM_PhiR, both of which are estimated from transient CJS CMR models. When the proportion of newly-captured birds that are
residents is expanded to include the presence of newly-captured known residents (individuals that were recaptured at least 7 days later during their first year of capture), we define the parameter as
\(\tau1\) (Tau1).
Recruitment (\(f\)). An estimate of the annual number of new individuals in year t+1 relative to the total number of individuals in year t. Recruitment (\(\hat{f}\)) is calculated as \(\hat{\lambda}-
\hat{\phi}\). Recruitment includes two age-class components, second-year (SY) birds hatched the preceding year and after-second-year (ASY) immigrant birds. Note that, because of natal dispersal, the
SY birds that recruit at any station are generally not the young birds that were produced the previous year at that station.
Index of adults per station (Ad). We estimate of the annual number of captures of adults per station to provide an index of station-scale adult population size. The index of adults per station is
estimated from effort-corrected Poisson generalized linear mixed modes (GLMMs). Because MAPS stations are established to be approximately the same size (20 ha), the index of adults per station (Ad)
could be considered an index of population density (adults per 20 ha). This index will be positively biased to some extent due to the presence of transient individuals and negatively biased due to
imperfect detection.
Index of young birds per station (Yg). An estimate of the annual number of captures of young (hatching-year [HY]) birds per station. The index of young per station, like the index of adults per
station, is estimated from effort-corrected Poisson GLMMs. Unlike Ad, however, many young birds captured at a station are likely dispersing juveniles fledged from nests outside the boundaries of the
Productivity (reproductive index; RI). An index of the annual number of young (hatching-year [HY]) birds produced per adult. Because most HY birds captured at MAPS stations are likely dispersing
juveniles that are independent of their parents, the reproductive index represents the end result of a complex mixture of components including: proportion of adults attempting to breed, number of
nesting attempts and broods, clutch size, egg hatchability, survival of eggs and nestlings, and survival of fledglings to independence from parents. Productivity is estimated independently from
effort-corrected binomial GLMMs (i.e., it is not simply calculated as Yg/Ad).
Post-breeding effects (PBE). An index calculated as \(f\)/RI. Because recruitment (\(f\)) includes two age-class components, SY birds hatched the preceding year and ASY immigrant birds, post-breeding
effects (PBE) reflects both first-year survival of young birds and immigration of adults, although it seems likely that the major effect arises from first-year survival.
1. Temporal analyses. We used capture-mark-recapture (CMR) models to estimate population change (lambda [\(\lambda\)]) and adult apparent survival rate (\(\phi\)) and to calculate estimates of
recruitment rate (\(f\)), and residency (the proportion of residents among newly captured adults [\(\tau1\)]). We ran all CMR models with Program MARK (White and Burnham 1999) using the RMark package
(Laake and Rexstad 2008) in R ver. 2.10.1 (R Development Core Team 2009). We considered three parameterizations of temporal models for each of the four demographic parameters: 1) time-dependent (\(t
\)), for which a year-specific estimates is produced for each of the 14 years or intervals); 2) a linear function of time (\(T\)), for which an intercept and slope (\(\beta\)) are estimated for the
15-yr period, and year-specific estimates are calculated from the intercept and slope); and 3) time-constant (\(\cdot\)) for which a single (mean) estimate is estimated for the entire 15-yr period).
We used Akaike’s Information Criteria for small samples (\(AIC_c\)) for model selection (Burnham and Anderson 1992) and for reporting model-averaged parameter estimates using \(AIC_c\) model weights
(\(w_i\); Burnham and Anderson 1998).
1. Pradel models. We estimated population change (\(\lambda\); where \(\lambda < 1\) indicates a declining population and \(\lambda > 1\) indicates an increasing population) by applying Pradel
reverse-time CMR models to MAPS data (Pradel 1996). We used the '\(\lambda\) and \(\phi\)' version of the likelihood in Program MARK (White and Burnham 1999). In addition to three temporal models
each for \(\lambda\) and \(\phi\), we considered four temporal models for the 'nuisance' parameter, \(p\), recapture probability: (1) time-constant (\(\cdot\), in which a single estimate is
calculated over the entire 15-yr period), (2) time-dependent (\(t\), in which a year-specific estimate is calculated for each of the 15 years), (3) as a linear function of the mean number of
within-season captures of individual birds over the entire 15 year period (CAPCOV), and (4) as a linear function of year and the station-specific mean number of within-season captures of
individual birds (CAPCOV + t). There were thus, a total of 36 models in the Pradel reverse-time CMR model set, 3 for \(\lambda\) \(\cdot\) 3 for \(\phi\) \(\times\) 4 for \(p\).
2. Cormack-Jolly-Seber models. Estimates of adult apparent survival, \(\phi\) from Pradel reverse-time capture- recapture models will be biased low if transient individuals (e.g., passage migrants,
dispersing birds, and 'floaters' [sensu Brown 1969]) that have zero probability of returning to the station are present in populations. Because of this potential bias, we used ad-hoc
length-of-stay transient Cormack-Jolly-Seber (CJS) models that account for the presence of transients (Pradel et al. 1997, Nott and DeSante 2002, Hines et al. 2003) to estimate unbiased (by
transients) adult apparent survival rates (\(\phi^{R}\)). We also used these modified CJS models to estimate the 'nuisance' parameter, \(\tau\), the proportion of residents among newly-captured
adults of unknown residency status (i.e., those individuals not recaptured 7 or more days later in their first year of capture). By using additional information on the number of newly-captured
birds that were known residents (i.e., that were recaptured 7 or more days later in their first year of capture), we calculated the proportion of residents among all newly-captured adults, \(\
tau1\), and suggest that this parameter might contain useful biological information.
Length-of-stay transient models estimate \(\tau\) based on a ratio of two survival rates: \(\phi^{U1}/\phi^{R}\), where \(\phi^{U1}\) is the first interval survival rate for individuals not
captured 7 or more days apart in their first year of capture (a mixture of residents and transients), and \(\phi^R\) is the survival rate of residents.
Then, letting '\(\cdot\)' indicate time-independence, '\(t\)' indicate time-dependence, and '\(T\)' indicate a linear function of time, and letting '\(\times\)' indicate that the two parameters
vary independently of each other (i.e., temporal effects nested within residency classification, U1:t + R:t) and '+' indicate that the two parameters vary in concert (i.e., an additive U1 + t
model), there are 11 biologically meaningful models describing time variation (or lack thereof) in \(\phi^{U1}\) and \(\phi^{R}\) as follows:
Model U1 R
1 \(\cdot\) \(\cdot\)
2 \(t\) \(\cdot\)
3 \(\cdot\) \(t\)
4 \(t\) \(\times\) \(t\)
5 \(t\) + \(t\)
6 \(T\) \(\cdot\)
7 \(\cdot\) \(T\)
8 \(T\) \(\times\) \(T\)
9 \(T\) + \(T\)
10 \(t\) \(\times\) \(T\)
11 \(T\) \(\times\) \(t\)
We modeled time variation in recapture probability, \(p\), using the same four models that we used in the Pradel models, that is, as time-constant (\(\cdot\)), as time-dependent (\(t\)), as a
linear function of the mean number of within-season captures of individual birds over the entire 15 year period (CAPCOV), and as a linear function of year and the mean number of within-season
captures of individual birds (CAPCOV + t). There were thus, a total of 44 models in the transient Cormack-Jolly-Seber CMR model set, 11 for (\(\phi^{U1}\) and \(\phi^R\)) \(\times\) 4 for \(p\).
3. Recruitment rate. Despite (negative) bias in survival-rate estimates from Pradel reverse-time models, estimates of population growth rate from these models are unbiased if we assume that
under-estimation of survival rate is balanced by over-estimation of recruitment rate (i.e., transience in survival and recruitment are of equal magnitude). Based on this assumption, we calculated
year-specific estimates of recruitment rate as:
where \(\hat{f}^{R}_{t}\) represents an estimate of the year-specific number of new individuals in the population in year \(t\), per individual in year \(t-1\), based on \(\hat{\lambda}_{t}\)
from Pradel reverse-time models and \(\hat{\phi}_{t}^{R}\) from the length-of-stay transient CJS models. Note that \(\hat{\lambda}_{t}\) and \(\hat{\phi}_{t}^{R}\) are model-averaged estimates
derived from model sets whereby \(\lambda_t\) (for Pradel models) or \(\phi_{t}^{R}\) (for CJS models) are constrained to be annually varying, but all possible parameterizations of other
parameters are allowed. Although inference regarding demographic contributions to trend can be based on survival and recruitment estimates derived solely from Pradel reverse-time models (Saracco
et al. 2008), we feel that combining information from Pradel and transient CJS models, as we have done here, provides a more appropriate basis for assessing demographic components of trends.
2. Spatial analyses. We used CMR models to provide time-constant estimates of population growth rate (\(\lambda\)) and adult apparent survival rate (\(\phi\)), and to calculate estimates of
recruitment rate (\(f\)), and residency (the proportion of residents among newly captured adults [\(\tau1\)]) at the program-wide (i.e., continental) scale, and at the scale of North American Bird
Conservation Initiative Regions (BCRs). As in temporal CMR models, we used Akaike Information Criteria (\(AIC_c\)) for model selection (Burnham and Anderson 1992) and, in addition to reporting
program-wide and BCR-specific parameter estimates, also report model-averaged parameter estimates using \(AIC_c\) weights (\(w_i\); Burnham and Anderson 1998).
1. Pradel models. We again estimated population growth rate, \(\lambda\), by applying Pradel reverse-time CMR models to MAPS data (Pradel 1996), and used the '\(\lambda\) and \(\phi\)' likelihood
formulation in Program Mark (White and Burnham 1999) to obtain estimates of \(\lambda\) and \(\phi\). We considered two spatial models each for \(\lambda\) and \(\phi\): BCR-constant (whereby a
single mean estimate is calculated over all of the BCRs, thus, program-wide), and BCR- dependent (in which a time-constant estimate is calculated for each BCRs, thus, BCR- specific). We
considered four spatial models for recapture probability, \(p\): BCR-constant (\(\cdot\)), (2) BCR-dependent (BCR), (3) as a linear function of the mean number of station-specific within-season
captures of individual birds (CAPCOV), and (4) as a linear function of BCR and the mean number of within-season captures of individual birds (CAPCOV + BCR). There were thus, a total of 16 models
in the spatial Pradel reverse-time CMR model set, 2 for \(\lambda\) \(\times\) 2 for \(\phi\) \(\times\) 4 for \(p\).
2. CJS models. We again used ad-hoc length-of-stay transient models to provide adult apparent survival rate estimates that were unbiased with respect to transient individuals, and estimates for \(\
tau\), the proportion of residents among newly captured birds that were not recaptured 7 or more days later in their first year of capture. We modeled \(\phi^{U1}\) and \(\phi^{R}\) using five
biologically meaningful models to describe spatial variation (or lack thereof). Again using '\(\cdot\)' to indicate BCR-independence, 'BCR' to indicate BCR-dependence, and '\(\times\)' to
indicate that the two parameters vary independently (i.e., a nested U1:BCR + R:BCR model), and '+' to indicate that the two parameters vary in parallel (i.e., an U1 + BCR additive model), these
five models are as follows:
Model U1 R
1 \(\cdot\) \(\cdot\)
2 BCR \(\cdot\)
3 \(\cdot\) BCR
4 BCR \(\times\) BCR
5 BCR + BCR
We again considered the same four spatial models for p as in the Pradel spatial models, so that there were a total of 20 models in the spatial transient Cormack-Jolly-Seber CMR model set, 5 for (
\(\phi^{U1}\) and \(\phi^{R}\)) \(\times\) 4 for \(p\).
3. Recruitment rate. As with temporal CMR analyses, we calculated BCR-specific estimates of recruitment as:
where \(\hat{\lambda}^{R}_{BCR}\) represents the estimated (average, or time-constant) number of new individuals in the population in year \(t\) per individual in year \(t-1\) based on \(\hat{\
lambda}^{R}_{BCR}\) from Pradel reverse-time models and \(\hat{\phi}_{BCR}^{R}\) from the length-of-stay transient CJS models.
We modeled observed yearly capture data of adult and young birds as Poisson random variables and used a binomial model whereby productivity represented the probability of a captured bird being a
young bird. We used generalized linear mixed models (GLMMs) to assess temporal (by year) and spatial (by BCR) variation in adult and young capture indices and modeled productivity using a logistic
model. We used regional spatial replication of sites to calculate correction offsets for missed or excess effort and incorporated the offsets into the linear models. Because all MAPS stations are
approximately the same size (20 ha) and typically have the same number (10) and distribution (throughout the central 8 ha of the station) of nets, we used the effort-corrected capture index of adults
(Ad) as an index of population density (adults per 20 ha), and the reproductive index (RI, young/adult), based on the probability of a captured bird being a young bird, as our measure of breeding
performance or productivity.
Despite best efforts, MAPS stations are not always operated in a 'constant effort' manner among years. In most cases when effort differed in a given year from what is typical for a particular
station, effort was less than intended (e.g., due to weather or logistical problems); however, in some cases effort was slightly higher than normal. To correct our analyses of capture data for these
inconsistencies, we included offsets in generalized linear mixed models examining spatial and temporal variation in adult captures, young captures, and productivity. We calculated offsets based on
regional scale distribution of effort and captures across the MAPS season for each year. Accounting for within-season timing of missed effort, rather than just the amount of effort missed is critical
for calculating this offset in a meaningful way (Peach et al. 1998, Nott and DeSante 2002). Data from years with complete data for a particular station could be used to estimate individuals (or
fractions of individuals) missed due to missed effort in a particular sampling period. Such an approach has been pioneered and advocated as part of standard analyses of the British Trust for
Ornithology‘s Constant Effort Sites scheme (e.g., Peach et al. 1998, Robinson et al. 2007). Here we use regional spatial replication of sites to calculate effort corrections rather than temporal
replication of a given station. We feel that spatial replication within geographic regions provides a more appropriate means of correcting our data for annual variation in sampling effort because
annual variation in the timing of captures in North America can be high.
Calculation of correction offsets involved summarizing annual effort and capture data for 17 groups based on MAPS region and intended starting period (ISP). MAPS regions were defined based on
biogeographic and meteorological considerations and included: Northwest, Southwest, North-central, South-central, Northeast, Southeast, Alaska, and Boreal Canada regions (see map in DeSante et al.
1993, 2015). Initiation of the MAPS season should ideally begin after most northward migration has been completed, and MAPS protocols provide recommended starting periods based (largely) on latitude
(see DeSante et al. 2004). Because recommendations do not match MAPS regions exactly, and because some individual stations deviate slightly from these recommendations (e.g., due to local conditions),
there can be 2-3 ISPs represented by stations within any given MAPS region (except for the Alaska and Boreal Canada regions, which always had intended starting period 5 [Jun 10-19]).
Our calculations to correct for annual effort inconsistencies are as follows. First we completed a set of summaries of the effort data. For i in \(1,\cdots,M\) MAPS stations, we summed the net-hours
completed across p in \(1,\cdots,P\) sampling periods (where P = 5-10 periods represents the entire MAPS season) in each of G = 17 region-ISP groupings and \(t = 1,\cdots\,T = 15\) years. We
represent this quantity as \(e_{g[i],p,t}\). We then calculated \(H_{g[i],p,t}\) as the proportion of the total annual net-hours represented by group i and sampling period p as:
We then calculated \(h_{i,p,t}\), the proportion of the total intended net hours actually represented by the difference between intended effort, \(I_{i,p,t}\), and actual completed effort, \(A_
{i,p,t}\) in each sampling period (where intended net-hours is based on numbers of nets typically operated and typical numbers of hours of operation, and is defined by the station operator after the
end of the first year of station operation) at the station-scale:
We then completed a series of summaries of the capture data. For each species, region-ISP, and year we summed numbers of adult and young individuals captured (a small number of unknown-aged birds
were excluded from the analysis), which we denote as \(N_{g[i],t}^{A}\) and \(N_{g[i],t}^{Y}\) for adult and young birds, respectively. We then calculated a set of reduced summaries representing the
total number of individuals of each age class captured with data from individual sampling periods removed. We denote these as \(n_{g[i],p,t}^{A}\) and \(n_{g[i],p,t}^{Y}\). The proportion of the
overall annual catch lost by removal of individual periods was then calculated as:
for adult birds, and
for young birds. Correction factors representing the proportion of birds missed (positive values) or gained (negative values) at a station, period, and year due to missing or extra effort can then be
approximated as:
for adults and young, respectively, and the corrected numbers of adult and young at a particular station and year can then be calculated based on these correction factors and the observed numbers
(i.e., numbers captured) of adults and young captured in each sampling period at the station:
\[C^{A}_{i,t}=\sum_{p=1}^{P}(N^{A}_{i,p,t}+c^{A}_{i,p,t}N^{A}_{i,p,t})\; \textrm{and}\; C^{Y}_{i,t}=\sum_{p=1}^{P}(N^{Y}_{i,p,t}+c^{Y}_{i,p,t}N^{Y}_{i,p,t})\;.\]
We modeled observed yearly capture data of adult and young birds at the station-scale, \(N^{A}_{i,t}\) and \(N^{Y}_{i,t}\), as Poisson random variables with mean (and variance) parameters \(\lambda^
{A}_{i,t}\) and \(\lambda^{Y}_{i,t}\); i.e.,
\[N^{A}_{i,t}\sim\textrm{Pois}(\lambda^{A}_{i,t})\; \textrm{and}\; N^{Y}_{i,t}\sim\textrm{Pois}(\lambda^{Y}_{i,t})\;,\]
and we used a binomial model for productivity with \(N^{Y}_{i,t}+N^{A}_{i,t}\) trials and probability parameter, \(p_{i,t}\), representing the probability of a captured bird at a given station and
year being a young bird:
We used generalized linear mixed models (GLMMs) to assess spatial (BCR) and temporal (annual) variation in adult and young capture indices and productivity. As with CMR models, we modeled temporal
variation in our estimates of the capture index of adults (Ad), capture index of young (Yg), and reproductive index (RI) as time- constant (\(\cdot\)), time-dependent (\(t\)), and as a linear
function of time (\(T\)). We modeled spatial variation in Ad, Yg, and RI as BCR-constant (\(\cdot\), essentially program-wide) and BCR-specific (BCR). As in temporal and spatial CMR models, we used \
(AIC_c\) weights to assess support for temporal or spatial variation in these parameters. We used time- and BCR-specific models to compare to CMR estimates of demographic parameters, and so describe
those in detail here. Poisson models for temporal variation in adult and young captures were defined as:
respectively. We modeled productivity using a logistic model:
In each of the above, the \(\beta_{0}\) represents the mean for the year during which the most adults were captured, \(S_{i}\) represent random station effects distributed as \(S_{i}\sim\textrm{Norm}
(0,\sigma^{2})\), and the \(Y_{t}\) represent fixed year effects for years relative to the year during which the most adults were captured. The \(\textrm{log}(N^{A}_{i,t}/C^{A}_{i,t})\), \(\textrm
{log}(N^{Y}_{i,t}/C^{Y}_{i,t})\), and \(\textrm{log}(N^{Y}_{i,t}C^{A}_{i,t}/C^{Y}_{i,t}N^{A}_{i,t})\) terms are offsets to correct for annual effort variation. The offsets for models of adult and
young captures represent ratios of observed captures to estimates of the numbers of captures that would have been observed under complete intended effort (see above Correcting capture data for annual
variation in mist-netting Effort for detail). The offset for productivity was derived from the following expression, which denotes the difference in the proportion of young captured between observed
and corrected catches:
\[\textrm{log}(\frac{\frac{N^{Y}_{i,t}}{N^{Y}_{i,t}+N^{A}_{i,t}}}{1-\frac{N^{Y}_{i,t}}{N^{Y}_{i,t}+N^{A}_{i,t}}}) - \textrm{log}(\frac{\frac{C^{Y}_{i,t}}{C^{Y}_{i,t}+C^{A}_{i,t}}}{1-\frac{C^{Y}_
We defined analogous models for our spatial analysis by replacing the \(Y_{i}\) term in the temporal models with an effect to indicate differences among BCRs, \(BCR_{i}\). Models were implemented
using the glmmML package (Brostrom 2009) in the R statistical package (R Development Core Team 2009). Model parameters were estimated using Laplace approximation, which is more accurate than
quasi-likelihood methods and appropriate for application to data such as ours, with Poisson (young and adult captures) and binomial (reproductive index) responses and a single random effect (Bolker
et al. 2009).
We present capture indices and indices of productivity for the first year of the study and for the BCR that representing the lowest BCR factor level (i.e., the lowest-numbered BCR) as the inverse-log
transformed point estimates of model intercepts, \(\textrm{exp}(\hat{\beta_{0})}\). For remaining years and BCRs, we added year and BCR effects to intercepts; e.g., for the temporal model,
year-specific indices for 1993-2006 would be \(\textrm{exp}(\hat{\beta_{0}}+\hat{Y_{t}})\). We estimated standard errors (SEs) of these year- and BCR-specific indices using the delta method (Oehlert
1992; implemented in R using Jackson 2010). We approximated 95% confidence intervals approximated as \(\textrm{exp}(\hat{\beta_{0}}+\hat{Y_{t}}\pm{1.96}({\widehat{SE}}(\beta_{0}+Y_{t}))\).
Breeding performance in a given year is not the only factor driving the recruitment of new individuals into breeding populations in the subsequent year. The survival of young through their first
winter, the ability of surviving young to recruit into a breeding population, and the extent of immigration of adults will also have post-breeding effects on recruitment. In an attempt to get a
handle on these effects, we created a parameter we called post-breeding effects (PBE) by dividing year- or BCR-specific estimates of recruitment by the corresponding reproductive index estimate
(e.g., \(f/RI_{t}\)). Again, in the temporal analyses, we provided time-constant (\(\cdot\)), time-dependent (\(t\)), and linear function of time (\(T\)) estimates for PBE, while in the spatial
analyses, we provided BCR-constant (\(\cdot\), program-wide) and BCR-specific estimates for PBE.
We examined the year-specific and BCR-specific estimates obtained for population change (\(\hat{\lambda}_{t}\) and \(\hat{\lambda}_{BCR}\)), adult apparent survival (\(\hat{\phi}_{t}^{R}\) and \(\hat
{\phi}_{BCR}^{R}\)), recruitment (\(\hat{f}_{t}\) and \(\hat{f}_{BCR}\)), and residency (\(\hat{\tau1}_{t}\) and \(\hat{\tau1}_{BCR}\)) from temporal and spatial CMR models; and the indices of adults
(Ad) and young (Yg), productivity (RI), and post-breeding effects (PBE) from GLMM analyses; and excluded estimates with unrealistic values as follows.
We excluded population change estimates that had standard errors (SEs) of 0, had lower 95% confidence limits (LCLs) of 0 or upper 95% confidence limits (UCLs) of infinity, had SEs that were larger
than the estimate, or were < 0.3 (the approximate value of lambda if all three demographic rates contributing to it [productivity, survival of young, and survival of adults] were simultaneously 60%
lower than the previous year) or > 2.1 (the approximate value for lambda if all three demographic rates were simultaneously 60% higher than the previous year); or were associated with recruitment
estimates that were < 0. If we excluded a \(\hat{\lambda}\), we also excluded the corresponding \(\hat{f}\).
We excluded adult apparent survival estimates (either survival of resident birds or first period survival of unknown residency birds), that were 0 or 1, had SEs of 0, or had LCLs of 0 or UCLs of 1;
were associated with residency estimates that were > 1; or (in the case of survival of residents) were associated with recruitment estimates that were < 0. If we excluded an adult apparent survival
estimate of resident birds, we also excluded the corresponding residency and recruitment estimates. If we excluded a first-period survival estimate of unknown residency birds, we again excluded the
corresponding residency estimate but not the associated recruitment estimate.
We excluded residency estimates that were > 1, or were associated with adult apparent survival estimates (either survival of resident birds or first period survival of unknown residency birds) that
were excluded. If we excluded a residency estimate that was > 1, we also excluded the corresponding survival estimate of resident birds and the first-period survival estimate of unknown residency
We excluded recruitment estimates that were < 0, or were associated with lambda or adult apparent survival estimates of resident birds that were excluded. If we excluded a recruitment estimate that
was < 0, we also excluded the corresponding lambda estimate and survival estimate of resident birds.
We excluded productivity (RI) index estimates that were 0, that had LCLs that were 0, or UCLs that were > 10. We excluded Ad and Yg index estimates that were 0, that had LCLs that were 0, or UCLs
that were > 1000. Finally, we excluded post-breeding effects (PBE) estimates that were associated with recruitment or productivity estimates that were excluded.
We then conducted weighted (by number of year-unique individual adult birds captured) pairwise correlation analyses among the index of adult population density, lambda, adult apparent survival,
recruitment, productivity, and post-breeding effects using the wtd.cor function in the 'weights' package (Pasek et al. 2014) in R (R Core Development Team 2009), and examined scatterplots and
pairwise correlation matrices for both temporal (annual) and spatial (BCR-scale) correlations. Because preliminary analyses yielded mean (for all species) temporal correlations between residency (\(\
tau{1}_{t}\)) and each of the other demographic parameters that were very weak (ranging from -0.056 for post-breeding effects to 0.123 for adult apparent survival), and yielded no consistent patterns
in any of these temporal correlations, we did not present results of temporal (or spatial) correlations between \(\tau{1}\) and any other demographic parameter in the scatterplots and pairwise
correlation matrices for any species.
To facilitate comparisons among species, we calculated weighted arithmetic means (after excluding unreliable estimates as described above) for each demographic parameter from each temporal and
spatial model that was employed in its estimation (and for model- averaged estimates from CMR models), along with corresponding standard deviations and coefficients of variation (CVs). This allowed
us not only to compare mean values for these important vital rates among various species, but also to examine and compare the annual and spatial variabilities of a given vital rate among various
species, and to compare the annual and spatial variabilities of various vital rates for a single species. Weighting for calculations of means for all parameters estimated from CMR analyses and for
the index of post-breeding effects was by the number of year-unique captures of adults. Weighing for calculations of means for all parameters estimated from GLMM analyses (population indices of
adults (Ad) and young (Yg) and reproductive index (RI) was by number of stations at which the species was captured.
To elucidate potential patterns in relationships between lambda and other demographic parameters, we grouped species according to their overall population trend (decreasing, stable, increasing) and
their migration strategy (Neotropical-wintering migrant, temperate-wintering migrant, permanent resident). We calculated and used the weighted (again, by number of year-unique individual adult birds
captured) geometric mean of the fully model- averaged lambda estimates (from either temporal or spatial analyses) as our measure of overall population trend, and used the standard errors of the
individual year- or BCR-specific lambda estimates and the delta method to calculate a standard error of the geometric mean and subsequent 95% lower and upper confidence limits (LCL and UCL,
respectively). We then established the following population trend species groups:
DS: Significantly decreasing: \(\hat{\lambda}\) < 1.0 and \(\hat{\lambda}\) UCL < 1.0
DE: Non-significantly decreasing: \(\hat{\lambda}\) < 0.99 and \(\hat{\lambda}\) UCL > 1.0
ST: Stable: 0.99 < \(\hat{\lambda}\) < 1.01 and 95%confidence interval containing 1.0
IN: Non-significantly increasing: \(\hat{\lambda}\) > 1.01 and \(\hat{\lambda}\) LCL < 1.0
IS: Significantly increasing: \(\hat{\lambda}\) > 1.0 and \(\hat{\lambda}\) LCL > 1.0
Finally, we defined the geographical border between Neotropical wintering and temperate wintering to follow approximately the southern boundary of the United States, and used migration strategy
species groups developed for MAPS data by DeSante and Pyle (unpublished MS) as follows:
R: Permanent resident species
RT: Permanent resident species with irregular, irruptive, or minor (<< 50% of individuals) temperate-wintering migrations
T: Temperate-wintering migratory species
TI: Mainly temperate-wintering migratory species, but < 50% of individuals are Neotropical-wintering migrants
NI: Mainly Neotropical-wintering migratory species, but < 50% of individuals are temperate-wintering migrants
N: Neotropical-wintering migratory species
For general discussions and overall classifications, we considered all species classed as R or RT as permanent resident species, all species classed as T or TI as temperate-wintering migratory
species, and all species classed as NI or N as Neotropical-wintering migratory species. | {"url":"https://vitalratesofnorthamericanlandbirds.org/pages/methods.php","timestamp":"2024-11-08T22:28:58Z","content_type":"application/xhtml+xml","content_length":"699407","record_id":"<urn:uuid:f50c7365-dd61-4400-8f66-5cf0ec9ff2fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00508.warc.gz"} |
ball mill working
WEBThe vertical roller mill (VRM) is a type of grinding machine for raw material processing and cement grinding in the cement manufacturing recent years, the VRM cement mill has been equipped in
more and more cement plants around the world because of its features like high energy efficiency, low pollutant generation, small floor area, etc.. The .
WhatsApp: +86 18203695377
WEBAug 5, 2021 · The Ball mill pulveriser is basically horizontal cylindrical tube rotating at low speed on its axis, whose length is slightly more to its diameter. The inside of the Cylinder
shell is fitted with heavy cast liners and is filled with cast or forged balls for grinding, to approximately 1/3 of the diameter. Raw coal to be ground is fed from the ...
WhatsApp: +86 18203695377
WEBJun 6, 2015 · Cleaning and Storing of Ball Mill Charge after the Bond Work Index Procedure is done: Add about 500 g of silica sand into the mill containing the ball charge. Seal the mill.
Rotate for 20 revolutions to clean. Empty the mill charge and sand into the ball try once grinding is complete. Clean out the mill using a brush. Put the lid on the mill.
WhatsApp: +86 18203695377
WEBSep 8, 2023 · Working Principle of Industrial Ball Mill . A ball mill is a mechanical device that usually consists of a rotating cylinder with steel balls or other hard spheres of a certain
size placed inside. The spheres rotate inside the cylinder and interact with food ingredients placed inside the cylinder.
WhatsApp: +86 18203695377
WEBJun 2, 2017 · Ball mills vary greatly in size, from large industrial ball mills measuring more than 25 ft. in diameter to small mills used for sample preparation in laboratories. ... Disk
attrition mills are modern versions of the ancient Buhrstone mill. Vertical shaft mills work on the same principal as VSI crusher, ...
WhatsApp: +86 18203695377
WEBFeb 15, 2023 · Early signs indie the ball mill problems, and this article tell people that how to avoid the problems.
WhatsApp: +86 18203695377
WEBHow does a planetary ball mill work? In the planetary ball mill, every grinding jar represents a "planet". This planet is loed on a circular platform, the socalled sun wheel. When the sun
wheel turns, every grinding jar rotates around its own axis, but in the opposite direction. Thus, centrifugal and Coriolis forces are activated ...
WhatsApp: +86 18203695377
WEBA rod mill is a type of ore grinding equipment used to grind materials into fine powder. Unlike ball mills, rod mills use long steel rods instead of balls as the grinding medium. Rod mills are
ideal for breaking down materials such as minerals and ores. In this article, we'll provide a comprehensive guide on what a rod mill is, how it works ...
WhatsApp: +86 18203695377
WEBMar 14, 2015 · A) Total Apparent Volumetric Charge Filling – including balls and excess slurry on top of the ball charge, plus the interstitial voids in between the balls – expressed as a
percentage of the net internal mill volume (inside liners). B) Overflow Discharge Mills operating at low ball fillings – slurry may accumulate on top of the ball charge; causing, .
WhatsApp: +86 18203695377
WEBYou also need a rod mill work index to design a ball mill operating on a coarse feed, above about 4 mm. Q1: You design for a typical percentage of critical speed, usually 75% of critical. Then
you iterate the mill diameter using a Morrell Cmodel or equation to get the RPM that corresponds to 75% for that mill diameter.
WhatsApp: +86 18203695377
WEBBall Mills work by employing various types of media (either steel, ceramic or lead balls) to crush the material in the barrel. To use the Mill, the material to be ground is loaded into the
barrel which contains grinding media. As the barrel rotates, the material is caught between the individual grinding media balls which mix and crush the ...
WhatsApp: +86 18203695377
WEBWorking up to 10 times faster than conventional laboratory ball mills (sometimes referred to as jar mills or pebble mills), the lab Attritor has a compact, vertical profile, requiring minimal
space, and can be equipped or retrofitted easily and inexpensively with a variety of components and accessories. ...
WhatsApp: +86 18203695377
WEBJan 5, 2016 · In rod mill work the design is such that they will easily pass through the large ball open end discharge trunnion. Where cast liners are used, and especially in rod mill
appliions, we furnish rubber shell liner backing to help cushion the impact effect of the media within the mill and prevent pulp racing.
WhatsApp: +86 18203695377
WEBAug 23, 2023 · Explore the efficiency and functionality of cement ball mills. Learn how these essential industrial machines grind and blend raw materials to produce highquality cement.
Discover the working principles and appliions of cement ball mills in the construction and manufacturing processes.
WhatsApp: +86 18203695377
WEBBall Mill Appliion and Design. Ball mills are used the size reducing or milling of hard materials such as minerals, glass, advanced ceramics, metal oxides, solar cell and semiconductor
materials, nutraceuticals and pharmaceuticals materials down to 1 micron or less. The residence time in ball mills is long enough that all particles get ...
WhatsApp: +86 18203695377
WEBThe working principle of a ball mill is mainly based on impact and wear. During its operation, the ball mill cylinder rotates at a certain speed, and the grinding medium inside the cylinder
(such as steel balls, ceramic balls, etc.) rotates to a certain height under the action of centrifugal force and friction. When the gravity of these ...
WhatsApp: +86 18203695377
WEBAn attritor mill, also known as a stirred ball mill, is a type of milling equipment used for grinding materials into fine particles. It is characterized by its unique working principle, which
involves a highspeed rotating shaft with agitator elements.
WhatsApp: +86 18203695377
WEBBall mills and agitated media mills work according to a simple principle. The balls are freely movable grinding media in a vertical or horizontal drum. The drive sets this drum, and therefore
also the balls, in motion. The material to be ground is then crushed between the balls by impact and shear forces. The fineness achieved is also ...
WhatsApp: +86 18203695377
WEBJun 26, 2020 · Ball mill's working processing includes feeding, grinding, discharging. Zhongde is ball mill manufacturer and supplier, exported machines to 160+ countries, ...
WhatsApp: +86 18203695377
WEBJan 23, 2024 · Definition and Working Principle of ball mill. As mentioned earlier, a ball mill is a grinder used to grind or blend materials for mineral dressing processes, paints,
pyrotechnics, ceramics, and selective laser sintering. It works on the principle of impact and attrition: size reduction is done by impact as the balls drop from near the top of ...
WhatsApp: +86 18203695377
WEBOct 1, 2014 · Abstract. This paper describes the mechanism of a small dualplanetary ball mill which is used in the laboratory, and using the discrete element software PFC3D to establish
corresponding model of ...
WhatsApp: +86 18203695377
WEBMay 1, 2021 · The ball mill grindability test is used for describing ore hardness and it is so widespread that the Bond Work Index generated from the test is often referred to as an ore
characteristic. The ore resistance to grinding and energy consumption can be expressed using the work index and Bond's Third Theory.
WhatsApp: +86 18203695377
WEBThe Planetary Ball Mill PM 100 is a powerful benchtop model with a single grinding station and an easytouse counterweight which compensates masses up to 8 kg. It allows for grinding up to 220
ml sample material per batch. ... Aeration lids are designed for working under inert atmosphere, for example if oxygen can influence the grinding ...
WhatsApp: +86 18203695377 | {"url":"https://deltawatt.fr/ball_mill_working.html","timestamp":"2024-11-14T01:03:48Z","content_type":"application/xhtml+xml","content_length":"22078","record_id":"<urn:uuid:684b09f5-45e7-4c0f-b856-da73b61a0dd3>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00890.warc.gz"} |
Perfect Square-in-a-Square Technique
I admit to being a perfectionist (when it suits me). The square-in-a-square block is one of those easy looking units that never comes out quite right. In other words, WONKY.
I have always used the traditional technique of using corner squares, stitching and trimming them and then crossing my fingers that it comes out square and even:
Traditional square-in-a-square technique via McCall’s Quilting
But no more wonky! I have come up with a way of achieving [DEL:bliss:DEL] perfection, at least on this particular block. Don’t worry, my perfection pretty much ends right there – just ask my
Look how pretty these are:
My method involves something similar to paper piecing. Here are the supplies you’ll need and the instructions:
Supplies: For a 6 1/2″ unfinished block (There will be a chart for different sizes at the end of this post):
Supplies for The Perfect Square-in-a-Square Block
• (1) 6 1/2″ square of fabric for base square
• (2) 4 1/4″ squares cut in half on the diagonal for corners
• (1) 4 1/4″ square of freezer paper
Step 1: Create a crease for placement of freezer paper by folding center square in half and pressing both edges. Repeat in other direction:
Step 2: Place freezer paper onto wrong side of square, lining up corners with creases and press in place:
Step 3: Place corner triangle right sides together with base square making sure that the triangle’s seam allowance is showing above the freezer paper and the corners extend evenly beyond the edges
and pin:
Getting this placement right is probably the trickiest part. The center point of the triangle should be facing the middle of the square. The triangles are over-sized so there is some wiggle room
It might help to start on the right side of the square and place the triangle right side up as it will look when it’s sewn, then flip the triangle over wrong sides together. Adjust as needed from
the freezer paper side:
Step 4: Stitch as close as possible to the freezer paper, starting at the fabric edge and stitching off the end:
This is what it will look like afterward:
Repeat on opposite corner.
Step 5: From right side, press triangles toward corners:
Step 6: Repeat steps 3 – 5 on remaining two corners.
Step 7: From the wrong side, trim all four sides even with the base square:
Step 8: Remove freezer paper (which can be used again) and admire your beautiful and accurate block!
You have the option of trimming away the base fabric corner after sewing and trimming each corner triangle. This time I chose to keep mine on for added stability, but normally I would trim it away.
If you use a scant 1/4″ for your piecing, you will love how this block fits together!
If you want to use this method for different sized blocks, I have come up with a chart based on the unfinished size of the block. These are the most common sizes, however I stopped at 6 1/2″. If
anybody wants larger sizes, you can leave a comment or email me, and I’ll work out the numbers.
Updated 10/2013 – chart has been corrected and updated from original post.
│ │Base Square│ Corner Triangles │ Center Square Size │
│Unfinished Block Size│ │ │ │
│ │ Cut 1 │Cut 2 squares, cut in half on diagonal │Cut 1 from freezer paper│
│ 2 ½” │ 2 ½” │ 2 ¼” │ 1 3/8” │
│ 3 ½” │ 3 ½” │ 2 ¾” │ 2 1/8” │
│ 4 ½” │ 4 ½” │ 3 ¼” │ *2 ¾”+ │
│ 5 ½” │ 5 ½” │ 3 ¾” │ *3 ½”+ │
│ 6 ½” │ 6 ½” │ 4 ¼” │ 4 ¼” │
│ 8 ½” │ 8 1/2″ │ 5 ¼” │ 5 5/8″ │
*Note: The cut sizes with a “+” means to cut slightly larger than the specified size. So, 2 ¾”+ would be in between 2 ¾” and 2 7/8”.
You can download the updated (10/13) cutting chart here: PERFECT SQUARE IN A SQUARE CUTTING CHART
I hope you give this method a try! I used it in my current “Paris in the Fall” block of the month at The Granary:
And in my Fat Quarter Shop 2012 Designer Mystery BOM Block 1, which I posted about here.
Now I get beautiful match points on all of my square-in-a-square blocks! Funny what a quilter will get excited about.
Happy Quilting!
Discover more from The Crafty Quilter
Subscribe to get the latest posts sent to your email.
56 Comments
1. Pingback: It’s Hip to be Square!
2. Thank you for the square in a square.Could you send me the dimensions for a 9 inch square
4. Julie, I love your tips, and I enjoy you youtube videos. May I offer a tip? Quilter’s Paradise has a calculator for square in square dimensions that your readers might find helpful. http://
www.quiltersparadiseesc.com/Calculators/Square%20in%20a%20Square%20Calculator.php I have found it to be a great resource.
5. Hello, Just found your website when I was looking for how to add a setting triangle to a square block, however, I’m placing the triangle in the middle of the square and not making a square in a
square, I’ making a 3-D square pe se, i may not be explaining correctly, but it looks like when both squares are set on point and the points pretrude that makes it look like a star. I’ve already
created the square in the square not I want to add a triangle where it does not span the length of the square but placed in the center.
7. Thank you.
8. Hi Julie. Thanks for the tutorial. Can you give the the dimensions for a 121/2″ unfinished block? Thanks Joanne
9. I know I’m way late, but wanted to say a big thank you for this SIAS tute! I love this for stars and borders and you’ve just taken the anxiety out if it for me. Yea!
10. Thank you Julie, I tried out the block.
Now I need to make a 12.5 unfinished square in a square block – – help! I also need flying geese to go with the square in a square block with no seam in the center of the flying geese?
11. Or u could cut a square place on corner sew on the diagonal trim press
12. Great tutorial. I tried it out and love the result. I wish I had known about this for my daughter’s Quilt! I found that it helped to mark the mid point of each side of the freezer paper with a
small crease. When I aligned the triangles to the freezer paper, I creased the mid point of the hypotenuse very carefully to avoid stretching the bias. Lining up the two midpoints helped me
position the triangles quickly.
Thanks for your practically perfect posts!
Cheers, Pam
14. I want to do a square in a square in a square with three squares that is 8 inches. Have you done one like that and what measurements did you use?
15. Oh my goodness! I am so very grateful to have found your technique! I can’t wait to try your method and will do so this week. I struggled with my own method just last week and trust me, mine was
not pretty. haha I even asked a friend to help me figure the sizes to make a 10.5″ finished block to no avail. I cannot thank you enough for the different sizes chart you so graciously share with
your followers! Beautiful blocks are in my near future, thanks to your instruction.
16. I want to do a 7.5 square, what size center square and what size for corner squares? Thanks so much, I love your method.
17. Help! I need the unfinished outside square to be 12 and 1/2 and finished at 12. I don’t care what the inside square is as long as the points are nice.
18. Woah this blog is magnificent i like reading your posts. Stay up the great paintings! You recognize, many people are looking around for this info, you can aid them greatly.
19. Hi Julie,
Thank you for posting photos to show your square in a square block.
I am a visual learner.
One ? for you: On the step where the block is trimmed to the base block, the photo shows a 6.5″ ruler to trim the base block which measures 5.5″. The unfinished block was to be 6.5″. Did a photo
get switched or am I understanding the method incorrectly? Thanks bunches for your help. Cheryl
20. Thank you so much Great instructions Trying it today
21. THANK YOU for this tutorial!!! I have a quilt I started 3 years ago that I haven’t finished because I couldn’t get perfect square in square. There is 12 of them in this quilt. Now I can finish
it!! THANK YOU!!
22. Julie,
I love this tutorial. Your pictures are great and the concept is brilliant. I intend to try it the next opportunity I get. I quickly read through the comments and didn’t find a discussion of why
you don’t trim off the extra base triangle portion. Can you explain why you prefer to keep it on? I’m curious. Thanks.
1. Great question, Camille! I’ve updated that portion of the blog post to share my thought process (which changes over the years)!
23. Great explanation of how to do this well – thanks!! I’m making a 54 40 or fight variation with a square in square center block.
24. Thought I’d share this site for those wanting to know what size to make the outer square:
25. What about a 12″ block?
26. I have alot of 4″ precut squares to use. What would be the sizes of the others squares that I would need?
27. Hi Julie,
I finished making all my squares and am now concerned whether joining ten of them together will be a problem. It seems like there may quite a bit if bulkiness where the points come together in
the seam allowance. Any tips regarding this? Also, is this a case where I would be better off pressing the seam open( to reduce the bulk), rather than pressing to one side? Any insight you have
into this would be appreciated.
I am grateful for your help,
1. You can press seams open but I find the best way to reduce the bulk is to press half the seams one direction and the other half go the other direction. I like to be able to lock my seams
together to get a nice crisp joined area.
28. Hi Julie, thanks for the great tutorial, love your Perfect Square in a Square, and I am so happy to have found your blog!
29. This seems to be the way to go. Thanks so much
1. Thanks so much for this excellent tutorial! I have to make 28 of these for the quilt I’m making.
For a beginning quilter this seemed daunting. Thanks to this post my squares will be more accurate and I see it as a real time saver, too!
30. They are perfect! Thank you!
31. Thank you for the tutorial. I am making a storm at sea quilt and need a 4 1/2 unfinished and a square in a square in a square 8 1/2 unfinished. I am having trouble with the 8 1/2 one. I will use
your measurements and technique for the 4 1/2 one. Any help you can give will be appreciated.
33. Hi. I would like to try your method for square in a square but need to make 8.5″ unfinished blocks. What measurements would I use for this size? Thanks so much in advance!
34. Thank-you so much for these wonderful clear do-able instructions! This method just makes so much sense and I have a whole box of freezer paper. I appreciate you sharing and posting this tutorial.
35. Thank you for sharing your love quilting with such attention to detail!
36. Thanks for that tutorial. I’m certainly going to give it a whirl! 🙂
37. I am so grateful that you have posted this tutorial. I am beginning to make a quilt for my daughter who just got married. It is blue and white and accurate piecing will be very important. I am
looking forward to using this promising technique.
38. I really like your method of making square in a square blocks. I’ve always used the stitch and flip method, but have trouble getting all the blocks to turn out the exact same size. Do you have
any suggestions for sewing a square in a square block to another square in a square block? I just made a quilt top where I had to do this and I had a terrible time getting the “points” to match
up. Hopefully this makes sense. I was thinking that there’s got to be a secret to making it work. Thanks for your help!
1. The best way to get your points to match up is to use a “setting pin”. I have a picture of it on a previous post: Quarter Square Triangle Tutorial and a description of the process. It’s
towards the end of the tutorial. I hope that helps.
1. Hi again Julie,
Thanks so much for your reply. I do use a setting pin, BUT, have never tried pinning on either side of the setting pin. I will definitely try this next time. Just discovered your website
and I love your ideas!
39. Julie, your method is fantastic! I am making the blocks right now and your method is resulting in perfect blocks. Thanks for showing me a wonderful new method to make these blocks!
40. Will try your square in a square-right now making triangles into squares-
41. I’m going to be doing a 4 3/4 base square. could you please send me the measurements for the corner triangles and freezer paper size. Thankyou so much for your help.
1. For a 4 3/4″ base square, you will need a 3″ center square for the freezer paper and 2 squares at 3 1/2″ cut in half diagonally for the corner triangles. Good luck on your project and let me
know if you have any other questions.
42. Thanks for the Sq.in Sq. tutoral, very helpful!!
43. Thank you for the great tutorial! I always have trouble with this block, and can’t wait to try your version!
44. Thank you for making it sooooooo simple.fantastic tutorial.God Bless
45. What a great little way to make these blocks!!
46. Great tutorial!
47. Hi Julie, I came across your blog through the Plum and June Blog hop. What a great and detailed tutorial, and your blocks are just stunning. Oh, and perfect!! Nice to meet you. Sarah
48. What a fantastic tutorial! I’ll definitely try this the next time (the first time, actually) I have to do a square-in-square block. Thanks for the chart of a variety of sizes, too!
49. Brilliant Julie! I just loved your Mystery Quilt BOM pattern you used this technique on, and will try it. Thanks for posting the Square in a Square cutting chart, it will be a pleasure to use:-)
after a week of muscle- building digging in the garden…weeds have a habit of taking over in a summer- Norway climate, I’m delighted with a rainy forecast for the next few days (gives the weeds
another boost!) and have my sewing machine oiled up and ready to go!
Have a lovely weekend.
Summer greetings from
Ps I have my flight booked for September in Sunnyvale, so hope you have a class planned at the Granary!
50. Dear Julie,
I feel almost as if this was a private lesson for me!! Thank you so much. I can’t wait to try this technique this afternoon. I think I need to get the paper first. I have heard other people using
the paper for different quilting projects but this is the first time I see it “action”. You are so kind to take the time for the tutorial.
This site uses Akismet to reduce spam. Learn how your comment data is processed.
Some of the links on this site are affiliate links and I may be compensated a small commission when you make a purchase by clicking on those links. I only promote products and services that I use and
love myself. Your support enables me to maintain the content of this blog and I am truly grateful! | {"url":"https://thecraftyquilter.com/2012/07/perfect-square-in-a-square-technique/","timestamp":"2024-11-03T15:09:08Z","content_type":"text/html","content_length":"311250","record_id":"<urn:uuid:de3b1f95-b9af-4b1c-858b-c608a838967c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00835.warc.gz"} |
ilometers to Nautical miles (International)
Kilometers to Nautical miles (International) Converter
Enter Kilometers
Nautical miles (International)
β Switch toNautical miles (International) to Kilometers Converter
How to use this Kilometers to Nautical miles (International) Converter π €
Follow these steps to convert given length from the units of Kilometers to the units of Nautical miles (International).
1. Enter the input Kilometers value in the text field.
2. The calculator converts the given Kilometers into Nautical miles (International) in realtime β using the conversion formula, and displays under the Nautical miles (International) label. You do
not need to click any button. If the input changes, Nautical miles (International) value is re-calculated, just like that.
3. You may copy the resulting Nautical miles (International) value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on button present below the input field.
What is the Formula to convert Kilometers to Nautical miles (International)?
The formula to convert given length from Kilometers to Nautical miles (International) is:
Length[(Nautical miles (International))] = Length[(Kilometers)] / 1.8520000118528
Substitute the given value of length in kilometers, i.e., Length[(Kilometers)] in the above formula and simplify the right-hand side value. The resulting value is the length in nautical miles
(international), i.e., Length[(Nautical miles (International))].
Calculation will be done after you enter a valid input.
Consider that a high-end electric car has a maximum range of 400 kilometers on a single charge.
Convert this range from kilometers to Nautical miles (International).
The length in kilometers is:
Length[(Kilometers)] = 400
The formula to convert length from kilometers to nautical miles (international) is:
Length[(Nautical miles (International))] = Length[(Kilometers)] / 1.8520000118528
Substitute given weight Length[(Kilometers)] = 400 in the above formula.
Length[(Nautical miles (International))] = 400 / 1.8520000118528
Length[(Nautical miles (International))] = 215.9827
Final Answer:
Therefore, 400 km is equal to 215.9827 nmi.
The length is 215.9827 nmi, in nautical miles (international).
Consider that a private helicopter has a flight range of 150 kilometers.
Convert this range from kilometers to Nautical miles (International).
The length in kilometers is:
Length[(Kilometers)] = 150
The formula to convert length from kilometers to nautical miles (international) is:
Length[(Nautical miles (International))] = Length[(Kilometers)] / 1.8520000118528
Substitute given weight Length[(Kilometers)] = 150 in the above formula.
Length[(Nautical miles (International))] = 150 / 1.8520000118528
Length[(Nautical miles (International))] = 80.9935
Final Answer:
Therefore, 150 km is equal to 80.9935 nmi.
The length is 80.9935 nmi, in nautical miles (international).
Kilometers to Nautical miles (International) Conversion Table
The following table gives some of the most used conversions from Kilometers to Nautical miles (International).
Kilometers (km) Nautical miles (International) (nmi)
0 km 0 nmi
1 km 0.54 nmi
2 km 1.0799 nmi
3 km 1.6199 nmi
4 km 2.1598 nmi
5 km 2.6998 nmi
6 km 3.2397 nmi
7 km 3.7797 nmi
8 km 4.3197 nmi
9 km 4.8596 nmi
10 km 5.3996 nmi
20 km 10.7991 nmi
50 km 26.9978 nmi
100 km 53.9957 nmi
1000 km 539.9568 nmi
10000 km 5399.568 nmi
100000 km 53995.68 nmi
A kilometer (km) is a unit of length in the International System of Units (SI), equal to 0.6214 miles. One kilometer is one thousand meters.
The prefix "kilo-" means one thousand. A kilometer is defined by 1000 times the distance light travels in 1/299,792,458 seconds. This definition may change, but a kilometer will always be one
thousand meters.
Kilometers are used to measure distances on land in most countries. However, the United States and the United Kingdom still often use miles. The UK has adopted the metric system, but miles are still
used on road signs.
Nautical miles (International)
A nautical mile (international) is a unit of length used in maritime and aviation contexts. One nautical mile is equivalent to 1,852 meters or approximately 1.15078 miles.
The nautical mile is defined based on the Earth's circumference and is equal to one minute of latitude.
Nautical miles are used worldwide for navigation at sea and in the air. They are particularly important for charting courses and distances in maritime and aviation industries, ensuring consistency
and accuracy in navigation.
Frequently Asked Questions (FAQs)
1. How do I convert kilometers to nautical miles?
To convert kilometers to nautical miles, divide the number of kilometers by 1.852. For example, 37.04 kilometers divided by 1.852 equals 20 nautical miles.
2. What is the formula for converting kilometers to nautical miles?
The formula is: \( \text{nautical miles} = \dfrac{\text{kilometers}}{1.852} \).
3. How many nautical miles are in a kilometer?
There are approximately 0.539957 nautical miles in 1 kilometer.
4. Is 1.852 kilometers equal to 1 nautical mile?
Yes, 1 nautical mile is exactly equal to 1.852 kilometers.
5. How do I convert nautical miles to kilometers?
To convert nautical miles to kilometers, multiply the number of nautical miles by 1.852. For example, 15 nautical miles multiplied by 1.852 equals 27.78 kilometers.
6. Why do we divide by 1.852 to convert kilometers to nautical miles?
Because each nautical mile is equal to 1.852 kilometers, dividing by 1.852 converts kilometers into nautical miles.
7. How many nautical miles are in 100 kilometers?
100 kilometers divided by 1.852 equals approximately 53.9957 nautical miles. | {"url":"https://convertonline.org/unit/?convert=kilometers-nautical_miles","timestamp":"2024-11-04T17:53:33Z","content_type":"text/html","content_length":"99671","record_id":"<urn:uuid:77d9dfae-4f96-476f-b38d-ca128d2fd937>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00631.warc.gz"} |
Elastoplastic Models in Structural Finite Element Analysis
Plastic forming is one of the most important processing methods in modern industry. As small as spring winding, as large as automobile body forging, it is a plastic forming process. The basic forming
production methods include free forging, die forging, sheet metal forming, extrusion, rolling, drawing, etc. In addition, most materials will undergo a process of plastic deformation before damage
and failure. Understanding plastic deformation is of great significance for production and life safety.
Plastic deformation means that the material deforms beyond the elastic limit (also known as yield stress), showing a complex strain-stress relationship. At this time, even if the external force is
unloaded, it cannot return to its original shape. After the material enters the plastic stage, if the strain continues to increase, the stress begins to decrease (also called strain softening) until
the material fails and fractures. There are many factors affecting plastic deformation, such as strain, strain rate, temperature, loading history, and some material factors such as damage. To better
understand and describe plasticity, various plasticity models have been proposed.
Yield Criteria
The yield criterion is the entrance point in the theory of plasticity. It describes the transition from elasticity to plasticity. The yield stress or yield function is often used to represent this
stage of structural deformation. The elastoplastic analysis is based on the yield criterion to determine whether the material is in the elastic range or has entered the plastic flow state. The
initial yield condition gives the stress state at which the material has just entered plastic deformation. The commonly used yield criteria for engineering materials are von Mises, Drucker–Prager,
Tresca, Mohr-Coulomb, and Barlat. In anisotropic materials, the yield equation is often expressed in terms of equivalent stress. In an isotropic system, however, the yield stress is a constant
Plastic Flow Rule
The flow rule is used to describe the relationship between the plastic strain increment and the stress increment, and on this basis, the elastoplastic constitutive relation is established. When the
material is experiencing plastic deformation, the flow rule defines the direction of the strain. After the material has yielded, the flow rule describes how each component of plastic strains develops
for every load increment. From the mathematical perspective, according to whether it is normal to the yield surface, the flow rule can be classified into associated and non-associated criteria. Most
metal materials follow the associated flow rule, while rock and soil materials mostly satisfy the non-associated flow rule. Besides, the flow rule plays a more important role in the process of
viscoplastic deformation.
Hardening Criteria
The hardening criterion specifies the form in which the material is during plastic deformation. When the material undergoes plastic deformation in the yield stage, it is unloaded and then loaded to
yield, and the new yield point is higher than the original yield point. The first yield point corresponds to the “initial yield stress”, and each yield is a little higher than the previous one, and
this development process is hardening. For ideal elastoplastic materials without hardening effects, the subsequent yield function is identical to the initial yield function.
According to the directionality of deformation, if the hardening in each direction is the same after loading-unloading in one direction, it is called “isotropic hardening”; if the hardening in each
direction is different after loading-unloading in one direction, it is called “Kinematic Hardening”. For example, isotropic hardening usually adopts the Von Mises (isotropic) yield criterion, which
has a good approximation for metals, polymers, and saturated geological materials. The kinematic hardening model can adopt the Hill (orthotropic) yield criterion, and the yield process needs to
consider the relative relationship between the stress direction and the axial direction, which can be used in the forging process of microstructure or metal.
For each hardening process, there are bilinear, multilinear, and nonlinear models. All three are used to describe the strain-stress relations, which give important mechanical information such as
yield stress and elastic modulus. The bilinear model is described by two lines, and the multi-linear model is described by multiple lines, which are generally expressed by strain-stress experimental
data. The nonlinear modulus uses a segment of nonlinear functions and parameters to determine the plastic properties.
Temperature effect and loading and unloading criteria
For the impact and shock, most of the plastic strain energy is converted into thermal energy, which leads to an increase in material temperature, and the elevated temperature will affect the physical
properties of the materials. Therefore, in many plastic analyses, plastic heat is taken into account in the calculation. At the same time, due to the hardening of the material, the loading and
unloading processes sometimes need to be treated differently. This is why many plastic calculations need to consider loading and unloading processes separately.
Selected Plasticity Models
Johnson-Cook model
The Johnson-Cook model is widely used for isotropic elastoplastic materials. The Johnson-Cook constitutive is mainly suitable for deformations with strain rates less than 1e4 s-1. True stress is
expressed as
Among them, the yield stress a should be greater than 0, and the plastic hardening exponent n should be less than or equal to 1. Johnson-Cook has the characteristics of simple form and easy
calculation and can reflect the effects of plastic strain, strain rate, and temperature at the same time. It is the most commonly used plastic model in structural analysis. The input parameters are
shown in the figure below
Zerilli-Armstrong model
Similar to Johnson-Cook, it is a simple nonlinear theory used to model isotropic elastoplastic materials. The expression of true stress is
Among them, the yield stress C0 should be greater than 0, and the plastic hardening exponent m should be less than or equal to 1. The input parameters are shown in the figure below
Hill model
Hill is used to describe orthotropic plastic materials. Yield stress is defined as
The Hill model is the most widely used orthotropic elastoplastic model. The input parameters are shown in the figure below
Rate-dependent multilinear hardening model
Define isotropic elastoplastic deformation via tabular data. The input data is the strain-stress relationship at different strain rates. Enter the data as shown in the figure.
Orthotropic Hill model
Orthotropic elastoplastic model. For solid elements, the equivalent stress is defined as:
The input parameters are shown in the figure below.
Cowper-Symonds model
Similar to the Johnson-Cook model, the Cowper-Symonds model is used for isotropic plasticity. Stresses can be expressed by three stress coefficients, tabular data, or a combination of both. Among
them, the three-parameter expression is:
where the yield stress a should be greater than 0, and the plastic hardening exponent n should be less than or equal to 1. The input parameters are shown in the figure below:
Zhao model
This model can describe elastoplastic materials that deform based on plastic strain rates. The stress calculation formula is:
where the yield stress A should be greater than 0, and the plastic hardening index n should be less than or equal to 1. When the strain rate is less than the reference strain rate, the stress
simplifies to:
The input parameters are shown in the figure below.
Steinberg-Guinan model
This model adds an elastoplastic model of the thermal softening effect. The expression for the yield stress is as follows:
The input parameters are shown in the figure below.
Gurson model
The Gurson model can be used for the calculation of viscoelastic plastic materials, especially porous materials. Yield stress can be obtained from experimental data or calculated by the Cowper-Symond
model. For the von Mises criterion under viscoplastic flow, it can be calculated by the following formula:
The relevant parameters are defined as follows.
Barlat3 model
The Barlat3 model is an orthotropic elastoplastic model. Commonly used in metal forming processing, such as aluminum alloy processing. Therefore a large number of applications are relevant to shell
elements. The plastic hardening function can be expressed by parameters or tabular data, where the anisotropic yield criterion for plane stress can be expressed by the following formula
The evolution of Young’s modulus during plastic deformation can be expressed by the following formula.
The input parameters are shown in the figure below:
Yoshida-Uemori model
The Yoshida-Uemori model can be used in the case of large-strain cyclic plastic deformation. This model is based on yield surface and bound surface theories. For solid elements, the von Mises yield
criterion is commonly used. For plate and shell elements, Hill or Barlat3 yield criteria can be applied. The corresponding yield criterion expression is as follows:
The input parameters are shown in the figure below:
Johnson-Holmquist model
The Johnson-Holmquist model is often used for elastoplastic processes of brittle materials, such as ceramics, glass, etc. This model can also be combined with failure models. The equivalent stress
expression is
The input parameters are shown in the figure below:
Swift-Voce model
The Swift-Voce elastoplastic model combines Johnson-Cook strain rate hardening and temperature softening effects. Can be used for orthotropic materials while allowing quadratic non-associative flow
laws. Yield stress can be expressed comprehensively by Swift and Voce models.
The input parameters are shown in the figure below:
Hensel-Spittel model
The Hensel-Spittel model takes into account the effects of strain, strain rate, and temperature. Commonly used in the hot forging process of metals. The expression for the yield stress is
The input parameters are shown in the figure below:
Vegter model
The yield function is expressed as follows.
The input parameters are shown in the figure below:
The plastic models mentioned above all support the generation of material files in the OpenRadioss format.
Postprocessing for Plasticity Analysis
In postprocessing, elastoplastic analysis often requires the evaluation of additional results. Such as
Equivalent Stress: under the hardening circumstance, the current value of the yield stress.
Accumulated Plastic Strain: it refers to the sum of the plastic strain rate along a certain path in the deformation history.
Stress Ratio: The ratio of the elastic stress to the current yield stress is an indicator of plastic deformation under load increments. When >1, the current is plastic deformation; when <1, the
current is elastic deformation; when =1, the current has just reached yield.
This article introduces plastic models commonly used in finite element analysis, which can meet most engineering application scenarios. Due to limited space, not all the formulas are listed here, and
the specific details of the relevant plastic models can be found in the theory manuals in MatEditor or other software.
WelSimulation LLC is an independent engineering simulation technology provider, located in Greater Pittsburgh, PA. Its flagship product WESLIM is a general-purpose engineering simulation software
with an all-in-one graphical user interface and self-integrated features. | {"url":"https://welsim.com/2022/11/22/elastoplastic-models-in-structural-finite-element-analysis.html","timestamp":"2024-11-12T13:24:19Z","content_type":"text/html","content_length":"34337","record_id":"<urn:uuid:bff40f47-7cbe-461b-be54-78a68a24948c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00266.warc.gz"} |
Quantum-Proof Random Numbers: developments and challenges in the implementation of QRNGs in modern cryptography - cyberunity
Written by Nicole Kosel, freelancer at cyberunity AG, in collaboration with Xenia Bogomolec, Information Security Specialist and CEO at Quant-X Security & Coding, Maximiliane Weishäupl, PhD Candidate
in Cryptography at the University Regensburg, Mehrzad Firoozi, PhD Candidate in Physics at Fraunhofer IPMS and Peter Kosel, Founder at cyberunity
The relevance of stochastic processes has expanded and evolved significantly from their initial application in the gaming industry to their current use in cryptography. In the age of information
technology, the generation of random numbers has become essential for information security, modelling, simulations and myriad other digital processes. Given that the quality-requirements for random
numbers and randomness overall are generally highest in cryptographical contexts, it is also important to distinguish between random numbers for modelling purposes and those meant for cryptographic
As a result of the exponential growth in data volumes and the increasing connectivity of devices, the demand for cryptographic methods that are based on reliable randomness is growing. Entropy, which
in information security contexts refers to unpredictability and thus the quality of randomness, plays a decisive role here. Quantum random number generators (QRNGs) represent an advanced technology
that utilises the principles of quantum mechanics to generate random numbers with high levels of entropy. These numbers are of particular importance for cryptography due to their pronounced
unpredictability – a result of quantum phenomena such as the polarisation of individual photons. The randomness generated by such processes is an indispensable component of key generation within
encryption methods and various other security protocols that aim to secure information against unauthorised access. The validation of cryptographic systems, including algorithms, protocols and their
practical implementations, poses an immense challenge to governments worldwide. When it comes to the theoretical definition and standardisation of such cryptographic systems, competitions are often
held in which interdisciplinary research teams develop reference implementations – however, these are generally not market-oriented. To support their conversion into market-ready products, various
government institutions have launched programmes that enable the independent review and certification of standards and practical implementations like modules, libraries and devices according to
clearly defined criteria. In the United States, for example, the National Institute of Standards and Technology (NIST) has taken on a pioneering role by establishing the Cryptographic Module
Validation Program (CMVP), a framework that evaluates the practical security of cryptographic applications and confirms their trustworthiness. At the centre of the CMVP is the validation of entropy
sources, a requirement that emphasises the vital importance of random numbers in cryptography.
NIST further reinforces this importance through the Entropy Source Validation (ESV) programme, which is specifically designed to assess the quality and reliability of entropy sources. Several
projects are currently researching the use of QRNGs, using the general BSI requirements for TRNGs (true random number generators) as a basis for development and evaluation. While the German Federal
Office for Information Security (BSI) has not yet defined any specific guidelines for QRNGs and is still examining their suitability relative to currently certified TRNGs, it is interested in
classifying QRNGs according to the functional classes of the AIS 20/31 standard. Corresponding research projects have been commissioned to address these open questions and to develop a solid basis
for future standards.
“Quant-ID” is one such project and aims to create quantum-safe digital identities. Under the direction of Quant-X Security & Coding and in collaboration with Fraunhofer IPMS, MTG AG and the
University of Regensburg, the project is developing a prototype for quantum-safe authorisation. It is pursuing said quantum-safe authorisation by way of a QRNG developed by IPMS and an identity
provider with clients implemented by Quant-X in order to generate high-quality random values for secure digital authentication and authorisation in critical infrastructures. MTG AG provides the
post-quantum secure PKI and the Faculty of Data Security and Cryptography of the University of Regensburg analyses the security of the QRNG and post-quantum algorithms in practice. The respective
contributions of the individual partners can be found under https://Quant-ID.de. In the context of the Quant-ID project, post-quantum secure cryptographic methods and random values produced by IPMS’
QRNG are applied to secure network communications, web applications and databases, after which the results are tested for their suitability for certification. Below, three members of the Quant-ID
team provide insights into their specific roles and discuss the impending implications that QRNGs could have for the IT security landscape in general and for end users in particular. Among these
three team members are two dedicated PhD candidates, Maximiliane Weishäupl from the University of Regensburg and Mehrzad Firoozi from the Fraunhofer Institut for Photonic Microsystems (IPMS).
Maximiliane Weishäupl is a PhD candidate at the Chair of Data Security and Cryptography at the University of Regensburg concentrating on cryptographic analysis on the one hand and measuring the
security of QRNGs on the other.
“Cryptographic analysis begins with the identification of necessary requirements that cryptographic algorithms must fulfil within the relevant protocols. An example of such a requirement in our use
case is derived from the fact that the authorisation of users must occur rapidly. On a cryptographic level, this translates into the need for rapid verification of digital signatures, which in turn
must be taken into account when selecting post-quantum secure procedures”, explains Weishäupl. The analysis looks at current developments in post-quantum cryptography and the ongoing NIST
standardisation process. When selecting suitable methods for the Quant-ID project, NIST candidates, supplemented by other promising methods, are compared with regard to the identified requirements.
For the cryptography implemented in the Quant-ID project, it is essential that the randomness produced by the QRNG is of exemplary quality.
Maximiliane explains the challenges in this regard as follows:
“In order to assess the quality of QRNGs, various approaches can be found in the literature – from the pure application of ready-made statistical test suites to security proofs with more or less
simplifying assumptions. However, strict conditions must be met for certification by the BSI: A stochastic model must be specified for the physical QRNG, i.e. a family of probability distributions
that describes the QRNG as well as possible in all situations (e.g. different environmental conditions such as temperature) and that includes all conceivable secondary information (e.g. interference
noise from components). The distribution parameters are then determined using experimentally generated data, allowing the entropy of the raw data (i.e. the direct output of the QRNG) to be
calculated. An improvement of the entropy can be achieved by so-called post-processing of the raw data and can, for example, consist of applying a hash function. The final entropy must be above a
threshold defined by the BSI and the implementation of tests that guarantee the quality of the random numbers during the operation of the QRNG is also required”.
Mehrzad Firoozi further emphasises the experimental and technical aspects of the project. As a scientific researcher at Fraunhofer IPMS, he focuses on the macroscopic structure and further
development of the QRNG. “In this phase, I compared many different conventional QRNGs that could fulfil the project requirements, both theoretically and experimentally. This helped us to select the
right QRNG structure for the project. The description of the QRNG using a mathematical model is also of great importance for qualifying its security”, explains Firoozi. In addition to the
implementation of the QRNG, his work also includes the development of a post-processing platform that converts non-uniformly distributed random numbers into uniformly distributed random numbers. “The
raw output of the QRNG usually is not uniformly distributed (having for example a Gaussian distribution instead). In this phase, this analogue output is first digitised by an analogue-to-digital
converter (ADC). Then the digital values are passed to a Field-Programmable Gate Array (FPGA) to be given a uniform probability distribution by a ‘Randomness Extraction’ method. The resulting data is
then ready to be transmitted via a suitable interface (e.g. Ethernet)”, explains Firoozi. A key aspect of his work is to optimise the technology to allow BSI certification such that the subsequent
miniaturisation of the system for practical application is possible. The collaboration between Weishäupl and Firoozi in the Quant-ID project is demonstrative of the interdisciplinary nature of the
post-quantum scene. While Weishäupl discusses the theoretical and analytical aspects of cryptography, Firoozi contributes his extensive experience in experimental physics and technology.
Maximiliane and Mehrzad each decided in favour of joining the quantum cryptography project for different yet complementary reasons. As a PhD candidate with a background in mathematics, Maximiliane
finds the practical application of mathematical concepts in cryptography particularly appealing. For her, the fascination lies in the interdisciplinary challenge that combines mathematics, computer
science, and physics, a combination that requires extensive familiarisation with various specialist disciplines. Mehrzad on the other hand, is attracted to fundamental quantum phenomena, both in
theoretical terms and in terms of the practical observation of these phenomena in his experiments. The need for in-depth knowledge of quantum optics and basic knowledge of information theory
emphasises the complexity of the field.
Together they emphasise the importance of creativity and tenacity in their research. Their work not only pushes the boundaries of our current technological understanding, but also lays a solid
foundation for the future development of secure cryptographic systems. “Cryptography is already used everywhere today,” notes Weishäupl. “Random numbers are essential and there are examples where bad
random numbers have rendered otherwise secure cryptography insecure. QRNGs with good random numbers are therefore of great interest.” Firoozi adds: “With the development of quantum computers,
encryption based on mathematical algorithms can be cracked much more easily. Since the randomness in a QRNG is intrinsically indeterministic, QRNGs can potentially provide a higher level of security
than traditional RNGs. As a result, companies or organisations where data security is critical (military organisations, substations, banks, etc.) can greatly benefit from this technology.”
Xenia Bogomolec, CEO of Quant-X Security & Coding further emphasises the critical role of randomness in cryptography: “The high degree of randomness of a so-called static cryptographic key with long
validity is particularly important. This can be, for example, a certificate for firmware updates of devices, a certificate in the chip on the German electronic passport (ePassport), or root
certificates from so-called Certification Authorities (CAs). The latter form the core of the security of all cryptographic certificates issued with them. If the root certificate is compromised, all
certificates issued with it are also compromised. Root certificates can be compromised via various attack vectors. However, if a weak entropy source is used whose determinism is known to a particular
attacker, the attacker would not even need to hack the CA in question. The impact of such a scenario would be catastrophic, as CAs with a root certificate issue countless certificates for
organisations’ applications. Another example of the need for high entropy sources are Monte Carlo simulations – a method originating in probability theory in which random samples of a distribution
are repeatedly drawn using randomness-experiments. The higher the entropy, the more valid the resulting conclusions are.”
The Quant-X Security & Coding team, under the leadership of Xenia Bogomolec, supports the security qualification of QRNGs from a traditional information security perspective. Its members’ backgrounds
in mathematics and algorithm development allow for well-rounded and effective communication with cryptographers like Maximiliane and physicists like Mehrzad. Additionally, various statistically
relevant data across the digital use of quantum entropy are collected and analysed. QRNGs are already commercially available, e.g. the Quantis series from ID-Quantique. The Quantis QRNG chip is
certified according to both NIST Entropy Source Validation (ESV) and IID SP 800-90B. Certification of QRNGs by the BSI however is likely still several years away.
As Peter Kosel from cyberunity AG sums up, the Quant-ID project not only represents an impressive fusion of expertise and innovative strength – it also embodies the next decisive step in the
development of cryptography and IT security.
Anyone interested in gaining further insight into cryptography and the role it plays in information security can find an informative article on career opportunities in the cryptography scene here:
Cryptography Specialists – The Key to a Secure Post-Quantum World If you would like more information on the future potential of this field we invite you to reach out to Xenia or Peter directly. | {"url":"https://cyberunity.io/en/quantum-proof-random-numbers-developments-and-challenges-in-the-implementation-of-qrngs-in-modern-cryptography/","timestamp":"2024-11-15T02:47:41Z","content_type":"text/html","content_length":"81420","record_id":"<urn:uuid:29706e0a-4538-4028-9224-4b850202870f>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00887.warc.gz"} |