Big O Cheat Sheet – Time Complexity Chart (2024)
An algorithm is a set of well-defined instructions for solving a specific problem. You can solve these problems in various ways.
This means that the method you use to arrive at the same solution may differ from mine, but we should both get the same result.
Because there are various ways to solve a problem, there must be a way to evaluate these solutions or algorithms in terms of performance and efficiency (the time it will take for your algorithm to
run/execute and the total amount of memory it will consume).
This is critical for programmers to ensure that their applications run properly and to help them write clean code.
This is where Big O Notation enters the picture. Big O Notation is a metric for determining the efficiency of an algorithm. It allows you to estimate how long your code will run on different sets of
inputs and measure how effectively your code scales as the size of your input increases.
What is Big O?
Big O, also known as Big O notation, represents an algorithm's worst-case complexity. It uses algebraic terms to describe the complexity of an algorithm.
Big O defines the runtime required to execute an algorithm by identifying how the performance of your algorithm will change as the input size grows. But it does not tell you how fast your algorithm's
runtime is.
Big O notation measures the efficiency and performance of your algorithm using time and space complexity.
What is Time and Space Complexity?
One major underlying factor affecting your program's performance and efficiency is the hardware, OS, and CPU you use.
But you don't consider this when you analyze an algorithm's performance. Instead, the time and space complexity as a function of the input's size are what matters.
An algorithm's time complexity specifies how long it will take to execute an algorithm as a function of its input size. Similarly, an algorithm's space complexity specifies the total amount of space
or memory required to execute an algorithm as a function of the size of the input.
We will be focusing on time complexity in this guide. This will be an in-depth cheatsheet to help you understand how to calculate the time complexity for any algorithm.
Why is time complexity a function of its input size?
To perfectly grasp the concept of "as a function of input size," imagine you have an algorithm that computes the sum of numbers based on your input. If your input is 4, it will add 1+2+3+4 to output
10; if your input is 5, it will output 15 (meaning 1+2+3+4+5).
const calculateSum = (input) => {
  let sum = 0;
  for (let i = 1; i <= input; i++) {
    sum += i;
  }
  return sum;
};
In the code above, we have three statements: the initialization of sum (statement 1), the addition inside the loop (statement 2), and the return (statement 3).
Because there is a loop, the second statement will be executed based on the input size: if the input is four, statement 2 will be executed four times, meaning the entire algorithm will run six (4 + 2) times.
In plain terms, the algorithm will run input + 2 times, where input can be any number. This shows that the running time is expressed in terms of the input. In other words, it is a function of the input size.
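To see that different methods can reach the same result, the same sum can also be computed with no loop at all, using the closed-form formula input × (input + 1) / 2. A quick sketch (the function name is mine, not from the article):

```javascript
// Closed-form sum of 1..input: a fixed number of operations,
// no matter how large the input is.
const calculateSumFast = (input) => (input * (input + 1)) / 2;

console.log(calculateSumFast(4)); // 10
console.log(calculateSumFast(5)); // 15
```

Both versions return the same answers, but the loop version does work proportional to the input while this one does not; that difference is exactly what Big O notation captures.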
In Big O, there are six major types of complexities (time and space):
• Constant: O(1)
• Linear time: O(n)
• Logarithmic time: O(log n)
• Quadratic time: O(n^2)
• Exponential time: O(2^n)
• Factorial time: O(n!)
Before we look at examples for each time complexity, let's understand the Big O time complexity chart.
Big O Complexity Chart
The Big O chart, also known as the Big O graph, is a visual representation of asymptotic notation: it shows how an algorithm's complexity, or performance, grows as a function of input size.
This helps programmers identify and fully understand the worst-case scenario and the execution time or memory required by an algorithm.
The standard Big O graph ranks the common complexities from best (flat growth) to worst (explosive growth) as input size increases.
O(1), which stands for constant time complexity, is the best. This implies that your algorithm processes only one statement without any iteration. Then there's O(log n), which is good, and the rest rank as shown below:
• O(1) - Excellent/Best
• O(log n) - Good
• O(n) - Fair
• O(n log n) - Bad
• O(n^2), O(2^n) and O(n!) - Horrible/Worst
You now understand the various time complexities, and you can recognize the best, good, and fair ones, as well as the bad and worst ones (always avoid the bad and worst time complexity).
The next question that comes to mind is how you know which algorithm has which time complexity, given that this is meant to be a cheatsheet 😂.
• When your calculation is not dependent on the input size, it is a constant time complexity (O(1)).
• When the input size is reduced by half on each step, for example while iterating or recursing, it is a logarithmic time complexity (O(log n)).
• When you have a single loop within your algorithm, it is linear time complexity (O(n)).
• When you have nested loops within your algorithm, meaning a loop in a loop, it is quadratic time complexity (O(n^2)).
• When the growth rate doubles with each addition to the input, it is exponential time complexity (O(2^n)).
Let's begin by describing each time complexity with examples. It's important to note that I'll use JavaScript in the examples in this guide, but the programming language isn't important as long as
you understand the concept and each time complexity.
Big O Time Complexity Examples
Constant Time: O(1)
When your algorithm is not dependent on the input size n, it is said to have a constant time complexity with order O(1). This means that the run time will always be the same regardless of the input size.
For example, take an algorithm that returns the first element of an array. Even if the array has 1 million elements, the time complexity will be constant if you use this approach:
const firstElement = (array) => {
  return array[0];
};

let score = [12, 55, 67, 94, 22];
console.log(firstElement(score)); // 12
The function above will require only one execution step, meaning the function is in constant time with time complexity O(1).
But as I said earlier, there are various ways to achieve a solution in programming. Another programmer might decide to first loop through the array before returning the first element:
const firstElement = (array) => {
  for (let i = 0; i < array.length; i++) {
    return array[0];
  }
};

let score = [12, 55, 67, 94, 22];
console.log(firstElement(score)); // 12
This is just an example – likely nobody would do this. Note, though, that because the function returns during its very first iteration, this version still runs in constant time; a loop only produces linear time, with time complexity O(n), when it actually iterates over all n elements, as in the next example.
Linear Time: O(n)
You get linear time complexity when the running time of an algorithm increases linearly with the size of the input. This means that when a function has an iteration that iterates over an input size
of n, it is said to have a time complexity of order O(n).
For example, take an algorithm that returns the factorial of any inputted number. If you input 5, it loops through and multiplies 1 by 2 by 3 by 4 by 5, then outputs 120:
const calcFactorial = (n) => {
  let factorial = 1;
  for (let i = 2; i <= n; i++) {
    factorial = factorial * i;
  }
  return factorial;
};

console.log(calcFactorial(5)); // 120
The fact that the runtime depends on the input size means that the time complexity is linear with the order O(n).
Logarithmic Time: O(log n)
This is similar to linear time complexity, except that the runtime grows with the logarithm of the input size rather than the input size itself. When the input size is halved on each iteration or step, an algorithm is said to have logarithmic time complexity.
This is the second-best class of the ones listed here because the algorithm never needs to touch the whole input: the amount of remaining work is cut in half at every step.
A great example is binary search functions, which divide your sorted array based on the target value.
For example, suppose you use a binary search algorithm to find the index of a given element in an array:
const binarySearch = (array, target) => {
  let firstIndex = 0;
  let lastIndex = array.length - 1;

  while (firstIndex <= lastIndex) {
    let middleIndex = Math.floor((firstIndex + lastIndex) / 2);
    if (array[middleIndex] === target) {
      return middleIndex;
    }
    if (array[middleIndex] > target) {
      lastIndex = middleIndex - 1;
    } else {
      firstIndex = middleIndex + 1;
    }
  }
  return -1;
};

let score = [12, 22, 45, 67, 96];
console.log(binarySearch(score, 96)); // 4
In the code above, since it is a binary search, you first get the middle index of your array, compare it to the target value, and return the middle index if it is equal. Otherwise, you must check if
the target value is greater or less than the middle value to adjust the first and last index, reducing the input size by half.
Because for every iteration the input size reduces by half, the time complexity is logarithmic with the order O(log n).
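To get a feel for how slowly log n grows, here is a small sketch (the helper name is my own, not from the article) that estimates the maximum number of halvings binary search needs for a sorted array of n elements:

```javascript
// Worst-case comparisons for binary search over n elements:
// floor(log2(n)) + 1, because each comparison halves the search range.
const maxBinarySearchSteps = (n) => Math.floor(Math.log2(n)) + 1;

console.log(maxBinarySearchSteps(16));      // 5
console.log(maxBinarySearchSteps(1000000)); // 20
```

A million elements need only about 20 comparisons, which is why logarithmic algorithms scale so well.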
Quadratic Time: O(n^2)
When you perform nested iteration, meaning having a loop in a loop, the time complexity is quadratic, which is horrible.
A perfect way to explain this would be if you have an array with n items. The outer loop will run n times, and the inner loop will run n times for each iteration of the outer loop, giving n^2 executions in total. If the array has ten items, the inner statement executes 100 times (10^2).
Here is an example by Jared Nielsen, where you compare each element in an array to output the index when two elements are similar:
const matchElements = (array) => {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      if (i !== j && array[i] === array[j]) {
        return `Match found at ${i} and ${j}`;
      }
    }
  }
  return "No matches found 😞";
};

const fruit = ["🍓", "🍐", "🍊", "🍌", "🍍", "🍑", "🍎", "🍈", "🍊", "🍇"];
console.log(matchElements(fruit)); // "Match found at 2 and 8"
In the example above, there is a nested loop, meaning that the time complexity is quadratic with the order O(n^2).
Exponential Time: O(2^n)
You get exponential time complexity when the growth rate doubles with each addition to the input (n), often iterating through all subsets of the input elements. Any time an input unit increases by 1,
the number of operations executed is doubled.
The recursive Fibonacci sequence is a good example. Assume you're given a number and want to find the nth element of the Fibonacci sequence.
The Fibonacci sequence is a mathematical sequence in which each number is the sum of the two preceding numbers, where 0 and 1 are the first two numbers. The third number in the sequence is 1, the
fourth is 2, the fifth is 3, and so on... (0, 1, 1, 2, 3, 5, 8, 13, …).
This means that if you pass in 6, then the 6th element in the Fibonacci sequence would be 8:
const recursiveFibonacci = (n) => {
  if (n < 2) {
    return n;
  }
  return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2);
};

console.log(recursiveFibonacci(6)); // 8
In the code above, each call spawns two further recursive calls, so the number of operations roughly doubles every time the input grows by one. This means the time complexity is exponential with an order of O(2^n).
Wrapping Up
In this guide, you have learned what time complexity is all about, how performance is determined using the Big O notation, and the various time complexities that exist, with examples.
You can learn more via freeCodeCamp's JavaScript Algorithms and Data Structures curriculum.
Happy learning!
You can access over 200 of my articles by visiting my website. You can also use the search field to see if I've written a specific article.
Joel Olawanle
Frontend Developer & Technical Writer
Learn to code for free. freeCodeCamp's open source curriculum has helped more than 40,000 people get jobs as developers.
Reading Clocks: Time to the Quarter Hour
Lesson Video: Reading Clocks: Time to the Quarter Hour Mathematics • Second Year of Primary School
In this video, we will learn how to tell time to the quarter hour on both analog and digital clocks.
Video Transcript
Reading Clocks: Time to the Quarter Hour
In this video, we will learn how to tell time to the quarter hour on both digital and analog clocks. Here are four clocks. We call this type of clock with a face and two hands an analog clock. The
longer of the two hands on the clock is called the minute hand. When the minute hand is pointing at number three, we know it’s quarter past. When the minute hand points to number six, we know it’s
half past the hour. When the minute hand is pointing at nine, we know it’s quarter to. And when the minute hand points to number 12, we know it’s something o’clock.
How many minutes are there in a quarter of an hour? We know there are 60 minutes in one hour, so half an hour is half of 60, which is 30 minutes. And if we halve that again, we get a quarter of an
hour, which is 15 minutes. Another way to say quarter past is 15 minutes past. If we start at 12 or o’clock and count 15 minutes — five, 10, 15 minutes — two quarter turns make half an hour. Two
quarter turns make a half turn. So we know that half past is 30 minutes past the hour. If we make three-quarter turns from o’clock, it’s quarter to the next hour. We know that one hour is 60
minutes; half an hour is 30 minutes; quarter of an hour is 15 minutes. So three-quarters is 45 minutes past the hour, and it’s a quarter of a turn or 15 minutes until the next hour.
There are four-quarters in one hour. Each quarter of an hour is worth 15 minutes. Quarter past is the same as 15 minutes past. Two-quarters or one-half is 30 minutes. Three-quarters of an hour is 45
minutes. And a whole hour or four-quarters is 60 minutes. When we’re telling the time using a digital clock, if there are zero minutes, we know it’s o’clock. If it’s 15 minutes past the hour, we know
it’s quarter past. 30 minutes past the hour means it’s half past. And if it’s 45 minutes past the hour, it means it’s quarter to the next hour. 45 plus 15 makes 60. And we know there are 60 minutes
in a whole hour. In this video, we’re going to practice reading and writing the time to the quarter hour. Let’s have a go at some practice questions.
Write the time in words. Is it quarter past ten, quarter to nine, quarter to ten, or quarter past nine?
In this question, we’re shown a digital clock, and we’re asked to write the time in words. What time is shown on the clock? We know the first digit on a digital clock tells us the hour. And on this
clock, the first digit is a nine. Both of these times have a nine in them. These digits on the clock tell us the number of minutes. If it’s 9:15, that means it’s 15 minutes past nine. We know that
one hour is 60 minutes. Half an hour is half of 60 which is 30 minutes. And a quarter of an hour is 15 minutes. The time in words is quarter past nine.
Which clock shows the same time as this one?
We’re shown the time on an analog clock. And we have to find the digital clock which shows the same time. What is the time on this clock? The shortest hand, the hour hand, is just past the number
one. So we know it’s something past 1 o’clock. The hour shown on both of these digital clocks is one, and the minute hand is pointing to number three. This means it’s quarter past one. How many
minutes have passed when a quarter of a turn has been made around the clock? Five, 10, 15. It’s 15 minutes past one or quarter past one. So this is the digital clock, which shows the same time as the
analog clock we were given. The time is 1:15.
What is the time, in words, that this clock shows? It is a quarter to six. It is a quarter past five. It is a quarter past six. It is a quarter to five.
We know that quarter past is the same as 15 minutes past the hour. Half past is 30 minutes past the hour. And quarter to is 45 minutes past the hour. So if it’s 5:45, that means it’s quarter to six
or 15 minutes until 6 o’clock. It is a quarter to six. The time in words that this clock shows is a quarter to six.
Which analog clock shows the same time as this digital one?
We’re shown a digital clock with the time 4:45. We have to select the analog clock, which shows the time 4:45. 4:45 is between 4 o’clock and 5 o’clock. Quarter past four is 4:15, 4:30 is half past
four, and 4:45 is quarter to five. In another 15 minutes, it will be 5 o’clock. Which of our clocks shows 45 minutes past four or quarter to five? Both of these clocks show a quarter-to time. The
minute hand is pointing to number nine, which means there are 15 minutes until the next hour or quarter of an hour until the next hour. But the hour hand on the first clock is between the hours five
and six.
We already know that 4:45 is between the hours four and five. The hour hand on this clock is between 4 and 5 o’clock. This is the analog clock which shows 4:45 or quarter to five. We used a number
line to help and picked the analog clock, which shows the same time as the digital one.
What have we learned in this video? We’ve learned how to tell time to the quarter hour on analog and digital clocks.
How many times must I send an email before it’s opened?
How many times do I have to send an email to my customers before I’m confident a certain percentage have opened it?
Like most businesses today, the Retail Shop maintains a customer email list. Each week, Michelle, the store’s manager, sends a promotional email message to these customers. Historical data show that,
on average, 15% of all Retail Shop customers open these promotional emails. This is known as the store’s “Open Rate”.
Michelle, being a smart marketer, is concerned that her promotional message is not being opened by enough customers. She asks herself, “how many times must a promotional email be sent before it has
been opened by at least 50% of recipients?”
On the face of it, this seems like a difficult problem to solve. However, using basic probability theory makes finding a solution surprisingly simple.
Balls and Cups
We can’t know all the factors that affect whether or not a customer will open an email, so we consider it to be a random event. A solution to Michelle’s question can then be found by using methods
analogous to tossing a coin, rolling dice, or pulling balls from an urn.
By transforming Michelle’s conundrum into a ball tossing experiment, we can, with certain limitations, arrive at a best case answer to her question.
Our experiment is designed as follows: Set out 100 cups. Take 15 ping-pong balls, toss them in the air, and record into which cups the balls land. The rules of our experiment require that for each
toss, a ball may land in at most one cup and no cup may contain more than one ball.
In this case, Michelle’s question becomes: How many times would the balls need to be tossed before 50% of the cups have caught a ball? Or, how many tosses does it take for any given cup to have a 50%
chance of catching a ball? These are similar to asking how many times you must toss a coin before a head appears, or how many times you must roll a die before a six appears. They are all answered by
assuming the number of events follows what is known as a geometric distribution.
To understand this, let’s define the following:
p is the probability a ball lands in any given cup after one toss.
q = (1 – p), is the probability a cup does NOT catch a ball after one toss.
N is the number of ball tosses.
X is the probability a ball lands in any given cup after N tosses.
Using the geometric distribution, the equation for determining the probability that a ball lands in any given cup after N tosses is:
1 – q^N = X
If, the email Open Rate is p = .15, and we’re interested in X = .50, we get:
1 – (.85)^N = .50
(.85)^N = .50
Then solve for N,
log(.85)^N = log(.50)
N log(.85) = log(.50)
N = log(.50) ÷ log(.85)
N = 4.265
So, given 15 balls and 100 cups, p = .15 and q = .85, it takes 4.265 ball tosses before the chance of a ball landing in any given cup becomes 50%. Since we can’t perform fractional tosses, 5 ball
tosses would be required and the probability rises to approximately 56%.
Michelle’s Answer
Michelle’s question was, “how many times must a promotional email be sent before it has been opened by 50% of recipients?”
Through the use of probability theory and a simple ball tossing experiment, we’ve arrived at a “best case” answer. The specific reasons for describing the answer in this way is beyond the scope of
this article.
However, whether it be flipping coins, rolling dice, or tossing balls, probability theory makes certain assumptions about the nature of the generating devices (coins, dice, or balls). First, that as
instruments of chance they are fair and unbiased. A coin that is assumed to be fair (head and tails each have a 50% chance of appearing) but always comes up heads is neither fair nor unbiased.
Second, that each event (a coin toss, a roll of the dice, or a ball toss) is independent of any other event. That is, no past event (flip of a coin, roll of a die) affects the outcome of any future event.
In the case of opening an email and the tendencies of email recipients, it’s safe to say that, as a general matter, they adhere to neither of the two assumptions described above. Not everyone has the same probability of opening an email, nor do all emails have the same probability of being opened.
These realities do not preclude us from using a ball tossing experiment to solve Michelle’s problem, but they do require us to characterize the solution as a “best case” scenario — a very reasonable
place to start.
It is interesting to note that our model does not depend on the content of the email, only whether or not it was opened. So, if Michelle wants to be sure that her customers have a 50% chance of hearing about a particular offer, she must include that offer in each email sent. But, if she’s only interested in assuring that 50% of her customers hear from her, she can vary the content as she likes.
The generalized form of the Email Confidence Equation is:
N = log(1 – X) ÷ log(q)
Where X and q are as defined above. Inputting X and q, allows this equation to be used to find N under a variety of circumstances.
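The generalized equation translates directly into code. A minimal sketch (the function name is mine, not from the article) that reproduces Michelle's numbers:

```javascript
// N = log(1 - X) / log(q): the number of sends needed before any given
// recipient has probability X of having opened at least one email,
// where q = 1 - openRate is the chance a single email goes unopened.
const emailSendsNeeded = (openRate, targetProbability) => {
  const q = 1 - openRate;
  return Math.log(1 - targetProbability) / Math.log(q);
};

const n = emailSendsNeeded(0.15, 0.5);
console.log(n.toFixed(3)); // "4.265"

// Rounding up to 5 whole sends, the probability rises to about 56%:
console.log((1 - Math.pow(0.85, 5)).toFixed(4)); // "0.5563"
```

Plugging in other open rates or confidence targets gives the same "best case" estimate under the independence assumptions discussed above.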
Sources: Wikipedia, Geometric Distribution
Path Counting in Graphs Using C++ Dynamic Programming
Programming assignments often involve tackling complex and computationally intensive problems, particularly when dealing with recursive solutions. Recursive algorithms, while sometimes
straightforward and elegant, can become inefficient when they repeatedly solve the same subproblems. This inefficiency often results in a significant increase in computation time, especially for
large input sizes or complex problem constraints. The repeated calculations inherent in recursive methods can lead to exponential time complexity, making it impractical for large datasets or
extensive problem instances.
One common challenge in these scenarios is optimizing these recursive algorithms to enhance their performance and efficiency. Traditional recursive approaches may be straightforward to implement but
often lack the efficiency needed for larger or more complex problems. This inefficiency can be particularly pronounced in problems involving large graphs, numerous states, or intricate constraints.
Dynamic programming (DP) provides a powerful solution to this challenge. By systematically breaking down a problem into simpler subproblems and storing the results of these subproblems, dynamic
programming avoids redundant calculations and reduces the overall computational burden. This approach transforms a potentially exponential time complexity into a more manageable polynomial time
complexity, making it feasible to tackle larger instances of the problem.
The essence of dynamic programming lies in its ability to optimize recursive solutions by storing intermediate results and reusing them efficiently. This not only improves the runtime of the
algorithm but also enables the solving of problems that would otherwise be computationally infeasible. In this blog, we will explore various strategies for converting slow, recursive solutions into
efficient dynamic programming implementations. We will cover essential concepts such as memoization, tabulation, and state transitions, providing a comprehensive understanding of how to apply these
techniques to enhance algorithmic efficiency.
Through detailed examples and practical applications, we will illustrate how dynamic programming can be applied to a range of programming assignments, from simple problems to more complex scenarios
involving intricate constraints and large datasets. By mastering these techniques, you will be better equipped to handle a wide array of programming challenges, optimize your solutions, and achieve
better performance in your computational tasks.
Understanding Recursive Solutions
Recursive solutions can be both elegant and straightforward, often providing a clear and concise way to approach complex problems. By breaking down a problem into smaller, more manageable
subproblems, recursive algorithms can mirror the problem's structure, making them easier to conceptualize and implement. However, despite their clarity, recursive solutions can suffer from
significant inefficiencies due to repeated calculations and redundant work.
One of the primary issues with recursive solutions is that they can repeatedly solve the same subproblems. For example, consider a classic problem like calculating the Fibonacci sequence. A naive
recursive approach recalculates Fibonacci numbers multiple times for the same values, resulting in an exponential time complexity. This inefficiency arises because the algorithm recalculates values
that have already been computed in previous recursive calls.
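The classic fix for this is memoization: caching each Fibonacci number the first time it is computed so later calls can look it up instead of recursing again. A sketch of the contrast (not from the original assignment text):

```javascript
// Naive version: exponential time, because the same fib(i) values are
// recomputed in many different branches of the recursion.
const fibNaive = (n) => (n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2));

// Memoized version: each fib(i) is computed once, then cached. O(n).
const fibMemo = (n, cache = new Map()) => {
  if (n < 2) return n;
  if (cache.has(n)) return cache.get(n);
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result);
  return result;
};

console.log(fibMemo(6));  // 8
console.log(fibMemo(50)); // 12586269025, far beyond what the naive version handles quickly
```

The two functions compute identical values; only the amount of repeated work differs.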
Understanding the recursive solution's underlying logic is crucial before moving to dynamic programming. This involves analyzing how the problem is divided into subproblems, identifying overlapping
subproblems, and recognizing the base cases that terminate the recursion. By thoroughly grasping the recursive process, you can pinpoint where redundancies occur and how they contribute to overall inefficiency.
To illustrate this, let’s take the problem of computing the number of paths in a graph using a recursive approach. A naive recursive method might explore all possible paths from the starting node to
the destination node, recalculating paths for the same nodes and edges multiple times. As the problem size grows, the number of recursive calls increases exponentially, leading to impractically long
computation times.
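As an illustration (the graph and function names here are hypothetical, not from any specific assignment), here is what the naive recursion looks like, alongside a memoized version that caches each node's path count:

```javascript
// Count the distinct paths from `node` to `target` in a directed acyclic
// graph given as an adjacency list. Naive recursion: a node reachable by
// many different paths has its path count recomputed every time.
const countPathsNaive = (graph, node, target) => {
  if (node === target) return 1;
  let total = 0;
  for (const next of graph[node] || []) {
    total += countPathsNaive(graph, next, target);
  }
  return total;
};

// Memoized version: each node's path count is computed exactly once.
const countPathsMemo = (graph, node, target, cache = new Map()) => {
  if (node === target) return 1;
  if (cache.has(node)) return cache.get(node);
  let total = 0;
  for (const next of graph[node] || []) {
    total += countPathsMemo(graph, next, target, cache);
  }
  cache.set(node, total);
  return total;
};

// A small diamond-shaped DAG: A → B, A → C, B → D, C → D.
const dag = { A: ["B", "C"], B: ["D"], C: ["D"], D: [] };
console.log(countPathsNaive(dag, "A", "D")); // 2
console.log(countPathsMemo(dag, "A", "D")); // 2
```

On a chain of such diamonds the naive version's call count doubles with every diamond, while the memoized version stays linear in the number of nodes.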
Before transitioning to dynamic programming, it’s essential to map out how the recursive solution operates, understand its time complexity, and identify opportunities for optimization. By doing so,
you’ll be better equipped to transform the recursive approach into a more efficient dynamic programming solution. This transformation involves capturing and reusing intermediate results to avoid
redundant calculations, thereby significantly improving the algorithm's performance.
In summary, while recursive solutions can provide an elegant and intuitive approach to problem-solving, their inefficiencies often necessitate optimization. By thoroughly understanding the recursive
logic, you can leverage dynamic programming to enhance efficiency, reduce computation time, and handle larger and more complex problems effectively.
Key Concepts in Dynamic Programming
Dynamic programming (DP) is a powerful technique that optimizes the solution to complex problems by breaking them down into smaller, simpler subproblems. By solving each subproblem only once and
storing the results, DP significantly reduces the time complexity that arises in recursive algorithms. This section outlines the essential concepts to keep in mind when solving problems using dynamic programming.
1. Identify Overlapping Subproblems
The first step in applying dynamic programming is recognizing whether the problem contains overlapping subproblems. This means that the same subproblems are solved multiple times within the
recursive solution. For instance, in problems like counting distinct paths in a graph with certain constraints, some subpaths are repeatedly recalculated. By identifying these subproblems, you
can store their solutions to avoid redundant work. A key indicator that a problem is suitable for DP is if the recursive approach involves recalculating the same values multiple times.
2. Define the State and State Transition
In dynamic programming, a "state" refers to a specific configuration of the problem at a given point. To solve the problem, you need to define what constitutes a state and how one state
transitions to another. Typically, this involves creating a table (often a 2D array) where each entry represents the solution to a particular subproblem. For example, in a subway route
optimization problem, the state could represent the current station and the number of tickets used so far. The state transition defines how to move from one state (e.g., a specific station and
ticket count) to another.
3. Formulate the DP Recurrence Relation
Once you’ve defined the state, the next step is to formulate the recurrence relation. This is the mathematical or logical formula that expresses how the solution to the original problem can be
built from solutions to smaller subproblems. The recurrence relation describes how each entry in the DP table depends on other entries. For instance, in the subway problem, the number of ways to
reach a station with a certain number of tickets may depend on how many ways there are to reach connected stations with fewer tickets.
4. Initialization and Iteration
Before filling in the DP table, you must initialize it with the base cases—those subproblems that have known solutions. These base cases typically represent the simplest scenarios (e.g., zero
tickets used or reaching the starting point). After initialization, the table is iteratively filled by applying the recurrence relation. The order of iteration is important: ensure that each
state is computed only once and that all the dependencies (previous states) are calculated before moving on to the next state. This guarantees that you avoid redundant calculations and solve the
problem efficiently.
5. Extract the Solution
Once the DP table is fully populated, the solution to the original problem can be extracted from the appropriate entry in the table. This is usually the cell that corresponds to the final state
of the problem. For example, in a graph traversal problem, the final state might represent reaching a specific node after a certain number of steps. The value in the corresponding cell will
contain the answer, whether it’s the total number of distinct paths, the shortest distance, or the maximum possible outcome.
Example: Subway Problem
Let’s consider a problem where you need to count the number of distinct trips from a starting station back to itself using a specified number of tickets. Here’s a general approach to solving such
problems using dynamic programming:
1. Define the State: Let dp[t][i] represent the number of ways to be at station i using exactly t tickets.
2. Initialize the State: Set dp[0][start] = 1, where start is the starting station. This represents that there's exactly one way to be at the starting station with 0 tickets.
3. State Transition: For each ticket count t from 0 to k-1, update the DP table by considering all possible moves from each station. For each station i, if you can move to station j, update dp[t+1]
[j] by adding the value of dp[t][i].
4. Extract the Result: After filling the DP table, the value of dp[k][start] will give the number of distinct trips from the starting station back to itself using exactly k tickets.
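The four steps above can be sketched in code. The following is an illustrative Python version (the original article targets C++; the function name, graph layout, and station count here are hypothetical choices for demonstration):

```python
# Illustrative Python sketch of the dp[t][i] approach (the original article
# targets C++; the graph and names here are hypothetical).
def count_round_trips(adjacency, start, k):
    """Count walks of length k from `start` back to `start`."""
    n = len(adjacency)
    # dp[t][i]: number of ways to be at station i after using t tickets
    dp = [[0] * n for _ in range(k + 1)]
    dp[0][start] = 1  # base case: one way to be at the start with 0 tickets
    for t in range(k):
        for i in range(n):
            if dp[t][i]:
                for j in adjacency[i]:  # every station reachable from i
                    dp[t + 1][j] += dp[t][i]
    return dp[k][start]

# Three stations forming a triangle; every pair is connected.
triangle = [[1, 2], [0, 2], [0, 1]]
print(count_round_trips(triangle, start=0, k=3))  # → 2 (0-1-2-0 and 0-2-1-0)
```

Note that the table is filled strictly in order of increasing ticket count, so every dependency (all of row `t`) is complete before row `t+1` is computed, exactly as step 4 of the methodology requires.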
Benefits of Dynamic Programming
Programming assignments often involve solving problems that require optimized solutions for real-world scenarios. One common issue that students face is the inefficiency of recursive algorithms,
especially when the same subproblems are repeatedly solved. Dynamic programming (DP) offers a powerful technique to transform these inefficient recursive solutions into efficient ones by breaking
down the problem into smaller subproblems and solving each just once. This blog will walk you through key strategies for using dynamic programming, providing a foundation for solving similar
assignment challenges.
1. Improved Efficiency: Dynamic programming significantly enhances the efficiency of solving complex problems. Recursive algorithms often suffer from redundant computations, where the same
subproblems are repeatedly solved, leading to slow performance. Dynamic programming eliminates this issue by storing solutions to subproblems in a table or array, allowing each subproblem to be
solved only once. This reduces time complexity from exponential (as seen in many recursive solutions) to polynomial, making it possible to handle much larger data sets and more complex problems.
2. Systematic and Structured Approach: One of the most valuable aspects of dynamic programming is that it provides a well-defined, step-by-step method for problem-solving. It breaks down a complex
problem into smaller, more manageable subproblems, ensuring a logical progression toward the final solution. By systematically working through each subproblem, dynamic programming ensures that no
part of the problem is overlooked or calculated inefficiently.
3. Optimal Solutions: For problems with optimal substructure, dynamic programming guarantees that the solution obtained is optimal. By solving each subproblem in isolation and combining the results
efficiently, it ensures that the overall solution is the best possible. This optimality is especially useful in problems involving graphs, optimization, or pathfinding, where finding the most
efficient or least costly solution is essential.
4. Memory Efficiency: In many cases, dynamic programming not only improves time efficiency but also optimizes memory usage. By storing only the necessary subproblems and using techniques such as
memoization or tabulation, dynamic programming can be designed to use minimal space while still solving the problem effectively. This makes it an excellent approach for solving problems with
limited computational resources.
5. Versatility Across Problem Types: Dynamic programming can be applied to a wide range of problems, from graph traversal and pathfinding to optimization and decision-making problems. Its
versatility makes it a valuable tool for students and programmers who regularly encounter complex algorithms in assignments and real-world applications. Whether you are solving problems in
combinatorics, finance, or computer science, dynamic programming provides a robust framework for efficient problem-solving.
By using dynamic programming, students can transform computationally expensive recursive solutions into more practical and efficient approaches, ensuring faster execution and better performance
across a variety of problem domains.
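As a minimal illustration of the redundant-computation point (this example is not from the original assignment), compare a naive recursive Fibonacci with a memoized one in Python:

```python
# Illustrative comparison: memoization turns an exponential-time recursion
# into a linear-time one by solving each subproblem exactly once.
from functools import lru_cache

def fib_naive(n):
    # Recomputes the same subproblems over and over: O(phi^n) calls.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each subproblem is solved exactly once: O(n) work overall.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # → 832040, instantly; fib_naive(30) takes noticeably longer
```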
Dynamic programming is a powerful tool for optimizing recursive solutions and tackling complex problems efficiently. By carefully analyzing the problem, breaking it down into subproblems, and
applying dynamic programming techniques such as state definition, state transitions, and recurrence relations, you can significantly improve the performance of algorithms. This not only reduces time
complexity from exponential to polynomial but also ensures that your solution is both systematic and optimal. Mastering dynamic programming can open the door to solving more advanced challenges,
especially in large-scale applications like graph traversal, optimization, and pathfinding problems.
If you ever find yourself stuck or need assistance with complex programming assignments, be sure to visit ProgrammingHomeworkHelp.com. Our team of experts is ready to help you navigate through
various programming challenges, offering customized solutions that are efficient and budget-friendly. Whether you’re dealing with recursive algorithms, dynamic programming, or any other programming
problem, we’re here to ensure you succeed!
Solstat: A statistical approximation library
Numerical Approximations
There are many useful mathematical functions that engineers use in designing applications. This body of knowledge can be more widely described as approximation theory for which there are many great
resources. An example of functions that need approximation and are also particularly useful to us at Primitive are those relating to the Gaussian (or normal) distribution. Gaussians are fundamental
to statistics, probability theory, and engineering (e.g., the Central Limit Theorem).
At Primitive, our RMM-01 trading curve relies on the (standard) Gaussian probability density function (PDF) $\phi(x)=\frac{1}{\sqrt{2\pi}} e^\frac{-x^2}{2}$, the cumulative probability distribution
(CDF) $\Phi$, and its inverse $\Phi^{-1}$. These specific functions appear due to how Brownian motion appears in the pricing of Black-Scholes European options. The Black-Scholes model assumes
geometric Brownian motion to get a concrete valuation of an option over its maturity.
solstat is a Solidity library that approximates these Gaussian functions. It was built to achieve a high degree of accuracy when computing Gaussian approximations within compute constrained
environments on the blockchain. Solstat is open source and available for anyone to use. An interesting use case being showcased by the team at asphodel is for designing drop rates, spawn rates, and
statistical distributions that have structured randomness in onchain games.
In the rest of this article we'll dive deep into function approximations, their applications, and methodology.
Approximating Functions on Computers
The first step in evaluating complicated functions with a computer involves determining whether or not the function can be evaluated "directly", i.e. with instructions native to the processing unit.
All modern processing units provide basic binary operations of addition (possibly subtraction) and multiplication. In the case of simple functions like $f(x)=mx+b$ where $m$, $x$, and $b$ are
integers, computing an output can be done efficiently and accurately.
Complex functions like the Gaussian PDF $\phi(x)$ come with their own unique set of challenges. These functions cannot be evaluated directly because computers only have native opcodes or logical
circuits/gates that handle simple binary operations such as addition and subtraction. Furthermore, integer types are native to computers since their mapping from bits is canonical, but decimal types
are not ubiquitous. There can be no exponential or logarithmic opcodes for classical bit CPUs as they would require infinitely many gates. There is no way to represent arbitrary real numbers without
information loss in computer memory.
This begs the question: How can we compute $\phi(x)$ with this restrictive set of tools? Fortunately, this problem is extremely old, dating back to the human desire to compute complicated expressions
by hand. After all, the first "computers" were people! Of course, our methodologies have improved drastically over time.
What is the optimal way of evaluating arbitrary functions in this specific environment? Generally, engineers try to balance the "best possible scores" given the computational economy and desired
accuracy. If constrained to a fixed amount of numerical precision (e.g., a max error of $10^{-18}$), what is the minimum required:
• (Storage) How much storage is needed (e.g., to store coefficients)?
• (Computation) How many total clock cycles must the processor perform?
Conversely, what is the best achievable approximation for a fixed amount of storage/computational use (e.g., CPU cycles or bits)?
• (Absolute accuracy) Over a given input domain, what is the worst-case error of the approximation compared to the actual function?
• (Relative accuracy) Does the approximation perform well over a given input domain relative to the magnitude of the range of our function?
The above questions are essential to consider when working with the Ethereum blockchain. Every computational step that is involved in mutating the machine's state will have an associated gas cost.
Furthermore, DeFi protocols expect to be accurate down to the wei, which means practical absolute accuracy down to $10^{-18}$ ETH is of utmost importance. Precision to $10^{-18}$ is near the accuracy
of an "atom's atom", so reaching these goals is a unique challenge.
Our Computer's Toolbox
Classical processing units deal with binary information at their core and have basic circuits implemented as logical opcodes. For instance, an add_bits opcode is just a realization of a full-adder
digital circuit.
These gates form an adder because they define an addition operation over binary numbers. Full adders can be chained via their carry-out pins to build wider adders. For example, a ripple-carry adder
can be implemented this way and extended to arbitrary width, such as the 256-bit words used in Ethereum.
Note that adders introduce an error called overflow. Using add_4bits to add 0001 to 1111, the storage space necessary to hold a large number is exhausted. This case must be handled within the
program. Fortunately for Ethereum 256bit numbers, this overflow is far less of an issue due to the magnitude of the numbers expressed ($2^{256}\approx 10^{77}$). For perspective, to overflow 256bit
addition one would need to add numbers on the order of the estimated number of atoms in the universe ($\approx 10^{79}$). Furthermore, the community best practices for handling overflows in the EVM
are well understood.
At any rate, repeated addition can be used to build multiplication and repeated multiplication to get integer powers. In math/programming terms:
$3\cdot x =\operatorname{multiply}(x,3)=\underbrace{\operatorname{add}(x,\operatorname{add}(x,x))}_{2\textrm{ additions}}$
and for powers:
$x^3=\operatorname{pow}(x,3)=\underbrace{\operatorname{multiply}(x,\operatorname{multiply}(x,x))}_{2\textrm{ multiplications}}.$
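These identities can be rendered as a toy Python sketch. The `multiply` and `power` helpers below are hypothetical stand-ins for opcode-level primitives, not real EVM opcodes:

```python
# Toy rendering of the identities above: multiplication as repeated addition,
# integer powers as repeated multiplication. Hypothetical helpers only.
def multiply(a, b):
    """a * b using only addition (b a positive integer)."""
    acc = 0
    for _ in range(b):
        acc = acc + a
    return acc

def power(a, n):
    """a ** n using only multiplication (a, n positive integers)."""
    acc = 1
    for _ in range(n):
        acc = multiply(acc, a)
    return acc

print(multiply(3, 7), power(3, 3))  # → 21 27
```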
Subtraction and division can also be defined for gate/opcode-level integers. Subtraction has similar overflow issues to addition. Division behaves in a way that returns the quotient and remainder.
This can be extended to integer/decimal representations of rational numbers (e.g., fractions) using floating-point or fixed-point libraries like ABDK, and the library in Solmate. Depending on the
implementation, division can be more computationally intensive than multiplication.
More Functionality
With extensions of division and multiplication, negative powers can also be constructed, e.g.:
$x^{-3}=\operatorname{pow}(x,-3)=\operatorname{divide}(1,\operatorname{pow}(x,3)).$
None of these abstractions allow computers to express infinite precision with arbitrarily large accuracy. There can never be an exact binary representation of irrational numbers like $\pi$, $e$, or $
\sqrt{2}$. Numbers like $\sqrt{2}$ can be represented precisely in computer algebra systems (CAS), but this is unattainable in the EVM at the moment.
Without computer algebra systems, quick and accurate algorithms for computing approximations of functions like $\sqrt{x}$ must be developed. Interestingly $\sqrt{x}$ arises in the infamous
approximation from Quake lll, which is an excellent example of an approximation optimization yielding a significant performance improvement.
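For the curious, the Quake III trick can be transcribed into Python with bit reinterpretation via `struct` (purely illustrative; Python gains no speed from it — the constant 0x5F3759DF and the single Newton refinement step are the well-known published form):

```python
# The Quake III fast inverse square root, transcribed to Python with bit
# reinterpretation via struct (illustrative only).
import struct

def fast_inv_sqrt(x):
    i = struct.unpack('<I', struct.pack('<f', x))[0]  # float bits as uint32
    i = 0x5F3759DF - (i >> 1)                         # the famous magic guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)                # one Newton refinement

print(fast_inv_sqrt(4.0))  # ≈ 0.5 (within about 0.2%)
```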
Rational Approximations
The EVM provides access to addition, multiplication, subtraction, and division operations. With no other special-case assumptions as in the Quake square root algorithm, the best programs on the EVM
can do is work directly with sums of rational functions of the form:
$P_{m,n}(x)=\frac{\alpha_0 +\alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_m x^m}{\beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_n x^n}.$
The problem is that most functions are not rational functions! EVM programs need a way to determine the coefficients $\alpha$ and $\beta$ for a rational approximation. A good analogy can be made to
polynomial approximations and power series.
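As a concrete illustration of such a rational approximation (not taken from solstat), the classic [2/2] Padé approximant of $\exp$ about 0 has $\alpha=(12,6,1)$ and $\beta=(12,-6,1)$ in the form above; a quick Python check of its accuracy:

```python
# Illustrative: the classic [2/2] Pade approximant of exp(x) about 0,
# i.e. alpha = (12, 6, 1) and beta = (12, -6, 1) in the form above.
import math

def exp_pade_2_2(x):
    return (12 + 6 * x + x * x) / (12 - 6 * x + x * x)

for x in (0.1, 0.5, 1.0):
    print(x, abs(exp_pade_2_2(x) - math.exp(x)))  # error grows away from 0
```

Two multiplications and one division buy accuracy that a degree-4 Taylor polynomial only matches near the expansion point.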
Using our Small Toolbox
When dealing with approximations, an excellent place to start is to ask the following questions: Why is an approximation needed? What solutions already exist, and what methodology do they employ? How
many digits of accuracy are needed? The answers to these questions provide a solid baseline for formulating approximation specifications.
Transcendental (or special) functions are analytic functions that cannot be expressed as rational functions of finite degree. Some examples are the exponential function $\exp(x)$, the natural
logarithm $\ln(x)$, and general exponentiation. However, if the target function being approximated has some nice properties (e.g., it is differentiable), it can be locally approximated with a
polynomial. This is seen in the context of Taylor's theorem and more broadly in the Stone-Weierstrass theorem.
Power Series
Polynomials (like $P_N(x)$ below) are a useful theoretical tool that also allow for function approximations.
$P_N(x)=\sum_{n=0}^N a_nx^n=a_0+a_1x+a_2x^2+a_3x^3+\cdots + a_N x^N$
Only addition, subtraction, and multiplication are needed. There is no need for division implementations on the processor. More generally, an infinite polynomial called a power series can be written
by specifying an infinite set of coefficients $\{a_0,a_1,a_2,\dots\}$ and combining them as:
$\sum_{n=0}^\infty a_n x^n.$
A specific way to get a power series approximation for a function $f$ around some point $x_0$ is by using Taylor's theorem to define the series by:
$\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!}(x-x_0)^n = f(x_0) + f'(x_0)(x-x_0) + \frac{f''(x_0)}{2!}(x-x_0)^2 + \frac{f'''(x_0)}{3!}(x-x_0)^3 + \cdots$
Intuitively, the Taylor series approximations are built by constructing the best "tangent polynomial," for example, the 1st order Taylor approximation of $f$ is the tangent line approximation to $f$
at $x_0$
$f(x)\approx f(x_0)+f'(x_0)(x-x_0)=\underbrace{f'(x_0)}_{\textrm{slope}}x+\underbrace{f(x_0)-f'(x_0)x_0}_{y-\textrm{intercept}}.$
For $\exp(x)$, there is the resulting series
$\exp(x)=\sum_{n=0}^\infty \frac{x^n}{n!}$
when approximating around $x_0=0$.
Since polynomials can locally approximate transcendental functions, the question remains where to center the approximations.
The infinite series is precisely equal to the function $\exp(x)$, and by truncating the series at some finite value, say $N$, there is a resulting polynomial approximation:
$\exp(x)\approx\sum_{n=0}^N \frac{x^n}{n!} = 1+x+\frac{x^2}{2}+\cdots + \frac{x^N}{N!}.$
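This truncation is easy to check numerically. A small Python sketch (illustrative, not part of solstat) builds the partial sum incrementally so no large factorial is ever formed:

```python
# Illustrative check of the truncated Taylor series of exp at 0.
import math

def exp_taylor(x, N):
    """Partial sum of exp's Taylor series at 0, up to degree N."""
    term, total = 1.0, 1.0
    for n in range(1, N + 1):
        term *= x / n  # builds x**n / n! incrementally, avoiding huge factorials
        total += term
    return total

print(abs(exp_taylor(1.0, 10) - math.e))  # already below 1e-7 at N = 10
```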
The function $\phi(x)$ can be written by scaling the input and output of $\exp$:
$\sqrt{2\pi}\,\phi(\sqrt{2}x)=\exp(-x^2)=\sum_{n=0}^\infty \frac{(-x^2)^n}{n!}.$
Plotting the truncated series at increasing $N$ demonstrates what various orders of polynomial approximation look like compared to the function itself.
Truncation solves the infinity problem, and these polynomials can be obtained procedurally, at least for functions that are $N$ times differentiable. In theory, the Taylor polynomial can be made as
accurate as needed by increasing $N$. However, there are some restrictions to keep in mind. For instance, since factorials grow exceptionally fast, there may not be enough precision to represent
numbers like $\frac{1}{N!}$: $20!>10^{18}$, so for tokens with 18 decimal places, the highest-order polynomial approximation for $\exp(x)$ on Ethereum can only have degree 19. Furthermore, polynomials
have some unique properties:
1. (No poles) Polynomials never have vertical asymptotes.
2. (Unboundedness) Non-constant polynomials always approach either infinity or negative infinity as $x\to \pm\infty$.
An excellent example of this failure is the function $\phi(x)$, which asymptotically approaches 0 as $x\to \pm \infty$. Polynomials do poorly approximating this! Consider an even simpler function,
$f(x)=\frac{1}{x}$: it can be approximated by polynomials away from $x=0$, but doing so is tedious. Why bother when a single division computes $f(x)$ directly? The polynomial route is more expensive,
and decentralized application developers must be frugal when using the EVM.
Laurent Series
Polynomial approximations are a good start, but they have some problems. Succinctly, there are ways to more accurately approximate functions with poles or those that are bounded.
This form of approximation is rooted in complex analysis. In many cases, a real-valued function $f(x)=y$ can be extended to accept complex inputs and produce complex outputs, $f(z)=w$. This small
change enables the Laurent series expression for functions $f(z)$. A Laurent series includes negative powers and, in general, looks like:
$f(z) = \sum_{n=-\infty}^{\infty}a_nz^n = \cdots+ a_{-2}\frac{1}{z^2} + a_{-1} \frac{1}{z} + a_{0} + a_1z + a_2z^2+ \cdots$
For a function like $f(x)=\frac{1}{x}$, the Laurent series is specified by the coefficients $a_{-1}=1$ and $a_n=0$ for $n\neq -1$. If $f(z)$ is implemented as a Laurent series, the precision of the
approximation is exactly the precision of the division algorithm!
The idea of the Laurent series is immensely powerful, but it can be economized further by writing down an approximate form of the function slightly differently.
Rational Approximations
"If you sat down long enough and thought about ways to rearrange addition, subtraction, multiplication, and division in the context of approximations, you would probably write down an expression
close to this":
$P_{m,n}(x)=\frac{\alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_m x^m}{\beta_0+\beta_1 x + \beta_2 x^2 + \cdots +\beta_n x^n}.$
Specific ways of arranging the fundamental operations can benefit particular applications. For example, there are ways to determine coefficients $\alpha$ and $\beta$ that avoid dropping below the
machine's precision level, while simultaneously requiring fewer total operations and less storage for the coefficients.
Aside from computational efficiency, another benefit of using rational functions is the ability to express degenerate function behavior such as singularities (poles/infinities), boundedness, or
asymptotic behavior. Qualitatively, the functions $\exp(-x^2)=\sqrt{2\pi}\phi(\sqrt{2}x)$ and $\frac{1}{1+x^2}$ look very similar on all of $\R$ and the approximation fares far better than $1-x^2$
outside of a narrow range. See the labeled curves in the following figure.
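A quick numerical spot-check of this qualitative claim (illustrative Python, not part of solstat): the rational function stays bounded and decays, while the degree-2 polynomial diverges.

```python
# Spot-check: 1/(1 + x^2) stays bounded like exp(-x^2), while the
# degree-2 polynomial 1 - x^2 blows up outside a narrow band.
import math

for x in (0.5, 2.0, 5.0):
    print(x, math.exp(-x * x), 1 / (1 + x * x), 1 - x * x)
```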
Continued fraction approximations
The degree of accuracy for a given approximation should be selected based on need and with respect to environmental constraints. The approximations in solstat are economized continued fractions, which
are typically a faster way to compute the value of a function. Continued fractions are also a way of defining the golden ratio (famously the "most irrational" number):
$\varphi = 1+\frac{1}{1+\frac{1}{1+\frac{1}{1+\frac{1}{~~~\ddots}}}}$
Finding continued fraction approximations can be done analytically or from Taylor coefficients. There are some specific use cases for functions that have nice recurrence relations (e.g., factorials)
since they work well algebraically with continued fractions. The implementation for solstat is based on these types of approximations due to some special relationships defined later.
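Truncating a continued fraction at a finite depth and folding it up from the bottom is straightforward; a short Python sketch using the golden-ratio fraction above:

```python
# Evaluating the golden-ratio continued fraction, truncated at `depth`
# and folded from the bottom up (illustrative).
import math

def golden_ratio_cf(depth):
    value = 1.0
    for _ in range(depth):
        value = 1.0 + 1.0 / value
    return value

phi = (1 + math.sqrt(5)) / 2
print(abs(golden_ratio_cf(40) - phi))  # converges to phi, though slowly
```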
Finding and Transforming Between Approximations
Thus far this article has not discussed getting these approximations aside from the case of the Taylor series. In each case, an approximation consists of a list of coefficients (e.g., Taylor
coefficients $\{a_0,a_1,\dots\}$) and a map to some expression with finitely many primitive function calls (e.g., a polynomial $a_0+a_1x+\cdots$).
The cleanest analytical example is the Taylor series since the coefficients for well-behaved functions can be found by computing derivatives by hand. When this isn't possible, results can be computed
numerically using finite difference methods, e.g., the first-order central difference, in order to extract these coefficients:
$f'(x)\approx \frac{f(x+h/2)-f(x-h/2)}{h}$
However, this can be impractical when the coefficients approach the machine precision level. Also, Laurent series coefficients are determined the same way.
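The central difference above is simple to try out; a small Python sketch using $\exp$ (whose derivative is itself):

```python
# The first-order central difference, tried on exp (its own derivative),
# illustrating the O(h^2) truncation error.
import math

def central_diff(f, x, h):
    return (f(x + h / 2) - f(x - h / 2)) / h

print(abs(central_diff(math.exp, 1.0, 1e-5) - math.e))  # tiny, around 1e-10
```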
Similarly, the coefficients of a rational-function (Padé) approximation can be determined using an iterative algorithm (akin to Newton's method). Choose the orders $m$ and $n$ of the numerator and
denominator polynomials to find the coefficients. From there, many software packages have built-in implementations to find these coefficients efficiently, or a solver can be implemented
to do something like Wynn's epsilon algorithm or the minimax approximation algorithm.
All of the aforementioned approximations can be transformed into one another depending on the use case. Most of these transformations (e.g., turning a polynomial approximation into a continued
fraction approximation) amount to solving a linear problem or determining coefficients through numerical differentiation. Try different solutions and see which is best for a given application. This
can take some trial and error. Theoretically, these algorithms seek to determine the approximation with a minimized maximal error (i.e., minimax problems).
Breaking up the approximations
Functions $f\colon X \to Y$ also come along with domains of definition $X$. Intuitively, the error for functions with bounded derivatives has an absolute error proportional to the domain size. When
trying to approximate $f$ over all of $X$, the smaller the set $X$, the better. It only takes $n+1$ points to define a polynomial of degree $n$. This means a domain $X$ with $n+1$ points can be
perfectly computed with a polynomial.
For domains with infinitely many points, reducing the measure of the region approximated over is still beneficial especially when trying to minimize absolute error. For more complicated functions
(especially those with large derivative(s)), breaking up the domain $X$ into $r$ different subdomains can be helpful.
For example, in the case of $X=[0,1]$, a 5th degree polynomial approximation for $f\colon [0,1]\to \R$ has max absolute error $10^{-4}$. After splitting the domain into $r=2$ even-sized pieces, the
result is $f_1\colon [0,1/2]\to \R$ and $f_2\colon [1/2,1] \to \R$, which is used in the original algorithms to determine two distinct sets of coefficients for their approximations. In the domain of
interest, $f_1$ and $f_2$ only have $10^{-6}$ in error. Yet, if extended $f_1$ outside of $[0,1/2]$, the error will have increased to $10^{-2}$. Each piece of the function is optimized purely for its
reduced domain.
Breaking domains into smaller pieces allows for piecewise approximations that can be better than any non-piecewise implementation. At some point, a piecewise approximation requires so many conditional
checks that it becomes unwieldy, but used judiciously it can be incredibly efficient. Classic examples of piecewise approximations are piecewise linear approximations and (cubic) splines.
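The effect of splitting the domain can be spot-checked numerically. The sketch below (illustrative Python; the degree, centers, and grid are arbitrary choices, not the article's actual algorithm) compares one degree-3 Taylor polynomial for $\exp$ centered on $[0,1]$ against two polynomials, one per half:

```python
# Spot-check of domain splitting: one degree-3 Taylor polynomial for exp
# centered at 0.5 on [0, 1] versus two centered at 0.25 and 0.75.
import math

def taylor_exp(x, x0, N=3):
    return sum(math.exp(x0) * (x - x0) ** n / math.factorial(n)
               for n in range(N + 1))

xs = [i / 1000 for i in range(1001)]
whole = max(abs(taylor_exp(x, 0.5) - math.exp(x)) for x in xs)
split = max(abs(taylor_exp(x, 0.25 if x <= 0.5 else 0.75) - math.exp(x)) for x in xs)
print(whole, split)  # the piecewise worst-case error is roughly 10x smaller
```

Halving the radius of each piece shrinks the degree-3 remainder term by roughly $2^4=16$, which is what the measured errors reflect.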
Ethereum Environment
In the Ethereum blockchain, every transaction that updates the world state costs gas based on how many computational steps are required to compute the state transition. This constraint puts pressure
on smart contract developers to write efficient code. Onchain storage itself also has an associated cost!
Furthermore, most tokens on the blockchain occupy 256 bits of total storage for an account balance, so balances can be thought of as uint256 values. Fixed-point math is required for accurate pricing on smart-contract-based exchanges. These libraries take the uint256 and recast it as a wad256 value, which treats the integer expansion of the uint256 as having 18 implied decimal places. As a result, the most accurate (or even "perfect") approximations onchain are precise to 18 decimal places.
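For intuition, wad arithmetic can be sketched in a few lines (a Python sketch with illustrative names, not Solstat's actual API):

```python
WAD = 10 ** 18  # 1.0 in 18-decimal fixed point

def mul_wad(a, b):
    # Multiply two wad values; divide out the extra 10^18 scale factor.
    # Integer division floors, so precision is capped at 18 decimals.
    return a * b // WAD

def div_wad(a, b):
    # Divide two wad values; pre-scale the numerator to keep 18 decimals.
    return a * WAD // b

half = WAD // 2
print(mul_wad(half, half) == WAD // 4)  # True: 0.5 * 0.5 == 0.25
```

The flooring in `mul_wad` is exactly why "perfect" here means precise to 18 decimal places and no further.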
Consequently, it is of great importance to be considerate of the EVM when making approximations onchain. All of the techniques above can be used to make approximations that are simultaneously economical and accurate to near $10^{-18}$ precision. To get full $10^{-18}$ precision, the computation for rational approximations would need coefficients with higher than 256-bit precision and the associated
Solstat Implementation
A continued fraction approximation of the Gaussian distribution is performed in Gaussian.sol. Solmate is used for fixed-point operations alongside a custom library for units called Units.sol. The majority of the logic is located in Gaussian.sol.
First, a collection of constants used for the approximation is defined alongside custom errors. These constants were found using a technique for obtaining a continued fraction approximation of a related function, the gamma function (or more specifically, the incomplete gamma function). By choosing specific inputs/parameters to the incomplete gamma function, the error function can be obtained, and the error function is only a shift and scaling away from the Gaussian CDF $\Phi(x)$.
The Gaussian contract implements a number of functions important to the Gaussian distribution. Importantly, all of these implementations are for mean $\mu = 0$ and variance $\sigma^2 = 1$.
These implementations are based on the Numerical Recipes textbook and its C implementation. Numerical Recipes cites the original text by Abramowitz and Stegun, Handbook of Mathematical Functions, which can be read to understand these functions and the implications of their numerical approximations more thoroughly. This implementation is also differentially tested against the JavaScript Gaussian library, which implements the same algorithm.
Cumulative Distribution Function
The implementation of the CDF approximation algorithm takes a random variable $x$ as its single parameter. The function depends on a helper known as the complementary error function erfc, which has a special symmetry allowing approximation of the function on half of the domain $\R$:
$\operatorname{erfc}(-x) = 2 - \operatorname{erfc}(x)$
It is important to use symmetry when possible!
Furthermore, it has the other properties:
$\operatorname{erfc}(-\infty) = 2$
$\operatorname{erfc}(0) = 1$
$\operatorname{erfc}(\infty) = 0$
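The same identity drives a reference implementation in any language. A floating-point Python sketch (not the fixed-point Solidity code) uses the standard identity $\Phi(x) = \tfrac{1}{2}\operatorname{erfc}(-x/\sqrt{2})$:

```python
import math

def cdf(x):
    # Phi(x) = erfc(-x / sqrt(2)) / 2; the erfc symmetry above means only
    # half of the real line ever needs to be approximated directly.
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

print(cdf(0.0))                  # 0.5
print(cdf(1.6448536269514722))   # ≈ 0.95
```

A double-precision routine like this is also a natural oracle for differential tests of the onchain version.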
The reference implementation for the error function can be found on p. 221 of Numerical Recipes in C (2nd edition). That page is a helpful resource.
Probability Density Function
The library also supports an approximation of the Probability Density Function (PDF), mathematically interpreted as $Z(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{\frac{-(x - \mu)^2}{2\sigma^2}}$. This implementation has a maximum error bound of $1.2\cdot 10^{-7}$ and can be referenced here. The Gaussian PDF is even, i.e., symmetric about the $y$-axis.
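In floating point the density is a one-liner (a Python sketch for $\mu = 0$, $\sigma = 1$, not the library's fixed-point code):

```python
import math

def pdf(x):
    # Z(x) = exp(-x^2 / 2) / sqrt(2*pi) for the standard normal
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
```

The evenness mentioned above shows up directly: `pdf(x)` equals `pdf(-x)` for every `x`, so an implementation only needs to handle $x \ge 0$.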
Percent Point Function / Quantile Function
Approximation algorithms for the Percent Point Function (PPF), sometimes known as the inverse CDF or the quantile function, are also implemented. The function is mathematically defined as $D(x) = \mu - \sigma\sqrt{2}\operatorname{ierfc}(2x)$, has a maximum error of $1.2\cdot 10^{-7}$, and depends on the inverse complementary error function ierfc, which is defined by
$\operatorname{ierfc}(\operatorname{erfc}(x)) = \operatorname{erfc}(\operatorname{ierfc}(x))=x$
and has a domain in the interval $0 < x < 2$ along with some unique properties:
$\operatorname{ierfc}(0) = \infty$
$\operatorname{ierfc}(1) = 0$
$\operatorname{ierfc}(2) = - \infty$
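For reference values to compare an onchain PPF against, Python's `statistics.NormalDist` exposes a double-precision quantile function (a testing aid, not part of Solstat):

```python
from statistics import NormalDist

N = NormalDist()          # mu = 0, sigma = 1

p = 0.975
x = N.inv_cdf(p)          # PPF / quantile: the x with Phi(x) = p
assert abs(N.cdf(x) - p) < 1e-9   # round trip through the CDF
print(round(x, 4))        # ≈ 1.96
```

The round-trip assertion mirrors the defining identity of ierfc above: composing the function with its inverse returns the input.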
Invariant.sol is a contract used to compute the invariant of the RMM-01 trading function, which computes $y$ as:
$y = K\Phi(\Phi^{-1}(1-x) - \sigma\sqrt{\tau}) + k$
This can be interpreted graphically: notice the need to compute the normal CDF of a quantity. For a more detailed perspective on the trading function, take a look at the RMM-01 whitepaper.
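In floating point the trading function is direct to express (a Python sketch using the standard library, not the fixed-point Invariant.sol code; the function name is illustrative):

```python
from math import sqrt
from statistics import NormalDist

def rmm01_y(x, K, sigma, tau, k=0.0):
    # y = K * Phi(Phi^{-1}(1 - x) - sigma * sqrt(tau)) + k
    N = NormalDist()
    return K * N.cdf(N.inv_cdf(1.0 - x) - sigma * sqrt(tau)) + k

# With sigma = 0 the curve degenerates to y = K * (1 - x) + k:
print(round(rmm01_y(0.25, 2.0, 0.0, 1.0), 6))  # 1.5
```

A sketch like this makes a convenient oracle when differentially testing the Solidity implementation.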
Solstat Versions
Solstat is one of Primitive's first contributions to improving the libraries available in the Ethereum ecosystem. Future improvements and continued maintenance are planned as new techniques emerge.
Differential Testing
Differential testing with Foundry was critical for the development of Solstat. A popular technique, differential testing seeds inputs to different implementations of the same application and detects differences in their execution. Differential testing is an excellent complement to traditional software testing, as it is well suited to detecting semantic errors. This library used differential testing against the JavaScript Gaussian library to detect anomalies and various bugs. Because of differential testing, we can be confident in the performance and implementation of the library.
2015 Pension de-risking
In this article we discuss how changes in interest rates, Pension Benefit Guaranty Corporation premiums, the new Society of Actuaries mortality tables and regulatory developments may affect plan
sponsor decisions to de-risk (or not de-risk) defined benefit plan liabilities in 2015. For purposes of this article, by de-risking we mean paying out a participant’s benefit as a lump sum and
thereby eliminating the related liability.
The article updates analysis we previously presented in 2013 and 2014. This is a technical article, but for some sponsors there are millions of hard dollars at stake.
We are going to illustrate the effect of these developments on the de-risking decision with an example: the de-risking gain/loss with respect to a terminated vested 50-year-old participant who is scheduled to receive an annual life annuity of $1,000 beginning at age 65.
We begin with the bottom line. For our example participant, the cost of paying out a lump sum in 2015 is $5,981. Potential savings associated with paying out a lump sum benefit in 2015 for this
participant are summarized in the following chart:
2015 savings (present value) from de-risking example participant
Flat-rate premiums: $1,578
Variable-rate premiums (only available to plans with certain funding and demographic characteristics): $2,313
New mortality tables: $359
Total (maximum): $4,250
So, settling a liability of $5,981 this year can save the company as much as $4,250 (71% of the benefit value!) versus leaving the participant in the plan.
Interest rates
De-risking involves paying out the present value of a participant’s benefit as a lump sum. The interest rates used to calculate that present value are the Pension Protection Act (PPA) ‘spot’ first,
second and third segment rates for a designated month. Many sponsors set the lump sum rate at the beginning of the calendar year, based on prior year November interest rates, so that participants
will know what rate will be used to calculate their lump sum for the entire year. Under such an approach, for 2015 the lump sum rates would be the November 2014 PPA spot rates.
The following chart shows PPA November spot second and third segment rates for the period 2011-2014 (November rates highlighted):
As this data indicates, over the period 2012-2015, lump sum valuation interest rates peaked in November 2013 (used for lump sums paid in 2014). Since then, rates have gone down; the critical medium
(second segment, years 6-20) and long-term (third segment, years 21+) rates for November 2014 went down (relative to November 2013) by more than 60 basis points.
The following chart shows the cost of a lump sum payment for our example participant for 2013, 2014 and 2015 for a sponsor using a prior year’s November spot rate.
Cost of lump sum payment – annual $1,000 deferred vested benefit beginning at age 65/participant is 50
2013 $6,466
2014 $5,119
2015 $5,981
Clearly, the interest rates used to determine lump sum amounts have a significant impact on resulting benefits. From the table above, we see that the cost of paying lump sums in 2014 was 21% lower
than in 2013, while the cost in 2015 is 17% higher than 2014.
PBGC premiums
Reducing participant headcount, e.g., by paying out lump sums to terminated vested participants, reduces the PBGC flat-rate premium and may, depending on plan funding and demographics, reduce the
variable-rate premium. Premiums for the current year are based on headcount for the prior year. So de-risking in 2015 will reduce premiums beginning in 2016.
PBGC flat-rate premiums
The PBGC flat-rate premium was $49 in 2014, increasing to $57 in 2015 and $64 in 2016, and indexed to average wage increases thereafter. If we assume the participant in our example dies at age 85,
these premiums will be made for the next 35 years, totaling almost $3,200, with a present value today (at 4%) of $1,578. In other words, the cost of paying PBGC headcount premiums for this
participant is more than 25% of the value of the participant’s entire benefit.
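The discounting itself is mechanical. Here is a generic sketch; note the article's $1,578 additionally assumes wage-indexed premium growth, which we do not model:

```python
def present_value(cashflows, rate):
    """Discount annual cashflows (paid at the end of years 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# Even a flat $64 premium for 35 years has a sizable present value at 4%:
pv = present_value([64] * 35, 0.04)
print(round(pv))  # roughly $1,200 — below the article's wage-indexed $1,578
```

Swapping in a wage-indexed premium schedule only increases the figure, which is why the per-head cost is such a large fraction of this participant's benefit value.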
Variable-rate premiums
In our article Reducing pension plan headcount reduces risk and PBGC premiums, we discussed how de-risking can, in some cases, dramatically reduce the variable-rate premium. The logic of that is not
especially intuitive. The gains come from the headcount-based cap on variable-rate premiums that applies beginning in 2013.
The following chart describes the headcount-based cap for the period 2013-2016
Plan years beginning in Per Participant Cap
2013 $400
2014 $412
2015 $418
2016 (and after) $500
Oversimplifying, depending on plan funding and demographics, de-risking (that is, lump-summing out) one participant in 2015 may save a sponsor $500 per year in PBGC variable-rate premiums beginning in 2016 (on top of the $64 headcount premium savings).
As plan funding improves, however, this savings will go away. For purposes of our example we’re going to assume the plan ‘funds its way out’ of the per participant variable-rate premium cap after 5
years. In this case, a lump sum payment in 2015 would reduce variable premiums by more than $2,600 between 2016 and 2020, with a present value (at 4%) of $2,313, 39% of the value of the participant’s
entire benefit.
For details on the effect of de-risking on the variable-rate premium, we refer you to the article above.
Effect of new mortality tables
Late in 2014, the Society of Actuaries finalized new mortality tables for private DB plans (see our article Society of Actuaries releases updated mortality tables). At some point in the relatively
near future (perhaps as early as 2016), IRS will begin the formal process of updating the mortality tables that plans must, under the Tax Code and ERISA, use in calculating lump sums, to reflect the
new SOA 2014 tables/improvement scale.
DB plan sponsors will want to consult their actuary as to the application of the new tables to their plan. The following table projects (based on estimates) the increase in annuity values for minimum
funding purposes that would result from the adoption of the RP-2014 mortality tables/improvement scale by IRS for 2016.
Age   Males   Females
25    9.8%    11.8%
35    7.9%    10.3%
45    5.4%    8.3%
55    3.6%    6.6%
65    4.5%    5.8%
75    9.8%    8.0%
85    16.9%   10.7%
Source: Society of Actuaries RPEC Response to Comments on RP-2014 Mortality Tables Exposure Draft
The table does not give an estimate for the increase in cost for our example 50-year old participant; interpolating the estimates for 45- and 55-year old males and females, we come up with an
increase of 6.0%.
Using these estimates, de-risking a terminated vested participant in 2015, at a $5,981 value (see above), before the new mortality tables have gone into effect, will avoid a 6.0% increase in cost.
6.0% of $5,981 is a one-time saving of $359.
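The interpolation is simple linear blending. A sketch reproducing the article's figures follows; the 50/50 male/female blend is our assumption for how the article reaches 6.0%, and the helper name is made up:

```python
def lerp(x0, y0, x1, y1, x):
    # Linear interpolation between the table's bracketing ages
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

male_50 = lerp(45, 5.4, 55, 3.6, 50)      # 4.5%
female_50 = lerp(45, 8.3, 55, 6.6, 50)    # 7.45%
blended = (male_50 + female_50) / 2       # ~6.0%, matching the article
saving = round(0.060 * 5981)              # one-time saving of $359
print(blended, saving)
```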
Note: we are characterizing payment of a lump sum before new mortality tables go into effect as producing a ‘savings.’ That ‘savings,’ however, is different from the PBGC premium savings discussed
above. The PBGC premium savings are ‘real money’ that sponsors will have to pay in 2016 if they do not reduce participant headcount. The gains from paying a lump sum before new mortality tables go
into effect are more speculative, and, as can be seen from recent experience, they can be swamped by the impact of changing interest rates.
Among other things, we don’t know what the new tables will be; IRS may modify the SOA’s tables.
Finally, sponsors may wish to consider whether, and how, to explain the effect of potential new mortality tables on a participant’s decision to take a lump sum, either as part of a de-risking
transaction or simply in the course of an ordinary retirement.
Regulatory initiatives
Another issue that sponsors considering de-risking will want to take into account is regulatory initiatives generally aimed at restricting, or providing special disclosure rules with respect to, de-risking transactions.
Participant advocate groups have been proposing everything from enhanced disclosure to a moratorium and thorough regulatory review of de-risking transactions, as we discussed in our article Concerns
over pension de-risking. Thus far, with minor exceptions, no concrete initiatives have been undertaken either by the agencies or Congress. The minor exceptions include IRS's suspension of Private Letter Rulings on paying lump sums to retirees currently receiving benefits (reported in our 2014 article Update on pension de-risking) and PBGC's proposal "to require after-the-fact reporting of certain risk transfers through lump sum windows and annuity purchases" in 2015 premium filings.
The Administration is, however, interested in the issue. And increased de-risking activity may increase Congressional concern.
There have also been proposals to revise the insurance laws of at least two states (Connecticut and New York) to impose restrictions on de-risking transactions. It is unclear, however, whether any of
these proposals will pass.
Nevertheless, there is a distinct possibility that some sort of regulatory effort – probably increased disclosure, but possibly involving substantive restrictions – with respect to de-risking will be
made at some point. Sponsors may want to consider that possibility in deciding whether or not to act in 2015.
* * *
Decreases in lump sum valuation interest rates have made de-risking in 2015 more expensive than it was in 2014. But, even with the decrease in interest rates, de-risking may still result in
substantial savings.
We will continue to follow this issue.
Use Of Differential Calculus | Hire Someone To Do Calculus Exam For Me
Use Of Differential Calculus Differential calculus is a type of calculus: the idea is that the concept of derivatives is entirely foreign to Newtonian calculus, where Newton by reference operates on
the basis of a calculus over a chosen set of variables. A more precise definition of its type is provided by Cartesian calculus, which can be performed on a range of variables by starting with the
potential concept and then interpreting that potential on a basis of the fact that it is a solution. A similar concept of calculus was introduced by Moore in his pioneering work in that area and
developed into the calculus of calculus by the other developments in physics and science. The concepts discussed here are found in the field of calculus, especially calculus of general type. The
three concepts of derived calculus are assumed to be independently derived formations, with the obvious distinctions left implicit though they appear in the definitions in these articles. A
discrete-degree calculus over a range of two variables represents that two integral operators acting on a discrete variables do not commute with each other (for example, a quadrature matrix). To make
a calculus of finite degrees non-commutative, the first step is to identify an integral operator commuting with each other, with its first eigenvalue having degree 1 (which is the first degree). With
these identification step, differentiation and sum of powers give a complete set of such operators. The problem is posed by the analysis of a discrete-degree calculus: a differential algebra system
is written in a particular form which is determined by some local definition of the derivative, defined over a large range of parameters. One difficulty with this approach is the lack of availability
of a theory of differential equations for equations over arbitrary numbers of variables. In three dimensions, there is no time span, so to construct differentials, one would like to classify them
using only the (differential) differences in fact by using various differential identities involving the (differential) derivatives of some variables. The problem of differentiating and cancelling
the difference in eigenvalues obtained from expressions which do not possess the eigenvalues of some operator resulting from the previous generation (which generalizes Cartesian calculus) would be
akin to the many problems which arise with equations over a variety of numbers or other systems. A real-degree calculus is a category of differential equations with a given structure. A more precise
definition, however, is given by differential calculus, which can be cast in the n-dimensional one: an equation written in a different variable is an equation with two different (generating)
relations and elements which generate at most one real factor. It is this mathematical structure that allows for the identification of differentials: a differential calculus that, with the given
equations, can be reduced to two differential equations, and where the left equation matters. The problem is posed by a proof protocol in which the two (differential) types are distinguished by
relations which have differential operations and where one of the two relations of the other (even when the terms of equation are defined over parameters, instead of over some fixed set of
parameters). Cancelling the difference and the related differences in equation by use of the same rules may be treated the same as using discretization and differentiation, unless they do the same
job for the new equations, for example by means of matrix multiplication. Over a number of dimensions the most important concept which can be classed as differentials is the difference in equations:
The difference in a differentialUse Of Differential Calculus It has been a pleasure to read many reviews and articles on the recent technology in online math. It is very interesting to think about
how we will incorporate into this research all sorts of different approaches that could work well together. The reviews I have reviewed in this research come from people who have really done a really
good job.
Mymathlab Pay
As you know all, here are the different methods you will have to pay for in the mathematics area. This is a really good article let me know how you think that. – –– – – – – – After spending some time
in this paper I wanted to do a similar article. The section deals with the methods you will have to pay for to get inside to those algorithms that are needed. The different algorithms are shown in
diagram 7, below. The algorithm there involves applying a filter, giving each element of the set an algorithm, to the set and then inserting the element(s). Then removing all elements are applying
the filter, giving the elements. Even though this filter is applied to the subset, the filter makes use of the set. – –– – – – – In this article my main focus is on the different methods in
mathematics for two people who are doing that work for and are interested in writing this book. But luckily for the author I have already picked a few comments about the methods and ideas and I am
sure that this is the best article for the paper to go through. First, I am sorry for the formatting issues. My first mistake was that the title of the paper was half given. It has been pasted to be
more typefaces. So without that reading I started to think that I was using very basic formatting. In other words the title got misassigned. So the paper starts with a bit different formatting for
each letter. Nevertheless. Also, the titles are incorrect. Again. I will try to correct everything for this.
Do You Buy Books For Online Classes?
– –– – – – – I should clarify that in the original article it was inserted about two times. It was not the first time. It has remained unchanged for the same two people. Also, the title was missing.
When I read the article I realized what this was. Is this a mistake? The title of this paper is very long, and on the main page of my laptop I have problems finding the solution for next time. At the
moment I have to add the title and to change the text. It has been broken down. Anyway, this should be a good academic article to read. Why I am being an Academic Reviewer! Here we go. For now I am
hoping to show that there is more to blogging about mathematical stuff than I could ever know about anymore. I have many articles on a lot of things within the math world, about geometry, algebra,
solids, geometry, physics, computer vision, any kind of technology. This past work has been very involved in a lot of research on a regular theory but I am also quite fascinated by the topic. So I
will be posting about the algebra sections, geometry sections and solids, especially about the geometry of the world. So here are a few examples. A few of my examples are examples that I have also
studied not only about geometric techniques but also about various applications. But instead I want to start by explaining the parts where I came up with the list of algorithms. Basically it has to
do with the number of ways in which we can use an algorithm to solve some problem which can normally be solved only by the original algorithm. In this tutorial I drew of this algorithm. A few
properties and techniques come from the following section.
Computer Class Homework Help
Specifically the algorithm itself is presented. In the first part I consider the problem and the method that I am using. Each problem has the form: a/a + b)an+b\*-1. The problem that I want to obtain
can be put into the form: -(0:1 + 0:1) + (1:1\*0 + 0:1) + (0:1\*0\*1): 0. For the second part I approach above. The method that I am trying to use is a different one. The problem is
rewritten in this form: (0:1)+(1:1): +(1:1\\\*0Use Of Differential Calculus and Applications at http://www.statmetrics.org/scuttle/main/graphics/basics_3_664.html I believe it is the sort of
calculation that all computer programs have to spend on the job, and the computer program’s computer programs could potentially do an extremely complex job with what they know/do. They might be able
to learn the skills, design the programs, and then make them run on a computer at a reasonably low speed by combining those skills and applying general concepts in their own domains, creating or
modifying basic or technical software, and then doing the relevant calculations and then returning them to their primary domain. I believe that there are some good reasons for doing this for course
assignments like this and if you have the technical skills that could save that long process, you probably wouldn’t want to waste your spare hour learning this. I don’t think I’d say it makes an
entire department special even when it is your own company. I remember getting those courses from what I recall is a pretty expensive paper-back that I would check professionally and just move on to
when I finished “all business”, i.e. when I got two questions in a year once. As much as I will admit my students try to see past a few that they could just as easily try to attend an assignment.
They can maybe talk about it one time before they do, but I personally don’t think this course would actually be necessary, since they surely would be helping other people, if they get all the input
for that assignment in advance. Also, I don’t think the author of this website is affiliated with or endorsed by an ISP. Certainly, this is the first time I’ve seen her in class and she uses other
websites, which is to my knowledge a very similar style when dealing with similar classes.
How To Feel About The Online Ap Tests?
I’m going to get to that with more details as and when I’m done. I certainly suspect that class assignment writing and sharing of this site as I do was especially important and one of the ways I
could offer special or complimentary college courses on topics I did not have research and I thought “yes! I liked this class in general!” I really enjoyed it. I remember when I was in grad school
(at the time) getting 5 I wanted a class assignment along those lines. I also can remember had some students give me the assignment after they graduated, because they were kind of upset about some
department I had just been assigned (and maybe some students had dropped out as I was getting older) and I was willing to put on the class for them, I and the whole department, and the instructor
handed me some papers/papers and signed out my submission, for a few days for my papers to get done. I took the paper, signed the papers, sent my class out the door, and then they all left. Great! I
put my class assignment down on my wishlist and went over some papers and papers from assignments I can’t remember or review, and started one of the class assignments that you may have gotten online
and it came out pretty good. Did I also include the final topic in one paper/paper that I reread? When you added them I’d been in my class before, but I put the paper in here, and the program started
but didn’t send out an answer. Sorry for getting a bit messy over the idea of asking such questions sometimes. I remember going over some papers and papers from papers I could not remember doing, and
left things alone for 10 minutes. I’m sure I just wanted to go over the most important thing that I did for writing the course, and, most of the grades were quite a bit bigger than that just to get
that, but I think I went over them with some other papers. And what of my class assignment page? None. Since I didn’t plan on doing classes this fall I am not a fan of writing this past semester, but
it’s fun to have a series of pages read to me that I could discuss with you below. All of the class stuff for that assignment paper is helpful as it’s the most important section that I plan on doing,
making sure that as much of class work as possible is done in perfect order to fit in today for the course. I’ve been running both students’ classes throughout those last school days, at which point
I’d like some students to
What is the domain of a set?
The domain is the set of all first elements of ordered pairs (x-coordinates). The range is the set of all second elements of ordered pairs (y-coordinates). Only the elements “used” by the relation or
function constitute the range. Domain: all x-values that are to be used (independent values).
How do you find the domain of notation?
Identify the input values. Since there is an even root, exclude any real numbers that result in a negative number in the radicand. Set the radicand greater than or equal to zero and solve for x. The
solution(s) are the domain of the function.
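For instance, for the (illustrative) function f(x) = √(x − 3), setting the radicand x − 3 ≥ 0 gives the domain [3, ∞). A quick Python check of that domain:

```python
import math

def f(x):
    # f(x) = sqrt(x - 3): the radicand x - 3 must be >= 0, so the
    # domain is [3, infinity)
    if x - 3 < 0:
        raise ValueError(f"{x} is not in the domain [3, inf)")
    return math.sqrt(x - 3)

print(f(3), f(12))  # 0.0 3.0
```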
How do you write the domain in algebra notation?
The answers are all real numbers where x<2 or x>2. We can use a symbol known as the union, ∪, to combine the two sets. In interval notation, we write the solution: (−∞,2)∪(2,∞). In interval form, the domain of f is (−∞,2)∪(2,∞).
Where is the .is domain?
.is (dot is) is the top-level domain for Iceland. The country code is derived from the first two letters of Ísland, which is the Icelandic word for Iceland.
What are the 3 notations for domain and range?
Domain and range are described in interval notation. Parentheses, ( ), mean non-inclusive. Brackets, [ ], mean inclusive. The following descriptions of numbers have been converted to interval notation.
What is a domain in algebra?
The domain of a function is the set of all possible inputs for the function. For example, the domain of f(x)=x² is all real numbers, and the domain of g(x)=1/x is all real numbers except for x=0.
What does domain stand for in math?
Domain (n.) (Math.) a connected set of points, also called a region. Domain (n.) (Physics) a region within a ferromagnetic material, composed of a number of atoms whose magnetic poles are pointed in
the same direction, and which may move together in a coordinated manner when disturbed,…
How do you read set builder notation?
Set builder notation is a notation for describing a set by indicating the properties that its members must satisfy. Reading notation: ‘|’ or ‘:’ means ‘such that’. A = { x : x is a letter in the word dictionary }. We read it as “A is the set of all x such that x is a letter in the word dictionary”.
What is the definition of domain in math terms?
In mathematics, the domain of definition (or simply the domain) of a function is the set of “input” or argument values for which the function is defined. That is, the function provides an “output” or
value for each member of the domain.
What is the interval notation for domain of?
Interval notation combines inequality, or set notation, with its graph, and allows us to accurately express an interval with easy to understand symbols. Why should we care? In advanced mathematics,
interval notation is the preferred method of representing domain and range and is cleaner and easier to use and interpret.
Displaying Math in WordPress - Math in Office
The Microsoft devblogs are hosted by WordPress. Until recently, we haven’t been able to display equations using traditional math typography in our blogs except in images. Now we can embed LaTeX math
which is a lot more accessible. One syntax for this is [ℒ]…[/ℒ], where the “…” has the desired LaTeX and the ℒ is the literal string “latex” (I can’t use the string “latex” itself since it would be
interpreted as part of a LaTeX delimiter). Another syntax is “$ latex …$” except leave out the space I added following the $ to prevent it from turning into LaTeX. On web pages viewed in FireFox and
Safari as well as in the latest versions of Chrome, Edge, or Opera, you can use native MathML. For many years, web pages could invoke MathJax to display MathML and LaTeX. I don’t think you can embed
MathML in WordPress blog posts yet.
As an example of embedded LaTeX, several of my posts have included what I call the mode-locking integral
The text for this equation is LaTeX, not an image! Very exciting!
You might wonder why I named it the “mode-locking integral”. It comes from being the solution to the mode-locking equation
\(\dot{\Psi}(t)=a+b \sin{\Psi(t)}\)
where \(\Psi\) is the relative phase angle between oscillating modes. The modes can be oppositely running waves in a ring laser, the oscillations of two pendulum clocks near one another on a wall,
two nearby tuning forks, etc. For more detailed descriptions, see Section 7.5 Mode Locking in the book Elements of Quantum Optics, by Pierre Meystre and yours truly. The variable 𝑎 is the frequency
difference between the oscillating modes in the absence of coupling, and the variable 𝑏 is the coupling coefficient. If |𝑎| > |𝑏|, the integral is valid. When |𝑎| ≤ |𝑏|, \(\dot{\Psi}=0\), which has
the solution \(\Psi=-\sin^{-1}{\frac{a}{b}}\), and the modes are locked to the same frequency.
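As an illustrative numerical check (my own sketch; the coefficients, step size, and forward-Euler scheme are assumptions, not from the post), integrating the mode-locking equation with |a| ≤ |b| shows the phase settling onto a fixed point where the derivative vanishes — the modes lock:

```python
import math

def integrate_psi(a, b, psi0=0.0, dt=1e-3, steps=200_000):
    """Forward-Euler integration of the mode-locking equation dPsi/dt = a + b*sin(Psi)."""
    psi = psi0
    for _ in range(steps):
        psi += dt * (a + b * math.sin(psi))
    return psi

a, b = 0.5, 1.0                  # |a| <= |b|: the locked regime
psi = integrate_psi(a, b)
print(a + b * math.sin(psi))     # residual derivative; essentially 0 once locked
print(math.sin(psi))             # sin(Psi) settles at -a/b
```

With |a| > |b| there is no fixed point and the phase keeps drifting, which is the unlocked regime described in the post.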
| {"url":"https://devblogs.microsoft.com/math-in-office/displaying-math-in-wordpress/","timestamp":"2024-11-10T21:30:42Z","content_type":"text/html","content_length":"173727","record_id":"<urn:uuid:a4172753-2d88-4d2f-ab79-6348fa75a00f>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00294.warc.gz"} |
Modeling with Relations
ASIDE: Note that the nhbr relation can actually represent an arbitrarily weird board, such as locations that look adjacent on the map but actually aren't, boards which wrap around a cylinder or
toroid, or a location with a tunnel connecting it to a location far across the board (like the secret passages in the game Clue, or the harrowing sub trip through Naboo in Star Wars: The Phantom
Menace.) One-way passages can be encoded as well (meaning the nhbr relation need not be symmetric). Actually, any graph can be represented!
Exercise 3.5.1
How shall we encode concepts such as "location A has 3 dangerous neighbors", using relations? Proofs otherwise unchanged. Note that we might express our rules as "for any locations x and y, we
have the following axiom: has-3(x) ∧ nhbr(x, y) ⇒ ¬safe(y)". Really, note that there's something else going on here: x and y are symbols which can represent any location: they are variables,
whose value can be any element of the domain.
For the domain of types-of-vegetables, the relation yummy is a useful one to know, when cooking. In case you weren't sure, yummy(Brussels sprouts) = false, and yummy(carrots) = true.
Suppose we had a second relation, yucky. Is it conceivable that we could model a vegetable that's neither yucky nor yummy, using these relations? Sure! (Iceberg lettuce, perhaps.) In fact, we could
even have a vegetable which is both yummy and yucky: radishes!
ASIDE: A quick digression on a philosophical nuance: the domain for the above problem is not vegetables; it's types-of-vegetables. That is, we talk about whether or not carrots are yummy; this is
different than talking about the yumminess of the carrot I dropped under the couch yesterday, or the carrot underneath the chocolate sauce. In computer science, this often manifests itself as the
difference between values and types of values. As examples, we distinguish between 3 and the set of all integers, and we distinguish between particular carrots and the abstract idea of carrots.
(Some languages even include types as values.) Philosophers enjoy debating how particular instances define the abstract generalization, but for our purposes we'll take both vegetables and
types-of-vegetables as given.
Exercise 3.5.2
You might have objected to the idea of the unary relation yummy, since different people have different tastes. How could you model individuals' tastes? (Hint: Use a binary relation.)
Modeling actors and the has-starred-with relation didn't include information about specific movies. For instance, it was impossible to write any formula which could capture the notion of three actors
all being in the same movie.
Exercise 3.5.3
Why doesn't hasStarredWith (a, b) ∧ hasStarredWith (b, c) ∧ hasStarredWith (c, a) capture the notion of a, b, and c all being in the same movie? Prove your answer by giving a counterexample.
Exercise 3.5.4
How might we make a model which does capture this? What is the domain? What relations do you want?
Of course, the notion of interpretations is still with us, though usually everybody wants to be thinking of one standard interpretation. Consider a relation with elements such as isChildOf(Bart,
Homer, Marge). Would the triple (Bart, Marge, Homer) be in the relation as well as (Bart, Homer, Marge)?
As long as all the writers and users of formulas involving isChildOf all agree on what the intended interpretation is, either convention can be used. | {"url":"https://www.opentextbooks.org.hk/zh-hant/ditatopic/9590","timestamp":"2024-11-08T07:55:20Z","content_type":"text/html","content_length":"203536","record_id":"<urn:uuid:2eb341d3-1029-4fe5-840d-0756e261b80e>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00482.warc.gz"} |
Calc - 2.2
Section 2.2 - Differentiability
Essential Question(s):
How are continuity and differentiability related?
Follow these three steps to complete this "flip" lesson.
STEP 1: Preparation
Title your spiral with the heading above and copy the essential question(s).
STEP 2: Vocabulary & Examples
Copy and define the following vocabulary. This can be any tables, properties, theorems, terms, phrases, or postulates listed. Review the following examples and copy what is necessary for you. Use
the guiding questions for your Cornell notes.
How f'(a) Might Fail to Exist
1. What are four instances when f'(a) fails to exist?
Differentiability Implies Local Linearity
1. What does locally linear mean? (pg 112)
Numerical Derivatives on a Calculator
• I will show you in class how to find derivatives, numerical derivatives and how to graph derivatives on your calculator.
• Make sure you read the "A word on notation" sidebar.
Differentiability Implies Continuity
1. What is the connection between differentiability and continuity?
Intermediate Value Theorem for Derivatives
1. What is the Intermediate Value Theorem for Derivatives?
STEP 3: Reading
If time permits, reread the lesson and take any extra notes as needed. | {"url":"https://www.msstevensonmath.com/calc---22.html","timestamp":"2024-11-09T02:46:51Z","content_type":"text/html","content_length":"87211","record_id":"<urn:uuid:f4e4cfbb-cf49-4614-a197-959240c4279b>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00143.warc.gz"} |
ACP Seminar (Astronomy - Cosmology - Particle Physics)
Speaker: Cedric Deffayet (IAP)
Title: Some recent results on massive gravity
Date Fri, Apr 25, 2014, 13:30 - 14:30
Place: Seminar Room A
Abstract: Motivated in part by the wish to find interesting large-distance modifications of gravity that would be useful in cosmology, a simple idea is to try to give a mass to the graviton. The
linear theory for such a "massive gravity" has been known for a long time as the Fierz-Pauli theory. However, it does not recover the predictions of linearized General Relativity in the massless
limit (this is the celebrated "van Dam-Veltman-Zakharov - vDVZ - discontinuity"). A way out was proposed in 1972 by A. Vainshtein. This "Vainshtein mechanism" relies on non-linearities which in
turn lead to other pathologies which for a long time were thought to be unavoidable in any theory with a finite number of massive gravitons. Following in particular developments linked to
brane-world models such as the DGP model (which has some relation to "massive gravity"), various issues related to the pathologies of massive gravity are now better understood. I will review
the pathologies of massive gravity and their possible cures, insisting in particular on the Vainshtein mechanism, as well as on the recent construction of de Rham-Gabadadze-Tolley.
Problems from An introduction to exponential equation
Solve the following equations:
a) $$9^x=2$$
b) $$3+(5-2)^y-1=\dfrac{25}{5}$$
a) $$9^x=2 \Rightarrow x=log_9 2$$
b) $$3+(5-2)^y-1=\dfrac{25}{5}$$
Because of the hierarchy of the operations we firstly do what is in brackets: $$(5-2)=3$$
and then the quotients: $$\dfrac{25}{5}=5$$
Re-writing and operating: $$3+3^y-1=5 \Rightarrow 3^y=3 \Rightarrow y=log_3 3=1$$
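Both results can be checked numerically (a sketch of mine; Python's math.log(x, base) gives the base-9 logarithm):

```python
import math

# a) 9**x = 2 with x = log_9(2)
x = math.log(2, 9)
print(9 ** x)                          # recovers 2, up to floating-point error

# b) 3 + (5-2)**y - 1 = 25/5 reduces to 3**y = 3, so y = log_3(3) = 1
y = 1
print(3 + (5 - 2) ** y - 1 == 25 / 5)  # True
```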
a) $$x=log_9 2$$
b) $$y=1$$ | {"url":"https://www.sangakoo.com/en/unit/an-introduction-to-exponential-equation/problems","timestamp":"2024-11-07T07:28:46Z","content_type":"text/html","content_length":"15351","record_id":"<urn:uuid:ea10e499-a82d-4a1e-8773-86ecea780539>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00029.warc.gz"} |
Elementary statistics 7th edition
□ https://info.5y1.org/elementary-statistics-7th-edition_1_d545ba.html
Essentials of Statistics, Fourth Edition, Triola 2008 Pearson Education, Inc. Statistical Computer packages are available in the Mathematics Lab (S511) in N501 and N502. D. Other Resources
The resources available in the Math Lab (Room S511) include tutors, videotaped lessons, technology
□ https://info.5y1.org/elementary-statistics-7th-edition_1_8d2475.html
Weiss's Elementary Statistics, Ninth Edition, is the ideal textbook for introductory ... for Bluman, Elementary Statistics, was developed by statistics instructors who served ... of the
McGraw-Hill Encyclopedia of Science & Technology, 10th Edition.. May 30, 2021 — Seventh Edition Elementary Statistics Bluman AnswersTest bank for ...
□ https://info.5y1.org/elementary-statistics-7th-edition_1_f62cf5.html
Elementary Statistical Methods ... This book follows the syllabus of the Introductory Statistics course we teach at Plattsburgh State University. The course is intended to introduce students
to some basic statistical ideas: data representation, the construction and interpretation of confidence intervals, hypothesis testing, and ...
□ https://info.5y1.org/elementary-statistics-7th-edition_1_932492.html
CONTENTS iii Preface ix Resources for Success xiii Introduction to Statistics Where You’ve Been Where You’re Going 1 1.1 An Overview of Statistics 2 1.2 Data Classification 9 Case Study:
Reputations of Companies in the U.S. 16 1.3 Data Collection and Experimental Design 17 Activity: Random Numbers 27 Uses and Abuses: Statistics in the Real World 28
□ https://info.5y1.org/elementary-statistics-7th-edition_1_dc7b78.html
7th Edition Numbers and Statistics Guide Numbers see Publication Manual Sections 6.32–6.35 for guidelines on using numerals vs. words • Use numerals (1, 2, 3, etc.) for the following: °
numbers 10 and above; see exceptions in the next section ° numbers used in statistics (e.g., 2.45, 3 times as many, 2 x 2 design) °
□ elementary-statistics-7th-edition-bluman 1/1 Downloaded ...
elementary-statistics-7th-edition-bluman 1/1 Downloaded from wadsworthatheneum.org on November 30, 2021 by guest [DOC] Elementary Statistics 7th Edition Bluman Yeah, reviewing a books
elementary statistics 7th edition bluman could grow your near connections listings. This is just one of the solutions for you to be successful.
□ Elementary Statistics By Bluman 7th Edition
Download Elementary Statistics By Bluman 7th Edition Thank you totally much for downloading elementary statistics by bluman 7th edition.Maybe you have knowledge that, people have see numerous
times for their favorite books next this elementary statistics by bluman 7th edition, but stop happening in harmful downloads.
□ Elementary Statistics By Bluman 7th Edition
Nov 30, 2021 · elementary statistics by bluman 7th edition by online. You might not require more get older to spend to go to the books commencement as capably as search for them. In some
cases, you likewise reach not discover the revelation elementary statistics by bluman 7th edition that you are looking for. It will entirely squander the time.
□ Elementary Statistics Bluman 7th Edition
Nov 22, 2021 · elementary-statistics-bluman-7th-edition 1/17 Downloaded from www.epls.fsu.edu on November 22, 2021 by guest Download Elementary Statistics Bluman 7th Edition When somebody
should go to the books stores, search creation by shop, shelf by shelf, it is in point of fact problematic. This is why we provide the book compilations in this website.
□ Elementary Statistics 7th Edition Bluman Pdf
elementary-statistics-7th-edition-bluman-pdf 2/7 Downloaded from coe.fsu.edu on November 30, 2021 by guest classes, such as descriptive statistics, graphing data, prediction and association,
parametric inferential statistics, nonparametric inferential statistics and statistics for test construction. More than 250 screenshots (including sample
□ Elementary Statistics 7th Edition Bluman Pdf
elementary-statistics-7th-edition-bluman-pdf 4/17 Downloaded from coe.fsu.edu on September 28, 2021 by guest or print supplements that may come packaged with the bound book. Weiss’s
Elementary Statistics, Eighth Edition is the ideal textbook for introductory statistics classes that emphasize statistical reasoning and critical thinking ...
□ https://info.5y1.org/elementary-statistics-7th-edition_1_9abdc4.html
MATH 0108 Elementary Statistics (Online) Course Preview Information Instructor: Rosann Ryczek Phone: 413-427-9435 Email: rryczek@westfield.ma.edu Required Text: Elementary Statistics:
Picturing the World, 7th edition by Ron Larson and Betsy Farber ISBN # 9780134683416 * A scientific or graphing calculator is also required.
□ Elementary Statistics By Bluman 7th Edition
elementary-statistics-by-bluman-7th-edition 1/7 Downloaded from insys.fsu.edu on September 28, 2021 by guest [EPUB] Elementary Statistics By Bluman 7th Edition As recognized, adventure as
well as experience about lesson, amusement, as competently as covenant can be gotten by just checking out a ebook elementary
□ Elementary Statistics 7th Edition Bluman
elementary-statistics-7th-edition-bluman 1/2 Downloaded from apex.isb.edu on September 28, 2021 by guest Read Online Elementary Statistics 7th Edition Bluman When somebody should go to the
ebook stores, search foundation by shop, shelf by shelf, it is in point of fact problematic. This is why we provide the book compilations in this website.
□ https://info.5y1.org/elementary-statistics-7th-edition_1_17601f.html
Seventh Edition Elementary Statistics Bluman Von Neumann ordinals. In the area of mathematics called set theory, a specific construction due to John von Neumann defines the natural numbers as
follows: Set 0 = { }, the empty set; define S(a) = a ∪ {a} for every set a. S(a) is the successor of a, and S is called the successor function.
□ Elementary Statistics 7th Edition
Elementary Statistics 7th Edition the e-book will completely look you other matter to read. Just invest little era to right to use this on-line declaration elementary statistics 7th edition
as with ease as evaluation them wherever you are now. Test Bank Elementary Statistics 7th Edition Page 3/34
□ https://info.5y1.org/elementary-statistics-7th-edition_1_36ea19.html
Elementary Statistics Bluman 7th Edition In addition to Elementary Statistics: A Step by Step Approach (Eighth Edition ©2012) and Elementary Statistics: A Brief Version (Fifth Edition ©2010),
Al is a co-author on a liberal arts mathematics text published by McGraw-Hill, Math in Our World (2nd Edition ©2011).
□ Elementary Statistics Bluman 7th Edition
elementary-statistics-bluman-7th-edition 2/8 Downloaded from www.epls.fsu.edu on September 30, 2021 by guest 2008-11-14 "This manual is written to help you use the power of the Texas
Instruments* TI-83+ and Ti-84+ graphing calculators to learn about statistics and to solve exercises found in Bluman's Elementary statistics : a
□ Elementary Statistics 7th Edition Bluman
elementary-statistics-7th-edition-bluman 1/1 Downloaded from optimus.test.freenode.net on October 5, 2021 by guest [PDF] Elementary Statistics 7th Edition Bluman Right here, we have countless
book elementary statistics 7th edition bluman and collections to check out. We additionally pay for variant types and then type of the books to browse.
□ https://info.5y1.org/elementary-statistics-7th-edition_1_0fcb71.html
ELEMENTARY STATISTICS (7TH EDITION) To download Elementary Statistics (7th Edition) PDF, please follow the hyperlink under and save the document or gain access to additional information which
might be in conjuction with ELEMENTARY STATISTICS (7TH EDITION) ebook. Addison Wesley Longman, 1997. Hardcover. Book Condition: New. book. Read Elementary ...
□ Elementary Statistics By Bluman 7th Edition
Märkte im Umbruch [Markets in Upheaval] 'Elementary Statistics' uses a non-theoretical approach, explaining concepts intuitively and teaching problem solving through worked examples step-by-step. Zu
viele Köche [Too Many Cooks] A leadership book that outshines all others! Based on extensive research and interviews with leaders at all levels
□ Elementary Statistics 7th Edition Bluman
Elementary Statistics Elementary Statistics: A Brief Version was written as an aid in the beginning Statistics course for students whose mathematical background is limited to basic algebra.
The book follows a nontheoretical approach without formal proofs, explaining concepts intuitively and supporting them with abundant examples.
□ Elementary Statistics Bluman 7th Edition
elementary-statistics-by-bluman-7th-edition 1/1 Downloaded from dev.horsensleksikon.dk on November 17, 2020 by guest Read Online Elementary Statistics By Bluman 7th Edition As recognized,
adventure as skillfully as experience about lesson, amusement, as capably as
□ Elementary Statistics Triola 7th Edition
Read Free Elementary Statistics Triola 7th Edition Deschutes National Forest (N.F.), Eyerly Fire Salvage ProjectThree-dimensional Kinematics of the Equine Temporal Mandibular JointNotices of
the American Mathematical SocietyGlobal Information Technologies: Concepts, Methodologies, Tools, and ApplicationsCliffsNotes AP English Language
| {"url":"https://5y1.org/document/elementary-statistics-7th-edition.html","timestamp":"2024-11-07T06:31:45Z","content_type":"text/html","content_length":"30018","record_id":"<urn:uuid:ef4776fc-4b3d-437a-9832-39ab7f601246>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00374.warc.gz"} |
On approximating the achromatic number
The achromatic number problem is to legally color the vertices of an input graph with the maximum number of colors, denoted ψ*, so that every two color classes share at least one edge. This
problem is known to be NP-hard. For general graphs we give an algorithm that approximates the achromatic number within a ratio of O(n · log log n / log n). This improves over the previously known
approximation ratio of O(n/√(log n)), due to Chaudhary and Vishwanathan [4]. For graphs of girth at least 5 we give an algorithm with approximation ratio O(min{n^(1/3), √ψ*}). This improves
over an approximation ratio O(√ψ*) = O(n^(3/8)) for the more restricted case of graphs with girth at least 6, due to Krysta and Loryś [13]. We also give the first hardness result for
approximating the achromatic number. We show that for every fixed ε > 0 there is no (2 − ε)-approximation algorithm, unless P = NP.
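For concreteness (an illustrative sketch of mine, not from the paper), ψ* can be brute-forced on tiny graphs directly from the definition: maximize the number of colors subject to the coloring being proper and every pair of color classes sharing an edge.

```python
from itertools import product

def achromatic_number(n, edges):
    """Brute-force psi*: the largest k for which the n vertices admit a proper
    coloring with k colors in which every pair of color classes shares an edge."""
    edge_set = {frozenset(e) for e in edges}
    best = 0
    for coloring in product(range(n), repeat=n):
        k = len(set(coloring))
        if k <= best:
            continue
        # legal (proper): no edge joins two vertices of the same color
        if any(coloring[u] == coloring[v] for u, v in edges):
            continue
        # complete: every pair of color classes is joined by at least one edge
        classes = [[v for v in range(n) if coloring[v] == c] for c in set(coloring)]
        if all(any(frozenset((u, v)) in edge_set for u in cu for v in cv)
               for i, cu in enumerate(classes) for cv in classes[i + 1:]):
            best = k
    return best

# The path on 4 vertices needs only 2 colors for a proper coloring,
# but its achromatic number is 3: color it A-B-C-A.
print(achromatic_number(4, [(0, 1), (1, 2), (2, 3)]))  # 3
```

The exponential enumeration is only for illustration on toy inputs; as the abstract notes, the problem is NP-hard in general.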
Original language English
Title of host publication Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms
Pages 309-318
Number of pages 10
State Published - 2001
Event 2001 Operating Section Proceedings, American Gas Association - Dallas, TX, United States
Duration: 30 Apr 2001 → 1 May 2001
Publication series
Name Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
Conference 2001 Operating Section Proceedings, American Gas Association
Country/Territory United States
City Dallas, TX
Period 30/04/01 → 1/05/01
• Algorithms
• Theory
• Verification
Dive into the research topics of 'On approximating the achromatic number'. Together they form a unique fingerprint. | {"url":"https://cris.openu.ac.il/en/publications/on-approximating-the-achromatic-number","timestamp":"2024-11-13T01:23:11Z","content_type":"text/html","content_length":"47119","record_id":"<urn:uuid:2bd385a7-5970-43ac-bac5-7d38e71df612>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00223.warc.gz"} |
Should Numeric not refine ExpressibleByIntegerLiteral?
The Numeric protocol today refines ExpressibleByIntegerLiteral. This makes sense for scalars, but does not work well with high-dimensional data structures such as vectors and tensors.
Let's think of a scenario where there's a VectorNumeric protocol that refines Numeric. A vector numeric protocol needs a couple of extra operators, particularly arithmetic operators that take a
scalar on one side:
protocol VectorNumeric : Numeric {
associatedtype ScalarElement : Numeric
init(_ scalar: ScalarElement)
static func + (lhs: Self, rhs: ScalarElement) -> Self
static func + (lhs: ScalarElement, rhs: Self) -> Self
static func - (lhs: Self, rhs: ScalarElement) -> Self
static func - (lhs: ScalarElement, rhs: Self) -> Self
static func / (lhs: Self, rhs: ScalarElement) -> Self
static func / (lhs: ScalarElement, rhs: Self) -> Self
}
Here's a conforming type:
extension Vector : Numeric {
    static func + (lhs: Vector, rhs: Vector) -> Vector { ... }
    static func + (lhs: Vector, rhs: ScalarElement) -> Vector { ... }
    init(integerLiteral: ScalarElement) { ... }
}
Ok, now let's do some arithmetic:
let x = Vector<Int>(...)
x + 1
This fails because + is ambiguous. It can be either + (_: Self, _: ScalarElement) or + (_: Self, _: Self).
static func + (lhs: Self, rhs: ScalarElement) -> Self
static func + (lhs: Self, rhs: Self) -> Self
Possible solutions
1. Move ExpressibleByIntegerLiteral refinement from Numeric to BinaryInteger, just like how BinaryFloatingPoint refines ExpressibleByFloatLiteral. Numeric will no longer require conforming types to
be convertible from integer literals.
2. Remove overloaded self + scalar arithmetic operators, leaving only self + self. This will resolve ambiguity but makes the vector library hard to use and not match mathematical requirements.
What does everyone think?
cc @moiseev, @scanon
IIRC Numeric refines ExpressibleByIntegerLiteral mainly for 0, and possibly also for 1. We could have probably gotten away with saying init() produces 0, and maybe doing nothing for 1, but…at this
point that would be source-breaking for anyone who's extended Numeric directly. I don't think we can change this.
On the other hand, does it actually make sense for vectors to be Numeric anyway? There's not a natural * for vectors.
Mathematica uses different symbols for the two multiplies and I quite like the clarity that brings, perhaps you could use .* for a dot product and * for a scalar product. Etc. for other operators.
Arithmetic operators are element-wise. * would be element-wise multiplication.
In the code example, * means element-wise multiplication. The * that takes a scalar on one side is also element-wise: it multiplies every element of the vector by the scalar.
* as element-wise multiplication for vectors is fairly standardized, as Numpy, TensorFlow and Pytorch all use this operator. In any case, whether * should be element-wise multiplication is orthogonal
to this post. Other operators like + and - are still problematic due to ambiguity caused by literal conversion.
Numeric doesn't have an init(). It's understandable that BinaryInteger would use 0 because it should be ExpressibleByIntegerLiteral. In the proposed solution 1, BinaryInteger can still refine
ExpressibleByIntegerLiteral.
From my earlier discussion with @scanon, it makes sense to conform Vector or Tensor to Numeric since there's nothing scalar-specific in that protocol. But now it's hitting a blocker.
I agree and understand that source breaking is certainly bad. IMO this issue is important for future vector APIs in Swift including simd-related types and Tensor in Swift for TensorFlow. Given that
it would be less principled in my opinion to define a separate VectorNumeric protocol that repeats all Numeric requirements except the ExpressibleBy conformance, a change may be necessary.
It's really not just for 0 and 1. Numeric corresponds roughly to the mathematical notion of a "ring [with unity]" (except for the .magnitude property, which we might consider removing). There's a
canonical homomorphism from the integers to every ring with unity (in the language of category theory, Z is the "initial object" in the category).
For any type conforming to Numeric, there's an unambiguous way to interpret any integer literal, uniquely determined by that homomorphism.
Part of the issue that you're running up against here is that vector spaces are not naturally rings (though you can endow them with the element-wise product and turn them into rings, which TF has
done), but you don't want that to be the product for all vector-space objects--consider matrices or quaternions, which have their own notions of multiplication and identity.
It makes sense for another protocol to exist, but I think it's probably a weakening of the existing Numeric that only requires the arithmetic operators and zero, and doesn't have magnitude or integer
literal conformance. Numeric would then refine that protocol, and Vector or whatever would also refine it, adding an associated scalar type and multiplication and division by scalars.
OK. You have a typo in your original post:
That is what misled me. You mean scalar-self or self-scalar? Though if I had read your post more carefully I would have realised - sorry.
I actually meant func + (_: Self, _: ScalarElement) and func + (_: Self, _: Self). I'll clarify that in the original post. Thanks!
Then in your protocol VectorNumeric you mean:
static func + (lhs: Self, rhs: ScalarElement) -> Self
static func + (lhs: Self, rhs: Self) -> Self // Changed from scalar self to self self.
Numeric already requires the (Self, Self) -> Self version.
@scanon has probably come up with the best solution, splitting Numeric up. That will be backwards compatible and more flexible in the future.
Addressing the slightly-orthogonal point, since I've given it a bunch of thought lately:
* should be element-wise multiplication for (computational) vectors. * should also be the natural ring multiplication for matrices and quaternions and other algebras. The real question, then, is how
to spell the element-wise multiplication and division for those things, and increasingly, I think that the answer is "get the .vector[1] view of the data and use the vector operator."
1. placeholder spelling, to be bikeshedded. But this is just a "forgetful" operation that throws out the type's multiplicative structure, projecting to the vector space endowed with the elementwise
product.
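To make the distinction concrete (a pure-Python sketch of mine; the thread's examples are Swift, but the point is language-independent): for matrices, the elementwise product and the ring (matrix) product genuinely differ, which is why a type can't use `*` for both.

```python
def elementwise(A, B):
    """Elementwise (Hadamard) product: the vector-space-style '*'."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    """Ring multiplication for matrices: the natural '*' of the algebra."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(elementwise(A, B))  # [[0, 2], [3, 0]]
print(matmul(A, B))       # [[2, 1], [4, 3]]
```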
I meant arithmetic operators that take a scalar on one side. "Self" refers to the vector type. These methods are ambiguous with the (Self, Self) -> Self method when one of the operands is a scalar:
static func + (lhs: Self, rhs: ScalarElement) -> Self
static func + (lhs: ScalarElement, rhs: Self) -> Self
I like that!
On the TensorFlow side, I'm inclined to prefer * for Tensor's element-wise multiplication, since that's the widely accepted operator in machine learning libraries. True tensor multiplication could
use tensordot(_:) and • (the former is consistent with tf.tensordot). This feels a bit off-topic for this post though.
This is one area where looking at Julia might be helpful, where the correspondences between mathematical notions and protocols in the language relating to numeric/vector/matrix types are a little
more fleshed out (although those protocols are mostly not enforced in the type system).
Essentially, Julia has a protocol for "field-like types/numbers" and another one for "module-like types/vectors" that builds on top of it.
The number protocol (implemented by subtypes of Number among other things) includes the following mostly-mandatory methods (with approximate Swift equivalents):
• +(x::T, y::T) where T (equivalent to static func + (lhs: Self, rhs: Self) -> Self)
• -(x::T, y::T) where T (equivalent to static func - (lhs: Self, rhs: Self) -> Self)
• *(x::T, y::T) where T (equivalent to static func * (lhs: Self, rhs: Self) -> Self)
• /(x::T, y::T) where T (equivalent to static func / (lhs: Self, rhs: Self) -> Self)
• -(x::T) where T (equivalent to static func - (of: Self) -> Self)
• inv(x::T) where T (equivalent to static func reciprocal(of: Self) -> Self)
• one(::Type{T}) where T (the multiplicative identity; also the result of converting 1 to this type—Julia doesn't have Swift's literal overloading system yet)
• zero(::Type{T}) where T (the multiplicative zero and additive identity; also the result of converting 0 to this type)
• oneunit(::Type{T}) where T (the additive unit, which is different from the multiplicative identity for types that represent unitful quantities)
as well as comparison operators. There is also a promotion mechanism which requires methods like promote_rule(::Type{T}, ::Type{F}) where {T, F<:AbstractFloat} = F in order to define the behavior of
arithmetic operations between T and other types.
Where things get interesting is the protocol for vector/module-like types. Many types can behave both as "scalars"/elements of a ring and as "vectors"/elements of a module, so there's special syntax
for lifting an operation into the vector space's underlying field (or module's underlying ring):
• The + operation on vector-like types is unambiguously elementwise, as that's the meaning of addition in the context of mathematical vector spaces or modules.
• *(x::S, y::T) where {S<:Number, T<:AbstractVector}, where S is the scalar/element type associated with the vector-like type T, is also a natural operation in vector spaces, giving basically the
broadcasted elementwise product.
• The * operation on vector-like types that are not rings is undefined; on matrices, quaternions, or similar types, it means their natural ring multiplication.
• If x and y are both instances of vector-like types, whose element type implements the number protocol, then x .* y performs elementwise multiplication by looping over the element type's * method.
• x .+ y also performs elementwise addition; in general, these "dotted" operators perform broadcasted elementwise math for all combinations of shaped collection types and scalars.
Swift/the TF project has already made the perfectly reasonable choice to follow NumPy, TensorFlow, and PyTorch and make plain mathematical operators on vector-like types act elementwise. This means
that "ring multiplication" needs a special operator (which is • for now); if we ever want things like polynomials to work generically over any ring (scalars, matrices, quaternions...) then we'd also
need • to mean * on scalars and we'd write those polynomials like 2 • x • x + 3 • y. (This is why Julia went the other way, and forced elementwise operations into nonstandard syntax—so that 2x^2 + 3y
just works for matrix/quaternion x and y—but of course familiarity for TensorFlow programmers is a strong argument for the other choice.)
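For a concrete illustration of the convention the TF project is following, here is how NumPy (one of the libraries cited above) resolves the same tension; the arrays and values are purely illustrative:

```python
import numpy as np

# NumPy follows the "elementwise by default" convention discussed above:
# * on arrays is the broadcasted elementwise (Hadamard) product, while
# ring/matrix multiplication gets a dedicated operator (@), playing the
# role that • plays in the TF design.
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])

elementwise = x * y   # Hadamard product
ring = x @ y          # matrix (ring) multiplication

# The two disagree, so a generic polynomial like 2*x*x + 3*y means
# different things depending on which convention the operators follow.
poly_elementwise = 2 * x * x + 3 * y
poly_ring = 2 * (x @ x) + 3 * y
```

This is exactly the trade-off described above: with elementwise `*`, generic ring code must spell out `@` (or `•`) everywhere, while Julia's choice makes `2x^2 + 3y` mean the ring expression by default.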
Yup, .[op] would be the other reasonable option. TF's current choice makes a lot of sense for TF, but it probably doesn't make sense for protocols that end up in the stdlib, because people who are working with 4x4 matrices or quaternions don't want to write • whenever they need to multiply.
My current thinking is motivated by trying to satisfy both camps (ML and what I'll call "geometry") if we can. I think that having a forgetful vector-view can actually work pretty cleanly, because you don't often flip back and forth between interpreting objects as abstract vectors and interpreting them as members of an algebra. It's much more common to use one interpretation for long stretches of code.
Is Julia's Number protocol really field-ish, or is it really a ring? e.g. are integers Numbers? If integers are Numbers, what is the inverse of 2?
What would you like to call this intermediate protocol?
Excellent question. Mathematically, it's a "rng" (the way-too-cute term for a "ring without identity"). This is, obviously, not a very good name for the protocol.
My first thought is to dust off the original name for Numeric, which was Arithmetic. It does a pretty good job of capturing "this thing has the familiar arithmetic operations, but isn't necessarily
something you'd think of as a 'number'."
Arithmetic seems to be the natural choice. It's a little weird in that it only defines operations, without the role of instances of conforming types.
Do you mean Arithmetic is weird because the name is too semantically general? Or because it doesn't define initializers?
I do like the name Arithmetic.
time series models
A post by Dylan Dijk, PhD student on the Compass programme.
My current project is looking to robustify the performance of time series models to heavy-tailed data. The models I have been focusing on are vector autoregressive (VAR) models, and additionally
factor-adjusted VAR models. In this post I will not be covering the robust methodology, but will be introducing VAR models and providing the motivation for introducing the factor adjustment step when
working with high-dimensional time series.
Vector autoregressive models
In time series analysis the objective is often to forecast a future value given past data, for example, one of the classical models for univariate time series is the autoregressive AR(d) model:
\[X_t = a_1 X_{t-1} + \dots + a_d X_{t-d} + \epsilon_t \, .\]
However, in many cases, the value of a variable is influenced not just by its own past values but also by past values of other variables. For example, in Economics, household consumption expenditures
may depend on variables such as income, interest rates, and investment expenditures, therefore we would want to include these variables in our model.
The VAR model [1] is simply the multivariate generalisation of the univariate autoregressive model, that is, for a $p$-dimensional stochastic process $(\dots, \mathbf{X}_t, \mathbf{X}_{t+1}, \dots) \in \mathbb{R}^p$ we model an observation at time $t$ as a linear combination of previous observations up to some lag $d$ plus an error:
\[\mathbf{X}_t = \mathbf{A}_1 \mathbf{X}_{t-1} + \dots + \mathbf{A}_d \mathbf{X}_{t-d} + \boldsymbol{\epsilon}_t \, ,\]
where $\mathbf{A}_i$ are $p \times p$ coefficient matrices. Therefore, in addition to modelling serial dependence, the model takes into account cross-sectional dependence. This model can then be used
for forecasting, and as an explanatory model to describe the dynamic interrelationships between a number of variables.
Given a dataset of $n$ observations, $\{\mathbf{X}_1, \dots, \mathbf{X}_n \in \mathbb{R}^p\}$, we can aim to estimate the coefficient matrices. In order to do so, the model can be written in a
stacked form:
\[
\underbrace{\left[\begin{array}{c} \mathbf{X}_n^{T} \\ \vdots \\ \mathbf{X}_{d+1}^{T} \end{array}\right]}_{\boldsymbol{\mathcal{Y}}}
= \underbrace{\left[\begin{array}{ccc} \mathbf{X}_{n-1}^{T} & \cdots & \mathbf{X}_{n-d}^{T} \\ \vdots & \ddots & \vdots \\ \mathbf{X}_{d}^{T} & \cdots & \mathbf{X}_{1}^{T} \end{array}\right]}_{\boldsymbol{\mathcal{X}}}
\underbrace{\left[\begin{array}{c} \mathbf{A}_1^{T} \\ \vdots \\ \mathbf{A}_d^{T} \end{array}\right]}_{\boldsymbol{A}^{T}}
+ \underbrace{\left[\begin{array}{c} \boldsymbol{\epsilon}_n^{T} \\ \vdots \\ \boldsymbol{\epsilon}_{d+1}^{T} \end{array}\right]}_{\boldsymbol{E}} \, ,
\]
and subsequently vectorised to return a standard univariate linear regression problem
\[
\begin{aligned}
\operatorname{vec}(\boldsymbol{\mathcal{Y}}) &= \operatorname{vec}\left(\boldsymbol{\mathcal{X}} \boldsymbol{A}^T\right) + \operatorname{vec}(\boldsymbol{E}) \\
&= (\mathbf{I} \otimes \boldsymbol{\mathcal{X}}) \operatorname{vec}\left(\boldsymbol{A}^T\right) + \operatorname{vec}(\boldsymbol{E}), \\
\underbrace{\boldsymbol{Y}}_{Np \times 1} &= \underbrace{\boldsymbol{Z}}_{Np \times q} \underbrace{\boldsymbol{\beta}^*}_{q \times 1} + \underbrace{\operatorname{vec}(\boldsymbol{E})}_{Np \times 1}, \quad N = n - d, \quad q = d p^2 .
\end{aligned}
\]
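As a sketch of how the stacked form can be used in practice, the following simulates a small VAR(1) process and recovers the coefficient matrix by least squares; the dimensions, coefficient matrix, and noise level are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 2000

# A stable VAR(1) coefficient matrix (spectral radius < 1).
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])

X = np.zeros((n, p))
for t in range(1, n):
    X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(p)

# Stacked regression with d = 1: rows of Y are X_t^T and rows of Z are
# X_{t-1}^T, so Y ≈ Z A^T as in the display above.
Y, Z = X[1:], X[:-1]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T

print(np.max(np.abs(A_hat - A)))  # small estimation error
```

With $n$ much larger than $dp^2$ ordinary least squares works; the sparse methods below are needed when that fails.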
Sparse VAR
There are $dp^2$ parameters to estimate in this model, and hence VAR estimation is naturally a high-dimensional statistical problem. Therefore, estimation methods and associated theory need to hold under high-dimensional scaling of the parameter dimension. Specifically, this means consistency is shown when both $p$ and $n$ tend to infinity, as opposed to classical statistics, where $p$ is kept fixed.
The linear model in the high-dimensional setting is well understood [2]. To obtain a consistent estimator requires additional structural assumptions in the model, in particular, sparsity on the true
vector $\boldsymbol\beta^*$. The common approach for estimation is the lasso, which can be motivated from convex relaxation in the noiseless setting. Consistency of the lasso is well studied [3][4], with consistency guaranteed under sparsity, and restrictions on the directions in which the Hessian of the loss function is strictly positive.
The well known lasso objective is given by:
\[
\underset{\boldsymbol\beta \in \mathbb{R}^q}{\text{argmin}} \, \|\boldsymbol{Y}-\boldsymbol{Z} \boldsymbol\beta\|_{2}^{2} + \lambda \|\boldsymbol\beta\|_1 \, ,
\]
and below, we give a simplified consistency result that can be obtained under certain assumptions.
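As a numerical aside, the lasso objective above can be minimised with a simple proximal gradient (ISTA) iteration; what follows is a sketch on synthetic data, not a production solver, and all parameter choices are mine:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(Z, Y, lam, iters=2000):
    # Minimise ||Y - Z b||_2^2 + lam * ||b||_1 by proximal gradient descent.
    L = 2 * np.linalg.norm(Z, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        grad = 2 * Z.T @ (Z @ b - Y)
        b = soft_threshold(b - grad / L, lam / L)
    return b

rng = np.random.default_rng(1)
Z = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [1.0, -2.0, 1.5]             # sparse ground truth
Y = Z @ beta_true + 0.05 * rng.standard_normal(100)

beta_hat = lasso_ista(Z, Y, lam=1.0)
```

The soft-thresholding step is what produces exact zeros in the estimate, which is why the lasso is the natural tool under the sparsity assumptions discussed next.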
We denote the sparsity of $\boldsymbol{A}$ by $s_{0,j}=\left|\boldsymbol\beta^*_{(j)}\right|_0$, $s_0=\sum_{j=1}^p s_{0,j}$, and $s_{\text{in}}=\max_{1 \leq j \leq p} s_{0,j}$.
Lasso consistency result

If
\[
s_{\text{in}} \leq C_1 \sqrt{\frac{n}{\log p}} \quad \text{and} \quad \lambda \geq C_2 \left(\|\boldsymbol{A}^T\|_{1,\infty} + 1\right)\sqrt{\frac{\log p}{n}} \, ,
\]
then with high probability we have
\[
|\widehat{\boldsymbol{A}} - \boldsymbol{A}|_2 \leq C_3 \sqrt{s_{0}} \, \lambda \quad \text{and} \quad |\widehat{\boldsymbol{A}} - \boldsymbol{A}|_1 \leq C_4 s_0 \lambda \, .
\]
What we mean here by consistency is that, as $n,p \rightarrow \infty$, the estimate $\widehat{\boldsymbol\beta}$ converges to $\boldsymbol\beta^*$ in probability, where we think of $p$ as a function of $n$, so that the manner in which the dimension $p$ grows depends on the sample size. For example, in the result above, we can have consistency with $p = \exp(\sqrt{n})$.
The result indicates that for larger $p$, a sparser solution and a larger regularisation parameter are required. Similar results have been derived under various assumptions; for instance, under a Gaussian VAR the result has been given in terms of the largest and smallest eigenvalues of the spectral density matrix of the series [5], and hence consistency requires that these quantities are bounded.

In summary, for lasso estimation to work we need $\boldsymbol{A}$ to be sufficiently sparse, and the largest eigenvalue of the spectral density matrix to be bounded. But are these reasonable assumptions to make?
[Figure: first two leading eigenvalues of the spectral density matrix.]
[Figure: heatmap of logged p-values for evidence of non-zero coefficients after fitting a ridge regression model.]
Well, intuitively, if a multivariate time series has strong cross-sectional dependence we would actually expect to have many non-zero entries in the VAR coefficients $\boldsymbol{A}_i$. The figures
above, taken from [6], illustrate a real dataset in which there is statistical evidence for a non-sparse solution (heatmap), and that the leading eigenvalue of the spectral density matrix diverges
linearly in $p$, thereby providing an example in which two of the assumptions discussed above are unmet.
Factor-adjusted VAR
The idea now is to assume that the covariance of the observed vector $\mathbf{X}_t$ is driven by a lower dimensional latent vector. For example, the figures above were generated from a dataset of
stock prices of financial institutions, in this case an interpretation of a latent factor could be overall market movements which captures the broad market trend, or a factor that captures the change
in interest rates.
\[
\mathbf{X}_t = \underset{p \times r}{\boldsymbol\Lambda} \; \underset{r \times 1}{\mathbf{F}_t} + \boldsymbol\xi_t \, .
\]
Consequently, first fitting a factor model would account for strong cross-sectional correlations, leaving the remaining process to exhibit the individual behaviour of each series. Fitting a sparse
VAR process will now be a more reasonable choice.
In the formula above, $\mathbf{F}_t$ is the factor random vector, and $\boldsymbol\Lambda$ the constant loading matrix, which quantifies the sensitivity of each variable to the common factors, and we
can model $\boldsymbol\xi_t$ as a sparse VAR process, as described in the preceding sections.
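A minimal sketch of the factor-adjustment step, estimating the common component by principal components on simulated data; the dimensions, loading distribution, and noise scale are illustrative assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(2)
p, r, n = 50, 2, 500

Lam = rng.standard_normal((p, r))        # loading matrix
F = rng.standard_normal((n, r))          # latent factors
xi = 0.5 * rng.standard_normal((n, p))   # idiosyncratic part
X = F @ Lam.T + xi                       # observed series, row t is X_t^T

# Estimate the common component via the top-r eigenvectors of the
# sample covariance (the standard PCA estimator for factor models).
eigvals, eigvecs = np.linalg.eigh((X.T @ X) / n)
V = eigvecs[:, -r:]                      # top-r principal directions
common_hat = X @ V @ V.T                 # estimated common component
resid = X - common_hat                   # would then be modelled as a sparse VAR
```

Because the factor eigenvalues grow with $p$ while the idiosyncratic ones stay bounded, the leading principal components pick out the common component, and the residual is what a sparse VAR is then fitted to.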
[1] Lütkepohl, H. (2005) New introduction to multiple time series analysis. Berlin: Springer-Verlag.
[2] Wainwright, M. (2019) High-dimensional statistics: A non-asymptotic viewpoint – Chapter 7 – Sparse linear models in high dimensions. Cambridge, United Kingdom: Cambridge University Press.
[3] Geer, Sara A. van de, and Peter Bühlmann. (2009) On the Conditions Used to Prove Oracle Results for the Lasso. Electronic Journal of Statistics. Project Euclid, https://doi.org/10.1214/09-EJS506.
[4] Bickel, Peter J., Ya’acov Ritov, and Alexandre B. Tsybakov. (2009) Simultaneous Analysis of Lasso and Dantzig Selector. The Annals of Statistics. https://doi.org/10.1214/08-AOS620.
[5] Sumanta Basu, George Michailidis. (2015) Regularized estimation in sparse high-dimensional time series models. The Annals of Statistics. https://doi.org/10.1214/15-AOS1315.
[6] Barigozzi, M., Cho, H. and Owens, D. (2024). FNETS: Factor-adjusted network estimation and forecasting for high-dimensional time series. Journal of Business & Economic Statistics.
Why Finance? - Finance Train
This lecture gives a brief history of the young field of financial theory, which began in business schools quite separate from economics, and of my growing interest in the field and in Wall Street. A
cornerstone of standard financial theory is the efficient markets hypothesis, but that has been discredited by the financial crisis of 2007-09. This lecture describes the kinds of questions standard
financial theory nevertheless answers well. It also introduces the leverage cycle as a critique of standard financial theory and as an explanation of the crisis. The lecture ends with a class
experiment illustrating a situation in which the efficient markets hypothesis works surprisingly well.
Helmet Crash - Futility Closet
A problem proposed by Mel Stover for the April 1953 issue of Pi Mu Epsilon Journal:
After a meeting of six professors, each man left with another’s hat. The hat that Aitkins took belonged to the man who took Baily’s hat. The man whose hat was taken by Caldwell took the hat of the
man who took Dunlop’s hat. And the man who took Easton’s hat wasn’t the one whose hat was taken by Fort. Who took Aitkins’ hat?
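The puzzle can be checked by brute force over all hat assignments; here is a small search sketch (the encoding of the clues is mine):

```python
from itertools import permutations

people = ["Aitkins", "Baily", "Caldwell", "Dunlop", "Easton", "Fort"]
A, B, C, D, E, F = range(6)

solutions = []
for perm in permutations(range(6)):
    took = list(perm)                    # took[x] = owner of the hat x took
    if any(took[x] == x for x in range(6)):
        continue                         # each man left with another's hat
    inv = [0] * 6
    for x in range(6):
        inv[took[x]] = x                 # inv[y] = the man who took y's hat
    if took[took[A]] != B:               # Aitkins' hat's owner took Baily's hat
        continue
    if took[took[took[C]]] != D:         # the Caldwell clue, unrolled
        continue
    if inv[E] == took[F]:                # the Easton clue: these two must differ
        continue
    solutions.append(took)

# Every consistent arrangement names the same taker of Aitkins' hat.
answers = {people[[x for x in range(6) if sol[x] == A][0]] for sol in solutions}
print(answers)
```

Running the search confirms the answer is uniquely determined even though more than one full arrangement satisfies the clues; it is left unprinted here to avoid spoiling the puzzle.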
Lesson 3
Equations for Functions
Let’s find outputs from equations.
3.1: A Square’s Area
Fill in the table of input-output pairs for the given rule. Write an algebraic expression for the rule in the box in the diagram.
│ input         │ output │
│ 8             │        │
│ 2.2           │        │
│ \(12\frac14\) │        │
│ \(s\)         │        │
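The rule here is squaring the side length; a quick check of the table entries, using `Fraction` so that \(12\frac14\) stays exact (an implustration choice of mine is writing it as \(\frac{49}{4}\)):

```python
from fractions import Fraction

def square_area(s):
    # Activity 3.1's rule: the area of a square is its side length squared.
    return s * s

print(square_area(8))                # 64
print(square_area(2.2))              # about 4.84 (floating point)
print(square_area(Fraction(49, 4)))  # 2401/16, i.e. 150 1/16
```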
3.2: Diagrams, Equations, and Descriptions
Record your answers to these questions in the table provided.
1. Match each of these descriptions with a diagram:
1. the circumference, \(C\), of a circle with radius, \(r\)
2. the distance in miles, \(d\), that you would travel in \(t\) hours if you drive at 60 miles per hour
3. the output when you triple the input and subtract 4
4. the volume of a cube, \(v\) given its edge length, \(s\)
2. Write an equation for each description that expresses the output as a function of the input.
3. Find the output when the input is 5 for each equation.
4. Name the independent and dependent variables of each equation.
│ description          │ a │ b │ c │ d │
│ diagram              │   │   │   │   │
│ equation             │   │   │   │   │
│ input = 5            │   │   │   │   │
│ output = ?           │   │   │   │   │
│ independent variable │   │   │   │   │
│ dependent variable   │   │   │   │   │
Choose a 3-digit number as an input.
Apply the following rule to it, one step at a time:
• Multiply your number by 7.
• Add one to the result.
• Multiply the result by 11.
• Subtract 5 from the result.
• Multiply the result by 13.
• Subtract 78 from the result to get the output.
Can you find a simpler way to describe this rule? Why does this work?
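A quick check of why the rule collapses (this gives away the answer to the question above): the six steps compute \(((7n+1)\cdot 11 - 5)\cdot 13 - 78 = 1001n\), and multiplying a 3-digit number by 1001 writes it twice.

```python
def apply_rule(n):
    # The six steps from the activity, applied one at a time.
    x = n * 7
    x = x + 1
    x = x * 11
    x = x - 5
    x = x * 13
    x = x - 78
    return x

print(apply_rule(123))  # 123123
```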
3.3: Dimes and Quarters
Jada had some dimes and quarters that had a total value of $12.50. The relationship between the number of dimes, \(d\), and the number of quarters, \(q\), can be expressed by the equation \(0.1d + 0.25q = 12.5\).
1. If Jada has 4 quarters, how many dimes does she have?
2. If Jada has 10 quarters, how many dimes does she have?
3. Is the number of dimes a function of the number of quarters? If yes, write a rule (that starts with \(d = \)...) that you can use to determine the output, \(d\), from a given input, \(q\). If no,
explain why not.
4. If Jada has 25 dimes, how many quarters does she have?
5. If Jada has 30 dimes, how many quarters does she have?
6. Is the number of quarters a function of the number of dimes? If yes, write a rule (that starts with \(q=\)...) that you can use to determine the output, \(q\), from a given input, \(d\). If no,
explain why not.
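To see where the answers come from, the equation can be rearranged both ways; a sketch (working in cents keeps the arithmetic exact):

```python
def dimes_from_quarters(q):
    # Rearranging 0.1 d + 0.25 q = 12.5 for d, in cents:
    # 10 d + 25 q = 1250, so d = (1250 - 25 q) / 10.
    return (1250 - 25 * q) / 10

def quarters_from_dimes(d):
    # Rearranging for q: q = (1250 - 10 d) / 25.
    return (1250 - 10 * d) / 25

print(dimes_from_quarters(4))    # 115.0
print(dimes_from_quarters(10))   # 100.0
print(quarters_from_dimes(25))   # 40.0
print(quarters_from_dimes(30))   # 38.0
```

Since each rearrangement gives exactly one output for each input, both \(d\) and \(q\) are functions of the other.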
We can sometimes represent functions with equations. For example, the area, \(A\), of a circle is a function of the radius, \(r\), and we can express this with an equation: \(\displaystyle A=\pi r^2\)
We can also draw a diagram to represent this function:
In this case, we think of the radius, \(r\), as the input, and the area of the circle, \(A\), as the output. For example, if the input is a radius of 10 cm, then the output is an area of \(100\pi\) cm\(^2\), or about 314 square cm. Because this is a function, we can find the area, \(A\), for any given radius, \(r\).
Since it is the input, we say that \(r\) is the independent variable and, as the output, \(A\) is the dependent variable.
Sometimes when we have an equation we get to choose which variable is the independent variable. For example, if we know that
\(\displaystyle 10A-4B=120\)
then we can think of \(A\) as a function of \(B\) and write
\(\displaystyle A=0.4B+12\)
or we can think of \(B\) as a function of \(A\) and write
\(\displaystyle B=2.5A-30\)
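A quick numeric check (my own sketch) that both rearrangements agree with the original equation \(10A-4B=120\):

```python
def A_of_B(B):
    # Treat B as the independent variable: A = 0.4 B + 12.
    return 0.4 * B + 12

def B_of_A(A):
    # Treat A as the independent variable: B = 2.5 A - 30.
    return 2.5 * A - 30

# Both forms satisfy 10A - 4B = 120, and each one undoes the other.
for B in range(-5, 6):
    A = A_of_B(B)
    assert abs(10 * A - 4 * B - 120) < 1e-9
    assert abs(B_of_A(A) - B) < 1e-9
```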
• dependent variable
A dependent variable represents the output of a function.
For example, suppose we need to buy 20 pieces of fruit and decide to buy apples and bananas. If we select the number of apples first, the equation \(b=20-a\) shows the number of bananas we can
buy. The number of bananas is the dependent variable because it depends on the number of apples.
• independent variable
An independent variable represents the input of a function.
For example, suppose we need to buy 20 pieces of fruit and decide to buy some apples and bananas. If we select the number of apples first, the equation \(b=20-a\) shows the number of bananas we
can buy. The number of apples is the independent variable because we can choose any number for it.
• radius
A radius is a line segment that goes from the center to the edge of a circle. A radius can go in any direction. Every radius of the circle is the same length. We also use the word radius to mean
the length of this segment.
For example, \(r\) is the radius of this circle with center \(O\).
Generalized Donaldson-Thomas Invariants via Kirwan Blowups
Department of Mathematics,
University of California San Diego
Math 208 - Algebraic Geometry Seminar
Michail Savvas
Generalized Donaldson-Thomas Invariants via Kirwan Blowups
Donaldson-Thomas (abbreviated as DT) theory is a sheaf theoretic technique of enumerating curves on a Calabi-Yau threefold. Classical DT invariants give a virtual count of Gieseker stable sheaves
provided that no strictly semistable sheaves exist. This assumption was later lifted by the work of Joyce and Song who defined generalized DT invariants using Hall algebras and the Behrend function,
their method being motivic in nature. In this talk, we will present a new approach towards generalized DT theory, obtaining an invariant as the degree of a virtual cycle inside a Deligne-Mumford
stack. The main components are an adaptation of Kirwan’s partial desingularization procedure and recent results on the structure of moduli of sheaves on Calabi-Yau threefolds. Based on joint work
with Young-Hoon Kiem and Jun Li.
Host: James McKernan
September 28, 2018
2:00 PM
AP&M 5829 | {"url":"https://math.ucsd.edu/seminar/generalized-donaldson-thomas-invariants-kirwan-blowups","timestamp":"2024-11-04T15:11:09Z","content_type":"text/html","content_length":"33637","record_id":"<urn:uuid:755e1f50-9418-4368-9d8e-f5c33c93776f>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00800.warc.gz"} |
Introduction to Post-Quantum Cryptography

Quantum security, also known as quantum encryption or quantum cryptography, is the practice of harnessing the principles of quantum mechanics to bolster security and to detect whether a third party is eavesdropping on communications. Quantum encryption takes advantage of fundamental laws of physics such as the observer effect, which states that it is impossible to identify the location of a particle without changing that particle. Quantum cryptography makes use of subtle properties of quantum mechanics, such as the quantum no-cloning theorem and the Heisenberg uncertainty principle, and makes it possible for two parties, Alice and Bob, to share a random key in a secure way. In general, the goal of quantum cryptography is to perform tasks that are impossible or intractable with conventional cryptography.

Post-quantum cryptography is something different: post-quantum crypto is crypto that resists attacks by quantum computers. Post-quantum cryptography (sometimes referred to as quantum-proof, quantum-safe or quantum-resistant) refers to cryptographic algorithms (usually public-key algorithms) that are thought to be secure against an attack by a quantum computer. As of 2020, this is not true for the most popular public-key algorithms, which can be efficiently broken by a sufficiently strong quantum computer.

In the last three decades, public key cryptography has become an indispensable component of our global communication digital infrastructure. These networks support a plethora of applications that are important to our economy, our security, and our way of life. Cryptographic applications in daily life include mobile phones connecting to cell towers, and credit cards, EC-cards, and access codes for banks. Cryptography achieves various security goals by secretly transforming messages: confidentiality despite Eve's espionage (communication channels are spying on our data), and integrity, i.e., recognizing Eve's sabotage (communication channels are modifying our data).

In recent years, there has been a substantial amount of research on quantum computers, machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. They don't use bits, i.e., they don't know just the states 0 and 1 like conventional computers do. Quantum computers will be able to break important cryptographic primitives used in today's digital communication; Shor's quantum factoring algorithm and Grover's quantum search algorithm promoted the development of post-quantum cryptography (PQC).

NIST has launched a standardization initiative to select quantum-safe algorithms for future use by government and industry (see NISTIR 8105, Report on Post-Quantum Cryptography). Referred to as post-quantum cryptography, the new algorithm proposals are in the third round of analysis and vetting, and NIST is expected to announce the first algorithms to qualify for standardization. Some IT managers are already aware of the quantum threat and are applying PQC selectively using interim standards and technologies; post-quantum authentication has, for example, been demonstrated in TLS 1.3 in OQS-OpenSSL 1.1.1.

Lattice-based cryptography is a promising approach for efficient post-quantum cryptography. Lattice-based cryptographic constructions hold a great promise, as they enjoy very strong security proofs based on worst-case hardness, relatively efficient implementations, as well as great simplicity.

In October 2014, ETSI published a white paper, "Quantum Safe Cryptography and Security: An Introduction, Benefits, Enablers and Challenges", summarizing security considerations in view of quantum computing and discussing the challenges of a transition from today's cryptographic infrastructure to a quantum-safe, or post-quantum, infrastructure. As a hardware demonstrator, Infineon succeeded in implementing the New Hope scheme on a contactless smart card microcontroller; this chip family is used in many high-security applications like passports.
quantum computers use quantum bits (qbits) with three states: 2 Technical University of Denmark. Referredto as post quantum cryptography,the new algorithm proposals are in the third round of
analysisand vetting. 0000005493 00000 n 0000159367 00000 n A lifecycle perspective on data/information protection 3 Recent activities 4 2017 EU Cybersecurity Strategy & Council Conclusions 5 Overview
of the training on Introduction on Post-Quantum cryptography Quantum computers will break today's most popular public-key cryptographic systems, including RSA, DSA, and ECDSA. Quantum computers will
break today's most popular public-key cryptographic systems, including RSA, DSA, and ECDSA. • Lattice-based cryptography is a promising approach for efficient, post-quantum cryptography. For a
broader discussion of design choices and issues in engineering post-quantum cryptography in TLS 1.3, see[SFG19]. NISTIR 8105 Report on Post-Quantum Cryptography . I Literal meaning of cryptography: \
secret writing". INTRODUCTION Quantum cryptography recently made headlines when European Union members announced their intention to invest $13 million in the research and development of a secure
communications system based on this technology. 0000159404 00000 n 5 0 obj Post-quantum cryptography is, in general, a quite different topic from quantum cryptography: Post-quantum cryptography, like
the rest of cryptography, covers a wide range of secure-communication tasks, ranging from secret-key operations, public-key signatures, and public-key encryption to high-level operations such as
secure electronic voting. Post-Quantum Cryptography. endobj Wenowdescribethemechanisms used in this particular instantiation of post-quantum cryptography in TLS 1.3. If large-scale quantum computers
are ever built, they will be able to break many of the public-key cryptosystems currently in use. Post-quantum cryptography. NIST is expected to announce the first algorithms to qualify for
standardization 21 0 obj 1. endobj 2.1 Hybrid Key Exchange in TLS 1.3 Post-Quantum Cryptography 132 . Quantum Computers + Shor’s Algorithm The Upcoming Crypto-Apocalypse The basis of current
cryptographic schemes trailer endobj �_��ņ�Y�\�UO�r]�⼬E�h`�%�q ��aa�$>��� A lifecycle perspective on data/information protection 3 Recent activities 4 2017 EU Cybersecurity Strategy & Council
Conclusions 5 Overview of the training on Introduction on Post-Quantum cryptography >> These networks support a plethora of applications that are important to our economy, our security, and our way
of life, such as mobile endobj (Cryptographic Constructions) 4 0 obj Introduction to Post-Quantum Cryptography You may not know this, but one of the main reasons we can securely communicate on the
Internet is the presence of some well-designed cryptographic protocols. Springer, Berlin, 2009. IPQCrypto 2006: International Workshop on Post-Quantum Cryptography. endobj post-quantum cryptography
(PQC). IPQCrypto 2014. 17 0 obj Technical University of Denmark. 0000557534 00000 n Report on Post-Quantum Cryptography (NISTIR 8105. Lattice-based cryptography is a promising post-quantum
cryptography family, both in terms of foundational properties as well as its application to both traditional and emerging security problems such as encryption, digital signature, key exchange,
homomorphic encryption, etc. 0000452241 00000 n Post-Quantum Crypto Adventure Introduction to Lattice-Based Cryptography Presenter: Pedro M. Sosa. 0000451859 00000 n For now, post-quantum
cryptography finds its market in critical long-lived data such as plans for aircraft and medical databases that need to survive well into the era of powerful quantum computers. 0000450692 00000 n 16
0 obj �ƌܛ�,`~�ീ�=�eK���u/7�h60�p�X��LZq��"C#)�y�C����`���NS}���x��{��SN�'�3�5�(�'��(j�� [!���jx�@��PS��MM��F�r��'Ҹ�i��pl>!��3��&SG�ɢ��I��\=7.>q���r�a�B�e�/ ��\����tQ��O�.������s^�c�$%����~ �B˓�ZE�f�,
f�4�� ��'�@���|I=���d흳բk,�^���$^R�iht�3�)tr�0����'e3�����7&�;�s$)��g��&\`Z�5�Zt��*������jN��ͻ��loϽ�팗@^�9�i�����.2��Cr&����ئ��|7���U;. %%EOF (Modern Computational Lattice Problems) 0000004313 00000
n 12 0 obj Introduction to quantum cryptography The elements of quantum physics Quantum key exchange Technological challenges Experimental results Eavesdropping 2 . Sender \Alice" / Untrustworthy
network \Eve" / Receiver \Bob" I Literal meaning of cryptography: \secret writing". 0000482180 00000 n *�k������ѬVEQ�����O4����6���p���E�z)�?UН.�J!g��^�����@f0:�A�a���4�������RV�9�Lb� %
`8�iW�GAG����M�yYK�K! IPQCrypto 2008. << /S /GoTo /D (section.4) >> Similar to the way space 13 0 obj Code-based cryptography - Implementation of code-based cryptography, Developing attacks against
it. 1 Introduction In this chapter we describe some of the recent progress in lattice-based cryptography. Introduction to post-quantum cryptography 3 • 1994: Shor introduced an algorithm that factors
any RSA modulus n using (lg n)^(2+o(1)) simple operations on a quantum computer of size (lg n)^(1+o(1)).

Post-quantum algorithms also often have worse efficiency compared to currently used algorithms, and no post-quantum algorithm has so far been standardised. One way to promote further research and guide standardisation might be to develop proof-of-concepts where post-quantum algorithms are implemented in existing software solutions.

A Brief Introduction of Quantum Cryptography for Engineers. Bing Qi, Li Qian, Hoi-Kwong Lo. Center for Quantum Information and Quantum Control, University of Toronto.

In this section, we discuss the implications of quantum computing for public key cryptography and motivations for research into the systems and issues surrounding deploying PQC in practice.
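The Shor result mentioned above is why factoring matters so much to public-key cryptography: even classically, anyone who learns the prime factors p and q of an RSA modulus can rederive the entire private key. A minimal Python sketch with textbook-sized numbers (the primes, exponent, and message below are illustrative only; real RSA uses moduli of thousands of bits):

```python
# Toy RSA: once p and q are known, the private key falls out immediately.
p, q = 61, 53                      # secret primes (tiny, for illustration)
n = p * q                          # public modulus: 3233
e = 17                             # public exponent
phi = (p - 1) * (q - 1)            # Euler's totient of n; computable only if p, q are known
d = pow(e, -1, phi)                # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)    # RSA encryption: m^e mod n
recovered = pow(ciphertext, d, n)  # decryption with the derived private key
assert recovered == message
print(n, d)  # 3233 2753
```

This is why a fast factoring algorithm, quantum or otherwise, is fatal to RSA: the private key is a deterministic function of p, q, and the public exponent.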
1.1 The Threat of Quantum Computing to Cryptography. Quantum Computing and Cryptography: Analysis, Risks, and Recommendations for Decisionmakers. Jake Tibbetts, UC Berkeley. Some influential American policymakers, scholars, and analysts are extremely concerned with the effects that quantum computing will have on national security. The impact of quantum computing is a topic of increasing importance to IT practitioners, and some IT managers are already aware of the quantum threat and are applying PQC selectively using interim standards and technologies.

As reflected in NIST's April 2016 … the aim is to select quantum-safe algorithms for future use by government and industry. "Post-quantum cryptography is critical for minimizing the chance of a potential security and privacy disaster."

This book introduces the reader to the next generation of cryptographic algorithms, the systems that resist quantum-computer attacks: in particular, post-quantum public-key encryption systems and post-quantum public-key signature systems. There are five detailed chapters surveying the state of the art in quantum computing, hash-based cryptography, code-based cryptography, lattice-based cryptography, and multivariate-quadratic-equations cryptography. Thus, the authors present a readily understandable introduction and discussion of post-quantum cryptography, including quantum-resistant algorithms and quantum key distribution. Springer, Berlin, 2009. ISBN 978-3-540-88701-0. For much more information, read the rest of the book!

Post Quantum Cryptography: An Introduction. Shweta Agrawal, IIT Madras. Cryptography is a rich and elegant field of study that has enjoyed enormous success over the last few decades. At a very high level, cryptography is the science of designing methods to achieve certain secrecy goals.

Workshop series: PQCrypto 2006 (International Workshop on Post-Quantum Cryptography), PQCrypto 2008, PQCrypto 2010, PQCrypto 2011, PQCrypto 2013, PQCrypto 2014, PQCrypto 2016 (22-26 Feb), PQCrypto 2017 planned.

In general, the goal of quantum cryptography is to perform tasks that are impossible or intractable with conventional cryptography: it makes it possible for two parties, Alice and Bob, to share a random key in a secure way, achieving confidentiality despite Eve's espionage (i.e., recognizing Eve's espionage). In this way quantum cryptography enables secret-key cryptosystems, such as the Vernam one-time pad scheme, to work. Therefore, the notion "quantum key distribution" is more accurate than "quantum cryptography". Specifically, the section on post-quantum cryptography deals with different quantum key distribution methods and mathematical-based solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures, and code-based cryptography. Keywords: quantum cryptography systems, large-scale distributed computational systems, cryptosystems, quantum physics.

In February 1995, Netscape publicly released the … Quantum computers will be able to break important cryptographic primitives used in today's digital communication.

Introduction: Why Post Quantum Cryptography (PQC)? Roadmap: Post-Quantum Cryptography; Lattice-Based Crypto; LWE & R-LWE; R-LWE Diffie-Hellman. Post-quantum authentication in TLS 1.3 in OQS-OpenSSL 1.1.1.

| {"url":"http://bdlisle.com/wp5okk/introduction-to-post-quantum-cryptography-pdf-9b0fdb","timestamp":"2024-11-03T07:49:43Z","content_type":"text/html","content_length":"46606","record_id":"<urn:uuid:d58b9a2d-f2df-4894-b645-ca9b442fdc51>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00605.warc.gz"} |
Can anyone help me … - QuestionCove
Mathematics
OpenStudy (anonymous):
Can anyone help me with this problem? PLEASE HELP!
1. Given the function f(x) = 4x - 8
a. Find the rate of change between the two stated values for x: 5 to 7
b. Find the equation of a secant line containing the given points: (5, f(5)) and (7, f(7))
2. Graph the following function using transformations. Be sure to graph all of the stages on one graph. State the domain and range. For example, if you were asked to graph y = x^2 + 1 using transformations, you would show the graph of y = x^2 and the graph shifted up 1 unit. Please do not show only the final graph. y= -√ x-6
OpenStudy (mertsj):
If you want the rate of change, you need to do this:\[\frac{f(7)-f(5)}{7-5}\]
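Plugging the numbers from part (a) into that formula, with f(x) = 4x - 8, gives the work below as a quick Python check (the variable names are just for illustration):

```python
def f(x):
    return 4 * x - 8

x0, x1 = 5, 7
rate = (f(x1) - f(x0)) / (x1 - x0)  # (20 - 12) / 2
print(rate)  # 4.0

# Part (b): the secant line through (5, f(5)) and (7, f(7)) has this slope.
# Point-slope form y - f(x0) = rate * (x - x0) rearranges to y = rate*x + intercept:
intercept = f(x0) - rate * x0
print(intercept)  # -8.0  (f is already linear, so the secant coincides with f itself)
```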
| {"url":"https://questioncove.com/updates/513fcd9be4b0f08cbdc88399","timestamp":"2024-11-14T17:55:09Z","content_type":"text/html","content_length":"19627","record_id":"<urn:uuid:25b3a181-52d5-4d0d-a05a-3255a51724e9>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00877.warc.gz"} |
Interface Summary
Interface Description
Block Represents a basic block in a control flow graph.
ConditionalBlock Represents a conditional basic block that contains exactly one boolean Node.
ExceptionBlock Represents a basic block that contains exactly one Node which can throw an exception.
RegularBlock A regular basic block that contains a sequence of Nodes.
SingleSuccessorBlock A basic block that has exactly one non-exceptional successor.
SpecialBlock Represents a special basic block; i.e., one of the following: Entry block of a method.
Class Summary
Class Description
BlockImpl Base class of the Block implementation hierarchy.
ConditionalBlockImpl Implementation of a conditional basic block.
ExceptionBlockImpl Implementation of an exceptional basic block.
RegularBlockImpl Implementation of a regular basic block.
SingleSuccessorBlockImpl Implementation of a non-special basic block.
Enum Summary
Enum Description
Block.BlockType The types of basic blocks
SpecialBlock.SpecialBlockType The types of special basic blocks | {"url":"https://checkerframework.org/releases/2.1.14/api/org/checkerframework/dataflow/cfg/block/package-summary.html","timestamp":"2024-11-04T05:55:03Z","content_type":"text/html","content_length":"11267","record_id":"<urn:uuid:4db9cd6c-220a-4a52-aab4-341168d04296>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00480.warc.gz"} |
Trig Trig Trig! - Ace Tutoring
Common Core Algebra II is often thought of as the hardest required high school math course. There is a lot of material to memorize. I learned this chart many years after high school. Unfortunately, the only way to remember the sine, cosine, and tangent values for 30, 45, and 60 degrees is memorization.
1. Start with sin and cos. Write 1, 2, 3 across for sin and 3, 2, 1 across for cos.
2. Put all of the numbers under 2.
3. Take the square root of each numerator (or, as I like to say, give each numerator a hat). The 1s do not need a radical, because the square root of 1 is just 1. Now you're done with sin and cos.
4. Tangent is just sin/cos. To figure out the tan values, divide each sin value by the corresponding cos value. The sin and cos values are the same for 45 degrees, and any number divided by itself is 1, so tan of 45 degrees is 1.
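The pattern behind the chart can be verified numerically: for 30, 45, and 60 degrees, sin works out to √1/2, √2/2, √3/2, and cos runs the same values in reverse. A quick Python check of the memory trick (a sketch, not part of the original post):

```python
import math

# k = 1, 2, 3 corresponds to 30, 45, 60 degrees in the chart.
for k, degrees in [(1, 30), (2, 45), (3, 60)]:
    radians = math.radians(degrees)
    assert math.isclose(math.sin(radians), math.sqrt(k) / 2)      # sin: sqrt(1,2,3)/2
    assert math.isclose(math.cos(radians), math.sqrt(4 - k) / 2)  # cos: sqrt(3,2,1)/2
    # tan is sin/cos, exactly as in step 4:
    assert math.isclose(math.tan(radians), math.sin(radians) / math.cos(radians))

print("all chart values check out")
```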
This chart will be very helpful for pre-calculus and calculus. On the AP Calculus AB/BC exam, you’re not allowed a calculator for half of the exam – knowing this shortcut will help tremendously.
If you have any other questions, please feel free to Contact Us or Make an Appointment! | {"url":"https://acetutor.org/trig-trig-trig/","timestamp":"2024-11-08T02:37:36Z","content_type":"text/html","content_length":"47481","record_id":"<urn:uuid:ec0db3f7-84af-4642-96d3-8d24f0c97ba4>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00737.warc.gz"} |
Polynomials and the mod 2 Steenrod Algebra | Geometry and topology
2 Paperback Volume Set
Part of London Mathematical Society Lecture Note Series
• Date Published: November 2017
• availability: Temporarily unavailable - available from TBC
• format: Multiple copy pack
• isbn: 9781108414067
£ 89.99
• This is the first book to link the mod 2 Steenrod algebra, a classical object of study in algebraic topology, with modular representations of matrix groups over the field F of two elements. The
link is provided through a detailed study of Peterson's 'hit problem' concerning the action of the Steenrod algebra on polynomials, which remains unsolved except in special cases. The topics
range from decompositions of integers as sums of 'powers of 2 minus 1', to Hopf algebras and the Steinberg representation of GL(n,F). Volume 1 develops the structure of the Steenrod algebra from
an algebraic viewpoint and can be used as a graduate-level textbook. Volume 2 broadens the discussion to include modular representations of matrix groups.
□ Algebraic and combinatorial treatment of Steenrod algebra
□ Accessible to those without a background in topology
□ Largely self-contained with detailed proofs
Product details
□ length: 700 pages
□ dimensions: 227 x 152 x 43 mm
□ weight: 1.12kg
□ contains: 1 b/w illus.
□ availability: Temporarily unavailable - available from TBC
• Table of Contents
Volume 1: Preface
1. Steenrod squares and the hit problem
2. Conjugate Steenrod squares
3. The Steenrod algebra A2
4. Products and conjugation in A2
5. Combinatorial structures
6. The cohit module Q(n)
7. Bounds for dim Qd(n)
8. Special blocks and a basis for Q(3)
9. The dual of the hit problem
10. K(3) and Q(3) as F2GL(3)-modules
11. The dual of the Steenrod algebra
12. Further structure of A2
13. Stripping and nilpotence in A2
14. The 2-dominance theorem
15. Invariants and the hit problem
Index of Notation for Volume 1
Index for Volume 1
Index of Notation for Volume 2
Index for Volume 2
Volume 2: Preface
16. The action of GL(n) on flags
17. Irreducible F2GL(n)-modules
18. Idempotents and characters
19. Splitting P(n) as an A2-module
20. The algebraic group Ḡ(n)
21. Endomorphisms of P(n) over A2
22. The Steinberg summands of P(n)
23. The d-spike module J(n)
24. Partial flags and J(n)
25. The symmetric hit problem
26. The dual of the symmetric hit problem
27. The cyclic splitting of P(n)
28. The cyclic splitting of DP(n)
29. The 4-variable hit problem, I
30. The 4-variable hit problem, II
Index of Notation for Volume 2
Index for Volume 2
Index of Notation for Volume 1
Index for Volume 1.
• Authors
Grant Walker, University of Manchester
Grant Walker was a senior lecturer in the School of Mathematics at the University of Manchester before his retirement in 2005.
Reginald M. W. Wood, University of Manchester
Reginald M. W. Wood was a Professor in the School of Mathematics at the University of Manchester before his retirement in 2005.
| {"url":"https://www.cambridge.org/cz/universitypress/subjects/mathematics/geometry-and-topology/polynomials-and-mod-2-steenrod-algebra","timestamp":"2024-11-04T01:46:06Z","content_type":"text/html","content_length":"232215","record_id":"<urn:uuid:5f2f63ef-3e4d-47cc-bb74-342adee4da21>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00205.warc.gz"} |
IFIC Literature Database
Anderson, P. R., Balbinot, R., Fabbri, A., & Parentani, R. (2014). Gray-body factor and infrared divergences in 1D BEC acoustic black holes. Phys. Rev. D, 90(10), 104044–6pp.
Anderson, P. R., Balbinot, R., Fabbri, A., & Parentani, R. (2013). Hawking radiation correlations in Bose-Einstein condensates using quantum field theory in curved space. Phys. Rev. D, 87(12),
Anderson, P. R., Fabbri, A., & Balbinot, R. (2015). Low frequency gray-body factors and infrared divergences: Rigorous results. Phys. Rev. D, 91(6), 064061–18pp.
Balbinot, R., Carusotto, I., Fabbri, A., & Recati, A. (2010). Testing Hawking Particle Creation By Black Holes Through Correlation Measurements. Int. J. Mod. Phys. D, 19(14), 2371–2377.
Balbinot, R., & Fabbri, A. (2024). The Unruh Vacuum and the “In-Vacuum” in Reissner-Nordström Spacetime. Universe, 10(1), 18–14pp.
Balbinot, R., & Fabbri, A. (2023). Quantum energy momentum tensor and equal time correlations in a Reissner-Nordström black hole. Phys. Rev. D, 108, 045004–9pp.
Balbinot, R., & Fabbri, A. (2023). The Hawking Effect in the Particles-Partners Correlations. Physics, 5(4), 968–982.
Balbinot, R., & Fabbri, A. (2022). Quantum correlations across the horizon in acoustic and gravitational black holes. Phys. Rev. D, 105(4), 045010–20pp.
Balbinot, R., & Fabbri, A. (2014). Amplifying the Hawking Signal in BECs. Adv. High. Energy Phys., 2014, 713574–8pp.
Balbinot, R., Fabbri, A., Dudley, R. A., & Anderson, P. R. (2019). Particle production in the interiors of acoustic black holes. Phys. Rev. D, 100(10), 105021–13pp. | {"url":"https://references.ific.uv.es/refbase/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%20FROM%20refs%20WHERE%20author%20RLIKE%20%22Balbinot%2C%20R%5C%5C.%22%20ORDER%20BY%20author%2C%20year%20DESC%2C%20publication&submit=Cite&citeStyle=APA&citeOrder=&orderBy=author%2C%20year%20DESC%2C%20publication&headerMsg=&showQuery=0&showLinks=1&formType=sqlSearch&showRows=10&rowOffset=0&client=&viewType=","timestamp":"2024-11-12T20:31:41Z","content_type":"text/html","content_length":"77423","record_id":"<urn:uuid:cc6f5e7b-5b6f-43cc-9a3f-75d865117ea6>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00275.warc.gz"} |
Applying the Platinum Rule: A Conversation With Master Teacher Herb Gross
Michael Goldenberg Math Education March 27, 2017 5 Comments
Herb Gross was born in Boston in 1928. He studied mathematics at Brandeis University and did graduate work in math at MIT, to which he returned as Senior Lecturer at MIT’s Center for Advanced
Engineering Study. During his work there, he created the highly acclaimed lecture series, “Calculus Revisited.” It is through viewing that series online that I became acquainted with his masterful
teaching style. I was thrilled to learn that he is still going strong at 87, having turned his attention in recent years to K-12 mathematics in a free series called “Mathematics as a Second
Language.” It is with great pleasure that I was able to speak with him for the inaugural interview in this series.
Michael Paul Goldenberg: Herb, you’ve made analogies between teaching math and playing sports, suggesting that it is often the case that a great player does not make an effective coach, particularly if what he offers his charges is showing how far he can hit the ball (reminiscent of some of Ted Williams’s frustrations as a manager and hitting instructor). You’ve stated,
“It was at the community college that I learned how to actually coach mathematics. Professors (who are actually excellent mathematicians) tend to believe that because of that, they are also the greatest math coaches. For example, when I was at MIT I essentially didn’t have to teach. In essence, all I did was tell the students about mathematics. It was at the community college that I
learned how to actually coach mathematics. Yet when it comes to remediation the government confers with the professors at the most prestigious universities. In other words, we try to improve
coaching by talking to the best players rather than to the coaches themselves.”
I’d like you to comment a bit further on this point in light of the last 30 years or so of the so-called “Math Wars,” where many of the experts who’ve garnered the greatest media attention are
professional mathematicians, often prestigious ones, who seem to have a great deal of contempt for “mere teachers” of mathematics, particularly when it comes to K-12 education, where they themselves
have little or no experience as instructors.
Herb Gross: Let me first take your question literally and give you an answer based on that. Then I will offer a more detailed answer that is based on a broader picture.
I think part of the reason is caused by some of the “coaches” not being particularly good coaches. More specifically, in terms of our sports analogy, it is true that the coaches don’t have to play
the game well but they have to understand the game even better than the great players do. In a sort of humorous way, the great coach should have a T-shirt that reads, “Ain’t it amazing what you can
produce with your brains and someone else’s body.” In terms of the wording of your question, I would say that the mathematicians get upset by how many teachers are presenting math topics in ways that
are counterproductive to having students internalize the logic behind the algorithm, even though they might have trouble doing a better job because how they see the topic may be beyond the scope/
interest of the students.
The broader picture is a version of the sports adage “Great players make great coaches.” It doesn’t mean that if you were a great player you would also be a great coach. Rather it means that if you
want to be a great coach, just make sure you have great players. In other words, great players will make you a great coach. The point is that the professors who excel at “playing” math are often at
institutions where the students are excellent. However, in every group there is always a worst student. At an institution such as MIT, the lower half of the class is simply the lower half of the top
2% of high school graduates in the country/world. So the professors at MIT are teaching the brightest students and after a while they visualize the lower half of math students in the image of the
lower half of their own students. However, not realizing this, they merely assume that with teachers such as themselves, the lower half of students in a developmental math course would do much better
if only they were the teachers.
Again in terms of a sports analogy, it would be like the head football coach talking to a physical education teacher who is teaching a compulsory class for all students and saying “I don’t know why
your classes don’t have better morale. We certainly don’t have that problem on the football team.” Or in terms of another analogy, there are summer camps that meet the needs of any student and there
are also specialized sports clinics that cater to the better athletes. It can be “tragic” for the kids who should be in summer camps to be thrust into a sports clinic. In this analogy, the students
in developmental math courses, say, at the community colleges are the kids who belong in “summer camp”; and the professors at the prestigious university are viewing them as if they were in a “sports clinic.” So, for example, having a Ph.D. in mathematics says nothing about how the recipient would fare teaching fractions to 40-year-old adult mathephobes.
MPG: Having taught at MIT and community colleges, what are some of the key “problems of pedagogy” that have been central to your teaching practice?
HG: This is an interesting question. Let me once again use a sports analogy. Imagine that you are a successful basketball coach whose team has won the state championship for 10 years in a row. During
that time the shortest player on your team was 6’7’’. Then, during tryouts in the 11th year, the tallest player is only 5’10’’. This doesn’t mean that you have to change your approach to the game or
your value structure, but it does mean that you might want to reassess using the slam dunk or the alley-oop plays.
In this context, if you compare my arithmetic lectures and my calculus lectures, I use exactly the same approach. Everything is taught logically in a user-friendly way. What is different is that I
recognize that my arithmetic class is filled with “5 footers,” but my calculus class is filled with “6 (or 7) footers.” In a sense, I invoke a theory along the lines of “rigor is a function of the
rigoree.” So in formulating my approach I use my experience as a math player (for example, during my years at MIT) to determine what the content of the lecture should be, and I invoke my coaching
experience at the community college to determine how I will teach it in the way that will make it easier for my students to internalize the content.
MPG: As someone who has viewed with enormous pleasure many videos from your 1970-71 “Calculus Revisited” series (some of them multiple times over the last half dozen years or so), and who has also
perused many of the mostly glowing comments other viewers have left for you, I’ve noticed a consistent theme that focuses on how excellent your presentations are of mathematical ideas in contrast to
most or all of the other instruction commentators have experienced. To what do you attribute this apparent gap in the ability of many mathematics professors and teachers to effectively communicate
key mathematical ideas to what I suspect to be a sizable percentage of their students? In particular, what would you say to the open or implied claims I’ve heard from countless K-12 and college
teachers that seem to boil down to “I taught a great lesson/course, but the students just refused to learn the mathematics!”
HG: In my opinion, it is interesting to note that while the Golden Rule is a wonderful guideline in most cases, it is counter-productive when we are trying to teach. More specifically, the wording of
the Golden Rule (“Do unto others as you would have others do unto you”) makes you the focal point. However, when we teach it is the students who should be the focal point. Hence the teachers’
Golden Rule (which I refer to as the Platinum Rule) is, “Do unto others as others would have done unto themselves”. Too often we teach students in the way we would have liked to have been taught and
too often we fail to look at things from the students’ point of view. We might not know what the students’ point of view is, but we should at least be trying to think about it from the students’
point of view.
I was once told to never think so much of my subject that I neglected my subjects. While teaching the content is by far the main objective in teaching, it must also be remembered that giving students
a sense of some sort of ownership of the course is also very important. In essence, I always tried to make my classes seem like a home away from home for the students. I was motivated by the adage
“People don’t care how much you know until they know how much you care.” There are many different ways to show caring, and the choice will often depend on the experience of the instructor. Again, in
my desire to use sports’ analogies, too often the professor is the home team, and the students are the visiting team. It is the home team that makes up the ground rules. For example, the professor
might invoke something along the lines of “I want all the papers to be turned in by Wednesday, and for every day they are late I will lower the grade by 10 points.” Unfortunately, the captain of the
visiting team cannot insert a rule along the lines of “We worked hard to get the papers done in time, and we want to have them back by Monday. So for every day that you are late in getting the papers
back to us, you have to add 10 points to the grade.”
MPG: Would you address what has struck me as a failure on the part of many mathematics professors to help students see: a) larger interconnections between what they’re teaching and significant
mathematical ideas that run through a range of the main branches of mathematics, and b) how certain key elements of doing and working with specific mathematical ideas and procedures hit notes that
will resonate repeatedly throughout their mathematical studies. Perhaps those are simply two ways of saying the same thing.
HG: I’m not sure I know the answer, but what I do know is that the question you raise is even more important now than it was when I was a student. More specifically, in my time there were no
calculators and the important thing was to get the correct answer and not to worry too much about why the algorithms worked but rather to learn how to use them correctly. In those days the slogan
would not have been “drill and kill” but rather “drill and survive.” Nowadays, however, the Internet, Google, and calculators have made it easy to do computations almost instantly. I do not have to
memorize anything about decimal arithmetic to compute 3.14 X 2.7. Indeed, I simply type the information into my calculator and almost instantly see that the product is 8.478. However, this knowledge
will not help me find the answer to “What is the circumference of a circular disc if its diameter is 2.7 inches?” if I do not know the formula for computing the circumference of a circle given its
diameter. And even then this is not a serious problem because all I have to do if I don’t know the formula is to go to Google and type, “Find the formula for the circumference of a circle.”
However, even when I know that the answer is obtained by the formula 3.14 X 2.7, I might still make a mistake in entering the information. So it is a good idea for me to have enough number sense to know that 3.14 X 2.7 is greater than 3 X 2 but less than 4 X 3. So the answer has to be between 6 and 12. So if by mistake I forgot to enter the decimal point in 3.14 and obtained 847.8 as the product, I should know that the correct answer has to be between 6 and 12, and that 847.8 isn’t such a number!
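This number-sense check can be sketched as a tiny program. The sketch below is my own illustration (the function name is invented, not anything from the interview): round each factor down and up to the nearest integer and insist the reported product land between the two integer products.

```python
def plausible_product(a_lo, a_hi, b_lo, b_hi, result):
    """Coarse sanity check: if a is between a_lo and a_hi and b is
    between b_lo and b_hi, then a * b must lie between a_lo * b_lo
    and a_hi * b_hi."""
    return a_lo * b_lo <= result <= a_hi * b_hi

# 3.14 lies between 3 and 4, and 2.7 between 2 and 3,
# so the product must fall between 6 and 12.
print(plausible_product(3, 4, 2, 3, 8.478))  # True  (the correct product)
print(plausible_product(3, 4, 2, 3, 847.8))  # False (decimal point omitted)
```

The same bracketing idea works for any product a calculator reports, which is exactly the habit Herb is advocating.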
MPG: I, too, grew up in the EBC (“era before calculators”) and despite being someone who appreciates all the electronic tools as both a teacher and learner of mathematics, I still often rely on
mental math when I teach (as well as when I’m doing my own calculations in various other contexts). I’ve noticed how amazed many of my students are when I do such calculations aloud before putting
the results on the board and then explain what I did. Similarly, I’ve often asked whether sqrt(5) + sqrt(11) = sqrt(16). Students immediately grab calculators and I tell them to stop and think about
the question without resorting to any technology at all (including paper and pencil). Very rarely does a student realize that while sqrt(16) equals 4, sqrt(5) must be between 2 and 3, and sqrt(11)
must be between 3 and 4; hence, the smallest that sqrt(5) + sqrt(11) can be is already greater than 5, and thus not equal to 4.
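The bracketing argument can be checked mechanically. The sketch below is my own (the function name is invented); it brackets each root between consecutive integers without ever evaluating a square root numerically.

```python
import math

def integer_bounds(n):
    """Consecutive integers (k, k + 1) with k*k <= n <= (k+1)*(k+1)."""
    k = math.isqrt(n)
    return k, k + 1

lo5, _ = integer_bounds(5)    # sqrt(5)  is between 2 and 3
lo11, _ = integer_bounds(11)  # sqrt(11) is between 3 and 4

# sqrt(5) + sqrt(11) is therefore greater than 2 + 3 = 5,
# while sqrt(16) is exactly 4, so the two sides cannot be equal.
print(lo5 + lo11 > math.isqrt(16))  # True
```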
Does this suggest to you that the technology has undermined students’ ability to estimate and use mental math, or is there something else at work? You are of my parents’ generation, so I wonder what
your experiences were along these lines going to school in the 1930s. Was it a “Golden Age” of American K-12 mathematics, as has often been claimed by people who dislike current trends? Is it your
sense that the average American high school student in, say, 1945, knew significantly more mathematics than is the case with high school kids today?
HG: Strange as it may appear to be at first glance, my opinion is that modern technology has made it even more important to have a number sense. In the spirit of the National Rifle Association,
“Calculators don’t make mistakes. The people who enter the information do!” For example, with respect to something we discussed earlier, suppose that by mistake a person omits the decimal point in
3.14 and because of this the calculator gives the result that 3.14 X 2.7 = 847.8. In other words, we might say that the calculator gave the right answer to the wrong question (which can be just as disastrous as the wrong answer to the right problem!). In this case, it would be crucial that the person understand immediately that the correct answer has to be greater than 6 but less than 12 (more specifically, between 3 X 2 and 4 X 3). While there are “lots of numbers between 6 and 12,” it is clear that 847.8 isn’t one of them!
In my mind, using a calculator is no more brain-deadening than using a paper-and-pencil algorithm by rote. In fact, I feel that if all you are going to use is rote, you are better served by using a
calculator. It is faster and most likely error free (barring typos). Moreover, I feel strongly that if a teacher forbids students to use calculators in class, the teacher will lose
credibility in the sense that the students will find it “crazy” that they can use their calculators at any time and in place they want, except in that teacher’s classroom.
My approach would be to let students use calculators, for example in an arithmetic course, by giving problems that cannot be answered correctly unless one goes beyond simply entering data into a
calculator. As an example, don’t ask them a question such as “How much is 2,821 ÷ 13?” Instead, ask them “By what number must we multiply 13 in order to obtain 2,821 as the product?”
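As a hypothetical illustration of why the rephrased question resists mindless keying: the student must recognize that division answers it, and that multiplying back verifies the answer.

```python
# "By what number must we multiply 13 in order to obtain 2,821?"
# Division answers the question...
answer = 2821 // 13
print(answer)  # 217

# ...and multiplying back verifies it.
print(answer * 13 == 2821)  # True
```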
This even applies to my calculus video series [in 1970-71]. In those days, it was crucial to understand how the derivatives of a function influence the graph of the function. However, today if I want to graph a function, there is absolutely no need for me to know anything at all about calculus. All I have to do is go to Google and type “graph ” and not only will the graph appear almost instantly, but if you drag the cursor along the curve, you can see the coordinates of each point on the curve. So it would seem to me that if my teaching is to be perceived as being relevant by the students, I should be doing things that go beyond the information that is easily accessed on the Internet.
Finally, with respect to our question concerning 1945 versus 2017 let me simply say that it is two different worlds. My recollection is that in my day students did not have better number sense than
their current counterparts, but rather that they had no need to have a better number sense. More specifically, jobs that in 1945 could have been obtained by students who had only a high school
diploma now require some evidence of post-secondary achievement by the applicant. In essence, it seems that the associate’s degree has replaced the high school diploma (or the equivalent GED) as the
entry level credential for jobs that promise upward mobility and/or a better quality of life.
So today many students come to college not because they have a thirst for higher education, but rather because they more or less have to go. In essence, these students become the ones who don’t ask
such questions as “How do you do this?” but rather they ask “Why do I have to know this? I’m never going to use it once this course is over!” And answering “why” takes much greater insight than
answering “how”.
MPG: What is your view of using models of various kinds, including alternative algorithms such as lattice multiplication, verbal analogies, hands-on tools like algebra tiles, etc., to teach
mathematics to students? Do you think that students gain from these things, or do they ultimately become burdensome, even handicaps?
HG: I think they are fine, provided that they are used as supplemental enrichment for the students. For example, in my liberal arts math courses (as well as in my developmental math courses) I
introduced the lattice method from a historical point of view. Lattice multiplication was first introduced to Europe by Fibonacci (Leonardo of Pisa), whose 1202 treatise Liber Abaci (Book of the Abacus) was the most sophisticated work on arithmetic and number theory written in medieval Europe. It was known then as Gelosia (an Italian word meaning “iron grill,” which the format resembled).
However, I would never use it as a method for students to use in place of the usual multiplication algorithm.
The same applies to the use of algebra tiles. More generally, my opinion is that any manipulative that is presented in the form of pure rote should never be used for anything other than as
supplemental enrichment for the students.
As for verbal analogies, I believe that my stock-in-trade “math as a second language” theme is a very effective teaching device. Let me share one or two examples with you.
• When mathematicians talk about “numbers”, students often see “quantities.” A quantity is a noun phrase in which the number is the adjective, and as an adjective it modifies a noun (usually referred to as a “unit”). So for example, “3 apples” is a quantity in which the adjective is 3 and the noun (unit) is “apples.” In this vein, we have seen 3 apples, 3 people, 3 centimeters, 3 tally marks, etc.; but never “threeness” by itself. It is my opinion that a major reason that students have trouble with math is that math is a world in which there are only adjectives. If we were taught to view the numbers as modifying units, it would be easy to see that while as adjectives 1 = 1, as quantities 1 inch is not equal to 1 foot. But without the nouns how do we know what the 1’s modify?
It is not uncommon for students to confuse a million and a billion because they are both “big numbers.” However, a million seconds is a little less than 12 days, but a billion seconds is a bit more than 31 years. Has anyone ever confused 12 days with 31 years? Can you picture a construction company saying to a client “The job will take either 12 days or 31 years”? And while as adjectives a million is always less than a billion, when used in quantities that might not be the case. For example, while a billion seconds dwarfs a million seconds, a million days dwarfs a billion seconds. I should mention as an aside that even the most math-phobic students experience an “aha” moment when they see that a million days from the birth of Jesus will not have elapsed until the 28th century! And at a more elementary level, youngsters know that 7 is greater than 1, but 7 pennies is less than 1 dime, etc.
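These time conversions are easy to verify; the arithmetic below is my own check, using round conversion factors, not figures from the interview.

```python
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400
DAYS_PER_YEAR = 365.25          # round conversion factor

million_seconds_in_days = 1_000_000 / SECONDS_PER_DAY
billion_seconds_in_years = 1_000_000_000 / SECONDS_PER_DAY / DAYS_PER_YEAR
million_days_in_years = 1_000_000 / DAYS_PER_YEAR

print(round(million_seconds_in_days, 1))   # 11.6  -- "a little less than 12 days"
print(round(billion_seconds_in_years, 1))  # 31.7  -- "a bit more than 31 years"
print(round(million_days_in_years))        # 2738  -- reaching the 28th century
```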
• Students find it very non-threatening when they see that if we had enough nouns and could use them in math, there would never be a need to talk about fractions. For example, when we say “It took me 7 minutes to walk here from the parking lot,” it is easy not to notice that “7 minutes” is an abridged way of saying “7 of what it takes 60 of in order to equal the whole unit (an hour).” Notice how much less threatening it is to say “7 minutes” as opposed to saying “7 of what it takes 60 of to equal an hour.” So, when I teach fractions, I do not start, for example, by introducing such symbols as ¾. Rather I introduce it as 3 fourths, where a fourth means “1 of what it takes 4 of to equal the entire unit.” So with respect to 60 seconds, 1 fourth of a minute would be 1 minute ÷ 4, or 15 seconds; whence 3 fourths of a minute would be 3 X 15 (or 45) seconds.
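The “n of what it takes m of” reading translates directly into a small sketch. This is my own illustration; the function name is invented.

```python
def parts_of(n, parts, whole):
    """'n <parts>ths' of a unit: one part is '1 of what it takes
    `parts` of to equal the whole'; the quantity is n such parts."""
    one_part = whole / parts
    return n * one_part

# 1 fourth of a minute is 60 / 4 = 15 seconds,
# so 3 fourths of a minute is 3 * 15 = 45 seconds.
print(parts_of(3, 4, 60))  # 45.0
```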
Once the students get comfortable with this, only then do I introduce the notation “¾” as an “abbreviation” for “3 fourths.” I think this is very important because it helps students understand fractions rather than having to rely on rote memory. For example, I don’t believe any student would have trouble seeing that 3 sevenths + 2 sevenths is equal to 5 sevenths. Yet when written in the form 3/7 + 2/7, they will tend to “add across” and say that 3/7 + 2/7 = 5/14 (that is, 3/7 + 2/7 = (3 + 2)/(7 + 7) = 5/14). And when it comes to finding common denominators, we have all done that when the nouns are present. For example, we know that 3 dimes + 2 nickels = 40 cents. We found the answer by converting “nickels” and “dimes” into a common noun (usually cents). More specifically,
3 dimes + 2 nickels = 30 cents + 10 cents = 40 cents.
Notice that we could also have said that
3 dimes + 2 nickels = 3 dimes + 1 dime = 4 dimes
or
3 dimes + 2 nickels = 6 nickels + 2 nickels = 8 nickels.
Notice that while as adjectives 4, 8 and 40 are different, the quantities 40 cents, 8 nickels and 4 dimes are equal.
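The coin computation is the common-denominator algorithm in disguise; the brief sketch below is my own illustration (the names are invented).

```python
CENTS = {"dime": 10, "nickel": 5}  # the "common noun" for both coins

def add_coins(*quantities):
    """Each quantity is a (count, coin) pair; convert everything to
    cents -- the common denominator -- before adding."""
    return sum(count * CENTS[coin] for count, coin in quantities)

total = add_coins((3, "dime"), (2, "nickel"))
print(total)        # 40 (cents)
print(total // 5)   # 8  (nickels)
print(total // 10)  # 4  (dimes)
```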
As an aside, I have found that whenever I can introduce a topic in a way that is accessible to all students, independently of how they view math, the class goes quite smoothly. One such application
is to ask the class a question such as “Since ‘teen’ means ‘plus ten,’ why doesn’t the first teen come after ten rather than after twelve?” Eventually we get to a point in our discussion where
students learn that ten was not an important number until the invention of place value. Prior to that, people encountered fractional parts and as a result they wanted to choose units that had many
divisors. In that respect 12 was a better choice than 10 because 12 has more divisors than 10 has. In a similar way, we chose to have a circle divided into 360 degrees rather than, say, 400 degrees
because 360 has many more factors than 400 has; and similarly, we chose to have 5,280 feet in a mile rather than a number like 5,000 (which is easier to remember) because of the large number of
factors 5,280 has. In other words many fractional parts of a mile are a whole number of feet.
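The divisor counts behind these historical choices are easy to tabulate; the brute-force check below is my own, not something from the interview.

```python
def divisors(n):
    """All positive divisors of n, by brute force."""
    return [d for d in range(1, n + 1) if n % d == 0]

# Divisor counts: 10 -> 4, 12 -> 6, 360 -> 24, 400 -> 15,
#                 5000 -> 20, 5280 -> 48.
for n in (10, 12, 360, 400, 5000, 5280):
    print(n, len(divisors(n)))
```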
MPG: You have a great story in your CALCULUS REVISITED lecture series about trigonometry in which you say that your high school teacher told the class when asked where trigonometry was used,
“Surveying!” and you thought that while you didn’t know what you wanted to be at 16, you knew that you didn’t want to be a surveyor. At what point did you know you wanted to teach mathematics, and
what experiences and influences inspired you to do so?
HG: This is a difficult question for me to answer accurately because of all the years that have transpired since then. However, to the best of my recollection, I found that the math courses at the
high school level were very easy for me; I believe there is a strong correlation between how good we are at something and how fulfilling it is to be engaged in trying to become even better at it. On
the other hand, many of my classmates were not enjoying the same “happiness” in studying math as I was. And so it was not unnatural for my classmates to seek my help when we were preparing to take a test.
I guess it was at that time that I realized that it was “fun” trying to teach others (which is probably the time when I decided that my professional goal would be to teach math at the high school level).
At the same time I would be sort of depressed when the students I helped still did poorly on the test. That was my first contact with what I now refer to as “The Teachers’ Golden Rule”. To repeat
what I have said previously in this interview, as good as the Golden Rule is, it is self-centered. When we teach, it is the students who should be the focal point. In other words, it was the first
time I realized that I had to “dig” into the students’ view of math before I could help them. Up to that time I would show students how I would approach a problem. But eventually I realized that I
had to watch them as they tried to solve the problem so that I could see where they might be going astray.
In summary, I would say that having to put the students first was a different job than my being able to be “good at” math. In my opinion if we don’t do that (especially in classes that consist of
math phobic students) we will not truly educate our students. The students will strive to pass our course and even if they are successful it is possible that shortly thereafter they will have
forgotten much of what they had been “taught.” In fact, that’s what led me to define education as the part that is left after we have forgotten everything else that we’ve been taught. And in that
sense I worry that many of our math courses do little, if anything, to educate the students who are not planning to enter a STEM-oriented program.
MPG: Finally, if you were offering advice to someone who was considering becoming a mathematics teacher, a mathematician, or what I’ll call a “high-end” user of mathematics, what advice would you offer?
HG: There are two types of people who plan to be mathematicians. One kind is the kind that I was. Namely, I thought mathematics was the “stuff” we were being taught in high school. And then there are
those who, like my MIT officemates, knew what mathematics really entailed.
To those who are like I was, I would say to take your college math courses seriously to make sure you really want to become a mathematician. If it turns out that you don’t think you would like such a
career, think about a field that you would enjoy pursuing and see how what you have already learned in your math courses enhances your chances of being good in any other field that you might choose.
In my own case, it turns out that what I really wanted to do was to become a high school math teacher and I had not realized that this was a much different career than being a mathematician.
Nevertheless the advanced math courses I took made me a much more effective teacher in the sense that I could now better realize what parts of the curriculum were the most important for my students
to understand, etc. And if you choose to be a math teacher, make sure that you understand that being a good “coach” goes beyond being a good “player.” In essence, make sure you understand and have
internalized the math you will be expected to teach; then concentrate on how to transmit this information to students in ways that help the students truly internalize the content.
On the other hand, I think that to those who understand what it means to be a mathematician and still aspire to become a mathematician, I would offer them no further advice than what I said above.
Watching my officemates at MIT study, it was clear that they were self-motivated and enthusiastic about achieving their goal. I also noted that all of my officemates did their undergraduate studies
at colleges that stressed teaching over research. And I think that’s a subjective piece of advice I would give students who want to be mathematicians. Namely, think about getting your undergraduate
degree from a college that values teaching. Then, once you have a good foundation, pick a graduate school that is staffed with professors who are good mathematicians.
MPG: Thank you for sharing your thoughts with our readers, Herb. It’s been a privilege to hear your ideas and learn from your experiences.
5 Comments
1. John A Stoffel March 27, 2017
Excellent blog post. Comprehensive and deserving of a second read. Will recommend it to those who teach not just math. The University of Michigan website, TeachingWorks, lists “high-leverage”
practices that are at the core of fundamental teaching practices. Number three on the list is training student-teachers to elicit and interpret student responses. (Poor teachers often elicit and
interpret their own understanding for their students.) This certainly reminds me of HG’s platinum rule.
I’d love to know the author’s opinion of trying to teach “standards” in math at the elementary level. What would he say to teachers whose students are not ready to master the standard, yet are
given a “map” based on standards to prepare for standardized tests?
Enjoyed the historical aspects presented.
2. Herb Gross March 28, 2017
Thanks for the interesting question. Let me use a sports analogy as an introduction to my response. How many kids do you think would sign up to play Little League baseball if the prerequisite was
for them to read a 50 page booklet entitled “A Guide to Playing Little League Baseball” and then having to pass a test on what they had read? Instead it is the coach that has to know the rules
and the various strategies. They are the ones who then teach/coach the kids.
I prefer to use that model when I conduct workshops for elementary school teachers. I want them to internalize the standards and then, without referring to the standards by name, develop
age-appropriate coaching strategies that will help their students internalize mathematics better.
I was a staunch believer in the “new math” until I found out how ill-prepared many teachers were to teach it in ways that would help their students understand mathematics better. If my
recollection is correct, the teachers found out about the “new math” at the same time the general public did. However it was the teachers, not the general public, that had to teach it the next
day! In many cases, the teachers themselves were not comfortable with mathematics, and being pressured to use the “new math” rather than teach the “old math” by rote, they taught the “new math” by rote, and the results were bad enough to doom the “new math.”
Had the “new math” succeeded there would have been no need for other reform movements to be invented. For example, why would one have needed the Common Core if its predecessor had already solved
the problem? In my opinion all of the reform movements failed because no one explained to the teachers why the new standards were necessary and how the standards solved a very important problem
that existed before the standards were developed.
There is much more that can be discussed, but I hope that my above remarks give you some insight about how I feel. However, let me give you one example of how covering the standards in a rote way
confused parents (most of whom still do the computations by rote or else use the calculator). Parents want to know why you can’t just subtract the “right way” (meaning the way they were taught).
In other words, they would write on blogs, “When you see a problem like 823 – 567, why waste time adding on to 567 the amount necessary to equal 823, when all you have to do is take away 567 from 823?”
It seems that no one has ever explained to the teachers that the “new” definition helps students internalize the mathematics better. For example, think of how unnatural it is to think about, say,
8 – (-3) as “8 take away negative 3.” How can you take away less than none? However, if we think about profit and loss in terms of profit being positive and loss being negative, it is not
difficult to see that to convert a $3 loss into an $8 profit, we first have to make a $3 profit to “break even” and then we need an additional $8 profit. In other words, 8 – (-3) = 11 because you
need an $11 profit to convert a $3 loss into an $8 profit.
My “adjective/noun” theme also helps students internalize a rather nice way to interpret, say, 10,000 – 3,478. Namely, if we think of the numbers as modifying the ages of ancient artifacts, the
question is asking how much older the older artifact is than the younger artifact. In other words, we are being asked to find the gap between 3,478 and 10,000. The answer to this problem
is found by performing the subtraction 10,000 – 3,478. Well, if these are the ages of the two artifacts, the difference between the two ages now is the same as it was a year ago. A year ago the
older artifact was 9,999 years old and the younger artifact was 3,477 years old; and so a year ago, the difference between their ages was 9,999 – 3,477; and this is a much easier subtraction to
perform manually than it is to compute 10,000 – 3,478.
These are things I taught the teachers, and I left it to the teachers to decide how they would transmit these ideas to their students. In my opinion, it is not as important to teach these teachers
more math as it is to help them better internalize the math they are currently teaching. At least that is the approach that has worked for me. You can see my ideas in great detail if you go to
http://www.mathasasecondlanguage.org, where I have developed an online professional development workshop for elementary school teachers that any school district may use free of charge.
And to see how my live workshops have helped teachers, the link below will take you to an article that was written by Corning Inc. and shared online with its employees.
3. Lucas G. Zan May 5, 2017
“suppose that by mistake a person omits the decimal point in 3.14 and because of this the calculator gives the result that 3.14 X 2.7 = 847.8. In other words, … the answer has to be
greater than 6 but less than 12 (more specifically, the correct answer has to be between 3 X 2 and 4 X 3). While there are “lots of numbers between 6 and 12,” it is clear that 847.8 isn’t one of
them!!!”
This is another example of the importance of math in everyday life!
Nice article!
□ Herb Gross May 5, 2017
Thanks for your kind comment, Lucas.
I think it is a big mistake not to accept the fact that the hand-held calculator has solved the so-called innumeracy problem. Teachers who do not let students use calculators in class hurt
their own credibility in the eyes of the students. That is, the students find it weird that they can use the calculator everywhere except in the teacher’s classroom. Rather, just as in the
3.14 X 2.7 = 847.8 example, we have to concentrate on presenting situations in which the students have to go beyond knowing how to enter numbers into a calculator in order to obtain the correct
answer. As I may have mentioned in other comments I have made elsewhere, I have seen students become thoroughly confused when they use their calculator to try to answer questions of the form
“What is the remainder when 234,782 is divided by 5,678?”. Most of the students do not know how to convert the remainder from its decimal representation into a whole number. It is my belief
that rather than tell students that they can’t use the calculator, it is much more effective to let them use what they think will help them and have them see that this was not the case.
4. S Ramanujam August 22, 2020
Herb Gross was a revelation to me: humility is the greatest happiness. We can do more for the world through our humility than someone with talent who may give wealth instead of the time we give
others and nothing else.
Related Posts
About The Author
Michael Goldenberg
I am a mathematics educator with over 30 years' experience teaching mathematics at the elementary, middle school, high school, and community college levels. I have a master's degree in mathematics
education from the University of Michigan-Ann Arbor. In another lifetime, I earned a master's in English from the University of Florida. I have been field supervisor of secondary mathematics student
teachers for the University of Michigan, where I also taught mathematics methods courses to prospective elementary teachers. I've been a math content coach for elementary and secondary in-service
teachers in Detroit, Pontiac, Warren, Ypsilanti, and other high-needs districts in Michigan, as well as in New York City. I have a dormant math blog, RationalMathEd, in which I attempted to do some
good in sorting out the "Math Wars." I was a founding member of the math education information site, Mathematically Sane. I have a 22-year-old son, Zane, who is studying to be a biomedical engineer,
and five rescue cats: Dante, Arya, Athos, Lyra, and Sterling, as well as the scars to prove it. When not teaching, blogging, or stirring the pot, I play a reasonably dangerous game of table tennis.
my dzire to write
What is Sensitivity Analysis?
Sensitivity analysis is defined as the technique used to determine how the values of independent variables will impact a particular dependent variable under a given set of assumptions. Its usage
depends on one or more input variables within specific boundaries, such as the effect that changes in interest rates will have on a bond's price.
It is also known as what-if analysis. Sensitivity analysis can be used for any activity or system, from planning a family vacation with the variables in mind to decisions at corporate levels.
Sensitivity analysis works on the simple principle: Change the model and observe the behavior.
The parameters that one needs to note while doing the above are:
A) Experimental design: It includes the combination of parameters that are to be varied. This involves a check on which and how many parameters need to vary at a given point in time, assigning
maximum and minimum values before the experiment, studying the correlations (positive or negative), and accordingly assigning values for the combination.
B) What to vary: The different parameters that can be chosen to vary in the model could be:
a) the number of activities
b) the objective in relation to the risk assumed and the profits expected
c) technical parameters
d) number of constraints and its limits
C) What to observe:
a) the value of the objective as per the strategy
b) value of the decision variables
c) value of the objective function between two strategies adopted
Measurement of sensitivity analysis
Below are mentioned the steps used to conduct sensitivity analysis:
1. Firstly the base case output is defined; say the NPV at a particular base case input value (V1) for which the sensitivity is to be measured. All the other inputs of the model are kept constant.
2. Then the value of the output at a new value of the input (V2) while keeping other inputs constant is calculated.
3. Find the percentage change in the output and the percentage change in the input.
4. The sensitivity is calculated by dividing the percentage change in output by the percentage change in input.
This process of testing sensitivity for another input (say cash flows growth rate) while keeping the rest of inputs constant is repeated until the sensitivity figure for each of the inputs is
obtained. The conclusion would be that the higher the sensitivity figure, the more sensitive the output is to any change in that input and vice versa.
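The steps above can be sketched in a few lines of code. The NPV model and the input values below are invented purely for illustration; only the sensitivity formula itself comes from the text.

```python
def npv(rate, cash_flows):
    # Net present value of cash_flows (year 0 first) at a given discount rate.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def sensitivity(output_fn, v1, v2):
    # Percentage change in output divided by percentage change in input,
    # with every other input held constant inside output_fn.
    base_out, new_out = output_fn(v1), output_fn(v2)
    pct_out = (new_out - base_out) / base_out
    pct_in = (v2 - v1) / v1
    return pct_out / pct_in

# Base case: discount rate V1 = 10%; the cash flows are the inputs held constant.
flows = [-1000, 500, 500, 500]
s = sensitivity(lambda r: npv(r, flows), 0.10, 0.11)  # V2 = 11%
```

Repeating the call with a different input (say, a cash-flow growth rate) while holding the rest constant gives one sensitivity figure per input, and the largest figure flags the input the output is most sensitive to.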
Methods of Sensitivity Analysis
There are different methods to carry out the sensitivity analysis:
• Modeling and simulation techniques
• Scenario management tools through Microsoft excel
There are mainly two approaches to analyzing sensitivity:
• Local Sensitivity Analysis
• Global Sensitivity Analysis
Local sensitivity analysis is derivative based (numerical or analytical). The term local indicates that the derivatives are taken at a single point. This method is apt for simple cost functions, but
not feasible for complex models; for example, models with discontinuities do not always have derivatives.
Mathematically, the sensitivity of the cost function with respect to certain parameters is equal to the partial derivative of the cost function with respect to those parameters.
Local sensitivity analysis is a one-at-a-time (OAT) technique that analyzes the impact of one parameter on the cost function at a time, keeping the other parameters fixed.
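A minimal numerical sketch of the OAT idea, with an invented cost function: each parameter is perturbed in turn by a small step (a central-difference stand-in for the partial derivative) while the others stay fixed.

```python
def local_sensitivities(cost, params, h=1e-6):
    # Approximate d(cost)/d(p_i) at `params`, one parameter at a time.
    grads = []
    for i, p in enumerate(params):
        up = list(params); up[i] = p + h
        dn = list(params); dn[i] = p - h
        grads.append((cost(up) - cost(dn)) / (2 * h))
    return grads

# Toy cost function; its true partial derivatives at (2, 3) are (4, 3).
cost = lambda p: p[0] ** 2 + 3 * p[1]
grads = local_sensitivities(cost, [2.0, 3.0])
```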
Global sensitivity analysis is the second approach to sensitivity analysis, often implemented using Monte Carlo techniques. This approach uses a global set of samples to explore the design space.
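By contrast, a toy global pass samples the whole design space at once. The model below is invented so that the first input dominates by construction; the correlation of each sampled input with the output then serves as a crude global sensitivity measure.

```python
import random

random.seed(0)

def model(x1, x2):
    return 10 * x1 + x2  # x1 dominates the output by construction

samples = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(2000)]
outputs = [model(x1, x2) for x1, x2 in samples]

def corr(xs, ys):
    # Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r1 = corr([s[0] for s in samples], outputs)  # strong: x1 drives the output
r2 = corr([s[1] for s in samples], outputs)  # weak: x2 barely matters
```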
The various techniques widely applied include:
• Differential sensitivity analysis: It is also referred to as the direct method. It involves solving simple partial derivatives for temporal sensitivity analysis. Although this method is
computationally efficient, solving the equations is an intensive task to handle.
• One-at-a-time sensitivity measures: It is the most fundamental method, using partial differentiation in which varying parameter values are taken one at a time. It is also called local analysis,
as it is an indicator only for the addressed point estimates and not the entire distribution.
• Factorial analysis: It involves the selection of a given number of samples for a specific parameter and then running the model for the combinations. The outcome is then used to carry out
parameter sensitivity analysis.
• Sensitivity index: Through the sensitivity index one can calculate the percentage difference in the output when one input parameter varies from its minimum to its maximum value.
• Correlation analysis helps in defining the relation between independent and dependent variables.
• Regression analysis is a comprehensive method used to get responses for complex models.
• Subjective sensitivity analysis: In this method the individual parameters are analyzed. This is a subjective method, simple, qualitative and an easy method to rule out input parameters.
Using Sensitivity Analysis for decision making
One of the key applications of sensitivity analysis is in the utilization of models by managers and decision-makers. The content of a decision model can be fully utilized only through
the repeated application of sensitivity analysis, which helps decision analysts understand the uncertainties, pros and cons, limitations, and scope of the model.
Most, if not all, decisions are made under uncertainty, and the optimal solution in decision making involves parameters that are approximations. One approach to reaching a conclusion is to replace
all the uncertain parameters with expected values and then carry out sensitivity analysis. It would be a breather for a decision maker to have some indication of how sensitive the
choices are to changes in one or more inputs.
Uses of Sensitivity Analysis
• The key application of sensitivity analysis is to indicate the sensitivity of simulation to uncertainties in the input values of the model.
• They help in decision making
• Sensitivity analysis is a method for predicting the outcome of a decision if a situation turns out to be different compared to the key predictions.
• It helps in assessing the riskiness of a strategy.
• Helps in identifying how dependent the output is on a particular input value, and whether that dependency helps in assessing the associated risk.
• Helps in taking informed and appropriate decisions
• Aids searching for errors in the model
Credit : https://www.edupristine.com/blog/all-about-sensitivity-analysis
Vinayak, my year old, has started weaving his own stories. Here is one, probably the first documented one. It's his imagination, his narration; I have just been a scribe, verbatim - it's all his
words, as written on a sheet of paper.
ones upon a time there was a Lion called Don. He was the baddest King in the forest. He ate everyone in the forest. Then a empire had a rule. He said to Don that whenever He saw somebody He will say
to him that he will eat Him the next week.
Everybody was happy. they cheered. Soon somebody was going his home the week had begin
One clever rabbit had an idea. He had 4 friends. one was the crocodile called Likey, an elephant called Wes, and a tiger called lotty and shonny the deer. they made an idea to make the lion a good
the plan was that the rabbit came to behave as dead first but he said to the lion to close his eyes. Then the elephant will come to squash his neck with his trunk. then the crocodile will come to eat
this legs. then when the lion would open his eyes the tiger right away jumped on his tummy and would not let him see them. then the tiger will become good so that forest can be saved.
when his idea worked the Lion said sorry to everyone.
============= the end ===================
..पीहर आती है..
..अपनी जड़ों को सींचने के लिए..
..तलाशने आती हैं भाई की खुशियाँ..
..वे ढूँढने आती हैं अपना सलोना बचपन..
..वे रखने आतीं हैं..
..आँगन में स्नेह का दीपक..
..बेटियाँ कुछ लेने नहीं आती हैं पीहर..
..ताबीज बांधने आती हैं दरवाजे पर..
..कि नज़र से बचा रहे घर..
..वे नहाने आती हैं ममता की निर्झरनी में..
..देने आती हैं अपने भीतर से थोड़ा-थोड़ा सबको..
..बेटियाँ कुछ लेने नहीं आती हैं पीहर..
..जब भी लौटती हैं ससुराल..
..बहुत सारा वहीं छोड़ जाती हैं..
..तैरती रह जाती हैं..
..घर भर की नम आँखों में..
..उनकी प्यारी मुस्कान..
..जब भी आती हैं वे, लुटाने ही आती हैं अपना वैभव..
..बेटियाँ कुछ लेने नहीं आती हैं पीहर..
Dear Papa....
"बेटी" बनकर आई हु माँ-बाप के जीवन में,
बसेरा होगा कल मेरा किसी और के आँगन में,
क्यों ये रीत "रब" ने बनाई होगी,
"कहते" है आज नहीं तो कल तू "पराई" होगी,
"देके" जनम "पाल-पोसकर"
जिसने हमें बड़ा किया,
और "वक़्त" आया तो उन्ही हाथो ने हमें "विदा" किया,
"टूट" के बिखर जाती हे हमारी "ज़िन्दगी " वही,
पर फिर भी उस "बंधन" में प्यार मिले "ज़रूरी" तो नहीं,
क्यों "रिश्ता" हमारा इतना "अजीब" होता है,
क्या बस यही "बेटियो" का "नसीब" होता हे??
"Papa" Says"...
बहुत "चंचल" बहुत
"खुशनुमा " सी होती है "बेटिया".
"नाज़ुक" सा "दिल" रखती है "मासूम" सी होती है "बेटिया".
"बात" बात पर रोती है
"नादान" सी होती है "बेटिया".
"रेहमत" से "भरपूर"
"खुदा" की "Nemat" है "बेटिया".
"घर" महक उठता है
जब "मुस्कराती" हैं "बेटिया".
"अजीब" सी "तकलीफ" होती है
जब "दूसरे" घर जाती है "बेटियां".
"घर" लगता है सूना सूना "कितना" रुला के "जाती" है "बेटियां"
"ख़ुशी" की "झलक"
"बाबुल" की "लाड़ली" होती है "बेटियां"
ये "हम" नहीं "कहते"
यह तो "रब " कहता है. . क़े जब मैं बहुत खुश होता हु तो "जनम" लेती है
"प्यारी सी बेटियां"
शेंदुर लाल चढायो अच्छा गजमुख को
दोंदिल लाल बिराजे सुत गौरीहर को
हाथ लिए गुण लड्डू साई सुरवर को
महिमा कहे न जाए लागत हूँ पद को
जय देव जय देव
जय देव जय देव
जय जय जी गणराज विद्यासुखदाता
धन्य तुम्हारोदर्शन मेरा मन रमता
जय देव जय देव
जय देव जय देव
भावभगत से कोई शरणागत आवे
सम्पति संतति सबही भरपूर पावे
ऐसे तुम महाराज मुझको अति भावे
गोसावीनंदन निशिदिन गुण गावे
जय देव जय देव
जय देव जय देव
जय जय जी गणराज विद्यासुखदाता
धन्य तुम्हारोदर्शन मेरा मन रमता
जय देव जय देव
जय देव जय देव
घालीन लोटांगण वंदीन चरण डोळ्यांनी पाहीन रूप तुझें ।
प्रेमें आलिंगीन आनंदें पूजीन भावें ओवाळीन म्हणे नामा ॥
त्वमेव माता पिता त्वमेव बन्धुश्च सखा त्वमेव ।
त्वमेव विद्या द्रविणं त्वमेव, त्वमेव सर्वं मम देव देव ॥
कायेन वाचा मनसेन्द्रियैर्वा बुद्ध्यात्मना वा प्रकृतिस्वभावात् ।
करोमि यद्यत् सकलं परस्मै नारायणायेति समर्पयामि ॥
अच्युतं केशवं रामनारायणं कृष्णदामोदरं वासुदेवं हरिम् ।
श्रीधरं माधवं गोपिकावल्लभं जानकीनायकं रामचन्द्रं भजे ॥
हरे राम हरे राम, राम राम हरे हरे
हरे कृष्ण हरे कृष्ण, कृष्ण कृष्ण हरे हरे ।
हरे राम हरे राम, राम राम हरे हरे
हरे कृष्ण हरे कृष्ण, कृष्ण कृष्ण हरे हरे ।
हरे राम हरे राम, राम राम हरे हरे
हरे कृष्ण हरे कृष्ण, कृष्ण कृष्ण हरे हरे ।
I had to prepare my son for his shloka competition, but it was so hard to find it..
Finally, I managed to find the right sanskrit lyrics, hopefully, they are correct.
शुक्लाम्बरधरं विष्णुं शशिवर्णं चतुर्भुजम् ।
प्रसन्नवदनं ध्यायेत् सर्वविघ्नोपशान्तये ॥
in roman,
Shuklaambaradharam Vishnum Shashivarnam Chaturbhujam
Prasannavadanam Dhyaayet Sarvavighnopashaantaye
Hope it helps
The National Pension System (NPS) in India has been pushed for some time now. The government has been trying to increase the scope of the scheme, and has hardly been successful at that.
One of the primary reasons has been the low cost structure for the pension fund managers. While a regular mutual fund charges around 1.5% for the fund management effort, NPS fund managers were
charging somewhere between 0.0009% and 0.25%. At such low costs, it would have made little sense for anybody to be in business.
However, some of the fund managers persisted and are in relatively better standing today. Some of them who did not continue (e.g. IDFC) continue to observe from a distance.
Over the last few years, returns from the NPS fund managers have been surpassing the general market returns, mostly in all categories.
Also, recent government regulations allowed hiking the fund management charges to 0.25%, in addition to allowing partial withdrawals from the NPS corpus. Both of these measures have fanned interest
in NPS. By and large, people are now taking notice of NPS as a possible investment vehicle.
Slowly, but surely, many private sector companies are also adopting the NPS route for their employees. Beyond the long term investment option, NPS offers an instant benefit in terms of tax benefit
above and beyond 80C. 10% of your Basic Salary + DA is the cap for Tax benefit under NPS.
Therefore, effectively, anyone who falls in the 30% tax bracket stands to shave off a substantial amount from his taxable income. Furthermore, the returns on this additional investment are not as
measly as the EPF's (approx. 8-9%), but are more controllable (you can choose your fund options) and market linked.
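As a back-of-the-envelope illustration of the tax arithmetic described above (the salary figure is invented, and the actual rules and rates change over time, so treat this only as a sketch):

```python
# Hypothetical annual Basic + DA of Rs. 12,00,000, taxpayer in the 30% slab.
basic_plus_da = 1_200_000
nps_deduction_cap = 0.10 * basic_plus_da   # 10% of Basic + DA is the NPS cap
tax_saved = 0.30 * nps_deduction_cap       # saving over and above the 80C limit
```

With these assumed numbers, up to Rs. 1,20,000 comes off the taxable income, worth about Rs. 36,000 in tax.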
All these measures are slowly resurrecting NPS as a possible investment vehicle.
How many rays are there in an angle? | TIRLA ACADEMY
There are various types of geometrical shapes - line segments, lines, rays, etc. - that we use in Mathematics. Here we are talking about rays and how many rays there are in an angle.
A ray is a part of a straight line that has one fixed starting point and extends endlessly in one direction.
Example: Torchlight, Sunlight, etc.
How many rays are there in an Angle?
There are two rays in an angle.
These rays are also known as arms of the angle.
Series of articles about HP - Printable Version
Series of articles about HP - James_S - 09-26-2014 09:11 AM
Browsing through the net I came across a series of interesting articles about HP (and not only) calculators. I am sure many of you know all of it already but perhaps there are some who could benefit
- enjoy!
"HP CALCULATOR HISTORY - THE HP-35
Introduced on July 1 1972, this was the first handheld electronic
calculator sold by HP, and the first ever to perform logarithmic and
trigonometric functions with one keystroke. As opposed to later HP
calculators, it has an x^y function, not y^x, and the trigonometric
functions work in degrees only. The story goes that it was made after
William Hewlett was shown a new scientific desktop calculator by his
engineers, and asked for a version to fit in his shirt-pocket. At
first, HP thought they would only make a few HP-35s for their own
engineers, as no-one else would be interested. Then they decided to
try selling it - and sold hundreds of thousands. This means that the
HP-35 is not particularly rare, but collectors will pay a good price
for one because the HP-35 was the first HP handheld. We celebrated its
twentieth anniversary last year, and articles about it were published
in the proceedings of our 1992 conference and in DATAFILE V12N2. The
latter article includes details of the three different types of HP-35
Read more: http://www.hpcc.org/calculators/wmjarts.html
RE: Series of articles about HP - Jake Schwartz - 09-26-2014 05:29 PM
FWIW, I think it had already been established long ago that the actual introduction date of the HP35 was closer to February 1st of 1972, rather than July.
Just sayin'...
Fast Reciprocal Frequency Counter
This project, a fast reciprocal frequency counter design from 1990, details a method for creating a fast frequency counter using the reciprocal counting technique.
The reciprocal counting technique is especially important for measuring low frequencies.
The normal way you think of counting is to measure the edge transitions of the incoming signal over a set period of time. If the input frequency is low, there is a chance of missing the incoming edge.
Reciprocal counting measures a clock frequency over a period of the incoming signal, i.e. it is gated by the input signal, so the error can only be out by one clock period, i.e. a far smaller error.
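The contrast can be sketched in software. In the simulation below (all figures illustrative), the gate is synchronized to N periods of the input, whole clock ticks are counted during that gate, and the frequency is recovered as N × f_clock / ticks, so the quantization error is at most one clock period.

```python
def reciprocal_measure(f_input, f_clock, n_events):
    # Gate synchronized to the input: exactly n_events input periods long.
    gate_time = n_events / f_input
    # Whole clock ticks counted during the gate: +/- 1 tick uncertainty.
    ticks = int(gate_time * f_clock)
    return n_events * f_clock / ticks

# A 37 Hz input measured against a 10 MHz clock over 100 input periods:
f_est = reciprocal_measure(37.0, 10e6, 100)
```

Even for this low input frequency the result is within about one part in 27 million (one tick in ~27 million counted), whereas a conventional counter with a 1 s gate would be out by up to one whole input event.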
The interesting thing is that it uses a fast 8x8 multiplier, and this piece of hardware is incorporated into the 18F series PIC chips - it may not fit the exact design here, but it is a good starting point.
Executive Summary of the Fast Reciprocal Frequency Counter
A method and apparatus makes rapid frequency measurements by measuring time intervals for a series of blocks of event counts with the number of events in each block held constant. This makes the
numerator of the events/time relationship constant so it does not have to be measured, processed or stored.
The frequency of the signal is determined by measuring the time interval, then taking the inverse of the measured value and multiplying by the appropriate constant. A fast inverse circuit uses a
Taylor series expansion technique implemented in digital circuitry, with the slope resolution adjusted for regions of small slope to improve accuracy.
Background of the Fast Reciprocal Frequency Counter
Precision frequency counters typically include two counters, a time counter to accumulate events from a stable clock, and an event counter to accumulate events from an input signal. Dividing the
number of events in the event counter by the result in the time counter provides the average frequency of the input signal.
The two counters are started and stopped by a signal called a gate. If the gate is synchronized with the clock, the gate time is controlled exactly and the number of input signal events is counted
with some uncertainty (plus or minus one event). If the gate is synchronized with the input signal, the number of input signal events is counted exactly and the elapsed time is counted with some
uncertainty (plus or minus one clock period). This latter method is called reciprocal counting.
Because the plus or minus one event count uncertainty gives relatively poor frequency resolution, virtually all modern precision frequency counters use the reciprocal counting method. Increased
accuracy can be gained by averaging over more than one input signal event.
However, division is an inherently slow operation. In the digital circuits used in most modern instruments, division requires many processing cycles or a very large amount of circuitry because the
computation of each output bit uses a carry through many bits of the intermediate results of the previous output bit.
One approach to avoiding division is to measure the period between successive pulses and generate the inverse of the period to produce the frequency. This approach is disclosed in U.S. Pat. No.
4,707,653 (Wagner). Wagner uses a PROM which includes a lookup table to provide the inverse of the period.
However, Wagner does not address making measurements over multiple cycles, which is necessary to produce high precision frequency measurements. Also, because Wagner's device measures on every input
event, it is limited to relatively low frequency applications. Finally, the simple look up table limits the accuracy provided.
The limits imposed by division for measurements on multiple cycles made it difficult or impossible to make successive frequency measurements as rapidly as desired. Making more rapid frequency
measurements would allow triggering a frequency versus time measurement on a frequency transition, making and displaying nearly real time frequency measurements of a frequency modulated or frequency
agile signal, or producing histograms of frequency distribution of such signals. Another application is making a precision, wideband, programmable frequency to voltage converter.
Another application for making rapid frequency measurements is in an instrument for providing continuous time interval measurements on a signal. Continuous time interval measurements make it simpler
to study dynamic frequency behavior of a signal: frequency drift over time of an oscillator, the frequency hopping performance of an agile transmitter, chirp linearity and phase switching in radar
Continuous time interval measurements on a signal provide a way to analyze characteristics of the signal in the modulation domain, i.e., the behavior of the frequency or phase of the signal versus
time. An example of an instrument that generates this type of time stamp and continuous time interval data is described in "Frequency and Time Interval Analyzer Measurement Hardware", Paul S.
Stephenson, Hewlett-Packard Journal, Vol. 40, No. 1, February, 1989.
Summary of the Fast Reciprocal Frequency Counter
One aspect of the design is a method and apparatus for making rapid frequency measurements by measuring time intervals for a series of blocks of event counts with the number of events in each block
held constant. This makes the numerator of the events/time relationship constant so it does not have to be measured, processed or stored.
The frequency of the signal is determined by measuring the time interval, then taking the inverse of the measured value and multiplying by the appropriate constant.
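As a rough software model of this relationship (a sketch for illustration only, not the hardware described below), each frequency reading is just the constant event count divided by a measured time interval:

```python
def block_frequencies(timestamps, n_events):
    """Estimate the frequency for each block of n_events input edges.

    timestamps: the series of latch capture times (in seconds), one
    taken every n_events input signal events. Because the numerator
    n_events is held constant, only the time interval is measured.
    """
    return [n_events / (t1 - t0)
            for t0, t1 in zip(timestamps, timestamps[1:])]
```

For example, captures 1 ms apart with N = 1000 events per block yield readings of 1 MHz.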
Another aspect of the design is a fast inverse circuit. The circuit uses a Taylor series expansion technique implemented in digital circuitry, with the slope resolution adjusted for regions of small
slope to improve accuracy.
Figure 1: High level block diagram of a frequency counter using the constant event measurement method and the fast inverse circuit.
Description of the Fast Reciprocal Frequency Counter
FIG. 1 shows a high level block diagram of a frequency counter using the constant event measurement method and the fast inverse circuit of the design. A time counter 101 receives a clock signal on
input line 103 from a stable, high frequency clock and produces an output of the accumulated count on line 109.
An input signal whose frequency is to be measured is applied on line 107 to a constant event counter 105, described in more detail below, that produces an output pulse on line 111 every N input
signal events. The constant event counter 105 is programmable so that N can be varied.
The count data on line 109 is applied to the data input of latch 113 and the pulses on line 111 are applied to the enable input of latch 113. Thus, at each output pulse from the constant event
counter 105, the current value in the time counter 101 is stored in latch 113. The data output of latch 113 on line 115 is a series of time values corresponding to the time of occurrence of
sequential blocks of N input signal events.
The series of time values is applied to a time value processor 117. The time value processor produces a series of time interval values on line 119, representing the difference between sets of time
values. The time values can be processed in a variety of ways, some well known, to produce time interval values.
The details of the processing methods are beyond the scope of this design. One simple way to process the time values, that is sufficient to demonstrate the operation of the design, is to take the
difference between successive time values to yield a series of time intervals.
The time interval values on line 119 are applied to inverse circuit 121, which takes the inverse of the values to produce a series of corresponding frequency values on line 123. The operation of the
inverse circuit is explained in more detail below.
The frequency output values on line 123 can be displayed or stored for postprocessing. Because they are derived from constant event measurements, the time interval signals on line 119 can be applied
to triggering circuits with the appropriate time value limits to allow triggering on a desired frequency.
Figure 2: More detailed block diagram of the constant event counter 105 in FIG. 1.
FIG. 2 shows a more detailed block diagram of the constant event counter 105 in FIG. 1. Counter 105 is a programmable count down counter that can be reloaded with an initial value when it reaches its
terminal count without skipping any cycles. Because it is used to arm for measuring the same signal being counted, it must produce an output in time for a measurement on the immediately following
edge. For a 100 MHz input signal, this will occur as soon as 10 ns after the edge initiating the output.
Counter 201 is an 8 bit down counter. This counter can be implemented in a variety of ways, with one preferred circuit being eight flip flops with the flip flops for the two least significant bits
connected in a synchronous configuration and the remaining flip flops connected in a ripple through configuration.
The counter is asynchronously loaded with an initial state from register 203 via line 202, on the occurrence of a signal from a reload logic circuit 205 on line 204. Register 203 can be loaded with
the desired preset initial value to provide an output every N input signal events.
The input signal to be measured, on line 107, is applied to counter 201 on line 206 through a delay 211, described below. The output of counter 201 is applied to terminal count logic 207 on line 208.
A latch output signal of terminal count logic 207 on line 210 is applied to the J input of output latch flip flop 209. Flip flop 209 is clocked by the input signal via line 212. Thus, on the next
input signal edge after the latch signal is applied, flip flop 209 produces an output signal on line 111 that is applied to the latch 113 as described above. Latch flip flop 209 is reset at the start
of each measurement.
A reload output signal of terminal count logic 207 is applied to reload logic 205 via line 214 to enable reloading when counter 201 reaches its terminal count. The input signal is applied to reload
logic 205 via line 216. Reload logic 205 is reset at the start of each measurement block.
Delay 211 delays the input signal to the counter 201, to provide time for the counter to recover from the reload. However, this delay also subtracts from the setup time for the output latch flip flop
209. Thus, delay 211 must be long enough to allow the terminal logic and the reload logic, triggered by the undelayed input event edge, to reload the counter 201 before the following delayed input
event arrives on line 206.
The delay must be short enough so the counter is not reloaded and starting to count when the delayed event edge arrives on line 206. The delay must be short enough so the latch output signal on line
210 reaches the flip flop 209 in time for it to be clocked by following undelayed input event edge on line 212.
Figure 3: More detailed schematic block diagram of the inverse circuit 121 in FIG. 1.
FIG. 3 shows a more detailed schematic block diagram of the inverse circuit 121 in FIG. 1. The circuit includes a read only memory (ROM) 301, a multiplier 303, a bit field selector multiplexer 305 and a subtractor 307. The inputs to the inverse circuit 121 are the time interval values on line 119.
In the embodiment described, these are 16 bit binary words. The inverse circuit takes the inverse of the input interval values by approximating a Taylor series expansion.
A Taylor series expansion of 1/X about a nearby point Xo is:

1/X = 1/Xo - (X - Xo)/Xo² + (X - Xo)²/Xo³ - ...

This is an infinite series, but the higher order terms rapidly approach zero, particularly if Xo is close to X.
For a binary word X, Xo can be the more significant bits of X (with the less significant bits assumed to be zero) and (X-Xo) can be the less significant bits. For example, if X is a 16 bit binary
number, Xo is the 8 most significant bits and (X-Xo) is the 8 least significant bits. Thus subtraction is not needed to produce the (X-Xo) term used in the expansion.
The two most significant bits of the input are applied to an or gate 309, and the output is applied to the bit field select multiplexer 305, to control its operation as explained below.
The eight most significant bits of the input value (representing Xo) form the address applied to ROM 301. ROM 301 is a 256 word by 24 bit memory. ROM 301 operates as a lookup table with two data outputs: a 16 bit word offset representing the inverse of Xo (1/Xo), and an 8 bit word slope representing 1/Xo².
The offset is applied to the subtractor 307 on line 302 and the 8 bit slope is applied to multiplier 303 on line 304.
Multiplier 303 also receives the 8 least significant bits of the input value (representing X-Xo) and multiplies this input by the 8 bit slope (1/Xo²)
from the ROM 301, producing a 16 bit output representing the second term of the Taylor series expansion, on line 306. The result on line 306 is adjusted by the bit select field as described below,
and applied to the subtractor via line 308.
Subtractor 307 subtracts the second term value on line 308 from the first term offset value on line 302 to produce a final 16 bit output value that represents the inverse (1/X) of the 16 bit time interval input value (X).
Without the adjustment function that the bit field selector enables the circuit to perform, the circuit would have about 8 bit accuracy for a 16 bit input. The adjustment function and a small
reduction of the offset values stored in ROM 301 from the actual values improves the accuracy to about 12-13 bits.
First, the sum of the higher order Taylor expansion terms is always positive, so by reducing the values for the offset stored in ROM 301, the average error can be reduced. In effect the piecewise
linear approximation is adjusted to more closely overlay the actual 1/X function curve.
Second, the magnitude of the slope (1/Xo²)
decreases very rapidly with increasing Xo, so over most of the range the second term result would be limited by slope quantization. This is avoided by storing 16 times the slope in the addresses for
the upper 3/4 of the range of the ROM 301, and using a different set of bits (shifted by 4) out of the multiplier 303 to apply to the subtractor 307.
The shifting function is accomplished by the bit field select multiplexer 305, controlled by the ORed 2 most significant bits of the time value input on line 312.
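A floating point model of the two term approximation may make the structure clearer. This is a sketch only: the real circuit works in fixed point, and the ×16 slope scaling and bit field selection described above are omitted here.

```python
def inverse_approx(x):
    """Approximate 1/X for a 16 bit input X using the two-term Taylor
    split used by inverse circuit 121: an offset (1/Xo) and a slope
    (1/Xo^2) indexed by the top 8 bits, applied to the bottom 8 bits.
    Floating point stands in for the ROM contents and the subtractor."""
    x0 = x & 0xFF00          # Xo: the 8 most significant bits (low bits zero)
    dx = x & 0x00FF          # X - Xo: the 8 least significant bits
    offset = 1.0 / x0        # first term, the ROM "offset" output
    slope = 1.0 / (x0 * x0)  # second-term coefficient, the ROM "slope" output
    return offset - slope * dx
```

The neglected third term is roughly (X-Xo)²/Xo³, so the relative error shrinks as the input uses more of its 16 bit range; this is also the regime where 1/Xo² becomes tiny and the hardware's extra slope resolution matters.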
Figure 4a: Illustrates the combination of bits sent to the subtractor 307 if X is in the lower 1/4 of the input range.
The operation of the bit field select multiplexer is illustrated in FIGS. 4A and 4B. FIG. 4A illustrates the combination of bits sent to the subtractor 307 if X is in the lower 1/4 of the input
range. This is indicated when the two most significant bits of the input value on line 119 are both 0, and the ORed value on line 312 is thus 0.
Of the 16 bits produced by multiplier 303, the 4 least significant bits are discarded and the 12 most significant bits are applied to the 12 least significant bits of the subtractor.
Figure 4b: Illustrates the combination of bits sent to the subtractor 307 if X is in the upper 3/4 of the input range.
FIG. 4B illustrates the combination of bits sent to the subtractor 307 if X is in the upper 3/4 of the input range. This is indicated when either or both of the two most significant bits of the input
value on line 119 are 1, and the ORed value on line 312 is thus 1. Of the 16 bits produced by multiplier 303, the 8 least significant bits are discarded and the 8 most significant bits are applied to
the 8 least significant bits of the subtractor. In this case the multiplier output is shifted down by 4 bits to adjust for the 16 times slope values stored in the ROM table.
Figure 5: Graphical illustration of the operation of the inverse circuit 121 in FIG. 1.
FIG. 5 graphically illustrates the operation of the inverse circuit 121. The horizontal axis is the input value X, and the vertical axis is the output value 1/X. The curve 501 shows the 1/X
relationship. Point 503 represents a particular (16 bits) input value X. Point 505 represents the Xo (8 bits) applied as the address for ROM 301.
The hash marks on the horizontal axis illustrate the quantization limit of the 8 bit approximation. Point 511 represents the approximation offset value 1/Xo stored (16 bits) in ROM 301, and applied
to subtractor 307. The length of dashed line segment 507 indicates the difference X-Xo (8 bits), sent to multiplier 303.
The slope of dashed line segment 515 indicates the slope value (1/Xo², 8 bits) stored in ROM 301 and also sent to multiplier 303. The length of dashed line segment 509 indicates the output of multiplier 303, the second Taylor series term (X-Xo)/Xo², as it is applied to subtractor 307.
Finally, point 513 on the vertical axis represents the output (16 bits) of subtractor 307, approximating 1/X.
While there have been shown and described what are at present considered the preferred embodiments of the present design, it will be obvious to those skilled in the art that various changes and
modifications may be made therein without departing from the scope of the design as defined by the appended claims.
Statistics for Managers - Prime Essay Help
All statistical calculations will use the Employee Salary Data Set.
1. Data, even numerically code variables, can be one of 4 levels – nominal, ordinal, interval, or ratio. It is important to identify which level a variable is, as this impacts the kind of
analysis we can do with the data. For example, descriptive statistics such as means can only be done on interval or ratio level data. Please list, under each label, the variables in our data set
that belong in each group.
2. The first step in analyzing data sets is to find some summary descriptive statistics for key variables. For salary, compa, age, Performance Rating, and Service; find the mean and standard
deviation for 3 groups: overall sample, Females, and Males. You can use either the Data Analysis Descriptive Statistics tool or the Fx =average and =stdev functions. Note: Place data to the right,
if you use Descriptive statistics, place that to the right as well:
3. What is the probability for a:
a. Randomly selected person being a male in grade E?
b. Randomly selected male being in grade E?
c. Why are the results different?
4. For each group (overall, females, and males) find:
a. The value that cuts off the top 1/3 salary in each group.
b. The z score for each value.
c. The normal curve probability of exceeding this score.
d. What is the empirical probability of being at or exceeding this salary value?
e. The score that cuts off the top 1/3 compa in each group.
f. The z score for each value.
g. The normal curve probability of exceeding this score.
h. What is the empirical probability of being at or exceeding this salary value?
i. How do you interpret the relationship between the data sets? What do they mean about our equal pay for equal work question?
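A sketch of steps a–d in Python. The data set itself is not reproduced here, so the helper below and its conventions (such as using the value at roughly the 67th percentile as the cutoff) are assumptions to adapt to the Employee Salary Data Set:

```python
import statistics
from statistics import NormalDist

def top_third_analysis(values):
    """For a list of salaries (or compa values): the cutoff for the top
    1/3, its z score, the normal-curve probability of exceeding that
    score, and the empirical probability of being at or above it."""
    ordered = sorted(values)
    cutoff = ordered[round(len(values) * 2 / 3)]  # ~67th percentile
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    z = (cutoff - mu) / sigma
    p_normal = 1 - NormalDist().cdf(z)
    p_empirical = sum(v >= cutoff for v in values) / len(values)
    return cutoff, z, p_normal, p_empirical
```

Run it once per group (overall, females, males) for salary, then again for compa, and compare the normal-curve and empirical probabilities.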
5. Equal Pay Conclusions:
a. What conclusions can you make about the issue of male and female pay equality? Are all of the results consistent?
b. What is the difference between the salary and compa measures of pay?
c. Conclusions from looking at salary results:
d. Conclusions from looking at compa results:
e. Do both salary measures show the same results?
f. Can we make any conclusions about equal pay for equal work yet?
Fundamental theorem of algebra
Fundamental theorem of algebra. Every non-constant polynomial with complex coefficients has at least one complex root.
We will not prove this theorem.
We can deduce that every degree \(n\) polynomial has exactly \(n\) complex roots, counted with multiplicity, by an inductive argument. A degree \(n\) polynomial has at least one root by the fundamental theorem of algebra, and by polynomial division we can write it as a product of a linear factor and a degree \(n-1\) polynomial. By the fundamental theorem again, this degree \(n-1\) polynomial has at least one root, so it can be factorised into a linear factor and a degree \(n-2\) polynomial, and so on. By induction, a degree \(n\) polynomial has \(n\) complex roots.
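The theorem is purely existential, but the count of roots can be illustrated numerically. The Durand-Kerner iteration (not part of the proof, just an illustration) refines \(n\) guesses simultaneously until each approximates a root. A sketch for monic polynomials:

```python
def durand_kerner(coeffs, iters=100):
    """All complex roots of the monic polynomial
    x^n + coeffs[0]*x^(n-1) + ... + coeffs[-1]."""
    def p(x):
        # Horner evaluation with leading coefficient 1.
        out = 1
        for c in coeffs:
            out = out * x + c
        return out

    n = len(coeffs)
    # Standard choice of distinct starting points.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        updated = []
        for i, r in enumerate(roots):
            denom = 1
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            updated.append(r - p(r) / denom)
        roots = updated
    return roots
```

For \(x^3 - 1\) this returns approximations to the three cube roots of unity, matching the count promised by the induction above.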
Pressure units converter
Calculator which converts units of pressure
Units used in the converter
Pascal (Pa) is defined as one newton per square meter. The pascal, as a replacement for the newton per square meter (N/m2), was introduced only in 1971 at the 14e Conférence Générale des Poids et Mesures (14th CGPM).

Another unit of pressure is the bar, or 100000 pascals. A millibar is 100 pascals, and it is widely used in meteorology. Thus, "standard" atmospheric pressure is 1013.25 millibars.
"Standard" atmospheric pressure, or atmosphere (atm), which is the average pressure of air at sea-level, was considered to be 760 millimeters of mercury, or 760 torrs (see below). However, in 1954,
the atmosphere's definition was revised by the 10e Conférence Générale des Poids et Mesures (10th CGPM) to the currently accepted definition: one atmosphere is equal to 101325 pascals. By the way,
"101325 pascals" is the average atmospheric pressure at sea-level at Paris latitude.
Technical atmosphere
Technical atmosphere (at) is the pressure of 1 kilogram-force per square centimeter. Kilogram-force is the force produced with 1 kilogram of mass in the gravity field with g equals to 9.80665 m/s2 or
9.80665 Newtons. 1 technical atmosphere is 0.96784 of a standard atmosphere.
A long time ago, in pre-metric times, there lived an Italian whose name was Evangelista Torricelli (1608-1647). He was the first who proved that air has pressure, as he famously wrote in a letter:
"We live submerged at the bottom of an ocean of air." He also invented the mercury barometer while performing his now-classic experiment with a tube approximately one meter long, sealed at the
top, filled with mercury, and set vertically into a basin of mercury.
The unit of pressure approximately equal to the pressure of one millimeter of mercury has been named the torr after him.
At the 10e Conférence Générale des Poids et Mesures (10th CGPM) the torr was redefined as 1⁄760 of one atmosphere. This was necessary in place of the torr's definition as one millimeter of mercury
because the height of mercury changes at different temperatures and gravities.
Pounds per square inch
Pounds per square inch (psi) follows the same idea as the technical atmosphere, a force per unit area, but uses imperial rather than metric units.
How to use the converter
Enter the value of the unit to be converted and use the table to find out the result. For example, to convert 100 torrs to pascals, enter 100 and lookup value at the crossing of the "Millimeter of
mercury (torr)" row and "Pascal" column.
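The same lookup can be written as a conversion through a common base unit. Here is a sketch using pascals as the base, with the factors taken from the definitions above (the psi factor assumes the standard pound-force of 4.4482216152605 N and 1 square inch = 0.00064516 m²):

```python
# Pascals per one unit of each pressure unit.
PA_PER_UNIT = {
    "Pa":   1.0,
    "bar":  100_000.0,
    "mbar": 100.0,
    "atm":  101_325.0,               # standard atmosphere (10th CGPM)
    "at":   98_066.5,                # technical atmosphere, 1 kgf/cm^2
    "torr": 101_325.0 / 760,         # 1/760 of an atmosphere
    "psi":  4.4482216152605 / 0.00064516,  # pound-force per square inch
}

def convert(value, src, dst):
    """Convert a pressure value from src units to dst units via pascals."""
    return value * PA_PER_UNIT[src] / PA_PER_UNIT[dst]
```

For the example in the text, convert(100, "torr", "Pa") gives approximately 13332.24 pascals.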
Mastering Random Choice with PyTorch: A Comprehensive Guide
Random choice with PyTorch involves selecting elements randomly from a dataset, which is crucial for various machine learning tasks. This technique helps in creating diverse training batches,
ensuring models generalize well. It’s widely used in data augmentation, bootstrapping, and stochastic processes, enhancing the robustness and performance of machine learning models.
Understanding Random Choice
In PyTorch, random choice is typically done using the torch.multinomial function. This function allows you to draw samples from a multinomial distribution, which can be used to randomly select
elements from a tensor based on specified probabilities.
Here’s a quick example:
import torch
a = torch.tensor([1, 2, 3, 4])
p = torch.tensor([0.1, 0.1, 0.1, 0.7])
n = 2
replace = True
idx = p.multinomial(num_samples=n, replacement=replace)
b = a[idx]
Differences from Other Methods in Python:
1. NumPy’s random.choice:
□ Function: numpy.random.choice
□ Capabilities: Can sample with or without replacement, supports probability weights.
□ Example:
import numpy as np
a = np.array([1, 2, 3, 4])
p = np.array([0.1, 0.1, 0.1, 0.7])
n = 2
replace = True
b = np.random.choice(a, p=p, size=n, replace=replace)
2. Python’s random.choices:
□ Function: random.choices
□ Capabilities: Can sample with replacement, supports probability weights.
□ Example:
import random
a = [1, 2, 3, 4]
p = [0.1, 0.1, 0.1, 0.7]
n = 2
b = random.choices(a, weights=p, k=n)
Key Differences:
• Library: PyTorch uses torch.multinomial, while NumPy uses numpy.random.choice and Python’s standard library uses random.choices.
• Replacement: PyTorch’s torch.multinomial and NumPy’s random.choice can sample with or without replacement, while random.choices only samples with replacement.
• Data Structure: PyTorch operates on tensors, NumPy on arrays, and Python’s random module on lists or other sequences.
These differences can influence the choice of method based on the specific requirements of your application.
Implementing Random Choice with PyTorch
Here’s a step-by-step guide to implement ‘random choice’ with PyTorch:
Step 1: Import Necessary Libraries
First, you need to import PyTorch and other necessary libraries.
import torch
import numpy as np
Step 2: Define the Function for Random Choice
You can create a function that mimics numpy.random.choice using PyTorch.
def torch_random_choice(input_tensor, num_samples, replace=True):
    """Select random samples from a tensor.

    Args:
        input_tensor (torch.Tensor): The input tensor to sample from.
        num_samples (int): Number of samples to draw.
        replace (bool): Whether the sampling is with or without replacement.

    Returns:
        torch.Tensor: Randomly selected samples.
    """
    if replace:
        # Sampling with replacement: independent uniform indices.
        indices = torch.randint(0, len(input_tensor), (num_samples,))
    else:
        # Sampling without replacement: a random permutation, truncated.
        indices = torch.randperm(len(input_tensor))[:num_samples]
    return input_tensor[indices]
Step 3: Create an Example Tensor
Create a tensor from which you want to randomly select elements.
input_tensor = torch.tensor([10, 20, 30, 40, 50])
Step 4: Use the Function to Select Random Samples
Now, use the function to select random samples from the tensor.
num_samples = 3
samples_with_replacement = torch_random_choice(input_tensor, num_samples, replace=True)
samples_without_replacement = torch_random_choice(input_tensor, num_samples, replace=False)
print("Samples with replacement:", samples_with_replacement)
print("Samples without replacement:", samples_without_replacement)
• Step 1: Import the necessary libraries.
• Step 2: Define a function torch_random_choice that takes an input tensor, the number of samples to draw, and a boolean indicating whether to sample with replacement.
□ If replace is True, it uses torch.randint to generate random indices with replacement.
□ If replace is False, it uses torch.randperm to generate a permutation of indices and selects the first num_samples indices.
• Step 3: Create an example tensor to demonstrate the function.
• Step 4: Use the function to draw random samples from the tensor and print the results.
Feel free to adjust the input_tensor and num_samples to fit your specific use case!
Use Cases
Here are some use cases for ‘random choice with PyTorch’:
1. Data Augmentation:
□ Random Transformations: Apply random transformations like rotations, flips, and color adjustments to images to increase the diversity of the training dataset.
□ MixUp and CutMix: Combine two images and their labels to create new training samples, improving model robustness.
2. Sampling:
□ Mini-batch Sampling: Randomly select a subset of data points for each training iteration to reduce computational load and improve training efficiency.
□ Weighted Sampling: Use probabilities to sample data points based on their importance or frequency, ensuring a balanced representation.
3. Model Training:
□ Dropout: Randomly deactivate neurons during training to prevent overfitting and improve generalization.
□ Ensemble Methods: Train multiple models with different random subsets of data to improve overall performance and robustness.
4. Reinforcement Learning:
□ Experience Replay: Randomly sample past experiences to break correlation and improve learning stability.
□ Action Selection: Use random choice to select actions based on a probability distribution, balancing exploration and exploitation.
These are just a few examples of how ‘random choice’ can be effectively utilized in PyTorch for various machine learning tasks.
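To make the weighted sampling bullet concrete: drawing k distinct items with probability proportional to weights can be done with the Gumbel top-k trick, which perturbs log weights with Gumbel noise and keeps the k largest keys. This sketch uses only the standard library; in PyTorch the same idea is torch.log(p) plus Gumbel noise followed by torch.topk, and torch.multinomial with replacement=False gives the same distribution directly:

```python
import math
import random

def weighted_sample_no_replacement(items, weights, k):
    """Draw k distinct items with probability proportional to weights,
    via the Gumbel top-k trick: key_i = log(w_i) + Gumbel noise, then
    keep the items with the k largest keys."""
    keys = [math.log(w) - math.log(-math.log(random.random()))
            for w in weights]
    top = sorted(range(len(items)), key=keys.__getitem__, reverse=True)[:k]
    return [items[i] for i in top]
```

Repeated draws confirm that heavily weighted items appear in the sample far more often, while no single draw ever contains duplicates.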
Advantages and Limitations
Advantages of Using random.choice with PyTorch
1. Flexibility: PyTorch’s dynamic computational graph allows for flexible and efficient model building and modification, which can be advantageous when using random.choice for stochastic processes
in neural networks.
2. Integration: Seamless integration with PyTorch’s tensor operations and GPU acceleration, making it efficient for large-scale data processing and model training.
3. Customizability: Allows for custom sampling strategies, which can be tailored to specific needs in data augmentation, reinforcement learning, or probabilistic modeling.
Limitations of Using random.choice with PyTorch
1. Limited Probability Support: PyTorch has no direct random.choice equivalent; weighted sampling must be routed through torch.multinomial, which requires constructing an explicit probability tensor rather than passing a probabilities argument as numpy.random.choice does.
2. Performance Overhead: May introduce performance overhead when used extensively in large-scale models, especially if not optimized properly.
3. Complexity: Requires additional code to handle scenarios that involve weighted probabilities or more complex sampling strategies, which can increase the complexity of the implementation.
Mastering ‘Random Choice’ with PyTorch: A Crucial Skill for Machine Learning
Mastering ‘random choice with PyTorch’ is crucial for effective machine learning practices, as it enables data augmentation, sampling, model training, and reinforcement learning techniques that
improve model robustness, generalization, and performance. By leveraging PyTorch’s dynamic computational graph, flexible tensor operations, and GPU acceleration, developers can efficiently implement
stochastic processes in neural networks.
The Power of ‘Random Choice’ Function
The ‘random choice’ function is a powerful tool for various machine learning tasks, including data augmentation, sampling, and reinforcement learning. It allows for customizability, flexibility, and
integration with PyTorch’s tensor operations and GPU acceleration. However, it also has limitations, such as limited probability support, potential performance overhead, and increased complexity when
handling weighted probabilities or complex sampling strategies.
Best Practices for Using ‘Random Choice’ with PyTorch
• Using the function for data augmentation, sampling, model training, and reinforcement learning techniques
• Leveraging PyTorch’s dynamic computational graph, flexible tensor operations, and GPU acceleration
• Customizing the function for weighted probabilities or complex sampling strategies
• Optimizing its usage to avoid performance overhead and complexity
By following these best practices and mastering ‘random choice with PyTorch’, developers can unlock the full potential of this powerful tool and create more effective machine learning models.
Calculate speed in Python | Dremendo
input() - Question 20
In this question, we will see how to input a distance in kilometers and a time in minutes in Python programming using the input() function, and find the speed. To know more about the input() function, click on the input() function lesson.
Q20) Write a program in Python to input distance in kilometer, time in minutes and find the speed.
Formula: Speed = Distance / Time
d=int(input('Enter distance in km '))
t=int(input('Enter time in minutes '))
s=d/t
print('Speed = {:.2f} km per minute'.format(s))
Enter distance in km 70
Enter time in minutes 50
Speed = 1.40 km per minute
The Stacks project
Example 26.9.3. Let $k$ be a field. An example of a scheme which is not affine is given by the open subspace $U = \mathop{\mathrm{Spec}}(k[x, y]) \setminus \{ (x, y)\} $ of the affine scheme $X =\
mathop{\mathrm{Spec}}(k[x, y])$. It is covered by two affines, namely $D(x) = \mathop{\mathrm{Spec}}(k[x, y, 1/x])$ and $D(y) = \mathop{\mathrm{Spec}}(k[x, y, 1/y])$ whose intersection is $D(xy) = \
mathop{\mathrm{Spec}}(k[x, y, 1/xy])$. By the sheaf property for $\mathcal{O}_ U$ there is an exact sequence
\[ 0 \to \Gamma (U, \mathcal{O}_ U) \to k[x, y, 1/x] \times k[x, y, 1/y] \to k[x, y, 1/xy] \]
We conclude that the map $k[x, y] \to \Gamma (U, \mathcal{O}_ U)$ (coming from the morphism $U \to X$) is an isomorphism: a pair of elements of $k[x, y, 1/x]$ and $k[x, y, 1/y]$ agreeing in $k[x, y, 1/xy]$ defines a rational function with poles along neither $x = 0$ nor $y = 0$, hence a polynomial. Therefore $U$ cannot be affine, since if it were then by Lemma 26.6.5 we would have $U \cong X$.
Path Puzzle
Use all of the cards to create a continuous red line from start to finish. Keep dragging cards onto the yellow square until all of the cards are on the grid.
Mathematicians are not the people who find Maths easy; they are the people who enjoy how mystifying, puzzling and hard it is.
Are you a mathematician?
Featured Activity: Nine Digits
Arrange the given digits one to nine to make three numbers such that two of them add up to the third. This is a great puzzle for practicing standard pen and paper methods of three digit number addition and subtraction.
Numeracy
"Numeracy is a proficiency which is developed mainly in Mathematics but also in other subjects. It is more than an ability to do basic arithmetic. It involves developing confidence and competence with numbers and measures. It requires understanding of the number system, a repertoire of mathematical techniques, and an inclination and ability to solve quantitative or spatial problems in a range of contexts. Numeracy also demands understanding of the ways in which data are gathered by counting and measuring, and presented in graphs, diagrams, charts and tables."
Secondary National Strategy, Mathematics at key stage 3
Go Maths
Learning and understanding Mathematics, at every level, requires learner engagement. Mathematics is not a spectator sport. Sometimes traditional teaching fails to actively involve students. One way
to address the problem is through the use of interactive activities and this web site provides many of those. The Go Maths main page links to more activities designed for students in upper Secondary/
High school.
If you found this activity useful don't forget to record it in your scheme of work or learning management system. The short URL, ready to be copied and pasted, is as follows:
Alternatively, if you use Google Classroom, all you have to do is click on the green icon below in order to add this activity to one of your classes.
It may be worth remembering that if Transum.org should go offline for whatever reason, there is a mirror site at Transum.info that contains most of the resources that are available here on Transum.org.
When planning to use technology in your lesson always have a plan B!
Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your comments.
Development of a conceptual-level thermal management system
design capability in OpenConcept
Benjamin J. Brelje∗†, John P. Jasa‡, and Joaquim R. R. A. Martins§
University of Michigan, Ann Arbor, Michigan
Justin S. Gray
NASA Glenn Research Center, Cleveland, OH
Multiple studies have shown, somewhat unexpectedly, that thermal management constraints will be a key consideration
for hybrid- and all-electric aircraft designers. While airplane sizing and multidisciplinary analysis and design optimization
tools with support for electrification have been developed, most of these tools do not support thermal analysis and
almost all are closed-source or otherwise unavailable to the broader research community. In 2018, we introduced the
OpenConcept library, a toolkit for conceptual-level design optimization of aircraft with unconventional propulsion
built using the OpenMDAO framework. However, at that time, we had not yet implemented an efficient, physics-based
thermal analysis capability or the associated numerical methods to solve the problem. In this paper, we introduce the
thermal analysis extensions to OpenConcept. We provide implementation details for an open-source, physics-based
thermal management system (TMS) analysis and design capability in OpenConcept. We develop governing equations
for component-based, air-cooled and liquid-cooled thermal management systems. We detail the implementation of
thermal mass, heat sink, heat exchanger, incompressible duct, coolant loop, and refrigeration components with analytic
derivatives in OpenMDAO. We also describe a method for computing time-dependent electrical component temperatures
throughout a mission profile using OpenMDAO’s Newton solver. To illustrate thermal effects, we consider a tradespace
study on a Beech King Air with a series hybrid electric propulsion system. We optimize the aircraft for minimum
fuel burn with and without thermal constraints and TMS penalties. The optimizer sizes the TMS components to keep
component temperatures within limits while minimizing the associated fuel burn penalty. We demonstrate reasonable
robustness of the thermal model across a broad range of aircraft designs and compare the optimal designs with and
without thermal constraints and TMS penalties.
I. Introduction
Electric aircraft propulsion has emerged as a widely popular topic in the aerospace research community. The initial
design studies quickly identified shortcomings in existing aircraft analysis techniques and tools. Several analysis codes
with similar levels of fidelity for integrating energy used over a mission have been announced [ ], but only one has been open-sourced or made publicly available [ ]. There is also significant duplication of effort in the research
community, particularly within the area of electrical system modeling and mission analysis. Despite multiple industry
and government studies demonstrating the need to include thermal constraints in analysis and optimization at the
conceptual level [ ], no publicly-available electric propulsion mission analysis and sizing code supports thermal analysis.
We have recently introduced OpenConcept (openconcept.readthedocs.io), a new, open-source, conceptual design
and optimization toolkit for aircraft with electric propulsion [
]. OpenConcept consists of three parts: a library of
simple conceptual-level models of common electric propulsion components; a set of analysis routines necessary for
aircraft sizing and optimization; and several example aircraft models. All of OpenConcept’s codes compute derivatives
efficiently and accurately, enabling the use of OpenMDAO 2's [ ] Newton solver, as well as gradient-based optimization.
In prior work, we performed a case study involving the electrification of existing turboprop airplanes [ ]. We defined
a series-hybrid electric propulsion architecture for the Beechcraft King Air and solved more than 750 multidisciplinary
∗PhD Candidate, Department of Aerospace Engineering
†The author is also an employee of The Boeing Company; this article is written in a personal capacity.
‡PhD Candidate, Department of Aerospace Engineering
§Professor, Department of Aerospace Engineering
Fig. 1 Minimum fuel burn MDO results from our previous work [11]. (Contours of fuel mileage (lb/nmi) and degree of hybridization (electric percent) plotted against design range (nmi) and specific energy (Whr/kg).)
design optimization (MDO) problems for different combinations of range and specific energy (Fig. 1), demonstrating that
OpenConcept is a flexible and efficient way of doing trade space exploration for unconventional propulsion architectures.
In our previous version of OpenConcept, conceptual-level models of heat exchangers, heat sinks, coolant loops, heat
pumps, and associated flow paths had not yet been developed, and thermal constraints were not imposed for the results shown in Fig. 1 [11].
In the broader literature, a few attempts at physics-based thermal management system (TMS) modeling of electric
aircraft have been made [ ], but none of the codes have been publicly released or open-sourced. The primary
purpose of this paper is to describe our thermal modeling approach and the implementation of the thermal components.
We then analyze the effect of thermal constraints by repeating the King Air tradespace study with TMS design variables.
II. Thermal Management System Modeling
The thermal management system of an electric (or hybrid-electric) aircraft removes waste heat from the electronic
components. Unlike conventional turbine-powered aircraft, electric aircraft have two features that significantly increase
the magnitude of the thermal management challenge. First, while turbine engines have lower efficiency, they exhaust
their waste heat to the free stream and away from the aircraft. In contrast, Ohmic resistance and eddy current losses in
electrical components generate heat within the components themselves and require designers to provide a way to carry
away the heat. Second, electrical components must be kept at fairly low temperatures to operate properly, which means
their waste heat is “low-quality” and much more difficult to reject from the components.
There are two general design approaches to aircraft thermal management systems: direct air cooling and liquid
cooling. The air-cooled approach uses carefully-designed heat sinks to enhance convection from each electrical
component to freestream air. The X-57 Maxwell demonstrator uses this approach [ ]. An advantage of this
approach is system simplicity and reliability. A major disadvantage is that each electrical component requires direct
access to an air flow path, increasing configuration complexity and potentially increasing drag as well. The liquid-cooled
approach uses coolant loops to transfer heat from the electrical components throughout the aircraft to a heat exchanger
that can reject the heat to the air [ ]. This approach likely reduces the number of cooling air ducts. It also provides
the option to use a refrigeration cycle or a fan to improve heat rejection at low airspeed. However, the liquid cooling
architecture is a more complex system design (with more failure modes and moving parts). Some aircraft may use a
combination of liquid cooling and direct air cooling. A notional liquid-cooled TMS architecture is illustrated in Fig. 2.
The following subsections detail the physics and numerical methods governing each component of the TMS as modeled
in OpenConcept.
Fig. 2 Example of liquid-cooled thermal management system architecture (turboshaft, motor, heat exchanger, coolant loop, and air cycle machine).
A. Component Temperatures
All existing OpenConcept electrical components have an output variable that computes the heat generation rate of the component at the current operating point. The components produce heat as a fraction of operating power via an assumed efficiency loss, though higher-fidelity heating models could be used in the future. If the user wishes to track thermal constraints on a component, they must add an instance of a thermal component to the model and properly connect it to the electrical component's heat output. The user can either solve for quasi-steady temperatures at each analysis point, or time-accurate temperatures.
The quasi-steady formulation relies on OpenMDAO’s Newton solver to compute component temperatures that
satisfy conservation of energy. The implicit problem, implemented in ThermalComponentMassless, is:
compute $T_\mathrm{comp}$    (1)

such that $R(T_\mathrm{comp}) = q_\mathrm{comp} - q_\mathrm{out} = 0$    (2)

where $T_\mathrm{comp}$ is the component temperature, $q_\mathrm{comp}$ is the heat generation rate of the electrical component, and $q_\mathrm{out}$ is the instantaneous heat rejection rate due to cooling. The heat rejection rate is computed as a function of the component temperature $T_\mathrm{comp}$ and a number of other heat transfer parameters (introduced in Section II.B):

$q_\mathrm{out} = q(T_\mathrm{comp}, \ldots)$    (3)
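The quasi-steady balance of Eqs. (1)-(3) can be sketched with a scalar Newton iteration. The linear cooling model q_out = UA (T_comp - T_in) below is an illustrative stand-in for Eq. (3), not the OpenConcept implementation (which solves this residual inside OpenMDAO's Newton solver):

```python
# Illustrative sketch: solve the quasi-steady residual
# R(T_comp) = q_comp - q_out(T_comp) = 0 (Eq. (2)) with Newton's method,
# assuming a simple linear convective cooling model q_out = UA*(T_comp - T_in).

def solve_quasi_steady(q_comp, UA, T_in, T_guess=300.0, tol=1e-10):
    """Return the component temperature that balances heat in and heat out."""
    T = T_guess
    for _ in range(50):
        R = q_comp - UA * (T - T_in)   # residual, Eq. (2)
        dR_dT = -UA                    # analytic derivative of the residual
        T -= R / dR_dT                 # Newton update
        if abs(R) < tol:
            break
    return T

# 5 kW of waste heat, UA = 250 W/K, 300 K coolant inlet:
T_comp = solve_quasi_steady(q_comp=5000.0, UA=250.0, T_in=300.0)
print(T_comp)  # 320.0 K, i.e. T_in + q_comp / UA
```

Because the assumed cooling model is linear in temperature, Newton converges in a single step; the real residual in OpenConcept is nonlinear in general.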
The quasi-steady formulation becomes less accurate as the thermal mass increases. Even lightweight aerospace-grade
electrical components have a significant thermal mass, and at low-speed conditions (such as the beginning of the takeoff
roll), neglecting thermal mass is likely to result in unrealistic high temperatures and drive oversized TMS designs.
Therefore, we recommend using a time-accurate model, which can be expressed as
$\frac{dT_\mathrm{comp}}{dt} = \frac{q_\mathrm{comp} - q(T_\mathrm{comp}, \ldots)}{m_\mathrm{comp}\, c_p}$    (4)

$T_\mathrm{comp} = \int_{t_0}^{t_f} \frac{dT_\mathrm{comp}}{dt}\, dt$    (5)

where $q_\mathrm{comp}$ is the heat generated by the electrical component, $m_\mathrm{comp}$ is the component mass, and $c_p$ is its specific heat capacity. Equations (4) and (5) depend on each other and hence form an implicit cycle that can be solved in OpenMDAO using its built-in Newton solver.
The rate $dT_\mathrm{comp}/dt$ is computed by the thermal component. A numerical scheme is required to compute the time integral in Equation (5). Our integrator component provides the user a choice of a fourth-order accurate Simpson's Rule discretization (as previously described [ ]), or the BDF3 discretization, which sacrifices some
accuracy for better stability on stiff systems. Both of these integration methods are solved implicitly in vectorized form
all-at-once using the Newton solver (without time marching). This means that the time integration and the implicit ODE
are solved simultaneously as one coupled nonlinear system. The user must specify an initial component temperature,
usually based on ambient conditions. Unlike the quasi-steady problem, the accuracy of the temperature profile depends
on the time step chosen. A smaller time step increases the size of the OpenMDAO implicit problem that needs to be
solved and increases the computation time.
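As a concrete (if simplified) illustration of the time-accurate model in Eqs. (4)-(5), the sketch below integrates the temperature ODE with backward Euler, a lower-order implicit stand-in for the BDF3 and Simpson's Rule schemes described above. For clarity it marches in time, whereas OpenConcept solves all time points simultaneously; the cooling model q_out = UA*(T - T_in) and all numbers are assumptions for demonstration.

```python
# Time-accurate component temperature per Eqs. (4)-(5), integrated with
# backward Euler. Cooling is modeled as q_out = UA*(T - T_in).

def integrate_temperature(T0, q_comp, UA, T_in, m, cp, dt, n_steps):
    """March dT/dt = (q_comp - UA*(T - T_in)) / (m*cp) with backward Euler.

    Backward Euler is implicit, but for this linear cooling model each step
    can be solved for T_new in closed form:
      T_new = T + dt*(q_comp - UA*(T_new - T_in))/(m*cp)
    """
    T = T0
    history = [T]
    for _ in range(n_steps):
        T = (T + dt * (q_comp + UA * T_in) / (m * cp)) / (1.0 + dt * UA / (m * cp))
        history.append(T)
    return history

hist = integrate_temperature(T0=300.0, q_comp=5000.0, UA=250.0, T_in=300.0,
                             m=40.0, cp=900.0, dt=5.0, n_steps=400)
# The transient approaches the quasi-steady value T_in + q_comp/UA = 320 K,
# with time constant m*cp/UA = 144 s governing how fast it gets there.
print(hist[-1])
```

This also illustrates why thermal mass matters during takeoff: for the first time constant or so, the component stays well below its quasi-steady temperature.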
B. Component-Fluid Heat Transfer
So far, we have not addressed the question of how to compute $q_\mathrm{out}$ for each component, which represents the convective heat transfer rate from the component to a fluid stream. For a liquid-cooled component, the fluid stream is a coolant like propylene glycol, whereas for an air-cooled component the fluid stream comes from freestream air. In nearly every case, designers use enhanced heat transfer surfaces, such as microchannels or finned heat sinks. OpenConcept's cold plate component implements a microchannel cold plate and is a reasonable choice for liquid-cooled and air-cooled applications. We assume that the thermal conductivity of the
electrical component is large relative to the cooling fluid resulting in a constant channel surface temperature in the
streamwise direction. We further assume that the aspect ratio of each channel is large and thus approximates the local
heat transfer properties using the theoretical result for infinite parallel plates. The convective heat transfer coefficient
can be computed as

$h = \frac{\mathrm{Nu}\, k}{D_h}$    (6)

where Nu is the Nusselt number (which is set to 7.54 by default for constant-temperature infinite parallel plates [ ]), $k$ is the thermal conductivity of the fluid, and $D_h$ is the hydraulic diameter of the channel. For a high aspect ratio channel,

$D_h = \frac{2 W_c H_c}{W_c + H_c} \approx 2 H_c$    (7)

where $W_c$ is the fluid channel width and $H_c$ is the fluid channel height. We neglect entrance effects for this high aspect
ratio microchannel. For air cooled applications using finned heat sinks, the user may wish to modify the heat transfer
coefficient to account for fin efficiency. To compute the overall heat transfer, we first need to compute the heat transfer
surface area as

$A = 2 W_c L_c N_\mathrm{parallel}$    (8)

where $A$ is the overall heat transfer surface area, $L_c$ is the length of the microchannel in the fluid flow direction, and $N_\mathrm{parallel}$ is the total number of individual microchannels.
Given these convective properties, we compute the actual heat transfer using the NTU-effectiveness method [ ], which is typically used for fluid-fluid heat exchangers where both fluids change temperature during the exchange. In this work, we assume that the heat transfer capability of the conductive component body is "infinite" for the purposes of the NTU-effectiveness method. Therefore, the heat transfer capacity of the cold plate is governed solely by the coolant material properties and flow rate. The heat transfer capacity is computed as

$C_\mathrm{min} = \dot{m}_\mathrm{coolant}\, c_{p,\mathrm{coolant}}$    (9)

where $\dot{m}_\mathrm{coolant}$ is the coolant mass flow rate through the entire cold plate (not just a single channel) and $c_{p,\mathrm{coolant}}$ is the coolant's specific heat capacity. The number of thermal units (NTU) is computed as

$\mathrm{NTU} = \frac{A h}{C_\mathrm{min}}$    (10)

and the heat transfer effectiveness is

$\varepsilon = 1 - e^{-\mathrm{NTU}}$    (11)

Finally, we can compute the heat transfer as

$q_\mathrm{out} = \varepsilon\, C_\mathrm{min} (T_\mathrm{comp} - T_\mathrm{coolant,in})$    (12)
Fig. 3 Cross-sectional geometry of the offset strip fin heat exchanger [21]
and the coolant outlet temperature as

$T_\mathrm{coolant,out} = T_\mathrm{coolant,in} + \frac{q_\mathrm{out}}{\dot{m}_\mathrm{coolant}\, c_{p,\mathrm{coolant}}}$    (13)

The user is responsible for setting reasonable values for channel geometry ($W_c$, $H_c$, $L_c$, $N_\mathrm{parallel}$) so that the channel flow is laminar and the infinite parallel plate assumption remains reasonable, and for ensuring that the component has
sufficient material volume to accommodate the cooling channels. This analysis also assumes that the cooling channel
weight is accounted for in the all-up weight of the component, which may not be the case for air-cooled external heat
sinks. In practice, we have found that the thermal resistance of the component-liquid heat transfer is much smaller than
for the liquid-air main heat exchanger, and that aircraft design problems are not that sensitive to cold plate channel
design parameters. However, the detailed design of internal cooling channels in electrical components is a challenging
problem in its own right.
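The cold plate chain of Eqs. (6)-(13) can be exercised end to end with a short worked example. All geometry and fluid-property numbers below are illustrative assumptions, not values from the paper:

```python
import math

# Worked example of the cold-plate model. Numbers are made up for illustration.
Nu = 7.54               # constant-temperature infinite parallel plates
k = 0.40                # coolant thermal conductivity, W/(m K) (assumed)
W_c, H_c = 10e-3, 1e-3  # channel width and height, m
L_c = 0.2               # channel length in the flow direction, m
n_parallel = 20         # number of parallel microchannels
mdot = 0.1              # coolant mass flow rate, kg/s
cp = 3500.0             # coolant specific heat, J/(kg K) (assumed)

D_h = 2 * W_c * H_c / (W_c + H_c)     # hydraulic diameter
h = Nu * k / D_h                      # convective coefficient, Eq. (6)
A = 2 * W_c * L_c * n_parallel        # heat transfer area (two wide faces
                                      # per channel; an assumption)
C_min = mdot * cp                     # heat capacity rate, Eq. (9)
NTU = A * h / C_min                   # Eq. (10)
eff = 1.0 - math.exp(-NTU)            # effectiveness, Eq. (11)

T_comp, T_in = 350.0, 300.0
q_out = eff * C_min * (T_comp - T_in)  # heat rejected, Eq. (12)
T_out = T_in + q_out / C_min           # coolant outlet temperature, Eq. (13)
print(h, NTU, eff, q_out, T_out)
```

Note that the effectiveness is bounded by 1, so the coolant outlet temperature can never exceed the component temperature, which is a useful sanity check on any implementation.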
C. Fluid-fluid Heat Transfer
After heat from electrical components is transferred into the liquid coolant loop via the cold plate, the heat must be
rejected to the atmosphere. A reasonable choice for accomplishing this is a ducted compact heat exchanger. Like the
cold plate component above, we use the NTU-effectiveness method to compute the heat transfer rate,
q=U Aoverall
NTU Tin,h −Tin,c,(14)
U Aoverall
is the overall heat transfer coefficient times the corresponding heat transfer area,
are the fluid
inlet temperatures, the number of thermal units is computed as
NTU =U Aoverall
and the heat transfer effectiveness is
Cmax ,(16)
are the maximum and minimum values of the fluid heat transfer capacity
m cp
for the hot and cold
sides, and
is an analytical or empirical function that depends on the flow arrangement of the heat exchanger (for
example, crossflow) [20].
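One common choice for the function $f$ in Eq. (16) is the approximate textbook effectiveness relation for a crossflow heat exchanger with both fluids unmixed. This is a generic standard relation for illustration, not necessarily the exact correlation used in OpenConcept:

```python
import math

def crossflow_effectiveness(ntu, c_r):
    """Effectiveness of a crossflow HX, both fluids unmixed, for 0 < c_r <= 1.

    eps = 1 - exp{ (1/c_r) * NTU^0.22 * [exp(-c_r * NTU^0.78) - 1] }
    """
    return 1.0 - math.exp((1.0 / c_r) * ntu**0.22
                          * (math.exp(-c_r * ntu**0.78) - 1.0))

# Effectiveness grows monotonically with NTU and is bounded by 1:
print(crossflow_effectiveness(0.5, 0.8))
print(crossflow_effectiveness(3.0, 0.8))
```

Because adding heat exchanger area raises NTU but with diminishing returns in effectiveness, the optimizer faces a classic weight/drag-versus-heat-rejection trade.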
For this study, we use crossflow plate-fin heat exchangers with offset strip fin geometry as described by Jasa et al. [ ]. Offset strip fin heat exchangers are considered "compact" heat exchangers with high ratios of heat transfer surface area to volume [ ]. The geometric design of a heat exchanger varies to satisfy heat transfer, pressure loss, weight, and volume requirements. Figure 3 illustrates a cross section of offset strip fin channels along with a commonly-used geometric parameterization.

We use an empirical relation from Manglik and Bergles [ ] to compute heat transfer and pressure loss specific to the offset strip fin configuration. By default, OpenConcept's heat exchanger component uses geometric parameters representative of an air-liquid heat exchanger, with a cold-side channel width and height of 1 mm, and a hot-side channel width of 14 mm by a height of 1.35 mm.
D. Fluid Reservoir
Any liquid cooling system needs a reservoir. The thermal mass of the fluid in the reservoir may significantly affect
peak temperatures. We assume perfect mixing within the reservoir (that is, fluid entering the reservoir is instantaneously
mixed with the existing fluid). The rate of change of temperature within the reservoir can be computed using
$\frac{dT_\mathrm{reservoir}}{dt} = \frac{\dot{m}}{m_\mathrm{reservoir}} (T_\mathrm{in} - T_\mathrm{reservoir})$    (17)

where $T_\mathrm{reservoir}$ is the reservoir (and reservoir outlet) temperature, $\dot{m}$ is the coolant mass flow rate, $m_\mathrm{reservoir}$ is the mass of coolant in the reservoir, and $T_\mathrm{in}$ is the reservoir inflow temperature.
The reservoir group combines the rate equation (17) with an integrator to solve for reservoir temperatures at every time point, given an initial temperature. Quasi-steady thermal analysis cannot model the effect of a fluid reservoir, which is purely a thermal mass effect. When $\dot{m}/m_\mathrm{reservoir}$ becomes large due to a small coolant mass relative to the mass flow rate, the time constant $m_\mathrm{reservoir}/\dot{m}$ associated with the reservoir temperature becomes small. As the time constant tends to zero, we approach the quasi-steady solution. A small time constant makes the thermal ODE very stiff and introduces numerical difficulties in the overall time integration problem.
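The reservoir behavior of Eq. (17) is a first-order lag toward the inflow temperature; a minimal forward-Euler simulation (with made-up numbers) makes the time constant visible:

```python
# Sketch of the perfectly-mixed reservoir model, Eq. (17):
#   dT_reservoir/dt = (mdot / m_reservoir) * (T_in - T_reservoir)
# The analytic solution relaxes exponentially toward T_in with time
# constant m_reservoir / mdot (20 s for the numbers below).

def simulate_reservoir(T0, T_in, mdot, m_res, dt, n_steps):
    T = T0
    for _ in range(n_steps):
        T += dt * (mdot / m_res) * (T_in - T)  # forward Euler step
    return T

# 200 s of simulation is ten time constants, so the reservoir temperature
# has essentially relaxed to the inflow temperature:
T_end = simulate_reservoir(T0=300.0, T_in=320.0, mdot=0.5, m_res=10.0,
                           dt=0.01, n_steps=20000)
print(T_end)
```

Shrinking `m_res` (or raising `mdot`) shrinks the time constant, which is exactly the stiffness issue the paragraph above warns about.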
E. Refrigeration Cycle
A refrigeration cycle can be used to increase the temperature of "low-quality" waste heat to reject it to the atmosphere
with a smaller heat exchanger. This process works similarly to a common household refrigerator, where relatively
low-temperature waste heat is raised to a higher temperature so it can be dissipated to the ambient surroundings. For
aircraft applications, this refrigeration cycle is often an air-cycle machine (ACM), in which air is used as the working fluid.
The model used in this study was previously described by Jasa et al. [ ], and a schematic of the work and heat flow for this simplified cycle is shown in Fig. 4. The ACM is modeled as a closed-loop Brayton cycle, where the working fluid flows through a low-temperature heat exchanger and accepts input heat. Work is then done on the fluid in the compressor, which increases the temperature and pressure of the fluid. This heated fluid then flows through a high-temperature heat exchanger, where the "high-quality" waste heat is rejected. The fluid then goes through a turbine and expands, returning to a low temperature and pressure before returning to the low-temperature heat exchanger.
We model the ACM using a system of equations adapted from Moran et al. [ ] to capture the relevant physics without adding unnecessary complexity to the model. From the lifting system equations, we get the following expression for the heat load that must be dissipated using the duct heat exchanger:

$Q_h = \dot{W}_\mathrm{adj} \frac{T_h}{T_h - T_c}$    (18)

where $\dot{W}_\mathrm{adj} = \eta_\mathrm{shaft}\, \eta_\mathrm{fric}\, \dot{W}_\mathrm{shaft}$ is the efficiency-adjusted work, $T_c$ and $T_h$ are the temperatures of the cooling fluid at the electronics and duct heat exchangers, respectively, $\dot{W}_\mathrm{shaft}$ is the work coming from the shaft, $\eta_\mathrm{shaft}$ is the shaft power transfer efficiency, and $\eta_\mathrm{fric}$ is the friction loss efficiency. We can then solve for the cold-side heat load, $Q_c$, and get

$Q_c = Q_h - \dot{W}_\mathrm{adj} = \dot{W}_\mathrm{adj} \frac{T_c}{T_h - T_c}$    (19)

Using these equations, we can determine the amount of heat transfer on both the hot and cold sides of the lifting system based on the work that is put in. This system is implemented in OpenConcept as the LiftingSystemComponent.
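The hot-side/cold-side bookkeeping can be sketched using the reversible (Carnot) limit, where the coefficient of performance is COP = T_c/(T_h - T_c). This is an assumed upper-bound relation for illustration of the energy balance Q_h = Q_c + W_adj; the paper's actual ACM model (adapted from Moran et al.) may use different relations:

```python
# Carnot-limit sketch of refrigeration-cycle bookkeeping (an assumption, not
# the paper's exact model). Illustrates that the hot-side load always exceeds
# the cold-side load by the efficiency-adjusted work input.

def refrigeration_loads(W_shaft, eta_shaft, eta_fric, T_c, T_h):
    """Return (Q_c, Q_h) in watts for a given shaft work input."""
    W_adj = eta_shaft * eta_fric * W_shaft  # efficiency-adjusted work
    cop = T_c / (T_h - T_c)                 # reversible coefficient of performance
    Q_c = cop * W_adj                       # heat lifted from the cold side
    Q_h = Q_c + W_adj                       # heat rejected on the hot side
    return Q_c, Q_h

Q_c, Q_h = refrigeration_loads(W_shaft=10e3, eta_shaft=0.95, eta_fric=0.95,
                               T_c=320.0, T_h=360.0)
print(Q_c, Q_h)
```

The small temperature lift (40 K here) gives a large COP, so a modest work input moves a lot of heat, but every watt of shaft work still shows up as extra heat the duct heat exchanger must reject.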
F. Coolant Duct
Ducted radiators greatly reduce cooling drag compared to finned heat sinks in the freestream [ ]. There are
two primary mechanisms for this. First, a duct that decelerates flow prior to encountering the heat exchanger element
generally undergoes a lower total pressure loss. Second, the combination of duct and heat exchanger can act as a weak
ramjet providing a further modest offset to the drag of the whole arrangement. For aircraft with high-temperature
cooling loads flying at relatively high speeds, a large portion of the drag can be offset or potentially even produce some
positive thrust. The most famous application of this weak ramjet concept (known as the Meredith effect) is the North
American P-51 Mustang’s liquid engine cooling system [24].
The user has two options for computing cooling drag due to ducted heat exchangers. The first option is an
incompressible approximation. Adapting the method of Theodorsen [ ], we model a duct with a frontal opening, diffuser, heat exchanger, and nozzle (Fig. 5). The fluid density everywhere in the duct is assumed to be the freestream density $\rho$. Let $A_\mathrm{hex}$ be the free flow passage area of the heat exchanger, and $A_e$ be the exit nozzle area. Let $\Delta p_{0,\mathrm{hex}} = f(\dot{m})$ be the pressure loss across the heat exchanger as a function of the duct mass flow rate $\dot{m}$. Let $\Delta p_{0,e}$ be a static pressure loss proportional to nozzle dynamic pressure, that is, $\Delta p_{0,e} = \xi_e \tfrac{1}{2}\rho U_e^2$. We assume that the nozzle expands the flow back to the freestream static pressure $p_\infty$, though this assumption would not hold if a variable-area exit door or cowl flap were used. The total pressure at the exit is then computed as:

$p_{0,e} = p_{0,\infty} - \Delta p_{0,\mathrm{hex}} - \Delta p_{0,e} = p_\infty + \tfrac{1}{2}\rho U_\infty^2 - \Delta p_{0,\mathrm{hex}} - \Delta p_{0,e} = p_e + \tfrac{1}{2}\rho U_e^2$    (20)

Substituting $\Delta p_{0,e} = \xi_e \tfrac{1}{2}\rho U_e^2$ and rearranging, we obtain:

$U_e = \sqrt{\frac{U_\infty^2 - 2\,\Delta p_{0,\mathrm{hex}}/\rho}{1 + \xi_e}}$    (21)

By continuity:

$\dot{m} = \rho A_e U_e$    (22)

We compute the net force by balancing the change in fluid momentum and pressure forces. To account for inlet, duct, and nozzle losses not otherwise accounted for, we apply a factor ($C_{fg} = 0.98$) to gross thrust in the drag computation and obtain:

$F_\mathrm{net} = \dot{m}(U_e C_{fg} - U_\infty) + A_e C_{fg}(p_e - p_\infty)$    (23)
The incompressible duct computation is implemented as a dedicated component in OpenConcept. Alternatively, a separate package contains a group that
uses a more sophisticated 1D thermodynamic cycle modeling approach to compute drag. Isentropic relations are used to
solve for Mach numbers and flow properties implicitly using OpenMDAO’s Newton solver. The compressible model
captures Mach number and heat addition effects on net cooling drag. However, the additional fidelity is usually not
meaningful for low-speed general aviation airplanes with moderate cooling heat loads, and the compressible relations
introduce many implicit states and some robustness issues to the overall MDO problem.
III. Case Study: Revisiting the Series Hybrid Twin
To exercise the TMS model and assess the impact of thermal constraints on the design space, we revisit our previous
MDO trade space exploration study of a series hybrid twin turboprop [ ]. Our baseline aircraft is a Beechcraft King
Air C90GT with a drop-in replacement series-hybrid propulsion system replacing the turboprop engines.
The series-hybrid electric propulsion architecture is illustrated in Fig. 6. To enable the aircraft to continue safe
flight and landing after loss of any single component on takeoff, the propulsion system uses two electric motors, two
propellers, and a battery large enough to provide full takeoff power in the event of engine loss. These features should
provide the same level of redundancy of the conventional twin turboprop configuration. Specific power, efficiency, and
cost assumptions for individual powertrain components are listed in Table 1.
Table 1 Powertrain technology assumptions [11]

Component    Specific Power (kW/kg)   Efficiency   PSFC (lb/hp/hr)
Battery      5.0                      97%          –
Motor        5.0                      97%          –
Generator    5.0                      97%          –
Turboshaft   7.15*                    –            0.6

* Not including 104 kg base weight
Fig. 6 Systems architecture for the twin series hybrid case study (two motors, heat exchanger, and coolant loop).
A. Mission Analysis Methodology
To compute mission fuel burn and other performance constraint values, we perform a full mission analysis at every
MDO iteration consisting of a balanced-field takeoff (with loss of one propulsor at the
speed), climb, cruise, and
descent. We use the same mission analysis methodology as our previous work [11], with the exception noted below.
OpenConcept’s balanced field takeoff length computation consists of two branched trajectories composed of five
piecewise segments:
1) Takeoff roll at full power from V0 to V1
2) Takeoff roll at one-engine-inoperative (OEI) power from V1 to VR
3) Rejected takeoff with zero power and maximum braking from V1 to V0
4) Transition in a steady circular arc to the OEI climb-out flight path angle and speed
5) Steady climb at V2 speed and OEI power until an obstacle height h_o is reached

We compute the balanced field takeoff by varying V1 until the accelerate-go distance (segments 1, 2, 4, and 5) is at least as long as the accelerate-stop distance (segments 1 and 3).
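The balanced-field search described above (vary V1 until the accelerate-go and accelerate-stop distances balance) can be sketched with a bisection on V1. The closed-form distance models below are toy stand-ins for the time-integrated takeoff segments in the paper, chosen only so that go distance falls and stop distance rises with V1:

```python
# Toy balanced-field-length sketch: bisect on the decision speed V1.
# The distance functions are assumed monotone stand-ins, not physics.

def accelerate_go(v1):
    # Higher V1 -> less distance covered on one engine -> shorter go distance.
    return 1500.0 + 40000.0 / v1

def accelerate_stop(v1):
    # Higher V1 -> more kinetic energy to brake away -> longer stop distance.
    return 5.0 * v1 + 0.35 * v1**2

def balanced_v1(lo=20.0, hi=80.0, tol=1e-8):
    """Bisect on V1 so the go and stop distances balance."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if accelerate_go(mid) > accelerate_stop(mid):
            lo = mid   # go distance governs: raise V1
        else:
            hi = mid   # stop distance governs: lower V1
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

v1 = balanced_v1()
bfl = max(accelerate_go(v1), accelerate_stop(v1))
print(v1, bfl)
```

In OpenConcept this balance is enforced implicitly within the Newton solve rather than by an outer bisection loop, but the fixed point being sought is the same.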
During the takeoff roll (segments 1, 2, and 3), the force balance equation is:

$m \frac{dU}{dt} = T - D - \mu (W - L)$

where $\mu$ is the rolling (or braking) friction coefficient.
In our previous work, we had used a method that integrates segments 1, 2, and 3 with respect to velocity instead of time [ ]. The advantage of this method is that it exhibits good numerical stability; however, it cannot be used to integrate general ODEs including the thermal models of Section II. During the takeoff roll, the airplane is producing maximum heat and has minimum ability to reject the heat. Therefore, instead of neglecting heating during takeoff, we changed to a time-based integration scheme capable of computing accurate time histories of all parameters. Times, distances, and altitudes for segments 4 and 5 are computed using prescribed kinematics from Raymer [ ]; however, we
time-integrate all the other states, including thermal loads and battery state.
The climb, cruise, and descent segments are computed using steady flight equations. At each flight condition, the
Newton solver sets a throttle parameter such that the following residual equation is satisfied:
$R_\mathrm{thrust} = T - D - W \sin\gamma = 0$
The user specifies the true airspeed and the vertical speed at each mission point, as well as one constraint per mission
segment (e.g., an altitude for top of climb or mission range for cruise). OpenMDAO then computes the segment duration
required to satisfy the constraints for climb, cruise, and descent using the Newton solver. OpenConcept integrates range,
altitude, fuel flow, battery SOC, and all other thermal states.
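The steady-flight trim described above amounts to a Newton iteration on the throttle setting so that the thrust residual is driven to zero. The linear throttle-to-thrust map below is an assumption for illustration, not OpenConcept's engine model:

```python
import math

# Sketch of steady-flight trim: Newton iteration on throttle so that
# R_thrust = T(throttle) - D - W*sin(gamma) = 0, assuming T = throttle * T_max.

def trim_throttle(T_max, drag, weight, gamma, throttle=0.5, tol=1e-10):
    for _ in range(50):
        residual = throttle * T_max - drag - weight * math.sin(gamma)
        throttle -= residual / T_max   # Newton step; dR/dthrottle = T_max
        if abs(residual) < tol:
            break
    return throttle

# Climb at a 3 degree flight path angle with 20 kN max thrust, 8 kN drag,
# and 40 kN weight (all assumed numbers):
th = trim_throttle(T_max=20e3, drag=8e3, weight=40e3, gamma=math.radians(3.0))
print(th)
```

In the full model the thrust and drag are themselves implicit functions of the flight condition, which is why OpenMDAO's Newton solver handles all of these residuals together.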
Cruise drag is computed using a drag polar with constant coefficients. We assumed a fixed Oswald efficiency and matched the computed range to the published range for a design mission by setting $C_{D0} = 0.022$. Weights are computed
parametrically based on wing area, aspect ratio, MTOW, and other high-level parameters. The empty weight was
calibrated to match the King Air C90GT baseline by matching our model’s parametric operating empty weight (OEW) to
the published OEW (minus engine weight in both cases) by applying a factor of 2.0 to our model’s computed structural
weight (based on rough textbook formulas).
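As an illustration of how a constant-coefficient drag polar feeds the range calculation matched during calibration, the sketch below combines the polar with the Breguet range equation. The aspect ratio, Oswald efficiency, TSFC, and weights are assumed values; only the 0.022 zero-lift drag coefficient is taken from the text.

```python
import math

def lift_to_drag(cl, cd0=0.022, ar=9.0, e=0.8):
    """L/D from the drag polar CD = CD0 + CL^2 / (pi * e * AR)."""
    return cl / (cd0 + cl * cl / (math.pi * e * ar))

def breguet_range_m(l_over_d, tsfc=1.7e-5, v=120.0, w0=5000.0, w1=4400.0, g=9.81):
    """Breguet range (m) for a fuel-burning cruise: (V/(g*TSFC)) * L/D * ln(W0/W1)."""
    return (v / (g * tsfc)) * l_over_d * math.log(w0 / w1)

cl_star = math.sqrt(0.022 * math.pi * 0.8 * 9.0)  # CL at maximum L/D
r = breguet_range_m(lift_to_drag(cl_star))
```

Tuning CD0 shifts L/D and hence the computed range, which is the knob used to match the published figure.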
B. Optimization Without Thermal Constraints
We begin by re-running the series hybrid twin tradespace exploration from our previous work [11]. We are interested in the optimal aircraft design at a variety of battery technology levels (quantified by the specific energy e_b) and design
mission ranges. Therefore, we run a grid of MDO problems formulated as follows:
minimize: fuel burn + 0.01 MTOW
by varying:
    P_motor (rated)
    P_turboshaft (rated)
    P_generator (rated)
    H_E (degree of hybridization w.r.t. energy)
subject to scalar constraints:
    R_TOW = W_TO − W_fuel − W_empty − W_payload − W_batt ≥ 0
    R_batt = E_batt,max − E_batt,used ≥ 0
    R_vol = W_fuel,max − W_fuel ≥ 0
    BFL ≤ 4452 ft (no worse than baseline)
    engine-out climb gradient ≥ 2%
    V_stall ≤ 81.6 kt (no worse than baseline)
and vector constraints:
    P_motor ≤ 1.05 P_motor (rated)
    P_turboshaft ≤ P_turboshaft (rated)
    P_generator ≤ P_generator (rated)
    P_battery ≤ W_battery · p_b
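To make the structure of this formulation concrete, the toy sketch below minimizes fuel burn + 0.01 MTOW over two of the design variables subject to simple scalar constraints. The surrogate models are invented for illustration, and a grid search stands in for the gradient-based SLSQP solver used in the study.

```python
def mission_metrics(p_motor_kw, h_e):
    """Toy surrogates for fuel burn and MTOW as functions of motor rating (kW)
    and energy-hybridization fraction; invented for illustration only."""
    battery_kg = 2000.0 * h_e
    fuel_kg = 600.0 * (1.0 - h_e) + 0.02 * p_motor_kw
    mtow_kg = 3000.0 + battery_kg + fuel_kg + 0.5 * p_motor_kw
    return fuel_kg, mtow_kg

def objective(p_motor_kw, h_e):
    fuel, mtow = mission_metrics(p_motor_kw, h_e)
    return fuel + 0.01 * mtow  # same form as the paper's objective

def feasible(p_motor_kw, h_e, p_required_kw=400.0, mtow_cap=5700.0):
    """Stand-ins for the scalar constraints: enough installed motor power for
    climb, and MTOW under the 5700 kg type-rating cap mentioned in the text."""
    _, mtow = mission_metrics(p_motor_kw, h_e)
    return p_motor_kw >= p_required_kw and mtow <= mtow_cap

candidates = [(p, h) for p in range(400, 801, 10)
              for h in (i / 100.0 for i in range(101)) if feasible(p, h)]
best = min(candidates, key=lambda ph: objective(*ph))
```

With these surrogates the optimum sits at the smallest feasible motor and full hybridization, mirroring how the real optimizer pushes toward all-electric designs wherever the weight constraints allow.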
The objective function was chosen in order to prioritize reducing tailpipe carbon emissions. However, certain
combinations of specific energy and range result in aircraft with zero fuel burn. Optimizing for fuel burn alone in these
cases is an ill-posed problem. Therefore, we add a small contribution of MTOW to the objective function in order
to force the optimizer to design reasonable all-electric aircraft. A potentially better objective function would be to
minimize total carbon emissions. This approach introduces location dependence into the problem, since electricity is
generated using more or less carbon-intensive methods in different parts of the world. The vectorial constraint quantities
represent parameters tracked over time during a mission. Each entry in the vector represents an individual point in time.
Each mission segment (climb, cruise, and descent) consists of 10 discrete time intervals.
We optimized one airplane at each combination of specific energy (from 250 to 800 Wh/kg) and design range (300 to 700 nautical miles). Each airplane flew with the same climb, cruise, and descent speeds (both indicated airspeed and vertical speed). We used the SLSQP optimization algorithm to solve the optimization problems.

[Fig. 7 panels: fuel mileage (lb/nmi), MTOW (kg), cruise hybridization, battery weight (kg), motor rating (kW), and engine rating (kW), each plotted against design range (nmi) and specific energy (Wh/kg).]
Fig. 7 Minimum fuel burn MDO results without thermal constraints.

The results are similar to the previous study despite some changes to the mission analysis methods. Figure 7 exhibits
the same multimodal tradespace as we found before. At long ranges and poor specific energy e_b, little battery is used (only enough to provide backup power on takeoff) and the airplane is essentially turboelectric. At short range and high e_b, the mission is
flown entirely on battery and no fuel is used. In between these two extremes, the optimizer prefers to use all of the
allotted maximum takeoff weight until it hits the upper bound (5700 kg), above which a type rating is required in many
jurisdictions including the United States and European Union.
C. Optimization with Thermal Constraints
In this work, we modified the aircraft propulsion model to include thermal management of the motor and battery.
We added a cold plate to the motor (lumping both motors together) and to the battery pack.
Thermal mass of both components was computed using a specific heat of 921 J/kg/K (representative of aerospace-grade
aluminum). We connected cold plates of both components in series using a liquid cooling system using a propylene
glycol and water mixture with a specific heat of 3801 J/kg/K [27]. The coolant loop rejects heat via a ducted heat exchanger. We neglect the drag-offsetting effect of heat addition and use a duct model to model
the air mass flow and drag. We use OpenConcept’s default geometric parameters for the offset strip fin heat exchanger.
Finally, we include a liquid coolant reservoir upstream of the heat exchanger. We include the weight of the coolant and
heat exchanger in the empty weight of the airplane, and include the drag contribution of the duct and heat exchanger.
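A sketch of the steady energy balance along such a series loop: coolant picks up the battery heat load first (so the battery sees the coldest coolant), then the motor heat load, before returning to the heat exchanger. The specific heat is the propylene-glycol/water value quoted above; the heat loads and flow rate are illustrative.

```python
def coolant_loop_temps(t_in_c, q_batt_w, q_motor_w, mdot_kg_s, cp=3801.0):
    """Steady temperature rise along the series loop:
    reservoir -> battery cold plate -> motor cold plate -> heat exchanger."""
    t_after_batt = t_in_c + q_batt_w / (mdot_kg_s * cp)
    t_after_motor = t_after_batt + q_motor_w / (mdot_kg_s * cp)
    return t_after_batt, t_after_motor

t1, t2 = coolant_loop_temps(t_in_c=30.0, q_batt_w=8000.0, q_motor_w=12000.0,
                            mdot_kg_s=1.2)
```

Note that the motor cold plate always receives coolant already warmed by the battery load, a sequencing effect discussed in the results.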
Figure 8 shows profiles of mission parameters for a single aircraft design at 250 Wh/kg and 400 nmi range. The figure
highlights the importance of time-accurate thermal analysis. During takeoff and low-altitude climb, heating is at its
maximum and convective heat transfer capability is at a minimum (due to higher atmospheric temperature and lower
coolant duct mass flow). A quasi-steady thermal analysis would predict very high temperatures during this part of the
mission. However, because the thermal components have considerable thermal mass, the maximum temperature is not
reached until the top of the climb phase. Sizing the thermal management system to a quasi-steady analysis at the most
critical condition (early in the takeoff roll) would result in an oversized heat exchanger and unnecessarily high drag and
weight penalty.
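This point can be illustrated with a lumped-capacitance sketch: a quasi-steady model predicts a large temperature excursion during the high-heat, low-cooling takeoff interval, while a transient march through the same schedule stays far cooler and peaks later. All numbers below are illustrative, not taken from the aircraft model.

```python
def simulate(times, q_of_t, ua_of_t, t_amb=30.0, m=80.0, cp=921.0, t0=30.0):
    """Explicit-Euler march of m*cp*dT/dt = Q(t) - UA(t)*(T - T_amb), compared
    with the quasi-steady prediction T_qs = T_amb + Q/UA at each instant."""
    t = t0
    transient, quasi = [], []
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        q, ua = q_of_t(times[i]), ua_of_t(times[i])
        t += dt * (q - ua * (t - t_amb)) / (m * cp)
        transient.append(t)
        quasi.append(t_amb + q / ua)
    return transient, quasi

# Illustrative schedule: high heat load and weak cooling for the first 60 s
# (takeoff roll), then lower load and stronger cooling as airspeed builds.
times = [float(s) for s in range(601)]
q = lambda s: 25000.0 if s < 60.0 else 8000.0
ua = lambda s: 150.0 if s < 60.0 else 400.0
trans, qs = simulate(times, q, ua)
peak_transient, peak_quasi = max(trans), max(qs)
```

Here the quasi-steady model would size the heat exchanger for a temperature excursion the thermally massive component never actually experiences.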
We also add several design variables and constraints to the previous problem. We let the optimizer size the heat
exchanger width and area of the duct nozzle, thus allowing it to trade off weight and drag for equal heat rejection
capability. We also allow the optimizer to size the coolant reservoir. We constrain the time-accurate temperatures of the
motor and battery pack to stay within operating limits (90 °C for the motor and 50 °C for the battery). The full MDO
problem is as follows:
minimize: fuel burn + 0.01 MTOW
by varying:
    S_ref
    d_prop
    P_motor (rated)
    P_turboshaft (rated)
    P_generator (rated)
    H_E (degree of hybridization w.r.t. energy)
    A_nozzle (cooling duct outlet cross-sectional area)
    n_wide (number of heat exchanger cells wide)
    m_coolant (coolant reservoir mass)
subject to scalar constraints:
    R_TOW = W_TO − W_fuel − W_empty − W_payload − W_batt ≥ 0
    R_batt = E_batt,max − E_batt,used ≥ 0
    R_vol = W_fuel,max − W_fuel ≥ 0
    BFL ≤ 4452 ft (no worse than baseline)
    engine-out climb gradient ≥ 2%
    V_stall ≤ 81.6 kt (no worse than baseline)
and vector constraints:
    P_motor ≤ 1.05 P_motor (rated)
    P_turboshaft ≤ P_turboshaft (rated)
    P_generator ≤ P_generator (rated)
    P_battery ≤ W_battery · p_b
    T_motor ≤ 90 °C
    T_battery ≤ 50 °C
Figure 9 shows the design variables and selected responses at the optimal points across the trade space. The motor
temperature constraint is always active at the top of climb for all the designs in the tradespace (and so is not shown in
Fig. 9). The optimizer varies the duct nozzle area (to vary cooling air mass flow) and motor size (to add thermal mass)
such that the motor temperature reaches the limit at the top of the climb. The heat exchanger width converges to its
upper bound at virtually every point in the design space, while coolant mass converges to its lower bound at every point.
Figure 10 shows the difference in key variables (including fuel mileage) after accounting for thermal design and
thermal constraints. While fuel mileage worsened at every point in the design space, the impact was much larger on
certain combinations of specific energy and design range. At long range and low battery specific energy, and at short
range and high specific energy, there was little effect. The long-range designs with low e_b are essentially turboelectric and benefit from light weight and low battery waste heat; there is simply less overall heat to reject, thus minimizing the
[Fig. 8 panels: altitude (ft), indicated airspeed (knots), vertical speed (ft/min), weight (kg), motor temperature (°C), and battery temperature (°C) over the mission.]
Fig. 8 Mission trajectories for a 400 nmi mission (e_b = 250 Wh/kg)
[Fig. 9 panels: fuel mileage (lb/nmi), MTOW (kg), cruise hybridization, battery weight (kg), motor rating (kW), engine rating (kW), wing area, HX duct nozzle area, and maximum battery temperature (°C), each plotted against design range (nmi) and specific energy (Wh/kg).]
Fig. 9 Minimum fuel burn MDO results with thermal constraints.
[Fig. 10 panels: differences in fuel mileage (lb/nmi), MTOW (kg), cruise hybridization, battery weight (kg), motor rating (kW), and engine rating (kW), each plotted against design range (nmi) and specific energy (Wh/kg).]
Fig. 10 Difference in optimal designs due to thermal constraints (positive = thermally-constrained higher)
associated penalty. The short-range designs with high e_b use no fuel to begin with, so their fuel burn remains at zero
even as they use more energy; instead, we see the thermal management penalty as an increase in MTOW. Between
these two extreme designs, the heavy hybrid airplanes generate a large amount of waste heat and burn appreciable fuel,
making the impact of thermal constraints more significant.
A very interesting trend emerged in the motor sizing design variable. The optimizer greatly oversized the motors in a
band in the heart of the tradespace (seen as a band of red from middle left to top right). In the rest of the tradespace, the
motor is sized by power required during climb. However, in the red band, the motor is being constrained by the thermal
problem. We suspect that this is a result of the sequencing of components in the thermal management system. We
designed the TMS architecture to provide the coldest coolant to the battery, since it has a lower operating temperature.
The consequence is that warmer coolant flows into the motor. The motor inflow temperature varies slowly even as outside
temperature drops due to the thermal mass of the battery. The best solution available to the optimizer is to oversize the
motors to add thermal mass and avoid overheating at the critical top of climb point. Reordering the components may
result in an improvement in fuel burn in this part of the tradespace by better balancing peak temperatures between the
motor and battery.
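The thermal-mass mechanism can be illustrated with a bounding estimate: if a heat pulse is absorbed adiabatically by the component's lumped thermal mass, the temperature rise scales inversely with mass, so doubling motor mass halves the transient excursion. The heat-pulse and mass values below are illustrative.

```python
def adiabatic_rise_k(heat_j, mass_kg, cp=921.0):
    """Bounding temperature rise if a heat pulse is absorbed entirely by the
    component's thermal mass (no rejection); cp is the aluminum value used above."""
    return heat_j / (mass_kg * cp)

pulse_j = 15000.0 * 120.0  # assumed 15 kW of waste heat over a 2-minute interval
rise_nominal = adiabatic_rise_k(pulse_j, mass_kg=60.0)
rise_oversized = adiabatic_rise_k(pulse_j, mass_kg=120.0)
```

This is why adding motor mass is an effective, if heavy, way for the optimizer to blunt a transient over-temperature.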
IV. Conclusions
Thermal constraints are currently understudied compared to other disciplines in aircraft conceptual design, and there are few publicly available resources for the research community to incorporate thermal constraints into
electric aircraft studies. To fill this gap, we introduced thermal analysis and design capabilities within the OpenConcept
Python package. We demonstrate that thermal mass effects are significant when analyzing aircraft thermal trajectories,
particularly early in the mission when power is high and speeds and altitudes are low. Therefore, pseudo-steady thermal
models are not sufficient for the design of aircraft thermal management systems, because they can lead to dramatic
over-sizing. We integrated time-accurate thermal models into the mission analysis and used them to formulate constraints
in the aircraft design optimization problem. The time-accurate thermal analyses and derivatives were computed by the
OpenConcept package to enable efficient gradient-based design optimization.
We showed that thermal constraints appreciably affect the fuel burn and energy usage achievable in a series hybrid
architecture, but not uniformly throughout the tradespace. The non-uniform effects make the impact of thermal
constraints on aircraft design somewhat non-intuitive and underscore the importance of including them early in the
design process. Electric aircraft architectures with a large percentage of battery power will be impacted by TMS
penalties, but because they burn little or no fuel, the penalty is seen as an MTOW and total energy increase, not a fuel
burn penalty. Conversely, turboelectric aircraft experience a modest TMS penalty due to lighter weight and lack of
battery heating. Hybrid-electric aircraft see the largest fuel burn penalty since they are heavier than turboelectric aircraft
(thus producing more motor waste heat) and use significant quantities of batteries (producing yet more waste heat). We
also observed that the optimizer can find creative ways to satisfy the thermal constraints (such as oversizing a motor to
add thermal mass and avoid a transient over-temperature condition).
Acknowledgments
The first and second authors were supported by the National Science Foundation Graduate Research Fellowship
Program under Grant DGE 1256260. Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The
fourth author was supported by NASA ARMD’s Transformational Tools and Technologies project. This work was
also supported by the U.S. Air Force Research Laboratory (AFRL) under the Michigan-AFRL Collaborative Center in
Aerospace Vehicle Design (CCAVD), with Richard Snyder as the task Technical Monitor.
References
[1] Trawick, D., Perullo, C. A., Armstrong, M. J., Snyder, D., Tai, J. C., and Mavris, D. N., "Development and Application of GT-HEAT for the Electrically Variable Engine Design," 55th AIAA Aerospace Sciences Meeting, Grapevine, TX, 2017. doi:10.2514/6.2017-1922.
[2] Falck, R. D., Chin, J., Schnulo, S. L., Burt, J. M., and Gray, J. S., "Trajectory Optimization of Electric Aircraft Subject to Subsystem Thermal Constraints," 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Denver, CO, 2017. doi:10.2514/6.2017-4002.
[3] Hwang, J. T., and Ning, A., "Large-scale multidisciplinary optimization of an electric aircraft for on-demand mobility," 2018 AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Kissimmee, FL, 2018, pp. 1–18. doi:10.2514/6.2018-1384.
[4] Antcliff, K. R., Guynn, M. D., Marien, T., Wells, D. P., Schneider, S. J., and Tong, M. T., "Mission Analysis and Aircraft Sizing of a Hybrid-Electric Regional Aircraft," 54th AIAA Aerospace Sciences Meeting, San Diego, CA, 2016, pp. 1–18. doi:10.2514/6.2016-1028.
[5] Vratny, P. C., Gologan, C., Pornet, C., Isikveren, A. T., and Hornung, M., "Battery Pack Modeling Methods for Universally-Electric Aircraft," 4th CEAS Air and Space Conference, Linkoping, Sweden, 2013.
[6] Wroblewski, G. E., and Ansell, P. J., "Mission Analysis and Emissions for Conventional and Hybrid-Electric Commercial Transport Aircraft," 2018 AIAA Aerospace Sciences Meeting, Kissimmee, FL, 2018. doi:10.2514/6.2018-2028.
[7] Cipolla, V., and Oliviero, F., "HyPSim: A Simulation Tool for Hybrid Aircraft Performance Analysis," Variational Analysis and Aerospace Engineering, 2016, pp. 95–116. doi:10.1007/978-3-319-45680-5.
[8] Welstead, J. R., Caldwell, D., Condotta, R., and Monroe, N., "An Overview of the Layered and Extensible Aircraft Performance System (LEAPS) Development," 2018 AIAA Aerospace Sciences Meeting, Kissimmee, FL, 2018. doi:10.2514/6.2018-1754.
[9] Capristan, F. M., and Welstead, J. R., "An Energy-Based Low-Order Approach for Mission Analysis of Air Vehicles in LEAPS," 2018 AIAA Aerospace Sciences Meeting, Kissimmee, FL, 2018. doi:10.2514/6.2018-1755.
[10] Vegh, J. M., Alonso, J. J., Orra, T. H., and Ilario da Silva, C. R., "Flight Path and Wing Optimization of Lithium-Air Battery Powered Passenger Aircraft," 53rd AIAA Aerospace Sciences Meeting, Kissimmee, FL, 2015. doi:10.2514/6.2015-1674.
[11] Brelje, B. J., and Martins, J. R. R. A., "Development of a Conceptual Design Model for Aircraft Electric Propulsion with Efficient Gradients," Proceedings of the AIAA/IEEE Electric Aircraft Technologies Symposium, Cincinnati, OH, 2018.
[12] Lents, C. E., Hardin, L. W., Rheaume, J., and Kohlman, L., "Parallel Hybrid Gas-Electric Geared Turbofan Engine Conceptual Design and Benefits Analysis," 52nd AIAA/SAE/ASEE Joint Propulsion Conference, Salt Lake City, UT, 2016. doi:10.2514/6.2016-4610.
[13] Freeman, J., Osterkamp, P., Green, M. W., Gibson, A. R., and Schiltgen, B. T., "Challenges and opportunities for electric aircraft thermal management," Aircraft Engineering and Aerospace Technology, Vol. 86, No. 6, 2014, pp. 519–524. doi:10.1108/AEAT-04-2014-0042.
[14] Gray, J. S., Hwang, J. T., Martins, J. R. R. A., Moore, K. T., and Naylor, B. A., "OpenMDAO: An open-source framework for multidisciplinary design, analysis, and optimization," Structural and Multidisciplinary Optimization, Vol. 59, No. 4, 2019, pp. 1075–1104. doi:10.1007/s00158-019-02211-z.
[15] Schnulo, S. L., Chin, J., Smith, A. D., and Dubois, A., "Steady State Thermal Analyses of SCEPTOR X-57 Wingtip Propulsion," 17th AIAA Aviation Technology, Integration, and Operations Conference, Denver, CO, 2017, pp. 1–14.
[16] Schnulo, S. L., Chin, J., Falck, R. D., Gray, J. S., Papathakis, K. V., Clarke, S. C., Reid, N., and Borer, N. K., "Development of a Multi-Segment Mission Planning Tool for SCEPTOR X-57," 2018 Multidisciplinary Analysis and Optimization Conference, AIAA, Atlanta, GA, 2018. doi:10.2514/6.2018-3738.
[17] Chin, J., Schnulo, S. L., Miller, T., Prokopius, K., and Gray, J. S., "Battery Performance Modeling on SCEPTOR X-57 Subject to Thermal and Transient Considerations," AIAA Scitech 2019 Forum, AIAA, San Diego, CA, 2019. doi:10.2514/6.2019-0784.
[18] Falck, R. D., Chin, J. C., Schnulo, S. L., Burt, J. M., and Gray, J. S., "Trajectory Optimization of Electric Aircraft Subject to Subsystem Thermal Constraints," 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Denver, CO, 2017.
[19] Incropera, F. P., Fundamentals of Heat and Mass Transfer, John Wiley and Sons, Inc., USA, 2006.
[20] Kays, W., and London, A., Compact Heat Exchangers, Third Edition, McGraw-Hill Book Company, 1984.
[21] Jasa, J. P., Brelje, B. J., Mader, C. A., and Martins, J. R. R. A., "Coupled Design of a Supersonic Engine and Thermal System," World Congress of Structural and Multidisciplinary Optimization, Beijing, China, 2019.
[22] Manglik, R. M., and Bergles, A. E., "Heat Transfer and Pressure Drop Correlations for the Rectangular Offset Strip Fin Compact Heat Exchanger," Experimental Thermal and Fluid Science, 1995, pp. 171–180. doi:10.1016/0894-1777(94)00096-Q.
[23] Moran, M. J., Shapiro, H. N., Boettner, D. D., and Bailey, M. B., Fundamentals of Engineering Thermodynamics, John Wiley & Sons, 2010.
[24] Meredith, F. W., "Cooling of aircraft engines with special reference to ethylene glycol radiators enclosed in ducts," Tech. Rep. 1683, Government of the United Kingdom Air Ministry Reports and Memoranda, 1935.
[25] Theodorsen, T., "The Fundamental Principles of the N.A.C.A. Cowling," Journal of the Aeronautical Sciences, Vol. 5, No. 5, 1938, pp. 169–174. doi:10.2514/8.566.
[26] Raymer, D. P., Aircraft Design: A Conceptual Approach, 5th ed., AIAA, 2012.
[27] The Dow Chemical Company, "DOWFROST Technical Data Sheet," 2019.
Determine what is the kinetic energy of the ball
Reference no: EM13544886
Evaluate maximum altitude
Evaluate maximum altitude?
Introductory mechanics: dynamics
Calculate the smallest coefficient of static friction necessary for mass A to remain stationary.
Determine the tension in each string
Determine the tension in each string
Quadrupole moments in the shell model
Quadrupole moments in the shell model
Calculate the dc voltage
Calculate the dc voltage applied to the circuit.
Gravity conveyor
Illustrate the cause of the components accelerating from rest down the conveyor.
Questions on blackbody, Infra-Red Detectors & Optic Lens and Digital Image.
What is the magnitude of the current in the wire
What is the magnitude of the current in the wire as a function of time?
What is the maximum displacement of the bridge deck
What is the maximum displacement of the bridge deck?
What is the electric field at the location
Question: Field and force with three charges? What is the electric field at the location of Q1, due to Q 2 ?
Find the equivalent resistance
A resistor is in the shape of a cube, with each side of resistance R . Find the equivalent resistance between any two of its adjacent corners.
Find the magnitude of the resulting magnetic field
A sphere of radius R is uniformly charged to a total charge of Q. It is made to spin about an axis that passes through its center with an angular speed ω. Find the magnitude of the resulting magnetic
field at the center of the sphere.
Shape (3D)
Drawing 3D Objects
Draw two-dimensional representations of three-dimensional objects on an isometric dotty grid.
Use formulae to solve problems involving the volumes of cuboids, cones, pyramids, prisms and composite solids.
Net or Not
Drag the nets into the corresponding panels to show whether they would fold to form a cube.
Dice Net Challenge
Drag the numbers onto the net so that when it is folded to form a cube numbers on opposite faces add up to prime numbers.
Cube Face Meetings
Visualise the cubes formed by the nets and paint the three faces meeting at a vertex.
Coloured Cube 3D
Colour in the remaining faces of the nets of the cubes to match the rotating three-dimensional picture.
The Great Dodecahedron
Pupils are not allowed to use their hands to point but must describe fully any shapes they can see in this picture.
Puzzle Cube Net
A jumbled moving-block puzzle cube is shown as a net. Can you solve it?
Trigonometry in 3D
Calculate the lengths of sides and the size of angles in three dimensional shapes.
Platonic Solids
Identify the names, nets and features of the five regular polyhedra.
Similar Shapes
Questions about the scale factors of lengths, areas and volumes of similar shapes.
Apply formulae for the volumes and surface areas of cylinders to answer a wide variety of questions
Surface Area
Work out the surface areas of common solid shapes in this collection of exercises.
Yes No Questions
A game to determine the mathematical item by asking questions that can only be answered yes or no.
Find the maximum volume of a tray made from an A4 sheet of paper. A practical mathematical investigation.
This is the main Transum help video on Shape (3D).
Nets Video
Learn more about three-dimensional shapes and their nets.
Volume Video
There are simple formulas that can be used to find the volumes of basic three-dimensional shapes.
Surface Area Video
Finding the surface are of three dimensional shapes can involve some interesting formulae.
Tetrahedron and Pyramid
A tetrahedron and a pyramid have edges of equal length. If they are glued together on a triangular face with the vertices aligned, how many faces will the new shape have?
Visual Aids
Cube Construction
This is a simple interactive that does nothing more than allow you to create 3D drawings of models made with cubes.
The Great Dodecahedron
Pupils are not allowed to use their hands to point but must describe fully any shapes they can see in this picture.
3D Trigonometry Presentation
A slide presentation (a poem) introducing using trigonometry (including Pythagoras' Theorem) to find lengths and angles on three dimensional shapes.
Dice Nets
Determine whether the given nets would fold to produce a dice.
Faces and Edges
Find the number of faces, edges and vertices on some familiar objects.
How many triangles are there on the surface of a regular icosahedron?
29 items are currently in this category.
Teachers might find the complete Shape (3D) Topic List useful.
Cardinal Or Ordinal Numbers - OrdinalNumbers.com
Cardinal Ordinal And Roman Numbers – You can enumerate any number of sets using ordinal figures as a tool. They also can be used to generalize ordinal quantities. 1st One of the most fundamental
concepts of mathematics is the ordinal number. It is a numerical number that signifies the location of an object in a … Read more
Cardinal Ordinal Numbers Quiz
Cardinal Ordinal Numbers Quiz – It is possible to enumerate infinite sets using ordinal numbers. They are also able to broaden ordinal numbers. But before you are able to use these numbers, you need
to understand the reasons why they exist and how they work. 1st The ordinal number is among the fundamental concepts in mathematics. … Read more
Cardinal Ordinal Numbers Latin
Cardinal Ordinal Numbers Latin – By using ordinal numbers, it is possible to count any number of sets. These numbers can be utilized as a way to generalize ordinal figures. 1st The ordinal number is
among of the most fundamental concepts in math. It is a numerical value that indicates the place an object is … Read more
Ordinal Numbers Vs Cardinal Asl
Ordinal Numbers Vs Cardinal Asl – An endless number of sets are easily counted using ordinal numerals to aid in the process of. They also can help generalize ordinal quantities. 1st One of the most
fundamental concepts of math is the ordinal number. It is a number that indicates the location of an item within … Read more
Cardinal Ordinal Numbers Examples
Cardinal Ordinal Numbers Examples – It is possible to enumerate infinite sets with ordinal numbers. They can also be used to generalize ordinal numbers. But before you are able to use them, you must
comprehend what they are and how they function. 1st The ordinal numbers are among the fundamental concepts in mathematics. It is a … Read more
Cardinal Ordinal Numbers Worksheet Pdf
Cardinal Ordinal Numbers Worksheet Pdf – You can enumerate unlimited sets using ordinal numbers. They can also serve to generalize ordinal quantities. 1st One of the most fundamental concepts of
mathematics is the ordinal numbers. It is a number that identifies the place of an object in the set of objects. Ordinal numbers are typically … Read more
Cardinal Numbers And Ordinal Numbers Exercise
Cardinal Numbers And Ordinal Numbers Exercise – You can enumerate the infinite amount of sets using ordinal figures as an instrument. They can also be used to generalize ordinal numbers. 1st The
ordinal number is among the fundamental concepts in mathematics. It is a number that shows where an object is within a set of. … Read more
Write Ordinal And Cardinal Numbers
Write Ordinal And Cardinal Numbers – You can enumerate unlimited sets by using ordinal numbers. They can also be utilized to generalize ordinal numbers. But before you are able to use them, you need
to know what they exist and how they operate. 1st The ordinal numbers are one of the most fundamental ideas in math. … Read more
Cardinal And Ordinal Numbers In Asl
Cardinal And Ordinal Numbers In Asl – There are a myriad of sets that can be easily enumerated using ordinal numerals to aid in the process of. They also can help generalize ordinal quantities. 1st
The ordinal number is among the fundamental concepts in mathematics. It is a number which shows where an object is … Read more
Cardinal Ordinal Numbers Table
Cardinal Ordinal Numbers Table – You can enumerate unlimited sets with ordinal numbers. They can also serve as a generalization of ordinal quantities. 1st The ordinal number is one the most
fundamental ideas in mathematics. It is a number that identifies the location of an object within a list. In general, a number between one … Read more | {"url":"https://www.ordinalnumbers.com/tag/cardinal-or-ordinal-numbers/","timestamp":"2024-11-02T12:56:44Z","content_type":"text/html","content_length":"101530","record_id":"<urn:uuid:7ef50722-8ef5-402b-895f-b8bcb3ac4196>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00755.warc.gz"} |
Time evolutions of information entropies in a one-dimensional Vlasov–Poisson system
A one-dimensional Vlasov–Poisson system is considered to elucidate how the information entropies of the probability distribution functions of the electron position and velocity variables evolve in
the Landau damping process. Considering the initial condition given by the Maxwellian velocity distribution with the spatial density perturbation in the form of the cosine function of the position,
we derive linear and quasilinear analytical solutions that accurately describe both early and late time behaviors of the distribution function and the electric field. The validity of these solutions
is confirmed by comparison with numerical simulations based on contour dynamics. Using the quasilinear analytical solution, the time evolutions of the velocity distribution function and its kurtosis
indicating deviation from the Gaussian distribution are evaluated with the accuracy of the squared perturbation amplitude. We also determine the time evolutions of the information entropies of the
electron position and velocity variables and their mutual information. We further consider Coulomb collisions that relax the state in the late-time limit in the collisionless process to the thermal
equilibrium state. In this collisional relaxation process, the mutual information of the position and velocity variables decreases to zero, while the total information entropy of the phase-space
distribution function increases by the decrease in the mutual information and demonstrates the validity of Boltzmann's H-theorem.
Landau damping is one of the most intriguing physical processes involving collective interactions between waves and particles in collisionless plasmas.^1 Despite the time reversibility of the
Vlasov–Poisson equations, their solution shows that, as time progresses, plasma oscillations are Landau-damped and the electric field energy is converted into the kinetic energy of particles. The
effects of Landau damping are universally observed in various plasma wave-particle resonance phenomena, such as drift waves and geodesic acoustic mode (GAM) oscillations in slab and toroidal magnetic
field configurations in space and fusion plasmas, and have been the subject of extensive theoretical research.^2–15
The most commonly used theoretical model to study Landau damping is the one-dimensional Vlasov–Poisson system which consists of electrons with ions treated as uniform background positive charge with
infinite mass. The solution of the linear initial value problem derived by Case and Van Kampen for this system is well known.^2–5 The behavior of the late-time solution exhibiting Landau damping can
be well approximated by using only eigenfrequencies with the slowest damping rate. In this study, the effect of an infinite number of complex eigenfrequencies is included to represent the early-time
behavior of the electric field. Incidentally, it is shown in Ref. 7 that the plasma dispersion function can be expressed in the form of an infinite continued fraction. From this fact, we can also
understand that an infinite number of complex eigenfrequencies exist as zeros of the dielectric function. It is also shown in Ref. 8 that a very high number of poles are required for the correct
calculation of density and pressure at an early stage in the Vlasov–Poisson system.
When treating the position x and velocity v of electrons as random variables, denoted as X and V, respectively, the information entropy,^16 $S_p(X,V)\equiv-\iint p(x,v,t)\,\log p(x,v,t)\,dx\,dv$, derived from the joint probability density function $p(x,v,t)$ of (X, V) is known to be one of the Casimir invariants in the Vlasov–Poisson system,^17,18 and it does not depend on time t. On the other hand, the entropies $S_p(X)\equiv-\int p_X(x,t)\,\log p_X(x,t)\,dx$ and $S_p(V)\equiv-\int p_V(v,t)\,\log p_V(v,t)\,dv$ determined from the marginal probability density functions $p_X(x,t)=\int p(x,v,t)\,dv$ and $p_V(v,t)=\int p(x,v,t)\,dx$ of X and V,
respectively, are allowed to vary with time. In the present work, we derive novel analytical expressions that accurately represent the early-time behaviors of the linear solutions for the electric
field and the distribution function by series expansions in time and velocity variables. Combining these early-time expressions with the late-time approximate solutions, we accurately determine the
time evolution of the spatially averaged velocity distribution function of electrons obtained from quasilinear theory.^19,20 Using these linear and quasilinear analytical solutions, we clarify the
time evolution of the information entropies $S_p(X)$ and $S_p(V)$ and the mutual information $I(X,V)\equiv S_p(X)+S_p(V)-S_p(X,V)$ associated with the Landau damping. We here note that, in Ref. 21, the energy conversion in phase-space density moments in the Vlasov–Maxwell system is investigated by separating the kinetic entropy [which corresponds to $S_p(X,V)$] into two parts [which correspond to $S_p(X)$ and $S_p(X,V)-S_p(X)$]. Also, in Ref. 22, the role of entropy production in momentum transfer in the Vlasov–Maxwell system is discussed based on information theory.
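As a concrete illustration of these definitions (a sketch under our own discretization, not code from the paper), the entropies and the mutual information can be evaluated on a gridded joint density; for a separable density — uniform in $x$ over one wavelength and Maxwellian in $v$ — the mutual information vanishes:

```python
import math

# Hedged sketch (not the paper's code): discretize a joint density p(x, v)
# on a uniform grid and evaluate
#   Sp(X,V) = -∫∫ p log p dx dv,  Sp(X),  Sp(V),  I(X,V) = Sp(X)+Sp(V)-Sp(X,V).
# For a separable density p(x, v) = pX(x) pV(v), the mutual information is zero.

def entropies(p, dx, dv):
    # marginals pX(x) = ∫ p dv and pV(v) = ∫ p dx
    pX = [sum(row) * dv for row in p]
    pV = [sum(p[i][j] for i in range(len(p))) * dx for j in range(len(p[0]))]
    Sxv = -sum(pij * math.log(pij) for row in p for pij in row if pij > 0) * dx * dv
    Sx = -sum(px * math.log(px) for px in pX if px > 0) * dx
    Sv = -sum(pv * math.log(pv) for pv in pV if pv > 0) * dv
    return Sx, Sv, Sxv, Sx + Sv - Sxv  # last entry is I(X,V)

# Example: uniform in x over one wavelength, Maxwellian in v (n0 = vT = 1)
L, nx, vmax, nv = 2 * math.pi, 64, 6.0, 400
dx, dv = L / nx, 2 * vmax / nv
vs = [-vmax + (j + 0.5) * dv for j in range(nv)]
pV_exact = [math.exp(-v * v) / math.sqrt(math.pi) for v in vs]
p = [[pv / L for pv in pV_exact] for _ in range(nx)]

Sx, Sv, Sxv, I = entropies(p, dx, dv)
```

Here $S_p(X)=\log(2\pi)$ for the uniform marginal and $S_p(V)=\tfrac{1}{2}(1+\log\pi)$ for the Maxwellian with $v_T=1$, while $I(X,V)=0$ up to quadrature error.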
We note that the quasilinear solution for the background velocity distribution function at time t is obtained by integrating the product of the electric field and the linear solution of the perturbed
distribution function over time from the initial time to t. Therefore, using only the linear solutions represented by a few complex eigenfrequencies is insufficient. We emphasize that, to describe
the time evolutions of the entropies and the mutual information, it is necessary to use accurate expressions of the linear solutions near the initial time as derived in this study.
The validity of the obtained linear and quasilinear solutions is confirmed by comparison with simulation results based on contour dynamics.^23,24 We also obtain the velocity distribution $p_V$ in the limit of $t\to+\infty$ and show how it deviates from the Gaussian distribution. In addition, we consider the effects of Coulomb collisions, which relax the distribution function to the thermal equilibrium state, decrease the mutual information $I(X,V)$ to zero, increase the entropy $S_p(X,V)$, and thus validate Boltzmann's H-theorem.^25
The rest of this paper is organized as follows. In Sec. II, the basic equations of the Vlasov–Poisson system are presented, and the solution to its linear initial value problem is provided using the
Laplace transform. The validity of the linear analytical solutions for the electric field and the perturbed distribution function is confirmed by comparison with contour dynamics simulations with
small initial perturbation amplitudes. In Sec. III, using approximate expressions for complex eigenfrequencies with large absolute values, an approximate integral formula for the electric field
incorporating the effects of an infinite number of complex eigenfrequencies is derived. In addition, the linear analytical solutions for the electric field and the distribution function near the
initial time are expressed as series expansions in time and velocity variables. In Sec. IV, using the results of Secs. II and III, a quasilinear analytical solution describing the time evolution of
the background velocity distribution function of electrons is derived, and its validity is also confirmed by contour dynamics simulations. By using this quasilinear analytical solution, the time evolution of the increase in the electron kinetic energy due to Landau damping, the velocity distribution function in the limit of $t\to+\infty$, and physical quantities, such as the kurtosis representing a deviation from the Gaussian distribution, are accurately determined. In Sec. V, the time evolution of the information entropies of the position and velocity variables of electrons and the mutual
information is determined with an accuracy of the order of the squared perturbation amplitude. In Sec. VI, the changes in the entropies and the mutual information from the collisionless process to
the thermal equilibrium state due to Coulomb collisions are evaluated. Finally, conclusions and discussion are given in Sec. VII.
A. The Vlasov–Poisson system
Under the assumption that there is no magnetic field, the Vlasov equation for electrons is given by
$\frac{\partial f}{\partial t}+\mathbf{v}\cdot\frac{\partial f}{\partial\mathbf{x}}-\frac{e}{m}\mathbf{E}(\mathbf{x},t)\cdot\frac{\partial f}{\partial\mathbf{v}}=0,$
where $-e$ and $m$ are the electron charge and mass, respectively, and $\mathbf{E}(\mathbf{x},t)$ represents the electric field. The distribution function $f(\mathbf{x},\mathbf{v},t)$ of electrons is defined such that $f(\mathbf{x},\mathbf{v},t)\,d^3x\,d^3v$ represents the number of electrons in the phase-space volume element $d^3x\,d^3v$ at time $t$. We ignore the motion of ions, which have the uniform density $n_0$ and infinite mass. Then, the electric field $\mathbf{E}(\mathbf{x},t)$ is determined by Poisson's equation
$\nabla\cdot\mathbf{E}(\mathbf{x},t)=4\pi e\left(n_0-\int d^3v\,f(\mathbf{x},\mathbf{v},t)\right).$
We now assume that $f$ and $\mathbf{E}$ are independent of the coordinates $y$ and $z$, and that the electric field points in the $x$-direction. Integration of $f$ with respect to $v_y$ and $v_z$ is done to define
$f(x,v_x,t)\equiv\int_{-\infty}^{+\infty}dv_y\int_{-\infty}^{+\infty}dv_z\,f(\mathbf{x},\mathbf{v},t).$
Then, we obtain the Vlasov equation in the $(x,v)$ phase space as
$\frac{\partial f(x,v,t)}{\partial t}+v\frac{\partial f(x,v,t)}{\partial x}-\frac{e}{m}E(x,t)\frac{\partial f(x,v,t)}{\partial v}=0,$
where the notations $v\equiv v_x$ and $E\equiv E_x$ are used. Poisson's equation is rewritten as
$\frac{\partial E(x,t)}{\partial x}=4\pi e\left(n_0-\int_{-\infty}^{+\infty}dv\,f(x,v,t)\right).$
Hereafter, we consider the structure of the distribution function $f(x,v,t)$ on the two-dimensional $(x,v)$ phase space instead of the six-dimensional $(\mathbf{x},\mathbf{v})$ space. The number of electrons in the area element $dx\,dv$ about the point $(x,v)$ in the phase space at time $t$ is given by $f(x,v,t)\,dx\,dv$.
B. Linearization
We now write the distribution function $f(x,v,t)$ as the sum of the equilibrium and perturbation parts, $f(x,v,t)=f_0(v)+f_1(x,v,t)$. Here, the equilibrium distribution function is given by $f_0(v)$, which satisfies $\int_{-\infty}^{+\infty}dv\,f_0(v)=n_0$ so that the equilibrium electric field vanishes. The perturbed distribution function $f_1(x,v,t)$ and the electric field $E(x,t)$ are assumed to be given by
$f_1(x,v,t)=\mathrm{Re}[f_1(k,v,t)\exp(ikx)],\qquad E(x,t)=\mathrm{Re}[E(k,t)\exp(ikx)],$
where the wavenumber in the $x$-direction is given by $k>0$. Substituting these expressions into the Vlasov equation, we obtain the linearized Vlasov equation
$\frac{\partial f_1(k,v,t)}{\partial t}+ikv\,f_1(k,v,t)-\frac{e}{m}E(k,t)\frac{\partial f_0(v)}{\partial v}=0,$
where the nonlinear term $-(e/m)E\,\partial f_1/\partial v$ is neglected as a small term. Poisson's equation is rewritten as
$ik\,E(k,t)=-4\pi e\int_{-\infty}^{+\infty}dv\,f_1(k,v,t).$
C. Laplace transform
We now use the Laplace transform to solve the linearized Vlasov–Poisson equations given above. The Laplace transforms of $f_1(k,v,t)$ and $E(k,t)$ are denoted by $f_1(k,v,\omega)$ and $E(k,\omega)$, respectively. Instead of the variable $s$ used in the conventional Laplace transform, we put $s=-i\omega$ and employ the complex frequency $\omega$ to perform the analysis in a manner similar to the Fourier transform. The inverse Laplace transform gives
$f_1(k,v,t)=\int_{L_f}\frac{d\omega}{2\pi}\,e^{-i\omega t}f_1(k,v,\omega),\qquad E(k,t)=\int_{L_E}\frac{d\omega}{2\pi}\,e^{-i\omega t}E(k,\omega),$
where the Laplace contours $L_f$ and $L_E$ need to pass above all poles of $f_1(k,v,\omega)$ and $E(k,\omega)$ on the complex $\omega$-plane, respectively. Now, the Vlasov equation and Poisson's equation are represented in terms of $f_1(k,v,\omega)$ and $E(k,\omega)$ as
$(-i\omega+ikv)\,f_1(k,v,\omega)-\frac{e}{m}E(k,\omega)\frac{\partial f_0(v)}{\partial v}=f_1(k,v,t{=}0),$
$ik\,E(k,\omega)=-4\pi e\int_{-\infty}^{+\infty}dv\,f_1(k,v,\omega).$
Solving these equations for the perturbed distribution function $f_1(k,v,\omega)$ and the electric field $E(k,\omega)$, we obtain
$f_1(k,v,\omega)=\frac{(e/m)E(k,\omega)\,\partial f_0(v)/\partial v+f_1(k,v,t{=}0)}{-i\omega+ikv},$
$E(k,\omega)=\frac{4\pi e}{k^2\,\varepsilon(k,\omega)}\int_{-\infty}^{+\infty}dv\,\frac{f_1(k,v,t{=}0)}{v-\omega/k},$
where the dielectric function $\varepsilon(k,\omega)$ is defined by
$\varepsilon(k,\omega)=1-\frac{\omega_p^2}{n_0k^2}\int_{-\infty}^{+\infty}dv\,\frac{\partial f_0(v)/\partial v}{v-\omega/k}.$
Here, the plasma frequency is defined by $\omega_p\equiv(4\pi n_0e^2/m)^{1/2}$. Now, we define
$F(k,\omega)\equiv\int_{-\infty}^{+\infty}dv\,\frac{f_1(k,v,t{=}0)}{v-\omega/k},$
to write $E(k,t)$ and $f_1(k,v,t)$ for $t>0$ as
$E(k,t)=\frac{4\pi e}{k^2}\int_{L_E}\frac{d\omega}{2\pi}\,e^{-i\omega t}\frac{F(k,\omega)}{\varepsilon(k,\omega)},$
$f_1(k,v,t)=\int_{L_f}\frac{d\omega}{2\pi}\,\frac{e^{-i\omega t}}{-i(\omega-kv)}\left[\frac{\omega_p^2}{n_0k^2}\frac{F(k,\omega)}{\varepsilon(k,\omega)}\frac{\partial f_0(v)}{\partial v}+f_1(k,v,t{=}0)\right],$
respectively. Let us define complex-valued eigenfrequencies $\omega_\mu$ as zeros of the dielectric function, $\varepsilon(k,\omega_\mu)=0$. Then, we can write
$f_1(k,v,t)=e^{-ikvt}\left[f_1(k,v,t{=}0)+\frac{\omega_p^2}{n_0k^2}\frac{F(k,kv)}{\varepsilon(k,kv)}\frac{\partial f_0(v)}{\partial v}\right]+\frac{\omega_p^2}{n_0k^2}\frac{\partial f_0(v)}{\partial v}\sum_\mu\frac{e^{-i\omega_\mu t}}{(\omega_\mu-kv)}\frac{F(k,\omega_\mu)}{\partial_\omega\varepsilon(k,\omega_\mu)},$
where the derivative with respect to the complex-valued frequency is represented by $\partial_\omega\equiv\partial/\partial\omega$.
D. Conditions of equilibrium and initial perturbation
Hereafter, we assume that the equilibrium distribution function $f_0(v)$ is given by the Maxwellian
$f_0(v)=n_0\sqrt{\frac{m}{2\pi T}}\exp\left(-\frac{mv^2}{2T}\right)=\frac{n_0}{\sqrt{\pi}\,v_T}\exp\left(-\frac{v^2}{v_T^2}\right),$
where $T$ represents the equilibrium temperature and the thermal velocity $v_T\equiv(2T/m)^{1/2}$ is used. Then, the dielectric function $\varepsilon(k,\omega)$ is expressed by
$\varepsilon(k,\omega)=1+\frac{1}{k^2\lambda_D^2}\left[1+\zeta Z(\zeta)\right],\qquad\zeta\equiv\frac{\omega}{kv_T},$
where the Debye length $\lambda_D$ is defined by $\lambda_D\equiv(T/4\pi n_0e^2)^{1/2}=v_T/(\sqrt{2}\,\omega_p)$ and the plasma dispersion function $Z(\zeta)$ is defined in Appendix A. Using this expression, the dispersion relation $\varepsilon(k,\omega)=0$ is rewritten as
$1+\zeta Z(\zeta)=-k^2\lambda_D^2.$
Figure 1 shows the distribution of complex eigenfrequencies $\omega_\mu$ in the complex plane, calculated from the dispersion relation. There are an infinite number of complex eigenfrequencies. The complex eigenfrequency with $\mathrm{Re}\,\omega_\mu>0$ and the smallest damping rate is denoted by $\omega_0$. From the properties of the plasma dispersion function, we find
$\varepsilon(k,-\omega^*)=\varepsilon(k,\omega)^*,$
where $*$ represents the complex conjugate. Thus, if $\omega$ is a zero of the dielectric function, $-\omega^*$ is so, too. Therefore, complex eigenfrequencies are distributed in pairs symmetric with respect to the imaginary axis, as seen in Fig. 1.
The derivative of $\varepsilon(k,\omega)$ with respect to $\omega$ is given by
$\partial_\omega\varepsilon(k,\omega)=\frac{1}{kv_T}\frac{1}{k^2\lambda_D^2}\left[Z(\zeta)+\zeta Z'(\zeta)\right],$
where $Z'(\zeta)=-2[1+\zeta Z(\zeta)]$ is used. Then, at the zeros $\omega_\mu$ of the dielectric function, we find
$\partial_\omega\varepsilon(k,\omega_\mu)=\frac{\omega_\mu^2/\omega_p^2-(1+k^2\lambda_D^2)}{k^2\lambda_D^2\,\omega_\mu}.$
We now impose the initial condition given by $f_1(k,v,t{=}0)=\alpha f_0(v)$, where $f_0(v)$ is the Maxwellian equilibrium distribution function given above and $\alpha$ is a small constant. Then, from Poisson's equation, we have $E(k,t{=}0)=i\,4\pi en_0\alpha/k$. Using the initial condition and the definition of the plasma dispersion function, we obtain
$F(k,\omega)\equiv\int_{-\infty}^{+\infty}dv\,\frac{f_1(k,v,t{=}0)}{v-\omega/k}=\alpha\frac{n_0}{v_T}Z\!\left(\frac{\omega}{kv_T}\right),$
$F(k,kv)\equiv\int_{-\infty}^{+\infty}dv'\,\frac{f_1(k,v',t{=}0)}{v'-v}=\alpha\frac{n_0}{v_T}Z\!\left(\frac{v}{v_T}\right),$
which are used to evaluate $E(k,t)$ and $f_1(k,v,t)$ given above, respectively.
E. Solution of initial value problem
We now substitute the expressions derived above into the inverse Laplace transforms to express the electric field and the perturbed distribution function at time $t>0$ as
$E(k,t)=\frac{4\pi e}{k^2}\int_{L_E}\frac{d\omega}{2\pi}\,e^{-i\omega t}\frac{F(k,\omega)}{\varepsilon(k,\omega)}=\alpha\frac{4\pi en_0}{k}\int_{-\infty}^{+\infty}\frac{d\zeta}{2\pi}\,\frac{Z(\zeta)\,e^{-i\zeta\tau}}{1+\kappa^{-2}[1+\zeta Z(\zeta)]}=-i\frac{4\pi e}{k^2}\sum_\mu e^{-i\omega_\mu t}\frac{F(k,\omega_\mu)}{\partial_\omega\varepsilon(k,\omega_\mu)}=-i\,\alpha\frac{m\omega_p^2}{ek}\,k^2\lambda_D^2(1+k^2\lambda_D^2)\sum_\mu\frac{e^{-i\omega_\mu t}}{1+k^2\lambda_D^2-\omega_\mu^2/\omega_p^2}=-i\,\alpha\frac{m\omega_p^2}{ek}\,k^2\lambda_D^2(1+k^2\lambda_D^2)\sum_{\mu,\ \mathrm{Re}\,\omega_\mu>0}2\,\mathrm{Re}\!\left[\frac{e^{-i\omega_\mu t}}{1+k^2\lambda_D^2-\omega_\mu^2/\omega_p^2}\right]$
and
$f_1(k,v,t)=\int_{L_f}\frac{d\omega}{2\pi}\,\frac{e^{-i\omega t}}{-i(\omega-kv)}\left[f_1(k,v,t{=}0)+\frac{\omega_p^2}{n_0k^2}\frac{F(k,\omega)}{\varepsilon(k,\omega)}\frac{\partial f_0(v)}{\partial v}\right]=\alpha f_0(v)\left[\exp\!\left(-i\frac{v}{v_T}\tau\right)-i\kappa^{-2}\frac{v}{v_T}\int_{-\infty}^{+\infty}\frac{d\zeta}{2\pi}\,\frac{Z(\zeta)\,e^{-i\zeta\tau}}{(\zeta-v/v_T)\{1+\kappa^{-2}[1+\zeta Z(\zeta)]\}}\right]=e^{-ikvt}\left[f_1(k,v,t{=}0)+\frac{\omega_p^2}{n_0k^2}\frac{F(k,kv)}{\varepsilon(k,kv)}\frac{\partial f_0(v)}{\partial v}\right]+\frac{\omega_p^2}{n_0k^2}\frac{\partial f_0(v)}{\partial v}\sum_\mu\frac{e^{-i\omega_\mu t}}{(\omega_\mu-kv)}\frac{F(k,\omega_\mu)}{\partial_\omega\varepsilon(k,\omega_\mu)}=\alpha f_0(v)\left[e^{-ikvt}\left\{1-\frac{(v/v_T)Z(v/v_T)}{k^2\lambda_D^2+[1+(v/v_T)Z(v/v_T)]}\right\}-kv\,(1+k^2\lambda_D^2)\sum_\mu\frac{e^{-i\omega_\mu t}}{(\omega_\mu-kv)(1+k^2\lambda_D^2-\omega_\mu^2/\omega_p^2)}\right],$
respectively, where the normalized time $\tau\equiv kv_Tt$, the notation $\kappa\equiv k\lambda_D$, and the residue theorem are used. Appendix B presents supplementary explanations about the contours used for the integrals in these expressions.
Figure 2 shows E(k, t) calculated as a function of time using Eq. (37) for $kλ_D=1/2$. The summation over μ in Eq. (37) is done by including a finite number of pairs of the complex eigenfrequencies $(\omega_\mu,-\omega_\mu^*)$ from $(\omega_0,-\omega_0^*)$ to the pair with the Nth smallest decay rate $|\gamma_\mu|$. The results obtained for the cases of N=2 and N=100 are shown in Fig. 2, where the result from the contour dynamics (CD) simulation for the case of $α=0.01$ is also plotted by the red dashed curve. The CD method and the simulation conditions used in this paper are explained in Appendix C. A good agreement among the three results is confirmed except near t=0. When the larger number N of the pairs of complex eigenfrequencies is used, the better agreement with the CD simulation result is
confirmed. Near t=0, even in the large N limit, the sum over μ does not converge uniformly, and the number of pairs N required for a good convergence increases to infinity as t approaches 0. In
Sec. III, we derive an approximate expression for E(k, t) near t=0 that includes the effect of an infinite number of complex eigenfrequencies ${ωμ}$.
Similar to the case of E(k, t), the effect of a larger number of the complex eigenfrequencies $ωμ$ on $f1(k,v,t)$ becomes more significant near t=0. To accurately evaluate $f1(k,v,t)$ from Eq. (38)
, we need to include a larger number N of complex eigenfrequency pairs as $t→+0$. However, we should note that the denominator of each term in the sum over μ in the analytical solution for E(k, t)
[Eq. (37)] is a quadratic function of $ωμ$ while in the analytical solution for $f1(k,v,t)$ [Eq. (38)], it is a cubic function of $ωμ$. Therefore, $f1(k,v,t)$ converges faster than E(k, t) as the
number N of included eigenfrequency pairs $\omega_\mu$ increases, and $f_1(k,v,t)$ shows a uniform convergence even near t=0. Figure 3 shows contour plots of $f_1(x,v,t)/(\alpha^2f_0(v))$ in the (x, v)-plane for $\omega_pt=0,0.1,1,10$. For t>0, the plots show $f_1(x,v,t)/(\alpha^2f_0(v))$ calculated using 2 pairs (upper row) and 100 pairs (middle row) of complex eigenfrequencies from Eq. (38) and $f_1(x,v,t)/(\alpha^2f_0(v))$ from CD simulation results for $α=0.01$ (lower row). The results using two pairs of complex eigenfrequencies agree with the results of CD simulation for $\omega_pt\geq1$. The results using 100 pairs show a better agreement with CD simulation results for all t>0.
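The eigenfrequencies used in these sums are the zeros of the dielectric function. As a minimal numerical sketch (our own illustration, not the paper's code), the least-damped zero for the Maxwellian plasma can be found by Newton iteration on the dispersion relation, using the identity $Z(\zeta)=i\sqrt{\pi}\,e^{-\zeta^2}[1+\mathrm{erf}(i\zeta)]$; since this expression is entire, it automatically provides the analytic continuation into $\mathrm{Im}\,\zeta<0$:

```python
import cmath, math

# Hedged numerical sketch (not the paper's code): find the least-damped zero of
#   eps(k, omega) = 1 + (1/kappa^2) [1 + zeta Z(zeta)],  zeta = omega/(k vT),
# with kappa = k lambda_D, via Newton iteration. Z is built from the (entire)
# Taylor series of the error function for complex argument.

def erf_c(z, terms=120):
    s, t = 0j, z
    for n in range(terms):
        s += t / (2 * n + 1)
        t *= -z * z / (n + 1)
    return 2 / math.sqrt(math.pi) * s

def Z(zeta):
    return 1j * math.sqrt(math.pi) * cmath.exp(-zeta ** 2) * (1 + erf_c(1j * zeta))

def landau_root(kappa, zeta0):
    # Newton iteration on D(zeta) = kappa^2 + 1 + zeta Z(zeta);
    # the derivative uses Z'(zeta) = -2 [1 + zeta Z(zeta)].
    z = zeta0
    for _ in range(50):
        Zz = Z(z)
        D = kappa ** 2 + 1 + z * Zz
        Dp = Zz - 2 * z * (1 + z * Zz)
        z -= D / Dp
    return z

kappa = 0.5                                # k lambda_D = 1/2, as in the text
zeta = landau_root(kappa, 2.0 - 0.2j)      # seed near the least-damped root
omega = math.sqrt(2) * kappa * zeta        # omega/omega_p, since k vT = sqrt(2) kappa omega_p
```

For $k\lambda_D=1/2$ this recovers the commonly quoted benchmark $\omega_0/\omega_p\approx1.4156-0.1533i$, consistent with the damped oscillations of E(k, t) in Fig. 2.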
A. Approximate expression for complex eigenfrequencies with large absolute values
We here recall that the complex-valued eigenfrequency $\omega_\mu=kv_T\zeta_\mu$ is determined by the dispersion relation $1+\zeta Z(\zeta)=-\kappa^2$ using the plasma dispersion function $Z(\zeta)$. When $|\zeta|\gg1$ and $|\mathrm{Re}\,\zeta|>|\mathrm{Im}\,\zeta|$, the asymptotic behavior of $Z(\zeta)$ shows that the dominant balance in the dispersion relation is $i\sqrt{\pi}\,\zeta_\mu e^{-\zeta_\mu^2}\simeq-\kappa^2$, from which we obtain
$\zeta_\mu\simeq\frac{i\kappa^2}{\sqrt{\pi}}\,e^{\zeta_\mu^2}=\frac{i\kappa^2}{\sqrt{\pi}}\exp\left[|\zeta_\mu|^2(\cos2\theta_\mu+i\sin2\theta_\mu)\right],$
where $\zeta_\mu=|\zeta_\mu|e^{i\theta_\mu}$ is written in the polar form. Taking the modulus and the phase of both sides, we get
$|\zeta_\mu|\simeq\frac{\kappa^2}{\sqrt{\pi}}\exp\left(|\zeta_\mu|^2\cos2\theta_\mu\right),$
$e^{i\theta_\mu}\simeq i\exp\left(i|\zeta_\mu|^2\sin2\theta_\mu\right),$
and accordingly
$\theta_\mu\simeq\frac{\pi}{2}+|\zeta_\mu|^2\sin2\theta_\mu+2\pi n,$
where $n$ is an integer. Then, we define $\delta_\mu\equiv2\theta_\mu+\pi/2$ to have
$|\zeta_\mu|\simeq\frac{\kappa^2}{\sqrt{\pi}}\exp\left(|\zeta_\mu|^2\sin\delta_\mu\right),$
$\log\left(\frac{\sqrt{\pi}}{\kappa^2}|\zeta_\mu|\right)\simeq|\zeta_\mu|^2\sin\delta_\mu\simeq|\zeta_\mu|^2\delta_\mu,$
which leads to
$\delta_\mu\simeq\frac{1}{|\zeta_\mu|^2}\log\left(\frac{\sqrt{\pi}}{\kappa^2}|\zeta_\mu|\right),$
where $\delta_\mu\ll1$ is assumed. Substituting this into the phase relation above, we have
$-\frac{\pi}{4}+\frac{\delta_\mu}{2}\simeq\frac{\pi}{2}-|\zeta_\mu|^2\cos\delta_\mu+2\pi n\simeq\frac{\pi}{2}-|\zeta_\mu|^2\left(1-\frac{\delta_\mu^2}{2}\right)+2\pi n,$
which yields $|\zeta_\mu|^2\simeq(2n+3/4)\pi$. Thus, the approximate value of $\theta_\mu$ is given by
$\theta_\mu=-\frac{\pi}{4}+\frac{\delta_\mu}{2}\simeq-\frac{\pi}{4}+\frac{1}{2|\zeta_\mu|^2}\log\left(\frac{\sqrt{\pi}}{\kappa^2}|\zeta_\mu|\right),$
where the integer $n$ needs to be large, $n\gg1$, to satisfy the condition $|\zeta_\mu|\gg1$. Recall that when $\omega$ is a complex-valued eigenfrequency, $-\omega^*$ is so, too. Then, the approximate values of complex-valued eigenfrequencies with large absolute values are given by
$\frac{\omega_n}{kv_T}\equiv\zeta_n\equiv\sqrt{\left(2n+\frac{3}{4}\right)\pi}\;e^{i\theta_n},$
$-\frac{\omega_n^*}{kv_T}\equiv-\zeta_n^*\equiv\sqrt{\left(2n+\frac{3}{4}\right)\pi}\;e^{i(\pi-\theta_n)},$
with
$\theta_n\equiv-\frac{\pi}{4}+\frac{1}{2|\zeta_n|^2}\log\left(\frac{\sqrt{\pi}}{\kappa^2}|\zeta_n|\right),$
where the integer $n$ is used instead of $\mu$ to specify different approximate eigenfrequencies. The approximation is good for $n\gg1$.
In Fig. 4, the exact complex eigenfrequencies and the approximate eigenfrequencies for $kλ_D=1/2$ are plotted in the region where $\mathrm{Re}(\omega)>0$. Here, Eqs. (53) and (55) with $n=0,1,2,\ldots$ are used to evaluate the approximate eigenfrequencies. It is observed that the approximate values approach the exact values as the magnitude of the complex eigenfrequencies increases.
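The quality of the approximation can be probed directly from the dominant balance $\zeta\simeq i(\kappa^2/\sqrt{\pi})\,e^{\zeta^2}$ used in its derivation; the following sketch (our own check, not the paper's code) builds the approximate roots and measures the residual of that balance, which decreases as $n$ grows:

```python
import cmath, math

# Illustrative check (not the paper's code): the approximate eigenfrequencies
#   zeta_n = sqrt((2n + 3/4) pi) exp(i theta_n),
#   theta_n = -pi/4 + log(sqrt(pi) |zeta_n| / kappa^2) / (2 |zeta_n|^2),
# should nearly satisfy the large-|zeta| balance zeta = i (kappa^2/sqrt(pi)) e^{zeta^2}.
# The residual |ratio - 1| is dominated by the neglected higher-order phase
# terms and shrinks slowly as n increases.

def approx_zeta(n, kappa):
    mod = math.sqrt((2 * n + 0.75) * math.pi)
    theta = -math.pi / 4 + math.log(math.sqrt(math.pi) * mod / kappa ** 2) / (2 * mod ** 2)
    return mod * cmath.exp(1j * theta)

def residual(n, kappa):
    z = approx_zeta(n, kappa)
    ratio = 1j * (kappa ** 2 / math.sqrt(math.pi)) * cmath.exp(z * z) / z
    return abs(ratio - 1)

kappa = 0.5
r10, r40 = residual(10, kappa), residual(40, kappa)
```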
B. Approximate expression for effects of an infinite number of complex-valued eigenfrequencies
We here consider an infinite series, $\sum_n\mathrm{Re}[e^{-i\zeta_n\tau}/(C-\zeta_n^2)]$, where $C$ is a positive constant and $\tau\geq0$. This form of the infinite series is included in the expression for $E(k,t)$. When $n\gg1$, we use the approximate eigenfrequencies derived above to make the following approximations:
$\mathrm{Re}\left[\frac{e^{-i\zeta_n\tau}}{C-\zeta_n^2}\right]\simeq-\frac{1}{|\zeta_n|^2}\exp\left(|\zeta_n|\tau\sin\theta_n\right)\cos\left(2\theta_n+|\zeta_n|\tau\cos\theta_n\right)\simeq-\frac{1}{|\zeta_n|^2}\exp\left(-\frac{|\zeta_n|\tau}{\sqrt{2}}\right)\sin\left(\delta_n+\frac{|\zeta_n|\tau}{\sqrt{2}}\right),$
where
$|\zeta_n|\equiv\sqrt{\left(2n+\frac{3}{4}\right)\pi},\qquad\delta_n\equiv2\theta_n+\frac{\pi}{2}\equiv\frac{1}{|\zeta_n|^2}\log\left(\frac{\sqrt{\pi}}{\kappa^2}|\zeta_n|\right).$
Since $\delta_n\to0$ as $n\to\infty$, for $\tau>0$ we can take $N$ such that $\delta_n\ll|\zeta_n|\tau/\sqrt{2}$ holds for $n\geq N$. Then, the above expression is rewritten as
$\mathrm{Re}\left[\frac{e^{-i\zeta_n\tau}}{C-\zeta_n^2}\right]\simeq-\frac{1}{|\zeta_n|^2}\exp\left(-\frac{|\zeta_n|\tau}{\sqrt{2}}\right)\sin\left(\frac{|\zeta_n|\tau}{\sqrt{2}}\right)$
for $n\geq N$. We find that, for a large but fixed integer $N'$, the partial sum
$S_N^{N'}(\tau)\equiv\sum_{n=N}^{N'}\mathrm{Re}\left[\frac{e^{-i\zeta_n\tau}}{C-\zeta_n^2}\right]\simeq-\sum_{n=N}^{N'}\frac{1}{|\zeta_n|^2}\exp\left(-\frac{|\zeta_n|\tau}{\sqrt{2}}\right)\sin\left(\frac{|\zeta_n|\tau}{\sqrt{2}}\right)$
does not uniformly converge near $\tau=0$ as $N'\to\infty$. Therefore, the number of the complex-valued eigenfrequencies which need to be included to accurately evaluate $E(k,t)$ diverges to infinity as $t\to+0$. On the other hand, the series expansion representing $f_1(k,v,t)$ uniformly converges, because the extra factor $1/(\omega_\mu-kv)$ included in that series accelerates the convergence. Now, we use $|\zeta_{n+1}|-|\zeta_n|\simeq\pi/|\zeta_n|$ to write
$\sum_{n=N}^{\infty}\mathrm{Re}\left[\frac{e^{-i\zeta_n\tau}}{C-\zeta_n^2}\right]\simeq-\sum_{n=N}^{\infty}\frac{1}{|\zeta_n|^2}\,e^{-|\zeta_n|\tau/\sqrt{2}}\sin\frac{|\zeta_n|\tau}{\sqrt{2}}\simeq-\frac{1}{\pi}\sum_{n=N}^{\infty}\frac{|\zeta_{n+1}|-|\zeta_n|}{|\zeta_n|}\,e^{-|\zeta_n|\tau/\sqrt{2}}\sin\frac{|\zeta_n|\tau}{\sqrt{2}}\simeq-\frac{1}{\pi}\int_{|\zeta_N|}^{+\infty}\frac{d|\zeta|}{|\zeta|}\,e^{-|\zeta|\tau/\sqrt{2}}\sin\frac{|\zeta|\tau}{\sqrt{2}}=-\frac{1}{\pi}\int_{|\zeta_N|\tau/\sqrt{2}}^{+\infty}dx\,\frac{e^{-x}\sin x}{x}=-\frac{1}{4}+\frac{1}{\pi}\int_0^{|\zeta_N|\tau/\sqrt{2}}dx\,\frac{e^{-x}\sin x}{x},$
where $\int_0^{+\infty}dx\,e^{-x}\sin x/x=\pi/4$ is used.
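The value $\int_0^{+\infty}e^{-x}\sin x/x\,dx=\arctan1=\pi/4$ used in the last step can be verified with a few lines of quadrature (a sketch, not the paper's code):

```python
import math

# Numerical check of ∫_0^∞ e^{-x} sin(x)/x dx = arctan(1) = π/4 using composite
# Simpson's rule; the integrand tends to 1 as x → 0 and decays like e^{-x}.

def integrand(x):
    return 1.0 if x == 0.0 else math.exp(-x) * math.sin(x) / x

def simpson(f, a, b, n):  # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

val = simpson(integrand, 0.0, 40.0, 4000)  # truncation error ~ e^{-40}
```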
Now, using the results above, we can express $E(k,t)$ as
$E(k,t)=-i\,\alpha\frac{m\omega_p^2}{ek}(1+\kappa^2)\sum_{\mathrm{Re}\,\zeta_\mu>0}\mathrm{Re}\left[\frac{e^{-i\zeta_\mu\tau}}{(1+\kappa^2)/2\kappa^2-\zeta_\mu^2}\right]\simeq-i\,\alpha\frac{m\omega_p^2}{ek}(1+\kappa^2)\left(\sum_{0<\mathrm{Re}\,\zeta_\mu<\mathrm{Re}\,\zeta_N}\mathrm{Re}\left[\frac{e^{-i\zeta_\mu\tau}}{(1+\kappa^2)/2\kappa^2-\zeta_\mu^2}\right]-\frac{1}{\pi}\int_{|\zeta_N|\tau/\sqrt{2}}^{+\infty}dx\,\frac{e^{-x}\sin x}{x}\right)\equiv E_N(k,t),$
where the integer $N$ needs to be chosen such that $|\zeta_N|\gg1$. From the residue theorem, the sum over all eigenfrequencies of $\mathrm{Re}[1/((1+\kappa^2)/2\kappa^2-\zeta_\mu^2)]$ can be evaluated exactly, where the corresponding integral is done along a circle $C$ in the complex $\zeta$-plane whose radius tends to infinity and whose orientation is taken clockwise by varying the argument of $\zeta$; Appendix B presents supplementary explanations about the contours used for these integrals. Comparing the result of truncating the sum over $\mu$ before taking $\tau\to+0$ with the exact evaluation at $\tau=0$ shows that changing the order of these operations results in different values. Thus, when any large but finite fixed number of eigenfrequencies are included to evaluate $E(k,t)$, the limit value for $\tau\to+0$ does not converge to the correct initial value but to another value. In contrast, the approximate solution $E_N(k,t)$ given above correctly approaches the initial value as $N\to\infty$:
$\lim_{N\to\infty}E_N(k,t{=}0)=-i\,\alpha\frac{m\omega_p^2}{ek}(1+\kappa^2)\left(\sum_{\mathrm{Re}\,\zeta_\mu>0}\mathrm{Re}\left[\frac{1}{(1+\kappa^2)/2\kappa^2-\zeta_\mu^2}\right]-\frac{1}{\pi}\int_0^{+\infty}dx\,\frac{e^{-x}\sin x}{x}\right)=i\,\alpha\frac{m\omega_p^2}{ek}\left[\frac{1}{4}(3-\kappa^2)+\frac{1}{4}(1+\kappa^2)\right]=i\,\alpha\frac{m\omega_p^2}{ek}=i\,\alpha\frac{4\pi n_0e}{k}=E(k,t{=}0).$
Now, we express the approximate solution $E_N(k,t)$ by the sum of two parts as
$E_{<N}(k,t)\equiv-i\,\alpha\frac{m\omega_p^2}{ek}(1+\kappa^2)\sum_{0<\mathrm{Re}\,\zeta_\mu<\mathrm{Re}\,\zeta_N}\mathrm{Re}\left[\frac{e^{-i\zeta_\mu\tau}}{(1+\kappa^2)/2\kappa^2-\zeta_\mu^2}\right],\qquad E_{>N}(k,t)\equiv i\,\alpha\frac{m\omega_p^2}{ek}(1+\kappa^2)\,\frac{1}{\pi}\int_{|\zeta_N|\tau/\sqrt{2}}^{+\infty}dx\,\frac{e^{-x}\sin x}{x}.$
The top panel of Fig. 5 shows $E_{<N}(k,t)$, $E_{>N}(k,t)$, and $E_N(k,t)=E_{<N}(k,t)+E_{>N}(k,t)$ for N=100. There, the approximate solutions are plotted by solid lines and the CD simulation result by a red dashed line. Since the effects of complex eigenfrequencies with large absolute values included in $E_{>N}(k,t)$ become more significant as $t$ approaches 0, $E_{<N}(k,t)$ deviates from the CD simulation result for normalized times smaller than about 0.2 in the top panel of Fig. 5. The diamonds plotted on the vertical axes of the top and bottom panels in Fig. 5 correspond to the initial value $E(k,t{=}0)$. The approximate solution agrees with the CD simulation result when $E_{>N}(k,t)$ is added to $E_{<N}(k,t)$. The bottom panel shows $E_{<N}(k,t)$, $E_{>N}(k,t)$, and $E_N(k,t)$ for N=1. Even in this case of N=1, the difference between $E_N(k,t)$ and the CD simulation result near t=0 is as small as about 5%.
As described in Appendix D, the Vlasov–Poisson system is symmetric under the time reversal transformation whether the linear approximation is made or not. Especially, under the initial condition
made in this section, the distribution function at t=0 is an even function of v, from which we find that the distribution function is an even function of time t as explained in Appendix D.
Accordingly, $f_1(k,-v,-t)=f_1(k,v,t)$ and $E(k,-t)=E(k,t)$ hold for any t and v. Therefore, E(k, t) and $f_1(k,v,t)$ for t<0 are immediately given from Eqs. (37) and (38) with making use of the time reversal symmetry.
C. Approximate solution in early time
We here use the asymptotic expansion of $Z(\zeta)$ given in Appendix A to expand the integrands of the expressions for $E(k,t)$ and $f_1(k,v,t)$ in inverse powers of $\zeta$. Then, for $-\pi/4<\arg\zeta<5\pi/4$, the factor $1/\{1+\kappa^{-2}[1+\zeta Z(\zeta)]\}$ can be expanded in a series whose coefficients $e_n(\kappa^2)$ are recursively defined from the asymptotic expansion coefficients of $Z(\zeta)$. We now deform the integration path to a circle $C$ in the complex $\zeta$-plane, where the radius of $C$ is taken sufficiently large and the orientation of $C$ is taken clockwise by varying the argument of $\zeta$. Supplementary explanations about this derivation are given in Appendix B. The expression for the electric field is rewritten here for convenience as
$E(k,t)=\alpha\frac{4\pi en_0}{k}\int_C\frac{d\zeta}{2\pi}\,\frac{Z(\zeta)\,e^{-i\zeta\tau}}{1+\kappa^{-2}[1+\zeta Z(\zeta)]},$
into which the series expansion is substituted to derive the Taylor expansion of $E(k,t)$ about $\tau=0$ [Eq. (78)]. Next, the perturbed distribution function is rewritten here as
$f_1(k,v,t)=\alpha f_0(v)\left[\exp\left(-i\frac{v}{v_T}\tau\right)-i\kappa^{-2}\frac{v}{v_T}\int_{-\infty}^{+\infty}\frac{d\zeta}{2\pi}\,\frac{Z(\zeta)\,e^{-i\zeta\tau}}{(\zeta-v/v_T)\{1+\kappa^{-2}[1+\zeta Z(\zeta)]\}}\right].$
Then, the series expansion yields the coefficients $d_n(\kappa^2,v/v_T)$ defined by
$d_n(\kappa^2,v/v_T)\equiv\sum_{j=0}^{\lfloor n/2\rfloor}\left(\frac{v}{v_T}\right)^{n-2j}e_j(\kappa^2),$
where $e_0(\kappa^2)\equiv1$ and $\lfloor x\rfloor$ denotes the greatest integer less than or equal to $x$. From this definition, we have
$d_0(\kappa^2,v/v_T)=1,\quad d_1(\kappa^2,v/v_T)=\frac{v}{v_T},\quad d_2(\kappa^2,v/v_T)=\left(\frac{v}{v_T}\right)^2+e_1(\kappa^2),\quad d_3(\kappa^2,v/v_T)=\left(\frac{v}{v_T}\right)^3+\frac{v}{v_T}\,e_1(\kappa^2),\quad d_4(\kappa^2,v/v_T)=\left(\frac{v}{v_T}\right)^4+\left(\frac{v}{v_T}\right)^2e_1(\kappa^2)+e_2(\kappa^2),\quad d_5(\kappa^2,v/v_T)=\left(\frac{v}{v_T}\right)^5+\left(\frac{v}{v_T}\right)^3e_1(\kappa^2)+\frac{v}{v_T}\,e_2(\kappa^2),\quad\cdots.$
The velocity-space integral is then evaluated as
$\int_{-\infty}^{+\infty}\frac{d\zeta}{2\pi}\,\frac{Z(\zeta)\,e^{-i\zeta\tau}}{(\zeta-v/v_T)\{1+\kappa^{-2}[1+\zeta Z(\zeta)]\}}=-\int_C\frac{d\zeta}{2\pi}\,\frac{e^{-i\zeta\tau}}{\zeta^2}\left[1+\sum_{n=1}^{N}d_n(\kappa^2,v/v_T)\,\zeta^{-n}+O(\zeta^{-N-1})\right]=i\sum_{n=0}^{N}\frac{(-i\tau)^{n+1}}{(n+1)!}\,d_n(\kappa^2,v/v_T)+O(\tau^{N+2}).$
Combining these results, the Taylor expansion of $f_1(k,v,t)$ about $\tau=0$ [Eq. (85)] is derived.
We now compare the analytical solutions obtained by the series expansions in Eqs. (78) and (85) with the CD simulation results for $α=0.01$. Figure 6 shows the analytical solutions of Eq. (78) for
$kλ_D=1/2$ including terms up to orders of $\tau^2$, $\tau^6$, and $\tau^{12}$. We can confirm that the discrepancy from the simulation results decreases as the number of included terms increases. The calculation up to $\tau^6$ fits well with the CD simulation results when $\omega_pt<2$ (where $\tau\equiv kv_Tt=\sqrt{2}\,(kλ_D)\,\omega_pt$). Figure 7 shows the plot of $f_1(x,v,t)/f_0(v)$ for $kλ_D=1/2$ calculated using
Eq. (85) with terms up to the order of $τ13$ included. It shows a good agreement with the CD simulation results shown in the bottom row of Fig. 3 for $ωpt≤1$.
A. Spatially averaged distribution function
In this subsection, formulas that hold rigorously for the spatially averaged distribution function and its associated entropy are derived without using linear approximations based on small
perturbation amplitudes. The distribution function $f(x,v,t)$ is a periodic function of $x$, and the period length is given by $2\pi/k$. We use $\langle\cdots\rangle\equiv(k/2\pi)\int_0^{2\pi/k}dx\,(\cdots)$ to denote the average with respect to $x$. The Vlasov equation is averaged in $x$ to yield
$\frac{\partial}{\partial t}\langle f\rangle(v,t)+\frac{\partial}{\partial v}\left[\langle f\rangle(v,t)\,a(v,t)\right]=0,$
where $\langle f\rangle(v,t)$ is the electron distribution function averaged in $x$, and $a(v,t)$ represents the average acceleration of electrons defined by
$\langle f\rangle(v,t)\,a(v,t)\equiv-\frac{e}{m}\left\langle E(x,t)\,f(x,v,t)\right\rangle.$
From the $x$-averaged Vlasov equation, we have
$\frac{\partial}{\partial t}\left[\langle f\rangle(v,t)\,\frac{1}{2}mv^2\right]+\frac{\partial}{\partial v}\left[\langle f\rangle(v,t)\,\frac{1}{2}mv^2\,a(v,t)\right]=\langle f\rangle(v,t)\,m\,v\,a(v,t)$
and accordingly
$\frac{d}{dt}\int_{-\infty}^{+\infty}dv\,\langle f\rangle(v,t)\,\frac{1}{2}mv^2=\int_{-\infty}^{+\infty}dv\,\langle f\rangle(v,t)\,m\,v\,a(v,t),$
where $m\,v\,a(v,t)$ represents the $x$-averaged rate of change of the kinetic energy of an electron with velocity $v$ at time $t$ caused by the electric field. The total energy conservation of the Vlasov–Poisson system is written as
$\frac{d}{dt}\left[\int_{-\infty}^{+\infty}dv\,\langle f\rangle(v,t)\,\frac{1}{2}mv^2+\frac{1}{8\pi}\langle E^2\rangle\right]=0.$
The Gibbs entropy per unit length in the $x$-direction is defined as a functional of the distribution function $f$ by
$S[f]\equiv-\left\langle\int_{-\infty}^{+\infty}dv\,f(x,v,t)\,\log f(x,v,t)\right\rangle,$
which is found to be an invariant, $dS[f]/dt=0$. We now use the $x$-averaged distribution function $\langle f\rangle(v,t)$ to define the entropy
$S[\langle f\rangle]\equiv-\int_{-\infty}^{+\infty}dv\,\langle f\rangle(v,t)\,\log\langle f\rangle(v,t),$
which is not an invariant but a function of time $t$. From the viewpoint of information theory, the entropy $S[\langle f\rangle]$ is given from the average of
$S(v,t)\equiv-\log\langle f\rangle(v,t)=-\log\left[\langle f\rangle(v,t)\,(dv/n_0)\right]+\log(dv/n_0),$
which represents the Shannon information content (or self-entropy) plus an additional constant given by $\log(dv/n_0)$ for the probability $\langle f\rangle(v,t)\,dv/n_0$ of finding the electron velocity in the interval $[v,v+dv]$, where $dv$ is regarded as an infinitesimal positive constant. Supplementary explanations on information entropies in the Vlasov–Poisson system are presented in Appendix E. We now consider $S(v,t)$ as the information entropy (except for an additive constant) of the electron with the velocity $v$ at time $t$. From the $x$-averaged Vlasov equation, we obtain
$\left(\frac{\partial}{\partial t}+a(v,t)\frac{\partial}{\partial v}\right)S(v,t)=-\left(\frac{\partial}{\partial t}+a(v,t)\frac{\partial}{\partial v}\right)\log\langle f\rangle(v,t)=\frac{\partial a(v,t)}{\partial v}.$
Here, we define $u(v_0,t)$ that satisfies the differential equation $\partial u(v_0,t)/\partial t=a(u(v_0,t),t)$ with the initial condition $u(v_0,0)=v_0$. Thus, $u(v_0,t)$ represents the velocity at time $t$ of the electron which has the initial velocity $v_0$ and the history of acceleration $a$. The interval $[v_0,v_0+dv_0]$ in the $v$-space evolves to the interval $[u(v_0,t),u(v_0,t)+du(v_0,t)]$ at time $t$ when the velocities of the electrons in the interval follow $u$. We note that the number (or probability) of the electrons found in the interval is invariant in time,
$\langle f\rangle(u(v_0,t),t)\,du(v_0,t)=\langle f\rangle(v_0,0)\,dv_0.$
Using the relations above, we find
$\frac{\partial}{\partial t}S(u(v_0,t),t)=-\left[\left(\frac{\partial}{\partial t}+a(v,t)\frac{\partial}{\partial v}\right)\log\langle f\rangle(v,t)\right]_{v=u(v_0,t)}=\left[\frac{\partial a(v,t)}{\partial v}\right]_{v=u(v_0,t)},$
which implies that $\partial a/\partial v$ represents the rate of change in the information entropy $S(v,t)\equiv-\log\langle f\rangle(v,t)$ of the electron with the velocity $v$ at time $t$ along the trajectory $v=u(v_0,t)$ in the $v$-space. Then, the increase in $S$ along the trajectory during the time interval $[0,t]$ is given by
$\Delta S(u(v_0,t),t)\equiv\int_0^tdt'\left[\frac{\partial a(v,t')}{\partial v}\right]_{v=u(v_0,t')},$
from which we obtain
$\langle f\rangle(u(v_0,t),t)=\langle f\rangle(v_0,0)\,\exp\left[-\Delta S(u(v_0,t),t)\right]=\langle f\rangle(v_0,0)\,\exp\left(-\int_0^tdt'\left[\frac{\partial a(v,t')}{\partial v}\right]_{v=u(v_0,t')}\right).$
We also find that the rate of change in the entropy $S[\langle f\rangle]$ is given by
$\frac{d}{dt}S[\langle f\rangle]=\frac{d}{dt}\int_{-\infty}^{+\infty}dv\,\langle f\rangle(v,t)\,S(v,t)=\int_{-\infty}^{+\infty}dv\,\langle f\rangle(v,t)\,\frac{\partial a(v,t)}{\partial v}.$
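This entropy balance can be checked numerically for a toy example (our own sketch with a hypothetical acceleration profile $a(v)$, not the paper's code): advance the $x$-averaged kinetic equation $\partial\langle f\rangle/\partial t=-\partial(\langle f\rangle a)/\partial v$ by a small symmetric time step and compare the entropy change with $\int\langle f\rangle\,\partial a/\partial v\,dv$:

```python
import math

# Toy verification of d/dt S[<f>] = ∫ <f> ∂a/∂v dv for the advection equation
# ∂<f>/∂t + ∂(<f> a)/∂v = 0, using a Maxwellian <f> (n0 = vT = 1) and a
# hypothetical smooth acceleration a(v) chosen purely for illustration.

n, vmax = 2400, 6.0
dv = 2 * vmax / n
v = [-vmax + (j + 0.5) * dv for j in range(n)]
f = [math.exp(-vj * vj) / math.sqrt(math.pi) for vj in v]
a = [0.1 * math.sin(vj) * math.exp(-vj * vj / 4) for vj in v]       # hypothetical a(v)
ap = [0.1 * (math.cos(vj) - 0.5 * vj * math.sin(vj)) * math.exp(-vj * vj / 4)
      for vj in v]                                                  # exact da/dv

h = [fj * aj for fj, aj in zip(f, a)]
# right-hand side R = -∂(f a)/∂v by centered differences (endpoints held fixed)
R = [0.0] + [-(h[j + 1] - h[j - 1]) / (2 * dv) for j in range(1, n - 1)] + [0.0]

def S(g):  # discrete entropy -∫ g log g dv
    return -sum(gj * math.log(gj) for gj in g if gj > 0) * dv

dt = 1e-5
fp = [fj + dt * rj for fj, rj in zip(f, R)]
fm = [fj - dt * rj for fj, rj in zip(f, R)]
lhs = (S(fp) - S(fm)) / (2 * dt)                       # dS/dt from evolved f
rhs = sum(fj * apj for fj, apj in zip(f, ap)) * dv     # ∫ f ∂a/∂v dv
```

The two sides agree to the accuracy of the grid and time-step discretization, illustrating that $S[\langle f\rangle]$ changes only through the velocity dependence of the average acceleration.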
Another formula is obtained as
$\frac{\partial}{\partial t}\log\langle f\rangle(u(v_0,t),0)=\left[a(v,t)\frac{\partial}{\partial v}\log\langle f\rangle(v,0)\right]_{v=u(v_0,t)},$
which is integrated in time to derive
$\log\left[\frac{\langle f\rangle(u(v_0,t),0)}{\langle f\rangle(v_0,0)}\right]=\int_0^tdt'\left[a(v,t')\frac{\partial}{\partial v}\log\langle f\rangle(v,0)\right]_{v=u(v_0,t')}$
and
$\langle f\rangle(u(v_0,t),0)=\langle f\rangle(v_0,0)\,\exp\left(\int_0^tdt'\left[a(v,t')\frac{\partial}{\partial v}\log\langle f\rangle(v,0)\right]_{v=u(v_0,t')}\right).$
Then, combining the expressions above, we obtain
$\langle f\rangle(u(v_0,t),t)=\langle f\rangle(u(v_0,t),0)\,\exp\Omega_t(v_0),$
where the function $\Omega_t(v_0)$ is defined by
$\Omega_t(v_0)\equiv\int_0^tdt'\,\Omega(u(v_0,t'),t'),$
$\Omega(v,t)\equiv-a(v,t)\frac{\partial}{\partial v}\log\langle f\rangle(v,0)-\frac{\partial a(v,t)}{\partial v}.$
Now, we consider the case in which the initial $x$-averaged distribution function $\langle f\rangle(v,0)$ is given by the Maxwellian equilibrium distribution function $f_0(v)$. Then, we have $\partial\log f_0(v)/\partial v=-mv/T$, which is substituted into the expressions above to obtain
$f_0(u(v_0,t))=f_0(v_0)\,\exp\left(-\frac{\Delta E(u(v_0,t),t)}{T}\right),$
where
$\Delta E(u(v_0,t),t)\equiv m\int_0^tdt'\,u(v_0,t')\,a(u(v_0,t'),t')$
represents the change in the kinetic energy of the electron with the initial velocity $v_0$ caused by the acceleration due to the electric field during the time interval $[0,t]$. Then, we can rewrite $\Omega(v,t)$ and $\langle f\rangle(u(v_0,t),t)$ as
$\Omega(v,t)=\frac{m}{T}\,v\,a(v,t)-\frac{\partial a(v,t)}{\partial v},$
$\langle f\rangle(u(v_0,t),t)=f_0(u(v_0,t))\,\exp\Omega_t(v_0)=f_0(u(v_0,t))\,\exp\left[\frac{\Delta E(u(v_0,t),t)}{T}-\Delta S(u(v_0,t),t)\right].$
Here, we should note that $Ωt(v0)$ defined in Eq. (116) corresponds to the dissipation function employed by Evans and Searles to present their fluctuation theorem.^26,27 The fluctuation theorem by
Evans and Searles is based on the time-reversible Liouville equation and gives the formula for the ratio of the probabilities that the dissipation function takes the positive and negative values with
the same absolute value. The Vlasov equation differs from the Liouville equation treated by Evans and Searles in that it contains the electron's acceleration term determined from the distribution
function through Poisson's equation. Therefore, the fluctuation theorem cannot be directly applied to the function $Ωt(v0)$ in our case. However, as explained in Appendix E, we can show the
non-negativity of the expected value of $Ωt(v0)$, which leads to the inequality in the form of the second law of thermodynamics.
B. Analysis to second order in perturbation amplitude
In this subsection, we investigate the time evolution of the $x$-averaged distribution function considered in Sec. IV A up to the second order in the perturbation amplitude based on the results of the linear analysis in Sec. II. Recall that the initial condition for the distribution function is given by
$f(x,v,t{=}0)=f_0(v)+\mathrm{Re}[f_1(k,v,t{=}0)\exp(ikx)]=f_0(v)\,[1+\alpha\cos(kx)],$
where $f_0(v)$ is the Maxwellian and $f_1(k,v,t{=}0)=\alpha f_0(v)$ is used. We have already derived the linear solution of the perturbed distribution function $f_1(x,v,t)=\mathrm{Re}[f_1(k,v,t)\exp(ikx)]$ for $t>0$, from which the electric field $E(x,t)=\mathrm{Re}[E(k,t)\exp(ikx)]$ is obtained. Hereafter, we represent the order of magnitude of the small perturbation amplitude by $\alpha$. Then, the linear solutions $f_1(x,v,t)=\mathrm{Re}[f_1(k,v,t)\exp(ikx)]=O(\alpha)$ and $E(x,t)=\mathrm{Re}[E(k,t)\exp(ikx)]=O(\alpha)$ are used to derive the average acceleration $a(v,t)=O(\alpha^2)$, where terms of higher order than $\alpha^2$ are neglected. Now, we can write $\langle f\rangle(v,t)=f_0(v)+f_2(v,t)$, where we keep terms only up to $O(\alpha^2)$ and $f_2(v,t)$ represents the $O(\alpha^2)$ part given by
$f_2(v,t)=\int_0^tdt'\,\frac{\partial}{\partial v}\left[\frac{e}{m}\langle E(x,t')\,f(x,v,t')\rangle\right]=\frac{e}{2m}\int_0^tdt'\,\mathrm{Re}\left[E^*(k,t')\,\frac{\partial f_1(k,v,t')}{\partial v}\right].$
We can also write the changes in the kinetic energy and in the information entropy to $O(\alpha^2)$ as
$\Delta E(v,t)\equiv\int_0^tdt'\,m\,v\,a(v,t')=-\frac{e}{2}\,v\int_0^tdt'\,\mathrm{Re}\left[E^*(k,t')\,\frac{f_1(k,v,t')}{f_0(v)}\right],$
$\Delta S(v,t)\equiv S(v,t)-S(v,0)=\int_0^tdt'\,\frac{\partial a(v,t')}{\partial v}=-\frac{e}{2m}\int_0^tdt'\,\mathrm{Re}\left[E^*(k,t')\,\frac{\partial}{\partial v}\left(\frac{f_1(k,v,t')}{f_0(v)}\right)\right].$
Since $a(v,t)=O(\alpha^2)$, the variation in the electron velocity during the time interval $[0,t]$ is of $O(\alpha^2)$. Neglecting $O(\alpha^4)$ effects, we can regard $\Delta E(v,t)$ as the change in the kinetic energy of the electron with the initial velocity $v$ caused by the acceleration due to the electric field during the time interval $[0,t]$. Recall that $\partial a/\partial v$ represents the rate of change in the information content $S(v,t)\equiv-\log\langle f\rangle(v,t)$ associated with the distribution of electrons in the velocity space. Then, $\Delta S(v,t)$ represents the change in the information content (or the information entropy) of the electron with the initial velocity $v$ during the time interval $[0,t]$.
From the results above, we find
$\int_{-\infty}^{+\infty}dv\,f_2(v,t)=\int_{-\infty}^{+\infty}dv\,f_0(v)\left[\frac{\Delta E(v,t)}{T}-\Delta S(v,t)\right]=0$
and
$\frac{\Delta E(t)}{T}\equiv\frac{1}{T}\int_{-\infty}^{+\infty}dv\,f_0(v)\,\Delta E(v,t)=\int_{-\infty}^{+\infty}dv\,f_0(v)\,\Delta S(v,t)\equiv\Delta S[\langle f\rangle](t),$
which implies that, to the second order in $\alpha$, the change in the information (or Gibbs) entropy $S[\langle f\rangle]$ is equal to the inverse temperature $1/T$ multiplied by the energy transferred from the electric field energy to the kinetic energy during the time interval $[0,t]$.
C. Calculation of $ΔE(v,t), ΔS(v,t)$, and $f2(v,t)$
Neglecting $O(\alpha^4)$ effects, the change $\Delta E(v,t)$ in the kinetic energy and the change $\Delta S(v,t)$ in the information content (or the information entropy) of the electron with the initial velocity $v$ during the time interval $[0,t]$ are given by the formulas above. The electric field $E(k,t)$ and the perturbed distribution function $f_1(k,v,t)$ appearing in those formulas can be evaluated using Eqs. (37) and (38), respectively, in which the series summations involving an infinite number of the complex-valued eigenfrequencies $\omega_\mu$ converge more quickly for larger time $t$. On the other hand, the Taylor expansions of $E(k,t)$ and $f_1(k,v,t)$ are given by Eqs. (78) and (85), respectively, which converge more quickly for smaller time $t$. Therefore, to evaluate $\Delta E(v,t)$ and $\Delta S(v,t)$, the expressions of $E(k,t)$ and $f_1(k,v,t)$ in Eqs. (78) and (85) and those in Eqs. (37) and (38) are separately used for smaller and larger times, respectively, and truncation to finite terms is made in these expressions so that the time integrals can be performed not numerically but analytically, because the time dependence of the integrands is given in the form of a summation of exponential functions multiplied by polynomials of time. Under the conditions of the examples shown later, good convergence is verified when including a sufficient number of terms in the integrands in Eqs. (78) and (85) and using two pairs of the complex frequencies for Eqs. (37) and (38) to obtain $\Delta E(v,t)$ and $\Delta S(v,t)$. Once $\Delta E(v,t)$ and $\Delta S(v,t)$ are obtained following the procedures described above, we can use the relation $f_2(v,t)=f_0(v)[\Delta E(v,t)/T-\Delta S(v,t)]$ to calculate $f_2(v,t)$.
Figure 8 shows contours of $ΔE(v,t)/T$ and $ΔS(v,t)$ on the (v, t) plane, obtained from the analytical solution using the aforementioned procedure for $kλD=1/2$. In most of the (v, t) plane, we find
$ΔE(v,t)>0$, which indicates the increase in the electron kinetic energy. Here, the resonance velocity is given by $v_{\mathrm{res}}\equiv\mathrm{Re}(\omega_0)/k=2.831\,v_t$. As time progresses, $ΔE(v,t)$ becomes more concentrated around the resonance velocities $v=\pm v_{\mathrm{res}}$, while $ΔS(v,t)$ clearly shows the positive maximum (negative minimum) at a slightly smaller (larger) absolute value $|v|$ than $v_{\mathrm{res}}$.
The top and bottom panels of Fig. 9 respectively show the distributions of $f_2(v,t)/(\alpha^2n_0v_t^{-3})$ and $f_2(v,t)/(\alpha^2f_0(v))$ in the (v, t) plane, calculated from the difference between $ΔE(v,t)/T$ and $ΔS(v,t)$ using Eqs. (129) and (130). As time progresses, the distribution of $f_2(v,t)$ spreads from around v=0 toward $v_{\mathrm{res}}$, while such spreading is not clearly seen in the distribution of $f_2(v,t)/f_0(v)$. Figure 10 shows $f_2(v,t)/(\alpha^2n_0v_t^{-3})$ and $f_2(v,t)/(\alpha^2f_0(v))$ as functions of v, obtained from the analytical solution for $\omega_pt=0.1,0.5,1,5$, and 10. We see that $f_2(v,t)$ oscillates along the v direction and the number of oscillations increases with increasing time. Positive peaks and negative troughs of $f_2(v,t)/f_0(v)$ can be seen around the resonant velocities $v=\pm v_{\mathrm{res}}$. The red dots
represent the results of CD simulations for $α=0.1$, which agree well with the analytical solution. However, for $ωpt≥5$, a significant discrepancy between the CD simulation results and the
analytical solution appears near v=0. This discrepancy is attributed to the fact that the distribution function treated in the CD simulations is flat between a finite number of contour lines. This
results in an underestimation of the amplitude of the perturbed distribution function $f1(k,v,t)$ driven by $∂f0(v)/∂v$ which is set to zero in the neighborhood of v=0 for the CD simulations. Then,
the absolute value of $f2(v,t)$ around v=0 is also underestimated. Indeed, it is confirmed that increasing the number of contour lines and narrowing the intervals between them in CD simulations
reduces the difference between the CD simulation results and the analytical solution near v=0.
Profiles of $f2(v,t)/(α2n0vt−3)$ and $f2(v,t)/(α2f0(v))$ obtained by the analytical formulas for $ωpt=10$, 20, 50, and 100 are shown by curves in the top and bottom panels of Fig. 11, respectively.
The red curves in Fig. 11 represent the profiles in the long-time limit. As $t→+∞, f2(v,t)$ and $f2(v,t)/f0(v)$ converge to the structures which have the positive maxima (negative minima) at $|v|$
slightly larger (smaller) than $vres=2.831 vt$. This is consistent with the well-known picture of the increase and decrease in particles' number around the resonant velocity in the Landau damping process.
For an arbitrary function $A(v)$, the change in its average value during the time interval from 0 to $t$ is evaluated as
$ΔA¯(t)=1n0∫−∞+∞dv f2(v,t)A(v)=1n0∫−∞+∞dv f0(v) Ωt(v)A(v).$
By setting $A(v)=mv2/2$, it can be shown that the above equation is equal to the increase in kinetic energy $ΔE(t)$ over the time interval from 0 to $t$. Furthermore, the change in the kurtosis of the velocity as a random variable at time $t$ is given up to the second order in $α$ by
$ΔK(t)=Δv4¯(t)[v2¯(0)]2−6Δv2¯(t)v2¯(0)=m2Δv4¯(t)T2−6m Δv2¯(t)T,$
where both $Δv2¯(t)$ and $Δv4¯(t)$ are proportional to $α2$.
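As a quick numerical sanity check of the kurtosis formula above, the sketch below (in Python, assuming normalized units $m=T=n0=1$; the perturbation used here is a toy uniform heating, not the actual quasilinear $f2$) verifies that a pure temperature increase leaves $ΔK$ zero to first order:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids depending on np.trapz availability)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Normalized units: m = T = n0 = 1 (assumption for illustration).
v = np.linspace(-8.0, 8.0, 4001)

def maxwellian(v, T):
    return np.exp(-v**2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)

eps = 1e-3                                 # small heating, stands in for an O(alpha^2) change
f0 = maxwellian(v, 1.0)
f2 = maxwellian(v, 1.0 + eps) - f0         # toy perturbation (not the Landau-damping f2)

dv2 = trapz(f2 * v**2, v)                  # Delta v2bar
dv4 = trapz(f2 * v**4, v)                  # Delta v4bar

# Kurtosis change: Delta K = m^2 dv4 / T^2 - 6 m dv2 / T
dK = dv4 - 6.0 * dv2
print(dK)   # O(eps^2): uniform heating does not change the kurtosis to first order
```

The cancellation reflects that heating rescales the Gaussian without changing its shape.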
Figure 12 shows the time evolutions of the kinetic energy increase $ΔE(t)$ and the kurtosis change $ΔK(t)$. It is seen that $ΔE(t)$ converges to its limit value as $t→+∞$. The electric field is a standing wave of the form $sin kx$, and its amplitude becomes zero twice during one period of the plasma oscillation. Considering that the sum of the kinetic energy of electrons and the energy of the electric field remains constant, it can be observed from the figure that $ΔE(t)$ oscillates, reaching its maximum value twice during each period, and converges as $t→+∞$. As seen from Figs. 10 and 11, $ΔK(t)$ is negative at early times, which reflects the fact that $f2(v,t)$ is more localized near $v=0$ for small $t$, as seen in Figs. 9 and 10. As $t$ increases, peaks of $f2(v,t)$ near the resonant velocities $v=±vres$ become more prominent, and $ΔK(t)$ takes positive values and increases, which indicates a relative increase in the number of electrons with larger $|v|$ compared to a Gaussian distribution. We find that $ΔK(t)$ converges to a finite positive value as $t→+∞$.
In this section, we consider the time evolution of information entropies in the Vlasov–Poisson system. For that purpose, we use the solution of the initial value problem obtained in the preceding sections, which gives the perturbed density and the electric field of the forms
$n1(x,t)=n1(t) cos(k x), E(x,t)=E1(t) sin(k x).$
They satisfy the initial conditions,
$n1(t=0)=α n0, E1(t=0)=−α4πen0k,$
and approach zero as $t→+∞$. The electric field energy density at time $t$ is given by $⟨E2⟩(t)/8π$. Also, from the energy conservation law, we obtain the relation in which $ΔE(t)$ represents the change in the kinetic energy of the electron during the time interval from 0 to $t$.
We regard the electron's position and velocity as random variables which are represented by $X$ and $V$, respectively, as explained in Appendix E. The joint probability density function of $X$ and $V$ is denoted by $p(x,v,t)$, which is integrated with respect to $v$ or $x$ to give the marginal probability density functions $pX(x,t)$ and $pV(v,t)$, respectively. The entropy $Sp(X)$ is derived from the electron's position distribution function as
$Sp(X)≡S[pX]≡−∫−L/2+L/2dx pX(x,t) log pX(x,t)=−∫−L/2+L/2dxL (n(x,t)n0) log (n(x,t)n0)+log(L)≃ log(L)−12∫−L/2+L/2dxL(n1(x,t)n0)2.$
In the last line of the above equation, terms of higher orders than $α2$ are neglected. The increase in $Sp(X)$ during the time interval from 0 to $t$ is given by $ΔS[pX](t)=S[pX](t)−S[pX](0)$, and as $t→+∞$, it approaches $α2/4$.
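The quadratic approximation in the last line above can be checked numerically. The sketch below (Python, with illustrative values $L=2π$, $k=1$, $α=0.05$) compares the exact position entropy for $pX=[1+α cos(kx)]/L$ with $log(L)−α2/4$:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

L = 2.0 * np.pi          # illustrative box length
alpha = 0.05             # small perturbation amplitude
x = np.linspace(-L / 2.0, L / 2.0, 20001)

pX = (1.0 + alpha * np.cos(x)) / L          # n1/n0 = alpha*cos(kx) with k = 1
S_exact = -trapz(pX * np.log(pX), x)        # differential entropy S[pX]
S_approx = np.log(L) - alpha**2 / 4.0       # log(L) - (1/2)<(n1/n0)^2>

print(S_exact - S_approx)   # difference is O(alpha^4), far below alpha^2/4
```

The residual scales like $α4$, confirming that the neglected terms are indeed higher order.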
The velocity distribution function $⟨f⟩(v,t)$ is used to express the entropy $Sp(V)$ as
$Sp(V)≡S[pV]≡−∫−∞+∞dv pV(v,t) log pV(v,t)=−1n0∫−∞+∞dv ⟨f⟩(v,t) log ⟨f⟩(v,t)+log(n0).$
The increase in $Sp(V)$ during the time interval from 0 to $t$ is represented by
$ΔS[pV](t)=S[pV](t)−S[pV](0)≃1n0∫−∞+∞dv f2(v,t)mv22T=ΔE(t)T,$
where terms of higher orders than $α2$ are neglected again. Its value in the limit of $t→+∞$ is $ΔSp(V)=ΔE(∞)/T=α2/(4k2λD2)$.
The mutual information of the random variables $X$ and $V$ is defined in Appendix E. Under the initial condition given by Eq. (118), $X$ and $V$ are statistically independent at $t$=0, and accordingly the mutual information vanishes initially. Since $p(x,v,t)$ satisfies the Vlasov equation, the entropy defined by
$Sp(X,V)≡S[p]≡−∫−L/2+L/2dx∫−∞+∞dv p(x,v) log p(x,v)$
is known to be an invariant, and the increase in the mutual information $I(X,V)$ during the time interval from 0 to $t$ is written as $ΔI(X,V)=ΔSp(X)+ΔSp(V)$. In the limit of $t→+∞$, we obtain $ΔI(X,V)=α2/4+α2/(4k2λD2)$. The time evolutions of $ΔSp(X), ΔSp(V)$, and $ΔI(X,V)$ are shown in Fig. 13, where contributions of higher orders than $α2$ are neglected. As functions of time,
$ΔSp(X)≡ΔS[pX], ΔSp(V)≡ΔS[pV]$
have the same form but different magnitudes.
Comparing the velocity distribution functions at times $t$ and 0, we obtain the relative entropy
$S(pV,t||pV,0)=∫−∞+∞dv pV(v,t) log [pV(v,t)pV(v,0)]≃12n0∫−∞+∞dv [f2(v,t)]2f0(v)=12n0∫−∞+∞dv f0(v)[Ωt(v)]2,$
which takes a small value of $O(α4)$. Figure 14 shows the time evolution of $S(pV,t||pV,0)$. As $t$ increases, it grows while showing modulation and approaches the limit value that corresponds to the limit function shown in Fig. 11. As described in Appendix E, the non-negativity of the relative entropy implies that, even in the collisionless Vlasov–Poisson system, the inequality in the form of the second law of thermodynamics holds in the relation between the heat transfer from the Maxwellian velocity distribution to the electric field and the conditional entropy of the electron position variable for a given velocity distribution.
Coulomb collisions are neglected in the Vlasov equation. However, even if the collision frequency is very small but finite, Coulomb collisions eventually relax the distribution function to the Maxwellian
$fM∞=nM∞(m2πTM∞)1/2 exp[−m2TM∞(v−uM∞)2]$
while conserving the total particle number, momentum, and energy. The Maxwellian distribution function $fM∞$ is the equilibrium solution of the Boltzmann equation that is given by including the collision term in the Vlasov equation. When the collision operator acts on the Maxwellian, the collision term vanishes. Then, substituting $fM∞$ into the left-hand side of the Vlasov equation, it also vanishes. It can be proven from the above-mentioned facts that
$nM∞, TM∞$, and $uM∞$ are all independent of $x$ and $t$. Under the initial condition in Eq. (118), we can use the conservation laws for the particle number, energy, and momentum to derive
$nM∞=n0, uM∞=0, n0TM∞2=n0T02+⟨[E(x,0)]2⟩8π.$
Thus, the distribution function in the thermal equilibrium state reached by the collisional relaxation is given by
$fM∞=n0(m2πTM∞)1/2 exp(−mv22TM∞).$
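The temperature shift implied by the conservation law above can be cross-checked against the initial field amplitude $E1(t=0)=−α4πen0/k$: with the standard Debye length $λD2=T/(4πn0e2)$, one finds $ΔTM∞/(2T)=α2/(4k2λD2)$. A short Python sketch with arbitrary illustrative parameter values (Gaussian units; all numbers below are stand-ins, not values from the paper):

```python
import math

# Arbitrary illustrative values (Gaussian units); any positive numbers work.
e, n0, k, T, alpha = 1.7, 2.3, 0.9, 1.4, 0.01

E1 = alpha * 4.0 * math.pi * e * n0 / k       # |E1(t=0)| from the initial condition
E2_avg = E1**2 / 2.0                          # <E(x,0)^2>: spatial average of sin^2 is 1/2
dT = E2_avg / (4.0 * math.pi * n0)            # from n0*dT/2 = <E^2>/(8*pi)

lambda_D2 = T / (4.0 * math.pi * n0 * e**2)   # squared Debye length (standard definition)
lhs = dT / (2.0 * T)                          # = Delta TM_inf / (2T) = Delta E(inf)/T
rhs = alpha**2 / (4.0 * k**2 * lambda_D2)

print(lhs, rhs)   # equal up to floating-point rounding
```

Since the identity holds for any parameter values, the check is independent of the particular numbers chosen.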
Here, ions are treated as a uniform background of positive charge with infinite mass, so no energy exchange between electrons and ions due to collisions occurs. From the equilibrium probability distribution function $pM(x,v)$ and its marginal probability distribution functions $pMX(x)$ and $pMV(v)$, we can define the entropies
$S[pM], S[pMX]$, and $S[pMV]$
in the thermal equilibrium, where $pM(x,v)=pMX(x)pMV(v)$ holds so that the random variables $X$ and $V$ are statistically independent, and accordingly, the mutual information of $X$ and $V$ vanishes. The deviation of the entropy $S[pMX]$ from the initial value $S[pX](t=0)$ is given by
$ΔS[pMX]≡S[pMX]−S[pX](t=0)=⟨[1+α cos(kx)] log[1+α cos(kx)]⟩≃α24.$
We see that $ΔS[pMX]$ and the collisionless limit value of $ΔSp(X)$ agree with each other up to $O(α2)$. Next, the deviation of $S[pMV]$ from its initial value is
$ΔS[pMV]≡S[pMV]−S[pV](t=0)=12(log TM∞−log T)≃ΔTM∞2T=ΔE(∞)T.$
It is also found that $ΔS[pMV]$ and the collisionless limit value of $ΔSp(V)$ agree with each other up to $O(α2)$. The deviation of the entropy $S[pM]$ from its initial value is evaluated as the sum of these two deviations. We note that $S[p]$ is invariant in the collisionless process although it increases by the amount of the mutual information $I(X,V)$ built up in the collisionless process when the system reaches the thermal equilibrium state due to collisions.
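The relation $ΔS[pMV]≃ΔTM∞/(2T)=ΔE(∞)/T$ can be illustrated with a small numerical check (Python; normalized units $m=T=1$ and a hypothetical heating $ΔT$ chosen purely for illustration):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

v = np.linspace(-10.0, 10.0, 40001)

def entropy(T):
    # differential entropy of a Maxwellian with temperature T (m = 1)
    p = np.exp(-v**2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)
    return -trapz(p * np.log(p), v)

T0, dT = 1.0, 1e-3               # illustrative temperatures
dS = entropy(T0 + dT) - entropy(T0)
dE = dT / 2.0                    # kinetic energy increase per particle, <mv^2/2> = T/2

print(dS, dE / T0)               # agree to O(dT^2)
```

The entropy gain of the heated Maxwellian matches the heat divided by temperature to leading order, as in the text.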
In Fig. 15, the magnitudes of the entropies $SP(X), SP(V),SP(X,V)$, and the mutual information I(X, V) are represented by the area inside the corresponding contours for t=0, $t→+∞$ in the collisionless process, and t=+∞ in the collisional process. Here, the entropies $SP(X)≡S[PX], SP(V)≡S[PV]$, and $SP(X,V)≡S[P]$ take non-negative values, and they are related to $Sp(X)≡S[pX],Sp(V)≡S[pV]$, and $Sp(X,V)≡S[p]$ by the relations shown in Eqs. (E15) and (E24). The entropy $SP(X,V)$ does not change in the collisionless process although the Landau damping increases $SP(X)$ and $SP(V)$ by $ΔSP(X)=α2/4$ and $ΔSP(V)=α2/(4k2λD2)$, respectively, as shown in Eqs. (142) and (146), and the mutual information I(X, V) increases by $ΔI(X,V)=ΔSP(X)+ΔSP(V)$. Let us compare the limit state at $t→+∞$ in the collisionless process and the thermal equilibrium state reached by collisions. In the latter state, the values of $SP(X)$ and $SP(V)$ remain the same as in the former up to the $O(α2)$ accuracy. However, in the thermal equilibrium, the mutual information I(X, V) vanishes and the entropy $SP(X,V)$ of the whole system increases by the amount that I(X, V) decreases.
In this paper, the one-dimensional Vlasov–Poisson system describing a plasma consisting of electrons and uniformly distributed ions with infinite mass is considered. Using analytical solutions and
contour dynamics simulations, we elucidate how the information entropies determined from the distribution functions of the electron position and velocity variables evolve in the Landau damping
process. Under the initial condition given by the Maxwellian velocity distribution with the perturbed density distribution in the form of the cosine function, linear and quasilinear analytical
solutions describing the time evolutions of the electric field and the distribution function are obtained and shown to be in good agreement with results from numerical simulations based on contour dynamics.
A novel approximate integral formula including the effect of an infinite number of complex eigenfrequencies to correctly evaluate the electric field is presented. In addition, the linear analytical
solutions for the electric field and the distribution function near the initial time are expressed as series expansions in time and velocity variables. The quasilinear analytical solution describing
the time evolution of the spatially averaged velocity distribution function is obtained, and its validity is confirmed by the contour dynamics simulation results. These analytical expressions of the
linear and quasilinear solutions are useful for verification of the accuracy of simulations of the Vlasov–Poisson system using methods other than the contour dynamics as well. Using the quasilinear
analytical solution, it becomes possible to accurately determine the time evolutions of the electron kinetic energy and the background velocity distribution function associated with the Landau
damping. Furthermore, the time evolutions of the information entropies of the electron position and velocity variables, and the mutual information are determined with an accuracy of the order of the
squared perturbation amplitude $α2$. It is well known that, in a collisionless process, the information entropy determined from the joint probability density distribution function of position and
velocity variables (or the phase-space distribution function) is one of the Casimir invariants. On the other hand, the decrease in the squared mean of spatial density fluctuations increases the
information entropy of the position variable, and the ratio of the increase in the electron kinetic energy to the temperature equals the increase in the information entropy of the velocity variable
to the order of $α2$. The sum of these increases in the information entropies of the position and velocity variables yields the mutual information that is initially zero.
The relative entropy obtained by comparing the velocity distribution at time t with the initial distribution is a positive quantity of order of $α4$. This leads to the fact that, even in the
collisionless process, the inequality in the form of the second law of thermodynamics holds in the relation between the heat transfer from the Maxwellian velocity distribution to the electric field
and the conditional entropy of the electron position variable for a given velocity distribution.
When Coulomb collisions are taken into account, they relax the distribution function at $t→+∞$ in the collisionless process further to the thermal equilibrium state. In this relaxation, the mutual
information of the position and velocity variables decreases to zero, although the information entropies of the position and velocity variables do not change to the order of $α2$. Then, the entropy
determined from the phase-space distribution increases by the amount of the decrease in the mutual information. It indicates the validity of Boltzmann's H-theorem. Future extensions of the present
work include studies on the position dependence of the phase-space distribution function of order $α2$, which is not included in the quasilinear solution, and the analysis of the information
entropies and the mutual information of the position and velocity variables to the order of $α4$.
This work was supported in part by the JSPS Grants-in-Aid for Scientific Research (Grant Nos. 19H01879 and 24K07000) and in part by the NINS program of Promoting Research by Networking among
Institutions (Grant No. 01422301). Simulations in this work were performed on “Plasma Simulator” (NEC SX-Aurora TSUBASA) of NIFS with the support and under the auspices of the NIFS Collaboration
Research program (Grant Nos. NIFS23KIPT009 and NIFS24KISM007).
Conflict of Interest
The authors have no conflicts of interest to disclose.
Author Contributions
K. Maekaku: Data curation (lead); Investigation (lead); Visualization (lead); Software (lead); Methodology (equal); Writing – original draft (equal); Writing – review & editing (lead). H. Sugama:
Conceptualization (lead); Formal analysis (lead); Investigation (supporting); Methodology (equal); Writing – original draft (equal); Writing – review & editing (supporting); Supervision (lead). T.-H.
Watanabe: Methodology (equal); Writing – review & editing (supporting).
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
The plasma dispersion function $Z(ζ)$ is defined by an integral along the real velocity axis in the case of $Im(ζ)>0$. Analytic continuation needs to be done to define $Z(ζ)$ in the case of $Im(ζ)≤0$. The plasma dispersion function is also written in terms of the error function. It satisfies a reflection relation under complex conjugation, where $*$ represents the complex conjugate. The derivative of $Z(ζ)$ with respect to $ζ$ is given by $Z′(ζ)=−2[1+ζZ(ζ)]$. The series expansion of $Z(ζ)$ around $ζ$=0 and the asymptotic expansion of $Z(ζ)$ for $|ζ|→∞$ are also used in this paper.
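For reference, the plasma dispersion function can be evaluated numerically through the Faddeeva function, $Z(ζ)=i\sqrt{π}\,w(ζ)$, using SciPy (assuming SciPy is available). The sketch below checks $Z(0)=i\sqrt{π}$ and the derivative relation $Z′=−2(1+ζZ)$ by a central difference:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def Z(zeta):
    """Plasma dispersion function via Z(zeta) = i*sqrt(pi)*w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

# Z(0) = i*sqrt(pi)
print(Z(0.0))

# Check Z'(zeta) = -2*(1 + zeta*Z(zeta)) at a sample point
zeta, h = 0.7 + 0.3j, 1e-6
num = (Z(zeta + h) - Z(zeta - h)) / (2.0 * h)   # numerical derivative
exact = -2.0 * (1.0 + zeta * Z(zeta))
print(abs(num - exact))                          # small residual
```

The Faddeeva-function route gives the analytic continuation automatically, which is convenient when roots with $Im(ζ)<0$ (Landau damping) are sought.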
This appendix presents supplementary explanations about contours used for the integrals in the complex plane, which appear in Secs. II and III. Figure 16 shows the contours explained in this appendix.
Here, we denote the integrand factor by $A(ζ)$ and note that $A(ζ)$ has no poles in the upper half-plane $Im(ζ)>0$. The function $A(ζ)e−iζτ$ is integrated along the real axis in the complex $ζ$-plane from $−∞$ to $+∞$. We let $LE$ denote the real axis as the contour for integration. We also consider $τ$ to take a positive fixed value. Then, to apply the residue theorem, we need to deform the integration contour from $LE$ to a closed one by adding to $LE$ the semicircle $C−$ in the lower half-plane $Im(ζ)<0$, where the radius is given by $R$ and the orientation of $C−$ is taken clockwise by varying the argument $θ$ from 0 to $−π$. We also choose $R$ such that $A(ζ)$ has no poles on $C−$. The deformation of the contour to $LE+C−$ mentioned above is justified because the part of the contour integral along the semicircle $C−$ in the lower half-plane vanishes in the limit of $R→+∞$ as shown below.
Now, the semicircle $C−$ is divided into the three arcs $CI, CII$, and $CIII$ that correspond to $−π/4≤θ≤0, −3π/4≤θ≤−π/4$, and $−π≤θ≤−3π/4$, respectively. We also denote the semicircle in the upper half-plane $Im(ζ)>0$ by $C+$, where the orientation of $C+$ is taken clockwise by varying the argument $θ$ from $π$ to 0. Then, $C≡C++C−$ represents the circle with the radius $R$ and the clockwise orientation. For $|ζ|=R≫1$, we have
$Z(ζ)∼−ζ−1 for ζ on C+, CI, and CIII, |Z(ζ)|≫1 for ζ on CII,$
from which we obtain
$A(ζ)∼{−ζ−1 for ζ on C+, CI, and CIII, κ2ζ−1 for ζ on CII.$
Then, using $ζ=Reiθ=R(cos θ+i sin θ)$ and $sin θ≤2√2θ/π$ for $−π/4≤θ≤0$, we have
$|∫CIdζ A(ζ)e−iζτ|≤∫CI|dζ| |A(ζ)e−iζτ|=∫−π/40Rdθ |A(Reiθ)|eτR sin θ∼∫−π/40dθ eτR sin θ≤∫−π/40dθ e22τRθ/π=π22τR(1−e−τR/2).$
It can be shown similarly that
$∫CIIIdζ A(ζ)e−iζτ=(∫CIdζ A(ζ)e−iζτ)*.$
Also, using $sin θ<−1/2$ for $−3π/4<θ<−π/4$, we obtain
$|∫CIIdζ A(ζ)e−iζτ|≤∫CII|dζ| |A(ζ)e−iζτ|=∫−3π/4−π/4Rdθ |A(Reiθ)|eτR sin θ∼κ2∫−3π/4−π/4dθ eτR sin θ≤κ2∫−3π/4−π/4dθ e−τR/2=π2κ2e−τR/2.$
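The last bound can be spot-checked numerically. For instance, with the arbitrary value $τR=10$ and $κ=1$, the angular integral over the range of $CII$ indeed sits below $(π/2)e−τR/2$:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

tauR = 10.0                                           # arbitrary positive value of tau*R
theta = np.linspace(-3.0 * np.pi / 4.0, -np.pi / 4.0, 200001)

lhs = trapz(np.exp(tauR * np.sin(theta)), theta)      # integral over the C_II angles
rhs = (np.pi / 2.0) * np.exp(-tauR / 2.0)             # bound using sin(theta) < -1/2

print(lhs, rhs)   # lhs < rhs
```

Since $sin θ≤−√2/2$ on this arc, the actual integral decays even faster than the stated $e−τR/2$ bound.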
We see from the above estimates that, for fixed $τ>0$, the integrals of $A(ζ)e−iζτ$ along $CI, CII$, and $CIII$ vanish in the limit of $R→+∞$. Since $C−=CI+CII+CIII$, we have
$∫−∞+∞dζ A(ζ)e−iζτ=∫LEdζ A(ζ)e−iζτ=∫LE+C−dζ A(ζ)e−iζτ$
in the limit of $R→+∞$. Here, $LE+C−$ is a closed integration contour, to which the residue theorem can be applied. We can easily confirm that this result remains valid even though $LE$ is replaced by a parallel contour whose imaginary part is fixed when taking the large $R$ limit. Therefore, a closed integration contour including such a parallel contour can also be used to apply the residue theorem.
Recalling that $A(ζ)e−iζτ$ has no pole in the upper half-plane $Im(ζ)>0$, we can use Cauchy's integral theorem to replace the contour $LE$ and obtain
$∫−∞+∞dζ A(ζ)e−iζτ=∫C+dζ A(ζ)e−iζτ=∫Cdζ A(ζ)e−iζτ,$
where $C$ is the closed circle with the radius $R$. As shown above, $A(ζ)$ can be asymptotically expanded for $|ζ|=R≫1$. Then, in the same way as in the estimates above, we can show that the integration of the product of $e−iζτ$ and the subleading terms of the expansion vanishes in the large $R$ limit. Therefore, substituting the asymptotic expansion is still valid even though the contour $C$ contains the arc $CII$ where the leading approximation does not hold, and the residue theorem is used to derive the final result in the limit of $R→+∞$.
Up to this point, $τ$ is assumed to be positive. We finally consider the case of $τ$=0, in which the exponential factor disappears and the previous estimate on $C−$ does not hold. Again, using the asymptotic expansion of $A(ζ)$ in the large $R$ limit, we evaluate the integral along $C$, where the orientation of $C$ is taken clockwise. Then, the residue theorem is used to derive the result for $τ$=0, where the summation of the residues about the poles of $A(ζ)$ is also shown.
The Contour Dynamics (CD) method was proposed for solving inviscid and incompressible motions in fluid mechanics in the two-dimensional space.^23 Here, we briefly describe the application of the CD
method to the solution of the one-dimensional Vlasov–Poisson system with the periodic boundary condition (see Ref. 24 for details).
Normalizing the time $t$, spatial variable $x$, velocity $v$, distribution function $f$, and electrostatic potential $φ$ appropriately, the Vlasov–Poisson equations can be written in dimensionless form. The acceleration field entering the Vlasov equation is given in terms of the electrostatic potential. In the CD method, contours of the distribution function are considered in the $(x,v)$-plane. Each point $(x,v)$ on a contour moves according to the equations of motion of an electron. The solution of the Vlasov equation is expressed as a piece-wise constant distribution function,
$fpw(x,v,t)=∑m=1NmaxΔfm I[(x,v)∈Sm(t)],$
where $I[(x,v)∈Sm(t)]=1$ if $(x,v)∈Sm(t)$ is true and $I[(x,v)∈Sm(t)]=0$ otherwise. Here, $Sm(t)$ is the internal region of the $m$th contour in the phase $(x,v)$ space at time $t$, and $Δfm$ denotes the jump of the distribution function across the contour. The electrostatic potential can be calculated from the piece-wise constant distribution function. Here, the Green function is given as the solution of the one-dimensional Poisson equation with the periodic boundary condition, with $L$ being the period length in the $x$ direction. The acceleration $a(x)$ of each particle is obtained using the CD representation as a sum of integrals along the contours. By substituting this into the equations of motion and integrating in time, the time evolutions of the contours are obtained, and the distribution function at each time $t$ is determined from Eq. (C4). In practice, when performing numerical simulations, the contours are divided into a finite number of nodes $(xi,vi)$. Between the nodes, the contour integral is approximated by the integral along line segments, and the numerical solution of the equations of motion for the finite number of nodes $(xi,vi)$ is obtained.
In CD simulations, without using linear approximations, the solution of the nonlinear Vlasov equation is obtained in the form of the piece-wise constant distribution function as in Eq. (C4). We use
the piece-wise constant distribution function $fMpw(v)$ corresponding to the Maxwellian equilibrium velocity distribution, for which the outermost contours are placed at $v=±5vt$, and $fMpw(v)=0$
for $|v|>5vt$. For all CD simulations in this study, we follow the same procedure as in Ref. 24 to give $fpw(x,v,t=0)$ that corresponds to the initial condition in Eq. (118). The number of contours
used is 200, with each contour represented by 100 nodes. At time t, the value of the distribution function $f(x,v,t)$ at any point (x, v) in the phase space is obtained by the linear Hermite interpolation from the values of the distribution function on the contour to which the nodes $(xi, vi)$ in the vicinity of (x, v) belong, and is used for comparison with theoretical predictions.
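To illustrate the piece-wise constant representation in Eq. (C4), the following Python sketch (a velocity-only toy model, not the full CD solver) discretizes a Maxwellian into equal jumps $Δf$ and confirms that the staircase approximation differs from $f0$ by at most one jump:

```python
import numpy as np

def f0(v):
    # normalized Maxwellian (v_t = 1)
    return np.exp(-v**2 / 2.0) / np.sqrt(2.0 * np.pi)

N = 200                        # number of contours, as in the simulations
df = f0(0.0) / N               # equal jump across each contour
v = np.linspace(-5.0, 5.0, 1001)

# Staircase value at v: sum of jumps over all contour levels lying below f0(v)
fpw = df * np.floor(f0(v) / df)

print(np.max(np.abs(fpw - f0(v))))   # bounded by one jump df
```

The flat plateaus between contour levels are exactly the feature that causes the small-|v| discrepancy discussed for $f2(v,t)$ in the main text; increasing N shrinks $Δf$ and the error together.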
In the Vlasov–Poisson system, the time reversal map from the two-dimensional phase space to itself is defined by $(x,v)↦(x,−v)$, which, applied twice, gives the identity map. We can easily confirm that, when $f(x,v,t)$ is a solution of the Vlasov equation, $f(x,−v,−t)$ is a solution as well. It should be noted that the property of the time reversal map mentioned above holds whether the linear or nonlinear case is considered. When the initial condition in Eq. (118) is employed, the initial distribution function satisfies the condition $f(x,v,0)=f(x,−v,0)$. Then, for the solution $f(x,v,t)$ under this initial condition, $f(x,−v,−t)$ becomes the solution satisfying the same initial condition. Thus, from the uniqueness of the solution under the same initial condition, we find that $f(x,v,t)=f(x,−v,−t)$ holds for any time $t$, and the corresponding symmetry of the electric field results from Poisson's equation.
We here regard the electron's position and velocity as random variables denoted by $X$ and $V$. The joint probability density function of $(X,V)$ is represented by $p(x,v)$, which satisfies the normalization condition,
$∫−L/2+L/2dx∫−∞+∞dv p(x,v)=1.$
It is related to the distribution function by $p(x,v)=f(x,v)/(n0L)$, where $n0$ is the average number density of electrons. Here, the functions $p, pX$, and $pV$ depend on time $t$ although $t$ is omitted from the arguments of these functions for simplicity. The marginal probability distribution functions for $X$ and $V$ are defined by
$pX(x)=∫−∞+∞dv p(x,v)=1Ln(x)n0$
and
$pV(v)=∫−L/2+L/2dx p(x,v)=1n0⟨f⟩(v),$
respectively. Here, the electron density is denoted by $n(x)$ and the spatially averaged distribution function by $⟨f⟩(v)$.
The information (or Shannon) entropy is originally defined for probability distributions of discrete variables. Rigorously speaking, for probability distributions of continuous variables, it should be called the differential entropy. When the interval $[−L/2,+L/2]$ of the variable $x$ is divided into $Nx$ intervals of equal width, the central value $xi$ in each interval is given by
$xi=iΔx, Δx=LNx, i=−(Nx−1)2,…,(Nx−1)2.$
In the same way, when the interval $[−vmax,+vmax]$ of the variable $v$ is divided into $Nv$ intervals of equal width, the central value $vj$ in each interval is given by
$vj=jΔv, Δv=2vmaxNv, j=−(Nv−1)2,…,(Nv−1)2.$
Letting $Nx$ and $Nv$ be sufficiently large, we consider the $(x,v)$ space as a collection of $NxNv$ cells, each of which is centered at $(xi,vj)$. Then, we approximate the continuous variables $(x,v)$ by the discrete ones $(xi,vj)$, and relate $p(x,v)$ to the probability distribution function $P(xi,vj)$ of the discrete variables $(xi,vj)$ by $P(xi,vj)=p(xi,vj)ΔxΔv$, which satisfies the normalization condition,
$∫−L/2+L/2dx∫−∞+∞dv p(x,v)=∑i,jP(xi,vj)=1.$
Using $P(xi,vj)$, the information entropy is defined by
$S[P]≡∑i,jP(xi,vj)S(xi,vj)≡−∑i,jP(xi,vj) log P(xi,vj),$
where $S(xi,vj)≡−log P(xi,vj)$ represents the self-entropy, which is also called the self-information or information content. Similarly, we employ $p(x,v)$ to define the differential entropy by
$S[p]≡∫−L/2+L/2dx∫−∞+∞dv p(x,v)s(x,v)≡−∫−L/2+L/2dx∫−∞+∞dv p(x,v) log p(x,v)$
with the self-entropy $s(x,v)≡−log p(x,v)$.
We can easily derive the relation $S[P]=S[p]−log(ΔxΔv)$. Here, we should carefully note that $S(xi,vj)$ and $S[P]$ are non-negative while $s(x,v)$ and $S[p]$ can take both positive and negative values but satisfy the conditions,
$s(x,v)≥log(ΔxΔv), S[p]≥log(ΔxΔv).$
From the marginal probability distribution functions $pX(x)$ and $pV(v)$, we can define the marginal distribution functions of the discrete variables by
$PX(xi)=∑jP(xi,vj), PV(vj)=∑iP(xi,vj),$
which are related to $pX$ and $pV$ by
$PX(xi)=pX(x)Δx, PV(vj)=pV(v)Δv.$
They are used to define the information entropies
$S[PX]≡∑iPX(xi)SX(xi)≡−∑iPX(xi) log PX(xi), S[PV]≡∑jPV(vj)SV(vj)≡−∑jPV(vj) log PV(vj),$
where the self-entropies are defined by
$SX(xi)≡−log PX(xi), SV(vj)≡−log PV(vj).$
Similarly, we use $pX$ and $pV$ to define the entropies
$S[pX]≡∫−L/2+L/2dx pX(x)sX(x)≡−∫−L/2+L/2dx pX(x) log pX(x), S[pV]≡∫−∞+∞dv pV(v)sV(v)≡−∫−∞+∞dv pV(v) log pV(v),$
with
$sX(x)≡−log pX(x), sV(v)≡−log pV(v).$
We can also confirm the relations $S[PX]=S[pX]−log(Δx)$ and $S[PV]=S[pV]−log(Δv)$. The conditional entropies are defined from the probability distribution functions
$P(xi,vj), PX(xi)$, and $PV(vj)$ by
$SP(V|X)≡−∑i,jP(xi,vj) log (P(xi,vj)PX(xi)) =S[P]−S[PX]=SP(X,V)−SP(X)≥0, SP(X|V)≡−∑i,jP(xi,vj) log (P(xi,vj)PV(vj)) =S[P]−S[PV]=SP(X,V)−SP(V)≥0.$
Similarly, the conditional entropies are defined from $p(x,v), pX(x)$, and $pV(v)$ by
$Sp(V|X)≡−∫−L/2+L/2dx∫−∞+∞dv p(x,v) log (p(x,v)pX(x)) =S[p]−S[pX]=Sp(X,V)−Sp(X)≥log(Δv), Sp(X|V)≡−∫−L/2+L/2dx∫−∞+∞dv p(x,v) log (p(x,v)pV(v)) =S[p]−S[pV]=Sp(X,V)−Sp(V)≥log(Δx).$
To represent the mutual dependence of the random variables $X$ and $V$, the mutual information $I(X,V)$ is defined by
$I(X,V)≡∑i,jP(xi,vj) log (P(xi,vj)PX(xi)PV(vj))=SP(X)+SP(V)−SP(X,V)=SP(X)−SP(X|V)=SP(V)−SP(V|X)=∫−L/2+L/2dx∫−∞+∞dv p(x,v) log (p(x,v)pX(x)pV(v))=Sp(X)+Sp(V)−Sp(X,V)=Sp(X)−Sp(X|V)=Sp(V)−Sp(V|X)≥0.$
The mutual information takes the same non-negative value whether it is defined from the distribution functions of the discrete variables or from those of the continuous variables.
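The chain of identities above can be verified on a small discrete example. The sketch below (Python, with an arbitrary 2×2 joint distribution chosen for illustration) checks $I=S[PX]+S[PV]−S[P]≥0$ and that $I$ vanishes for an independent joint distribution:

```python
import numpy as np

def entropy(P):
    # Shannon entropy of a discrete distribution (0 log 0 treated as 0)
    P = P[P > 0]
    return -np.sum(P * np.log(P))

def mutual_info(P):
    PX = P.sum(axis=1)   # marginal over the second variable
    PV = P.sum(axis=0)   # marginal over the first variable
    return entropy(PX) + entropy(PV) - entropy(P)

P_dep = np.array([[0.4, 0.1],
                  [0.1, 0.4]])               # correlated variables
P_ind = np.outer([0.5, 0.5], [0.3, 0.7])    # independent: P = PX * PV

print(mutual_info(P_dep))   # strictly positive
print(mutual_info(P_ind))   # vanishes (up to rounding)
```

This mirrors the statement in the main text that $I(X,V)$ starts at zero for the statistically independent initial condition and becomes positive as correlations develop.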
Hereafter, we express the time dependence of the distribution functions by writing $p(x,v,t)$,
$pX(x,t)=∫−∞+∞dv p(x,v,t)=n(x,t)/(n0L)$, and $pV(v,t)=∫−L/2+L/2dx p(x,v,t)=⟨f⟩(v,t)/n0$. It is well known that, for the solution $f(x,v,t)$ of the Vlasov equation, the entropy defined from $p(x,v,t)$ by
$S[p]≡Sp(X,V)≡−∫−L/2+L/2dx∫−∞+∞dv p(x,v,t) log p(x,v,t)$
is an invariant. On the other hand, the entropies defined from $pX(x,t)$ and $pV(v,t)$ by
$S[pX]≡Sp(X)≡−∫−L/2+L/2dx pX(x,t) log pX(x,t)$
and
$S[pV]≡Sp(V)≡−∫−∞+∞dv pV(v,t) log pV(v,t)$
depend on time $t$ in general.
The relative entropy (or the Kullback–Leibler divergence) of the distribution $pV(v,t)$ at time $t$ relative to the initial distribution $pV(v,0)$ is defined as
$S(pV,t||pV,0)≡∫−∞+∞dv pV(v,t) log (pV(v,t)pV(v,0)),$
which takes only non-negative values and vanishes if and only if $pV(v,t)=pV(v,0)$. The non-negativity is derived from the inequality $log(x−1)=−log x≥1−x$ for $x>0$. The relative entropy is the average of the difference between the information quantities,
$sV(v,0)−sV(v,t)=−log pV(v,0)−(−log pV(v,t))=log[pV(v,t)/pV(v,0)],$
and represents the amount of information lost when approximating the correct velocity distribution $pV(v,t)$ at time $t$ by the initial distribution function $pV(v,0)$. We also have
$Ωt(v0)=log [pV(u(v0,t),t)pV(u(v0,t),0)]$
and
$S(pV,t||pV,0)=∫−∞+∞dv0 pV(v0,0) log[pV(u(v0,t),t)pV(u(v0,t),0)]=∫−∞+∞dv0 pV(v0,0) Ωt(v0).$
When the initial distribution function is the Maxwellian, these relations lead to the result that $ΔE(t)$ is the increase in the kinetic energy of electrons per unit volume and equals the decrease in the electric field energy density,
$ΔE(t)=∫−∞+∞dv0 f0(v0)ΔE(u(v0,t),t)=−⟨E2⟩(t)−⟨E2⟩(0)8π=−n0ΔQ(X|V).$
We now call the two systems described by the velocity and position variables the $V$-system and the $X$-system, respectively. Then, we can regard $ΔQ(X|V)$ in the above equation as the energy transfer per electron from the $V$-system to the $X$-system. Recalling that $ΔS[pV](t)$ represents the increase in the entropy $S[pV]$ during the time interval from 0 to $t$ and that $S[p]$ is time-independent, we find
From Eqs. (E35), (E36), and the relations above, we find the inequality in the form of the second law of thermodynamics. Since the initial distribution $pV(v,0)$ is given by the Maxwellian, we regard the $V$-system here as the thermal reservoir with the temperature $T$. Then, this inequality implies that the heat transfer $ΔQ(X|V)$ from the thermal reservoir to the $X$-system and the change in the conditional entropy of the $X$-system in contact with the thermal reservoir satisfy the inequality in the form of the second law of thermodynamics.
When the values of $pV(v,t)$ and $pV(v,0)$ are close to each other, we obtain
$S(pV,t||pV,0)=−∫−∞+∞dv pV(v,t) log [pV(v,0)pV(v,t)]≃∫−∞+∞dv pV(v,t)[−(pV(v,0)pV(v,t)−1)+12(pV(v,0)pV(v,t)−1)2]=12∫−∞+∞dv pV(v,t)(pV(v,0)pV(v,t)−1)2=12∫−∞+∞dv [pV(v,t)−pV(v,0)]2pV(v,t)≃12∫−∞+∞dv [pV(v,t)−pV(v,0)]2pV(v,0).$
Using the ordering parameter $α$ for the perturbation amplitude, the estimate $S(pV,t||pV,0)=O(α4)$ is derived, and the relations above hold up to $O(α4)$.
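The quadratic approximation of the relative entropy can be checked numerically for two nearby Gaussians (an illustrative stand-in for $pV(v,t)$ and $pV(v,0)$):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

v = np.linspace(-10.0, 10.0, 40001)

def gauss(T):
    return np.exp(-v**2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)

p_t, p_0 = gauss(1.001), gauss(1.0)    # two nearby velocity distributions

kl = trapz(p_t * np.log(p_t / p_0), v)             # exact relative entropy
quad = 0.5 * trapz((p_t - p_0)**2 / p_0, v)        # quadratic approximation

print(kl, quad)   # agree to leading order in the small difference
```

Both quantities are positive and quadratically small in the perturbation, consistent with the $O(α4)$ estimate in the text.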
L. D. Landau, J. Exp. Theor. Phys.
L. D. Landau, J. Phys. USSR
K. M. Case, Ann. Phys.
N. G. Van Kampen and B. U. Felderhof, Theoretical Methods in Plasma Physics, Chap. 12.
D. R. Nicholson, Introduction to Plasma Theory (John Wiley & Sons, New York), Chap. 6.
R. D. Hazeltine and F. L. Waelbroeck, The Framework of Plasma Physics (Perseus Books), Chap. 6.
J. Plasma Phys.
J. Plasma Phys.
D. Del Sarto et al., The Vlasov Equation I. History and General Properties (ISTE-Wiley), Sec. 2.5.3.
F. F. Chen, Introduction to Plasma Physics and Controlled Fusion, 3rd ed., Chap. 7.
G. W. Hammett and F. W. Perkins, Phys. Rev. Lett.
Phys. Plasmas
R. A., Plasma Phys. Controlled Fusion
J. Plasma Phys.
Phys. Plasmas
A. I., M. F., and A. G., Plasma Phys. Rep.
T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed., Chap. 2.
P. J., Z. Naturforsch.
Phys. Plasmas
A. A. Vedenov, E. P. Velikhov, and R. Z. Sagdeev, Suppl. Nucl. Fusion Part
P. A., M. H., and M. R., Phys. Rev. Lett.
D. Del Sarto, Plasma Phys. Control. Fusion
N. J. Zabusky, M. H. Hughes, and K. V. Roberts, J. Comput. Phys.
J. Comput. Phys.; Ann. Phys.
R. D. Hazeltine and J. D. Meiss, Plasma Confinement (Redwood City, CA), Chap. 5.
D. J. Evans and D. J. Searles, Adv. Phys.
D. J. Evans, D. J. Searles, and S. R. Williams, Fundamentals of Classical Statistical Thermodynamics
T. H. Stix, Waves in Plasmas (American Institute of Physics, New York)
Plasma Physics and Controlled Fusion, 2nd ed. (Springer Berlin Heidelberg)
© 2024 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Blackjack "Cover Your Losses" Promotion
I have an account with an online poker room that is connected to a casino and is having a promotion that will cover my blackjack losses this weekend. The stipulation is that the losses are returned
to me in the form of a bonus that must be cleared by generating rake at their poker tables and expires after 30 days. I estimate that I can clear as much as $500 in bonus over the next 30 days, so
assume that any money I lose up to $500 this weekend is returned to me. I can tolerate to lose as much as $2000 at blackjack this weekend.
What is my optimal strategy for maximizing the return from this promotion? How much should I bet and how much would I need to be up before I stop? I was thinking I could bet $500 the first hand (and
double and split as necessary since I can tolerate risking $2000) and stop if I lose, but would I continue betting if I won?
The rules here are 4 decks, dealer stands S17, doubling after split allowed, no hitting split aces, can only split to two hands, and no surrendering for a return of ~99.6%. Thanks in advance to
anyone willing and knowledgeable enough to help with this.
it does seem to me that you would stop once you had the $500 in losses, going over that being OK if you lost a double-down. Why lose more on purpose?
As for winning, you should just set a $ amount ahead of time to quit, that would be the same as usual. If you were looking to manage a losing session and win instead, well, hell, those are the kind
of problems you like to have!
the next time Dame Fortune toys with your heart, your soul and your wallet, raise your glass and praise her thus: “Thanks for nothing, you cold-hearted, evil, damnable, nefarious, low-life, malicious
monster from Hell!” She is, after all, stone deaf. ... Arnold Snyder
Quote: odiousgambit
it does seem to me that you would stop once you had the $500 in losses, going over that being OK if you lost a double-down. Why lose more on purpose?
As for winning, you should just set a $ amount ahead of time to quit, that would be the same as usual. If you were looking to manage a losing session and win instead, well, hell, those are the
kind of problems you like to have!
I agree that it makes sense to stop betting if I reach $500 in losses and that I shouldn't start with a bet that would put me at more than $500 if I lost it, unless I'm doubling or splitting. I'm
still not sure about when to stop if I'm winning, I don't normally play blackjack and I don't really set goals for winning. Ideally, I'd like to continue betting until it stops being +EV to do so.
For example, suppose I bet $500 and win. Let's assume for simplicity that blackjack is a coinflip. If I were to bet $1000, half the time I'd lose and my $500 win gets wiped out, the other half of the
time I win $1000, which should result in +$250 in expected value. The max bet at this site happens to be $3000, so supposing I were up $2500 it seems I could bet $3000 and that would result in +$250
expected value as well. This logic seems to suggest that the best strategy is to bet however much I'm up plus $500 until I lose. However, I'm just speculating and I don't know if that's actually the
most +EV strategy.
Theoretically you will get the most EV by playing with a strategy that maximizes the probability to claim the cashback. So you could set a target so that 99% of time you will lose the $500 and claim
the cashback and 1% of time you will win something big. But you also need to factor in the 0.4% house edge of the game as well as the time and effort to turn the cashback into withdrawable funds and
possible losses while doing so, so I probably wouldn't go for more than a $5000 win target.
Also if the maximum cashback is 100% to $500 I wouldn't bet more than $250 at the start to avoid losses exceeding $500 in case you have to split/double down. If I lost the first $250 bet, I would
then bet $125 and so on.
Quote: sangaman
Ideally, I'd like to continue betting until it stops being +EV to do so.
If you are currently W dollars ahead, you should obviously be betting W+500 to maximize your EV.
If you bet W+500, the EV of the bet is
0.498*(2W+500) = 0.996W + 249.
As long as this value is greater than W, you are in positive territory. Therefore, your stop criterion is $249/0.004 = $62250.
"When two people always agree one of them is unnecessary"
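A quick Python sanity check of this arithmetic, under the same coin-flip model (win probability 0.498, losses refunded up to $500; the function name is just for illustration):

```python
# Coin-flip model of the promotion: win probability 0.498, and a loss is
# refunded up to $500, so at profit W the bet considered is W + 500.
p_win = 0.498

def ev_after_bet(w):
    # Win: profit becomes w + (w + 500); lose: the cashback resets you to 0.
    return p_win * (2 * w + 500)     # = 0.996*w + 249

# The bet helps while 0.996*w + 249 > w, i.e. below the break-even profit:
w_stop = 249 / 0.004                 # = $62,250, matching the stop criterion
```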
Quote: weaselman
If you are currently W dollars ahead, you should obviously be betting W+500 to maximize your EV.
If you bet W+500, the EV of the bet is
0.498*(2W+500) = 0.996W + 249.
As long as this value is greater than W, you are in positive territory. Therefore, your stop criterion is $249/0.004 = $62250.
Alright that makes sense. The max bet is $3000 so I'd have to stop there, but I could bet $500, then $1000, then $2000 assuming I win each hand and stop if I lose any of them. I'd win $3500
roughly 1/8th of the time that way for $437.50 EV. I'd double if it's advantageous to do so, so occasionally I'd win more than $3500 or lose more than the $500 I can recover.
I suppose it'd be optimal to continue betting even after I'm up $3500 despite the fact that I can't bet more than $3000, right? I don't know if I'd actually keep going at that point because I
wouldn't want variance to get out of hand, but would another $3000 bet at that point be theoretically correct?
Quote: sangaman
I suppose it'd be optimal to continue betting even after I'm up $3500 despite the fact that I can't bet more than $3000, right?
Well, it depends on what you call "optimal". It is not a +EV bet. If you are up $3500, and bet $3000, you will not be collecting the bonus if you lose, and therefore are betting at a regular -0.4% disadvantage.
Quote: sangaman
Alright that makes sense. The max bet is $3000 so I'd have to stop there, but I could bet $500, then $1000, then $2000 assuming I win each hand and stop if I lose any of them. I'd win $3500
roughly 1/8th of the time that way for $437.50 EV. I'd double if it's advantageous to do so, so occasionally I'd win more than $3500 or lose more than the $500 I can recover.
Actually it would hurt your EV to make a bet where you could end up losing more than $500 if you doubled down and lost. So at a $4000 balance it would be more optimal to bet half of that, i.e. $2000, to
make sure you can never end up losing more than what the cashback amount is going to be.
Quote: sangaman
I suppose it'd be optimal to continue betting even after I'm up $3500 despite the fact that I can't bet more than $3000, right? I don't know if I'd actually keep going at that point because I
wouldn't want variance to get out of hand, but would another $3000 bet at that point be theoretically correct?
Yes, ignoring the problems with doubling down (assuming it's a 50/50 bet), it would be theoretically correct to bet $3000 at a $4000 balance, but it might only increase the EV by a modest amount.
So it boils down to whether the risk of losing $3000 is worth the modest increase in EV to you.
Quote: weaselman
Well, it depends on what you call "optimal". It is not a +EV bet. If you are up $3500, and bet $3000, you will not be collecting the bonus if you lose, and therefore are betting at a regular
-0.4% disadvantage.
Not true because he still has $1000 left which he can bet again and claim the cashback if it loses, so a $3000 bet at $4000 balance would be still +EV.
Quote: Jufo81
Not true because he still has $500 left which he can bet again or claim the cashback, so $3000 bet would be still +EV.
How is it +EV? He has $3500, bets $3000, and ends up with $3488 on average.
Quote: weaselman
How is it +EV? He has $3500, bets $3000, and ends up with $3488 on average.
EDIT: Corrected values
Simplifying it as a coin flip game with 0.4% house edge so the probability to win is 0.498 and the probability to lose is 0.502:
Let P be the (unknown) probability of reaching $7000 from a $4000 balance ($500 initial deposit plus $3500 in winnings). It satisfies the following recursive equation:
P = 0.498 + 0.502*0.498^2*P
(either $3000 bet wins or $3000 bet loses followed by two consecutive winning bets of $1000 and $2000 which makes us reach $4000 balance once again)
-> P = 0.5688
Probability to end up with only $500 cashback from $4000 balance = 1 - 0.5688 = 0.4312
EV = $500*0.4312 + $7000*0.5688 = $4197.3
which is $197.3 more than the current balance of $4000.
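These numbers are easy to verify in Python by solving the recursion in closed form (still under the coin-flip simplification):

```python
p, q = 0.498, 0.502           # win/lose probabilities in the coin-flip model

# P = p + q*p^2*P  =>  P = p / (1 - q*p^2)
P = p / (1 - q * p**2)
print(round(P, 4))            # 0.5688

# Expected final bankroll from a $4000 balance:
# lose everything -> $500 cashback; reach the target -> $7000
ev = 500 * (1 - P) + 7000 * P
print(round(ev, 1))           # 4197.3
```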
Collecting $500 cashback will bring you back to zero, not +500. $7000*0.56 = 3920 < 4000
Quote: weaselman
Collecting $500 cashback will bring you back to zero, not +500. $7000*0.56 = 3920 < 4000
My numbers were expressed as total values and not relative to profit/loss. Expressing it as profit/loss:
Starting balance +$3500 (not $4000 mind you)
Probability to reach +$6500 from +$3500 remains unchanged: P = 0.5688 (see corrections I made in previous post)
EV = $0*(1-0.5688) + $6500*0.5688 = $3697.2
which is $197.2 more than the current profit of +$3500. The value is exactly the same as I got in my previous post.
Quote: sangaman
I suppose it'd be optimal to continue betting even after I'm up $3500 despite the fact that I can't bet more than $3000, right? I don't know if I'd actually keep going at that point because I
wouldn't want variance to get out of hand, but would another $3000 bet at that point be theoretically correct?
To answer this more precisely with numbers, assuming a coin flip game (0.4% house edge) the EV with finishing at $4000 target ($3500 profit) is:
EV = 0.498^3*$3500 = $432.3
The EV of finishing at $7000 target (a $3000 bet at $4000 balance) is:
EV = 0.498^3*0.5688*$6500 = $456.6
So the answer is: yes the EV increases by $24.4 by shooting to a $7000 target instead of $4000. And in case you are already sitting at $4000 balance then shooting for $7000 is worth +$197.2 to you
per previous calculations.
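Under the same coin-flip simplification, a few lines of Python reproduce the comparison (P is the recursion P = 0.498 + 0.502*0.498^2*P solved in closed form):

```python
p, q = 0.498, 0.502                   # win/lose probabilities in the coin-flip model
P = p / (1 - q * p**2)                # closed form of P = p + q*p^2*P

ev_stop_4000 = p**3 * 3500            # quit after three straight wins ($3500 profit)
ev_stop_7000 = p**3 * P * 6500        # keep pressing with a $3000 bet at $4000
print(round(ev_stop_4000, 1))         # 432.3
print(round(ev_stop_7000, 1))         # 456.6
print(round(ev_stop_7000 - ev_stop_4000, 1))  # 24.4
```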
Quote: Jufo81
Not true because he still has $1000 left which he can bet again and claim the cashback if it loses, so a $3000 bet at $4000 balance would be still +EV.
Quote: weaselman
How is it +EV? He has $3500, bets $3000, and ends up with $3488 on average.
I think Jufo is right. If I'm up $4k and bet $3k, I lose .004 * 4000 or $16 EV. However, the times that I lose and go down to $1000, I can then make a bet of $1500 which as we've shown before has an
EV of +$250. So I'm losing $16 EV by making the $3000 bet, but when I lose (a little more than half the time) I can make up for it with a +$250 EV bet that more than compensates for the $16. If you
assume I lose half the time, betting $3000 when up $4000 and betting $1500 after a loss should have an EV of 0.5 * 250 - 16 = $109.
Quote: Jufo81
Actually it would hurt your EV to make a bet where you could end up losing more than $500 if you doubled down and lost. So at a $4000 balance it would be more optimal to bet half of that, i.e. $2000, to
make sure you can never end up losing more than what the cashback amount is going to be.
Why would it hurt my EV? Sure, I'm exposing myself to a risk of actual losses, but I'm also increasing my upside if I double and win. I think I want to minimize the house edge as much as possible,
and I'm only going to double down when I have the edge. I don't think it's any different than doubling when none of your losses are covered, you'll want to double any time you have the edge and it's
a better option than hitting or standing. Maybe I'm missing something.
Quote: sangaman
I think Jufo is right. If I'm up $4k and bet $3k, I lose .004 * 4000 or $16 EV. However, the times that I lose and go down to $1000, I can then make a bet of $1500 which as we've shown before has
an EV of +$250. So I'm losing $16 EV by making the $3000 bet, but when I lose (a little more than half the time) I can make up for it with a +$250 EV bet that more than compensates for the $16.
If you assume I lose half the time, betting $3000 when up $4000 and betting $1500 after a loss should have an EV of 0.5 * 250 - 16 = $109.
Yes, but if you lose the $3000 hand, you will only have $1000 left (your $500 initial deposit plus $500 remaining winnings) so you should bet $1000 (not $1500) followed by $2000 in an attempt to
reach $4000 again and you would be exactly back to the same balance you were at. The EV calculations I made above assumed betting like this.
Quote: sangaman
Why would it hurt my EV? Sure, I'm exposing myself to a risk of actual losses, but I'm also increasing my upside if I double and win. I think I want to minimize the house edge as much as
possible, and I'm only going to double down when I have the edge. I don't think it's any different than doubling when none of your losses are covered, you'll want to double any time you have the
edge and it's a better option than hitting or standing. Maybe I'm missing something.
It's a bit difficult to explain but the EV does decrease rapidly unless your total losses are covered (and they wouldn't be for any losses that are over $500). Alternatively you could always hit
instead of doubling and never split but then you are in fact playing a game with ~2% house edge.
Sorry, made that last post before looking at the posts on the second page. Jufo, your equations make sense to me. Starting from 0, it doesn't seem that bad to me to sacrifice $24.40 of EV to reduce
variance and practically double the chances I walk away a winner from this promotion. However, if I get to $3500 profit, continuing to $6500 is worth $197.20 and that seems too juicy to walk away
from. I imagine that if I got to $6500 it would be similarly appealing to shoot for $9500. It's a bit of a paradox.
Realistically, what would you guys do if you were offered this same promotion? At what point would you stop? I'm leaning towards stopping at $3500 profit. Betting $3000 would be a bit gut-wrenching,
and I wouldn't be able to double after split since that would put too much of my bankroll at risk so the house edge bumps up a bit. Maybe that is too conservative, but it would be nice to have a
realistic chance of walking away a winner.
Thanks to all who have replied.
Quote: Jufo81
Yes, but if you lose the $3000 hand, you will only have $1000 left (your $500 initial deposit plus $500 remaining winnings) so you should bet $1000 (not $1500) followed by $2000 in an attempt
to reach $4000 again and you would be exactly back to the same balance you were at. The EV calculations I made above assumed betting like this.
Oops, you are right. It would be a $1000 bet after a losing $3000 bet, not $1500, but the EV is still ~$250 for that particular bet.
Quote: weaselman
How is it +EV? He has $3500, bets $3000, and ends up with $3488 on average.
You are not playing money alone; you are playing a position on the rebate.
The fact that a $500 balance allows you to bet $1000 with an advantage softens the loss of $3000 on a $3500 position.
Hence a $3000 bet on $3500 might still be favourable.
Quote: sangaman
Sorry, made that last post before looking at the posts on the second page. Jufo, your equations make sense to me. Starting from 0, it doesn't seem that bad to me to sacrifice $24.40 of EV to
reduce variance and practically double the chances I walk away a winner from this promotion. However, if I get to $3500 profit, continuing to $6500 is worth $197.20 and that seems too juicy to
walk away from. I imagine that if I got to $6500 it would be similarly appealing to shoot for $9500. It's a bit of a paradox.
Yeah, with rebate offers like this, situations arise where you have to weigh risk vs. reward, i.e. are you willing to risk $4000 in already-obtained winnings for $197 more EV? So it boils down to
what is a meaningful sum of money to you. Someone with a large bankroll who could repeat this same offer many times would probably shoot for a much higher target than someone who can only do it once.
Quote: sangaman
Realistically, what would you guys do if you were offered this same promotion? At what point would you stop? I'm leaning towards stopping at $3500 profit. Betting $3000 would be a bit
gut-wrenching, and I wouldn't be able to double after split since that would put too much of my bankroll at risk so the house edge bumps up a bit. Maybe that is too conservative, but it would be
nice to have a realistic chance of walking away a winner.
What also factors in is how much effort it is to turn that $500 rebate into cash and whether you can do it without a loss, because with a high target you are most likely going to claim the rebate. And
note that in blackjack the odds for win/lose are not even close to 50/50 but rather 47.5%/52.5%, so three consecutive wins to reach a $4000 balance is not a 1/8 shot but rather 0.475^3 = 10.7%. Of
course, if one or more of those hands happens to include a blackjack/double down/split, you might already be at a higher balance than $4000 after winning just three hands. But personally I wouldn't
push my luck by shooting for a target with worse than a 1 in 10 chance of winning.
If you do decide to make the $3000 bet from a $4000 balance and end up losing, you can console yourself by thinking about Wizard's signature:
"It's not whether you win or lose; it's whether or not you had a good bet." :-p
Formative Unit 6 Exam
Formative - Unit 5 Assessment (Geometry)
Multiple Choice. Each question is worth 3 points. Partial credit may be awarded if work is shown. If there is no work
shown and the answer is incorrect, you will receive no credit.
Section Total: 9 points
1. What is the volume of a sphere with a radius of 10 cm?
a. 4188.79 cm³
c. 1675.52 cm³
b. 418.88 cm³
d. 33510.32 cm³
2. What is the surface area of the figure shown below?
a. 324 cm²
b. 312 cm²
c. 648 cm²
d. 336 cm²
3. Find the total area of the ring shown below.
a. 51.1 in²
b. 65 in²
c. 204.2 in²
d. 15.7 in²
Protractor Section. Use of a protractor is required for all questions in this part. Complete all three parts of each
question. Each question is worth 6 points. Section Total: 12 points
4. a) Draw an angle of 163º in the space below.
b) Label the angle as
in your diagram above.
c) Classify the angle by its size on the line to the right. ___________________
a) Measure the angle and record the measurement on the line to the right. __________________
b) Label the angle as
in the picture above.
c) Classify the angle by its size on the line to the right. __________________
Applications. Each question in this part is worth 6 points. Each question is structured to help you see all steps you
must complete. All work must be shown for full credit.
Section Total: 18 points
6. How many boxes 16 in. by 12 in. by 10 in. will fit in 1250 ft³ of warehouse space?
(1 ft³ = 1728 in³)
7. How much wire is needed to anchor a 30 foot pole to a spot 20 feet from its base? Allow
of a foot for
fastening and round your answer to the nearest tenth of a foot.
a) Find the length of the wire
b) Find the total length needed for fastening, and round your answer appropriately
8. What must be the height of a cylindrical 750-gallon tank if it is 4 ft in diameter? (1 ft³ ≈ 7.48 gal) Round to
the nearest foot.
a) Look at the units of your answer. If you need to convert to a new unit, do so first.
b) Determine the shape and what formula you will use. (Write it below)
c) Substitute all values that you know into the formula and solve for whichever value is missing.
(Include units)
Calculations. Each question is worth 4 points. All work must be shown for full credits. Total: 16 points
(figure not drawn to scale)
Find the measure of the missing angles.
10. Find the lateral area and round to the nearest tenth. Include units in your answer.
11. Find the volume of the figure shown and round your answer to the nearest thousandths. Include
units in your answer.
a) Find the area of the figure above (include units) ___A =______________________
b) Find the perimeter of the figure above (include units) _P =_______________________
Formative - Unit 5 Assessment (Geometry)
Answer Section
1. ANS: A
2. ANS: C
3. ANS: A
PTS: 1
PTS: 1
PTS: 1
4. ANS:
drawing, labelling, obtuse
PTS: 1
5. ANS:
112º, labelling, obtuse
PTS: 1
6. ANS:
1125 boxes
PTS: 1
7. ANS:
PTS: 1
8. ANS:
8 ft.
PTS: 1
9. ANS:
85, 125
PTS: 1
10. ANS:
59.2 ft²
PTS: 1
11. ANS:
443.488 mm³
PTS: 1
12. ANS:
145 in², 74 in.
PTS: 1
Given a convex disk $K$ and a positive integer $k$, let $\delta_T^k(K)$ and $\delta_L^k(K)$ denote the $k$-fold translative packing density and the $k$-fold lattice packing density of $K$,
respectively. Let $T$ be a triangle. In a very recent paper, K. Sriamorn proved that $\delta_L^k(T)=\frac{2k^2}{2k+1}$. In this paper, I will show that $\delta_T^k(T)=\delta_L^k(T)$. Comment: arXiv
admin note: text overlap with arXiv:1412.539
It is conjectured that for every convex disk K, the translative covering density of K and the lattice covering density of K are identical. It is well known that this conjecture is true for every
centrally symmetric convex disk. For the non-symmetric case, we only know that the conjecture is true for triangles. In this paper, we prove the conjecture for a class of convex disks
(quarter-convex disks), which includes all triangles and convex quadrilaterals.
In this paper, we introduce an $m$-fold illumination number $I^m(K)$ of a convex body $K$ in Euclidean space $\mathbb{E}^d$, which is the smallest number of directions required to $m$-fold illuminate
$K$, i.e., each point on the boundary of $K$ is illuminated by at least $m$ directions. We get a lower bound of $I^m(K)$ for any $d$-dimensional convex body $K$, and get an upper bound of $I^m(\
mathbb{B}^d)$, where $\mathbb{B}^d$ is a $d$-dimensional unit ball. We also prove that $I^m(K)=2m+1$, for a $2$-dimensional smooth convex body $K$. Furthermore, we obtain some results related to the
$m$-fold illumination numbers of convex polygons and cap bodies of $\mathbb{B}^d$ in small dimensions. In particular, we show that $I^m(P)=\left\lceil mn/{\left\lfloor\frac{n-1}{2}\right\rfloor}\
right\rceil$, for a regular convex $n$-sided polygon $P$
Given a convex disk $K$ and a positive integer $k$, let $\vartheta_T^k(K)$ and $\vartheta_L^k(K)$ denote the $k$-fold translative covering density and the $k$-fold lattice covering density of $K$,
respectively. Let $T$ be a triangle. In a very recent paper, K. Sriamorn proved that $\vartheta_L^k(T)=\frac{2k+1}{2}$. In this paper, we will show that $\vartheta_T^k(T)=\vartheta_L^k(T)$
This paper proves the following statement: If a convex body can form a fivefold translative tiling in $\mathbb{E}^3$, it must be a parallelotope, a hexagonal prism, a rhombic dodecahedron, an
elongated dodecahedron, a truncated octahedron, a cylinder over a particular octagon, or a cylinder over a particular decagon, where the octagon and the decagon are fivefold translative tiles in $\
mathbb{E}^2$. Furthermore, it presents an example of multiple tiles in $\mathbb{E}^3$ with multiplicity at most 10 which is neither a parallelohedron nor a cylinder
Security of BLS batch verification
By JP Aumasson (Taurus), Quan Thoi Minh Nguyen, and Antonio Sanso (Ethereum Foundation)
Thanks to Vitalik Buterin for his feedback.
In a 2019 post Vitalik Buterin introduced Fast verification of multiple BLS signatures. Quoting his own words this is
a purely client-side optimization that can voluntarily be done by clients verifying blocks that contain multiple signatures
The original post includes some preliminary security analysis, but in this post we'd like to formalize it a bit and address some specific risks in the case of:
• Bad randomness
• Missing subgroup checking
We described several attacks that work in those cases, and provide proof-of-concept implementations using the Python reference code.
The batch verification construction
This technique works as follows: given n aggregate signatures S_i, with respective public keys P_{i,j}, each aggregate covering a number m_i of messages M_{i,j} (note that each aggregate signature may
cover a distinct number of messages, that is, we can have m_i\neq m_j for i\neq j), standard verification checks
e(S_i, G) = \prod_{j=1}^{m_i} e(P_{i,j}, M_{i,j}), i=1,\dots,n
The naive method thus consists in checking these n equalities, which involves n+\sum_{i=1}^n m_i pairings, the most calculation-heavy operation.
To reduce the number of pairings in the verification, one can further aggregate signatures, as follows: the verifier generates n random scalars r_i \geq 1, and aggregates the signatures into a single
one:
S^\star = r_1S_1 + \cdots + r_n S_n
the verifier also "updates" the signed messages (as their hashes to the curve) to integrate the coefficient of their batch, defining
M_{i,j}'=r_i M_{i,j}, i=1,\dots,n, j=1,\dots,m_i
Verification can then be done by checking
e(S^\star,G)=\prod_{i=1}^n \prod_{j=1}^{m_i} e(P_{i,j},M_{i,j}')
Verification thus saves n-1 pairing operations, but adds n+\sum_{i=1}^n m_i scalar multiplications. Note that if the verification fails, then the verifier can't tell which (aggregate) signatures
are invalid.
In particular, if m_i=1, \forall i, then verification requires n+1 pairings and 2n multiplications, instead of 2n pairings. Depending on the relative cost of pairings vs. scalar multiplications, the
speed-up may vary (see this post for benchmarks of pairing implementations).
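To make the equations above concrete, here is a toy Python model of the batching trick (one message per signature for brevity). It replaces the elliptic-curve groups by scalars modulo a prime Q and the pairing by e(a, b) = a·b mod Q; bilinearity is the only property the construction relies on, so this is a sketch of the algebra, not real BLS12-381 arithmetic, and all names are made up for the example:

```python
import hashlib
import secrets

Q = 2**61 - 1  # a Mersenne prime standing in for the subgroup order
G = 1          # "generator": with scalars, x*G is just x

def hash_to_group(msg: bytes) -> int:
    # Stand-in for hashing a message to a curve point.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

def pairing(a: int, b: int) -> int:
    # Toy bilinear map e(a, b) = a*b mod Q. The target group is written
    # additively here, so the products of pairings above become sums.
    return (a * b) % Q

def sign(sk: int, msg: bytes) -> int:
    return (sk * hash_to_group(msg)) % Q

def verify_one(pk: int, msg: bytes, sig: int) -> bool:
    return pairing(sig, G) == pairing(pk, hash_to_group(msg))

def verify_batch(pks, msgs, sigs) -> bool:
    # Random non-zero coefficients r_i, one per signature.
    rs = [secrets.randbelow(Q - 1) + 1 for _ in sigs]
    # S* = r_1*S_1 + ... + r_n*S_n
    s_star = sum(r * s for r, s in zip(rs, sigs)) % Q
    # Check e(S*, G) against the sum of e(P_i, r_i * M_i).
    rhs = sum(pairing(pk, r * hash_to_group(m)) for pk, m, r in zip(pks, msgs, rs)) % Q
    return pairing(s_star, G) == rhs
```

With pk_i = sk_i·G, a batch of valid signatures passes, while a batch containing a tampered signature fails (except with probability about 1/Q, since the r_i are non-zero).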
Security goals
Informally, the batch verification should succeed if and only if all signatures would be individually successfully verified. One or more (possibly all) of the signers may be maliciously colluding.
Note that this general definition implicitly covers cases where
• Attackers manipulate public keys (without necessarily knowing the private key),
• Malicious signers pick their signature/message tuples depending on honest signers' inputs.
For a formal study of batch verification security, we refer to the 2011 paper of Camenisch, Hohenberger, and Østergaard Pedersen.
Note that in the Ethereum context, attackers have much less freedom than this attack model allows, but we nonetheless want security against strong attackers to prevent any unsafe overlooked scenario.
The importance of randomness
The randomness has already been discussed in the original post. Buterin points out that if the coefficients are constant, an attacker can get invalid signatures accepted:
the randomizing factors are necessary: otherwise, an attacker could make a bad block where if the correct signatures are C_1 and C_2, the attacker sets the signatures to C_1+D and C_2-D for some
deviation D. A full signature check would interpret this as an invalid block, but a partial check would not.
This observation generalizes to predictable coefficients, and more specifically to the case where \alpha attackers collude and any subset of \beta\leq \alpha coefficients are predictable among those
assigned to the attackers' inputs.
Note that the coefficients can't be deterministically derived from the batch to verify, as this would make them predictable to an attacker.
Buterin further discusses the possible ranges of r_i to keep the scheme safe:
We can also set other r_i values in a smaller range, e.g. 1…2^{64}, keeping the cost of attacking the scheme extremely high (as the r_i values are secret, there's no computational way for the
attacker to try all possibilities and see which one works); this would reduce the cost of multiplications by ~4x.
A simple attack
If coefficients are somehow predictable, then the above trivial attack can be generalized to picking as signatures S_1=(C_1+D_1) and S_2=(C_2+D_2) such that r_1 D_1=-r_2 D_2.
If coefficients are uniformly random b-bit signed values, then there is a chance 1/2^{b-1} that two random coefficients satisfy this equality (1 bit being dedicated to the sign encoding), and thus
that verification passes. Otherwise, the chance that r_1 D_1=-r_2 D_2 for random coefficients is approximately 2^{-\ell}, with 2^\ell the order of the subgroup in which D_1 and D_2 lie.
However, the latter attack, independent of the randomness size, will fail if signatures are checked to fall in the highest-order subgroup (see the section The importance of subgroup checks).
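Using the same kind of toy scalar model (pairing e(a, b) = a·b mod Q, target group written additively, generator G = 1; all names here are illustrative), one can check that predictable coefficients are fatal: two colluding signers shift their signatures by D_1 and D_2 = −(r_1/r_2)·D_1, each signature is individually invalid, yet the batch check passes:

```python
import hashlib
import secrets

Q = 2**61 - 1   # prime modulus standing in for the subgroup order

def h(msg: bytes) -> int:
    # Stand-in for hashing a message to a group element.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

def batch_check(pks, msgs, sigs, rs) -> bool:
    # Toy pairing e(a, b) = a*b mod Q, target group written additively,
    # generator G = 1 (so pk = sk and e(S, G) = S).
    lhs = sum(r * s for r, s in zip(rs, sigs)) % Q
    rhs = sum(pk * r * h(m) for pk, m, r in zip(pks, msgs, rs)) % Q
    return lhs == rhs

# Two signers with honest signatures C_i = sk_i * H(m_i):
sk1, sk2 = 11, 22
m1, m2 = b"block 1", b"block 2"
c1, c2 = sk1 * h(m1) % Q, sk2 * h(m2) % Q

# Suppose the verifier's coefficients are predictable (bad randomness):
r1, r2 = 123456789, 987654321
d1 = secrets.randbelow(Q - 1) + 1         # arbitrary non-zero deviation
d2 = (-r1 * d1 * pow(r2, -1, Q)) % Q      # chosen so r1*d1 + r2*d2 = 0 mod Q

s1, s2 = (c1 + d1) % Q, (c2 + d2) % Q     # both signatures individually invalid
assert batch_check([sk1, sk2], [m1, m2], [s1, s2], [r1, r2])   # yet the batch passes
```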
Signature manipulation
In practice, BLS signatures are not purely used as signatures: implementers take advantage of other informal and implicit security properties such as uniqueness and "commitment". That is, if
the private key is fixed, given a message not controlled by the signer, the signer can't manipulate the signature. In the case of bad randomness, such use cases may be insecure.
For instance, let's say there is a lottery based on the current time interval: at time interval t, outside of signers' control, if S_i = \mathsf{Sign}(sk_i, t) and (S_i)_x \mod N = 0 (where N is
just some small number, e.g., the number of signers), then signer i wins the lottery. The first two signers can collude to win the lottery as follows. The first signer chooses a random point P \in
G_1 and offline brute-forces a k such that (S_1')_x \mod N = (S_1 - kP)_x \mod N = 0, where S_1 = \mathsf{Sign}(sk_1, t). The second signer computes S_2' = S_2 + (k r_1/r_2)P, where
S_2 = \mathsf{Sign}(sk_2, t). We have
r_1S_1' + r_2S_2' + r_3S_3 = r_1(S_1 - kP) + r_2(S_2 + kr_1/r_2P) + r_3S_3
that is, calculating further:
r_1S_1 - r_1kP + r_2S_2 + kr_1P + r_3S_3 = r_1S_1 + r_2S_2 + r_3S_3
This means that the first and second signers can manipulate their signatures so that the first signer wins the lottery while batch verification still passes.
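The cancellation above is easy to verify numerically in a toy scalar model of the group (written additively mod a prime; the concrete values are arbitrary). Note that it only works because the colluders know r_1 and r_2 in advance:

```python
Q = 2**61 - 1   # prime modulus standing in for the subgroup order

# Honest signatures S1..S3, an arbitrary point P and offset k, as scalars:
S1, S2, S3, P, k = 1111, 2222, 3333, 4444, 55
r1, r2, r3 = 666, 777, 888           # verifier coefficients, known to the colluders

S1p = (S1 - k * P) % Q                        # first signer's tweaked signature
S2p = (S2 + k * r1 * pow(r2, -1, Q) * P) % Q  # second signer's compensation

# The randomized aggregate is unchanged, so batch verification is unaffected:
lhs = (r1 * S1p + r2 * S2p + r3 * S3) % Q
rhs = (r1 * S1 + r2 * S2 + r3 * S3) % Q
assert lhs == rhs
```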
The importance of subgroup checks
The current state-of-the-art pairing curves such as BLS12-381 do not have prime order; for this reason the BLS signatures IRTF document warns about it and mandates a subgroup_check(P) when performing
certain operations. For batch verification, the subgroup check is essential, as previously noted.
The lack of subgroup validation appears to have a greater impact with batch verification than with standard BLS verification, where points in a small subgroup are not known to be trivially exploitable,
as noted in the BLS signatures IRTF document:
For most pairing-friendly elliptic curves used in practice, the pairing operation e (Section 1.3) is undefined when its input points are not in the prime-order subgroups of E_1 and E_2. The
resulting behavior is unpredictable, and may enable forgeries.
and in Michael Scott's recent paper:
The provided element is not of the correct order and the impact of, for example, inputting a supposed \mathbb{G}_2 point of the wrong order into a pairing may not yet be fully understood.
The difficulty of performing an actual attack is also due to the fact that \mathsf{gcd}(h_1,h_2)=1, where h_1 and h_2 are the respective cofactors of E_1 and E_2. Hence pushing the two pairing inputs
to lie in the same subgroup is not possible. Let's see an example based on BLS12-381, where
h_1=3 \times 11^2 \times 10177^2 \times 859267^2 \times 52437899^2
h_2=13^2 \times 23^2 \times 2713 \times 11953 \times 262069 \times p_{136}.
Choosing an invalid signature S_1 of order 13 and a public key P_{1,1} of order 3, a potential attack would succeed (to pass batch verification) with probability 1/39=1/3 \times 1/13. The attack
assumes an implementation that multiplies the public keys by the r_i's rather than the messages (in order to gain speed when there are distinct messages signed by the same key), as described here and
implemented for example by Milagro.
An attack then works as follows in order to validate a batch of signatures that includes a signature that would not have passed verification:
• Pick S_1 of order 13, and P_{1,1} of order 3. The chance that r_1 S_1 = r_1 P_{1,1} = \mathcal{O} is 1/39, which is the attack success rate. In such case, we have:
• S^\star = r_2S_2 + \cdots + r_n S_n.
• Suppose, without loss of generality, that m_1=1 (namely the first batch to verify is a single signature). That is, P'_{1,1} = r_1P_{1,1} = \mathcal{O}.
• The right-hand side of the verification equation becomes \prod_{i=1}^n \prod_{j=1}^{m_i} e(P'_{i,j},M_{i,j}), that is, e(P'_{1,1},M_{1,1})\prod_{i=2}^{n} \prod_{j=1}^{m_i} e(P'_{i,j},M_{i,j})=1 \times
\prod_{i=2}^{n} \prod_{j=1}^{m_i} e(P'_{i,j},M_{i,j}).
It follows that verification will pass when all other signatures are valid, even though S_1 is not a valid signature for P_{1,1}.
Note that the Ethereum clients (Lighthouse, Lodestar, Nimbus, Prysm, Teku) will already have performed subgroup validation upon deserialization preventing such an attack.
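The 1/39 success rate can be sanity-checked directly: r_1·S_1 = r_1·P_{1,1} = \mathcal{O} requires r_1 ≡ 0 mod 13 and r_1 ≡ 0 mod 3 simultaneously, i.e. r_1 ≡ 0 mod 39:

```python
# S_1 has order 13 and P_{1,1} has order 3; r*X = O iff ord(X) divides r.
ord_S, ord_P = 13, 3
trials = range(1, 39_001)            # uniform r over a multiple of 39 values
hits = sum(1 for r in trials if r % ord_S == 0 and r % ord_P == 0)
assert hits * 39 == len(trials)      # success probability is exactly 1/39
```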
Implementations cheat sheet
Secure implementations of batch BLS verification must ensure that:
• Group elements (signatures, public keys, message hashes) do not represent the point at infinity and belong to their respective assigned group (BLS12-381's \mathbb{G}_1 or \mathbb{G}_2).
• The r_i coefficients are chosen of the right size, using a cryptographically random generator, without side channel leakage.
• The r_i coefficients are non-zero (checked with a constant-time comparison); if zero, reject it and generate a new random value (if zero is hit multiple times, verification should abort, for
something must be wrong with the PRNG).
• The number of signatures matches the number of public keys and of message tuples.
Additionally, implementations may prevent DoS by bounding the number of messages signed.
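A minimal sketch of coefficient sampling following this checklist, using Python's secrets module; the function name, bit-length default, and retry limit are illustrative choices, not from the post:

```python
import secrets

def sample_coefficients(n: int, bits: int = 128, max_retries: int = 8) -> list[int]:
    """Draw n non-zero random batch coefficients of the given bit length."""
    coeffs = []
    for _ in range(n):
        for _attempt in range(max_retries):
            r = secrets.randbits(bits)
            if r != 0:          # reject zero and draw again
                coeffs.append(r)
                break
        else:
            # Zero hit repeatedly: the PRNG is almost certainly broken.
            raise RuntimeError("suspicious randomness, aborting batch verification")
    return coeffs
```

Real implementations would additionally do the zero comparison in constant time, which plain Python does not guarantee.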
It sounds like you are recommending using only 128-bit scalars for the randomness, as opposed to (roughly) 256-bit scalars (i.e., picked uniformly in the entire scalar field)? This should help
As a note, many eth2 clients are using 64-bit scalars as suggested by @vbuterin here. I had a discussion with @asn a while back and more recently with Kev Aundray and they both mentioned the same 128
bits like you @alinush. My goal with this is to verify that value formally because we (Lodestar) and the Lighthouse team use Vitalik's suggested 64-bit scalars. In Vitalik we all trust so sure that
number is correct but thought it prudent to just revisit. Pinging @AgeManning and @dapplion for visibility.
\kappa bits will give you a 2^{-\kappa} probability of accepting an invalid batch of signatures.
What \kappa you use is an application-dependent choice. Like @vbuterin said in the article you cited, \kappa=64 may be enough in applications where the attacker doesn't get too many attempts.
Clearly, after \approx 2^{64} attempts, an attacker will succeed. So applications where signatures are flying left & right should likely not use such a low \kappa.
My sense is a conservative one. Use \ge 128-bit security! Always. \kappa=128. Why? It might make up for problems that are hard to foresee, whether introduced by you or not (e.g., what if your RNG has less entropy than you believed, and those 64-bit "uniform" scalars are not that uniform after all).
PS: Hard to tell whether the "In Vitalik we all trust so sure that number is correct" is a tongue-in-cheek comment. Still, all hats off to Vitalik, it would be preferable for folks to also do their own analysis of what makes sense in their application setting. (This, of course, can be difficult.)
Thanks!! Great explanation. I def put a lot of faith in Vitalik and it was absolutely serious. But you are correct, I'm a punny guy… and in all seriousness, they say that if you hear the same thing from more than one source there might be some truth to it. Well, after a couple of people mentioned that 64 might be too low, I figured it was worth bringing the discussion up again knowing that number is in production based on the post mentioned. I would like to get confirmation from some others as well, but I will look at implementing 128 to see what effect it has on performance to add context for the discussion. I've also pinged some of the pertinent decision-makers on this thread and we can see where things land as a group. Thanks again for the quick response @alinush!!! You rock thoroughly!
Let me add to that.
There are in general two ways of deriving the random coefficients for batch verification in various contexts. In the following, I will assume you use \kappa-bit coefficients.
Option A. You sample them randomly when you verify, as described in this post. In this case, as alinush says, you get a soundness error of 1/2^{\kappa} (per signature!).
Option B. You derive the coefficients via a random oracle (in practice, a secure hash function prefixed with some domain separator). Importantly, you would input everything the adversary can control
into the hash. In this case, if you assume the adversary can make Q hash evaluations (Q queries of the random oracle), the soundness error would be larger, namely, Q/2^{\kappa}.
This option is used for example in PeerDAS. The advantage is that verification is deterministic.
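A rough Python sketch of how Option B's coefficient derivation could look. The domain separator, the SHA-256 choice, and the length-prefixed serialization here are illustrative assumptions, not the PeerDAS construction:

```python
import hashlib

DOMAIN_SEPARATOR = b"EXAMPLE_BATCH_VERIFY_V1"  # placeholder, not a real protocol constant

def derive_coefficients(adversary_inputs: list[bytes], n: int, kappa: int = 128) -> list[int]:
    """Derive n kappa-bit coefficients from everything the adversary controls
    (serialized signatures, public keys, and messages)."""
    h = hashlib.sha256()
    h.update(DOMAIN_SEPARATOR)
    for item in adversary_inputs:
        h.update(len(item).to_bytes(8, "big"))  # length prefix avoids ambiguity
        h.update(item)
    seed = h.digest()
    coefficients = []
    for i in range(n):
        digest = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
        coefficients.append(int.from_bytes(digest, "big") >> (256 - kappa))
    return coefficients
```

Because the coefficients are a deterministic function of the inputs, two honest verifiers given the same batch derive the same values — which is exactly why the soundness error grows to Q/2^{\kappa} for an adversary making Q hash queries.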
I don't know whether some eth clients use Option B, but if so, then 64 bits is certainly a bad choice.
In any case, I would recommend 128 bit, similar to what @alinush said.
Clearly, after \approx 2^{64} attempts, an attacker will succeed. So applications where signatures are flying left & right should likely not use such a low \kappa.
Attackers have a max window of 12 seconds to make those 2^{64} attempts, and ~4 seconds in practice for their attestations to be included. It is not practical to make 2^{64} signature forgery attempts within 4 seconds over the network. That's 2.56 x 10^{16} attempts per second. Assuming a 5GHz CPU can do 1 attempt per cycle, that's only 5 x 10^9, so you would need 10^7 cores for this.
In reality, networking is way, way slower — 10000x slower (see Latency Numbers Every Programmer Should Know · GitHub). And this assumes no hop between your 10^7 cores and the victim.
Furthermore, the random 64-bit coefficients are resampled at each batch verification attempt, so unless the verifier doesn't use a cryptographic RNG, the attacker has no way to adjust the scaling factors to the verifier.
Lastly, every single node that listens to the attestation channel needs to be fooled; a single wrong signature will blacklist you from the peer pool.
So for all intents and purposes, you only get one shot to fool a single peer that somehow leaked their RNG or blinding factors, and in this one shot you will in the process be blacklisted by all the other peers that seeded their blinding factors differently.
So for the BLS batch verification to work we need to do subgroup checks on all inputs. Can we also construct a similar probability based scheme to do subgroup checks of multiple inputs at once?
Looking at performance, I'm a bit surprised. There was very, very little difference using supranational's blst library. The 64- and 128-bit runs were conducted identically, with the exception of the bits of randomness. Creating the randomness is included in the times.
aggregateWithRandomness uses MSM to aggregate an array of pk's and sig's with blinding and results in a single aggregated pk and sig.
Those are in preparation for verifyMultipleAggregateSignaturesSameMessage. That same method is used in this test, but there is also the inclusion of the single verification of a common message. verifyMultipleAggregateSignatures is a wrapper around mul_n_aggregate.
In all tests a "set" denotes a message/public key/signature group.
                                                  64 bits          128 bits
                                              =============    ==============
aggregateWithRandomness - 1 sets             152.9080 us/op    232.0550 us/op
aggregateWithRandomness - 16 sets              1.4847 ms/op      1.7781 ms/op
aggregateWithRandomness - 128 sets             7.9076 ms/op      7.9060 ms/op
aggregateWithRandomness - 256 sets            15.3735 ms/op     16.0832 ms/op
aggregateWithRandomness - 512 sets            30.4996 ms/op     30.7407 ms/op
aggregateWithRandomness - 1024 sets           62.6190 ms/op     64.8119 ms/op
aggregateWithRandomness - 2048 sets          125.0660 ms/op    125.5888 ms/op
Same message - 1 sets                        773.3730 us/op    848.0510 us/op
Same message - 8 sets                          1.4562 ms/op      1.6816 ms/op
Same message - 32 sets                         2.7797 ms/op      2.9791 ms/op
Same message - 128 sets                        8.4415 ms/op      8.8811 ms/op
Same message - 256 sets                       16.5267 ms/op     16.7729 ms/op
Same message - 512 sets                       32.0515 ms/op     32.1655 ms/op
Same message - 1024 sets                      62.7327 ms/op     65.6105 ms/op
Same message - 2048 sets                     130.9331 ms/op    129.4787 ms/op
verifyMultipleAggregateSignatures - 1 sets   937.1170 us/op    999.6820 us/op
verifyMultipleAggregateSignatures - 8 sets     1.4355 ms/op      1.5259 ms/op
verifyMultipleAggregateSignatures - 32 sets    3.7716 ms/op      3.9609 ms/op
verifyMultipleAggregateSignatures - 128 sets  12.7219 ms/op     13.3319 ms/op
verifyMultipleAggregateSignatures - 256 sets  24.0479 ms/op     25.3813 ms/op
verifyMultipleAggregateSignatures - 512 sets  48.4746 ms/op     50.3668 ms/op
verifyMultipleAggregateSignatures - 1024 sets 96.7417 ms/op    102.7143 ms/op
verifyMultipleAggregateSignatures - 2048 sets 189.2175 ms/op   205.2370 ms/op
MA001: College Algebra
Take this exam if you want to earn college credit for this course. This course is eligible for college credit through Saylor Academy's Saylor Direct Credit Program.
The Saylor Direct Credit Final Exam requires a proctoring fee of $5. To pass this course and earn a Credly Badge and official transcript, you will need to earn a grade of 70% or higher on the Saylor
Direct Credit Final Exam. Your grade for this exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again a maximum of 3 times, with a
14-day waiting period between each attempt.
We are partnering with SmarterProctoring to help make the proctoring fee more affordable. We will be recording you, your screen, and the audio in your room during the exam. This is an automated
proctoring service, but no decisions are automated; recordings are only viewed by our staff with the purpose of making sure it is you taking the exam and verifying any questions about exam integrity.
We understand that there are challenges with learning at home - we won't invalidate your exam just because your child ran into the room!
1. Desktop Computer
2. Chrome (v74+)
3. Webcam + Microphone
4. 1mbps+ Internet Connection
Once you pass this final exam, you will be awarded a Credly Badge and can request an official transcript.
Quora answer: Why does life use a quaternary system (A, T, G, C) to encode information instead of a binary system?
There is a mathematical reason that the bases are four. The alphabet this code produces has 64 permutations. 64 is a special number, the lowest number which is both 4^3 and 2^6, which means that it can be transformed from two-dimensional to three-dimensional form without losing any information. This is the minimum number for which this is true. Thus it is a mathematically singular point in the number series of information transformation efficiency.
In comments I have been asked to elaborate.
4x4x4 is a cube. (2x2x2)x(2x2x2)=8×8 is a flat matrix with 64 squares like a chess board.
4 bases ^ 3 places in the DNA string = 64 information units.
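The 64 = 4^3 count is easy to check mechanically; a small Python sketch enumerating every triplet over the four bases:

```python
from itertools import product

BASES = "ATGC"
# Every 3-letter word over a 4-letter alphabet:
codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
print(len(codons))  # 64, since 4**3 == 2**6 == 8 * 8
```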
But the real secret here is the fact that this structure is reversible and substitutable without change, and that is why there are 20 amino acids. If you reverse the codon sequences, or if you substitute the two pairs of bases for each other, it does not change the fact that there are 20 groups: 8×2 and 12×4. You get this by substituting yin for yang and reversing the hexagrams. This leads to 20 groups of hexagrams that are impervious to these changes. This makes DNA a code impervious to change based on direction and substitution and explains why there are exactly 20 amino acids.
But because it is a code, it has start and stop codons, and so the mapping is not perfect between the reversible/substitutable case and the actual assignment of codons to amino acids, but it is close.
There are several codon mappings to the Amino Acids and to the start and stop codons and they have an interesting pattern and development. See the following for some of the most interesting research
on Amino Acid to Codon mappings which finds that the mappings are not random as they have been previously portrayed.
1) Petoukhov S.V. & He M. Symmetrical Analysis Techniques for Genetic Systems and
Bioinformatics: Advanced Patterns and Applications. 2010, Hershey, USA: IGI Global. 271 p. (this book has a special section about I Ching and the genetic code!).
2) He M., Petoukhov S.V. Mathematics of bioinformatics: theory, practice, and applications. USA: John Wiley & Sons, Inc., 295 p. (I attach the cover of this book with symbols from I Ching!).
Articles on the site http://arxiv.org/ :
1. Petoukhov S.V. (2008b) The degeneracy of the genetic code and Hadamard matrices. 1-8. Retrieved February 22, 2008, from arXiv:0802.3366
2. Petoukhov S.V. (2008c) Matrix genetics, part 1: permutations of positions in triplets and symmetries of genetic matrices. 1-12. Retrieved March 06, 2008, from arXiv:0803.0888. (version 2 was submitted on March 29, 2010 and is available at http://arxiv.org/abs/0803.0888v2)
3. Petoukhov, S.V. (2008d). Matrix genetics, part 2: the degeneracy of the genetic code and the octave algebra with two quasi-real units (the "Yin-Yang octave algebra"). 1-23. Retrieved March 23, 2008, from arXiv:0803.3330
4. Petoukhov, S.V. (2008e). Matrix genetics, part 3: the evolution of the genetic code from the viewpoint of the genetic octave Yin-Yang-algebra. 1-22. Retrieved May 30, 2008, from arXiv:0805.4692
5. Petoukhov, S.V. (2008f). Matrix genetics, part 4: cyclic changes of the genetic 8-dimensional Yin-Yang-algebras and the algebraic models of physiological cycles. 1-22. Retrieved September 17, 2008, from arXiv:0809.2714
6. Petoukhov, S. (2010). Matrix genetics, part 5: genetic projection operators and direct sums. May 18, 2010, from arXiv:1005.5101v1
One way to think about this is through the game of Chess. I believe that the Game of Chess is right at this boundary where there is efficient information transformation between dimensions. A Chess
board is two dimensional with 64 squares 8×8. When I analyze the pieces in chess I get the same amount of information in the pieces that exist in the chess board. Thus each side contains
differentiated forms of embodied information that completely map to the chess board. This is why there is conflict: both sides are complete mappings of the territory under contention. I will leave it as an exercise to the student to prove or disprove this claim. I don't have my analysis anymore and so I would have to do it all over to prove that what I am saying is correct, and I don't have time to do that right now. But if it is true as I claim, then a lot flows from this. The game gets its perfect form from its being right on the boundary between two and three dimensions and embodying the
transform between them in the board and pieces. Because of this efficiency of transformation the minds of the two players when immersed in the game are interacting right at this threshold of
efficiency and effectiveness of information transformation, and are thus able to communicate semiotically within the game very effectively. This combination of efficiency and effectiveness I call
efficacious. Chess is an extremely efficacious symbolic communication system.
Now the DNA and RNA of the cell is taking advantage of exactly the same mathematical singularity where there is transformation between dimensions without data loss. This is one of the reasons that
replication in life is so efficient. In this case we are going from the coded strand to the three-dimensional molecule via the copying mechanism in RNA. But the fact that this dimensional transformation of the information can be done at this singularity of perfect transformation means that there is no recoding involved. We can see this in magic squares and cubes of order 64. The magic-square-to-cube mapping by the numbers allows us to see how all the numbers are distributed in each with no gaps or re-categorization necessary.
Another example of this structure at the social level is the I Ching and its place in Ancient China as a core text by which all changes were seen as part of a permutational system exactly at this
threshold. It is fascinating to think that both the west and the east had cultural artifacts poised at this threshold of efficient communication. In one civilization it was a game and in the other an
oracular system given philosophical significance.
Day 9 Project: Credit Card Validator
Welcome to the day 9 project for the 30 Days of Python series! For this project, we're going to be writing a simple credit card validator. When the program is complete, we're going to be able to
determine whether a given card number is valid or not.
It's actually a lot easier than you might think, and I'm going to explain the algorithm we're going to use in detail to help you out. You'll find this explanation below, along with the project brief.
After that, we'll present a model solution as part of a code along walkthrough.
As always, try your best to do this on your own, but if you get really stuck, follow along with the solution until you feel like you can finish it yourself. Remember, you can always look back at your
notes, or back the previous days' posts if you forget how something works!
A quick explanation of the algorithm
The algorithm we're going to use to verify card numbers is called the Luhn algorithm, or Luhn formula. This algorithm is actually used in real-life applications to test credit or debit card numbers
as well as SIM card serial numbers.
The purpose of the algorithm is to identify potentially mistyped numbers, because it can determine whether or not it's possible for a given number to be the number for a valid card.
The way we're going to use the algorithm is as follows:
1. Remove the rightmost digit from the card number. This number is called the checking digit, and it will be excluded from most of our calculations.
2. Reverse the order of the remaining digits.
3. For this sequence of reversed digits, take the digits at each of the even indices (0, 2, 4, 6, etc.) and double them. If any of the results are greater than 9, subtract 9 from those numbers.
4. Add together all of the results and add the checking digit.
5. If the result is divisible by 10, the number is a valid card number. If it's not, the card number is not valid.
Let's look at this step by step for a valid number so we can see this in action. The number we're going to use is 5893804115457289, which is a valid Maestro card number, but not one which is in use.
Number Operation
5893804115457289 Starting number
589380411545728X Remove the last digit
827545114083985X Reverse the remaining digits
16214585218016318810X Double digits at even indices
725585218073981X Subtract 9 if over 9
Now we sum these digits and add the checking digit:
7 + 2 + 5 + 5 + 8 + 5 + 2 + 1 + 8 + 0 + 7 + 3 + 9 + 8 + 1 + 9
If we perform this series of additions, we get 80. 80 is divisible by 10, so the card number is valid.
The brief
The program you write for this project should do the following:
1) It should be able to accept a card number from the user. For this project, you can assume that the number will be entered as a single string of characters (i.e. there won't be any spaces between
the numbers). However, you should be able to accept a card number with spaces at the start or end of the string.
If you want to challenge yourself, you should try to be more versatile with regards to the format that you accept card numbers in.
You may want to turn the user's input into a list of numbers, as that will make it easier to work with.
2) The program should validate that card number using the Luhn algorithm described above. You should implement this algorithm yourself.
After removing the checking digit and reversing the card number, you'll need a for loop to go over the credit card numbers. As you go through each digit, you must find a way to determine whether a
digit is in an odd or an even position. Remember you can check the model solution if you get stuck!
3) Once the validation is complete, the program should inform the user whether or not the card number is valid by printing a string to the console.
When you get to the step where you reverse the numbers, you could use the reversed function, which will accept any sequence type:
language = "Python"
numbers = [1, 2, 3, 4, 5]
letters = ("a", "b", "c", "d", "e")
language = reversed(language) # 'nohtyP'
numbers = reversed(numbers) # [5, 4, 3, 2, 1]
letters = reversed(letters) # ('e', 'd', 'c', 'b', 'a')
reversed will give us back a lazy type (like range), so we can't directly print it; however, it is iterable. We can therefore use the result in a for loop, for example.
If our numbers are in a list, we can use the reverse method. This directly modifies the original list:
numbers = [1, 2, 3, 4, 5]
numbers.reverse()
print(numbers)  # [5, 4, 3, 2, 1]
You can use whichever technique you prefer.
When testing your solution, you can use your own card number, or you can find valid card numbers online that are used for testing payment methods. For example, Stripe has a range of test card numbers
you can use.
With that, you're ready to tackle the exercise. Good luck!
Our solution
Below you'll find a written walkthrough for our solution, but we have a link to a video version if you'd prefer to watch a code-along video instead.
As always, we're going to write our solution in small chunks, and I suggest you do the same, testing regularly to make sure you haven't made an error early on. If you try to write everything in one go, it can be very difficult to track down these earlier errors, because they get buried.
Our first step is just going to be accepting a card number from the user:
card_number = input("Please enter a card number: ")
Since we need to be able to accept card numbers where spaces have been added at the start or end, we need to do a small amount of processing on the string we get back from the user. In this case
strip is all we need, which will take care of any extra whitespace.
card_number = input("Please enter a card number: ").strip()
For this solution, I'm going to be creating a list from this string, and I'm doing this for a couple of reasons. First, the reverse method is a very clean way of reversing the card number, and
second, the pop method allows me to very neatly remove and store the check digit.
Let's start by converting the card_number:
card_number = list(input("Please enter a card number: ").strip())
No need to use split here, because I just want to split every character in the string into different list items. This is the default behaviour if we pass a string to the list function.
Now that we have our list, let's extract our check digit and reverse the remaining numbers:
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
Now that we have our reversed numbers, we need to take the digit at every even index and double it. If the number ends up being over 9, we need to subtract 9 from the result.
I think the easiest way to tackle this is to use a counter for the index. We can then use a for loop to increment this counter while iterating over the card_number list.
Inside the for loop, we can then check whether or not the index is divisible by 2. If it is, we know we have an even index. If you need a refresher on how to check if a number is divisible by
another, we spoke about this in yesterday's Fizz Buzz project.
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
index = 0
for digit in card_number:
    if index % 2 == 0:
        print("Even index")
    else:
        print("Odd index")
    # Increment the index counter for each iteration
    index = index + 1
We can also make use of the enumerate function that we learnt about today so that we don't have to keep track of this counter ourselves:
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
for index, digit in enumerate(card_number):
    if index % 2 == 0:
        print("Even index")
    else:
        print("Odd index")
Now that we have this part of the loop set up, we can replace these print calls with something useful.
The first thing we need to add is a place to store our modified digits. For this, we're going to create an empty list called processed_digits. We're then going to populate this list by appending
items from inside the for loop.
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
processed_digits = []
for index, digit in enumerate(card_number):
    if index % 2 == 0:
        print("Even index")
    else:
        print("Odd index")
For odd indices, our task is quite simple. We just need to convert the digit to an integer (remember we currently have a list of strings), and then we need to call append to add the integer to processed_digits:
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
processed_digits = []
for index, digit in enumerate(card_number):
    if index % 2 == 0:
        print("Even index")
    else:
        processed_digits.append(int(digit))
For even indices, we need to perform a few different steps. First, we need to convert the digit to an integer and double the value.
We then need to check if the result of this operation is greater than 9. If it is, we need to subtract 9 from the result. We then need to add this number to the processed_digits.
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
processed_digits = []
for index, digit in enumerate(card_number):
    if index % 2 == 0:
        doubled_digit = int(digit) * 2
        # Subtract 9 from any results that are greater than 9
        if doubled_digit > 9:
            doubled_digit = doubled_digit - 9
        processed_digits.append(doubled_digit)
    else:
        processed_digits.append(int(digit))
Awesome! Now we have our list of processed digits, so all that's left is to add them together and check the result.
To add the processed digits together, we're going to use a for loop again. This time we're going to create a variable called total, and we're going to set its initial value to the check_digit.
For each iteration of the loop, we're then going to add another digit to this total, and we'll end up with the sum of all of our processed digits, as well as the check digit.
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
processed_digits = []
for index, digit in enumerate(card_number):
    if index % 2 == 0:
        doubled_digit = int(digit) * 2
        # Subtract 9 from any results that are greater than 9
        if doubled_digit > 9:
            doubled_digit = doubled_digit - 9
        processed_digits.append(doubled_digit)
    else:
        processed_digits.append(int(digit))

total = int(check_digit)
for digit in processed_digits:
    total = total + digit
If we try our initial card number (5893804115457289), we should get a total of 80.
While this works, there's a much simpler way of adding the numbers in an iterable. We can just pass the iterable to sum:
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
processed_digits = []
for index, digit in enumerate(card_number):
    if index % 2 == 0:
        doubled_digit = int(digit) * 2
        # Subtract 9 from any results that are greater than 9
        if doubled_digit > 9:
            doubled_digit = doubled_digit - 9
        processed_digits.append(doubled_digit)
    else:
        processed_digits.append(int(digit))

total = int(check_digit) + sum(processed_digits)
Not only is this much shorter, it's also quite a lot easier to read and understand.
Now that we have the sum of the card digits, we just need to test whether or not the total is divisible by 10. We can use the same method as we did for testing even indices.
card_number = list(input("Please enter a card number: ").strip())
# Remove the last digit from the card number
check_digit = card_number.pop()
# Reverse the order of the remaining numbers
card_number.reverse()
processed_digits = []
for index, digit in enumerate(card_number):
    if index % 2 == 0:
        doubled_digit = int(digit) * 2
        # Subtract 9 from any results that are greater than 9
        if doubled_digit > 9:
            doubled_digit = doubled_digit - 9
        processed_digits.append(doubled_digit)
    else:
        processed_digits.append(int(digit))

total = int(check_digit) + sum(processed_digits)

# Verify that the sum of the digits is divisible by 10
if total % 10 == 0:
    print("Valid card number!")
else:
    print("Invalid card number!")
With that, we're done! We have a fully functioning validator for card numbers.
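As a closing aside, the finished logic can also be packaged into a reusable function — here is one possible sketch (the function name is our choice, not part of the brief):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if card_number passes the Luhn check described above."""
    digits = [int(c) for c in card_number.strip()]
    check_digit = digits.pop()   # step 1: remove the checking digit
    digits.reverse()             # step 2: reverse the remaining digits
    total = check_digit
    for index, digit in enumerate(digits):
        if index % 2 == 0:       # step 3: double digits at even indices
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit           # step 4: sum everything plus the check digit
    return total % 10 == 0       # step 5: valid if divisible by 10
```

Wrapping the logic in a function makes it easy to test against several card numbers at once, such as the Stripe test numbers mentioned earlier.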
Additional Resources
If you're interested in learning more about the sum function we used above, you can find information in the official documentation.
How to implement a formal expectation operator over an unknown distribution
12114 Views
8 Replies
5 Total Likes
How to implement a formal expectation operator over an unknown distribution
I need to do some simplification of an expression involving averages over a stochastic variable (in order to verify a long analytical calculation). The easiest way to do that, I figured, were if I
could implement an operator which would basically be short-hand for the averaging procedure, with all the appropriate properties. Then of course this operator would be present in the final
expression, which is fine, and would enable me to compare easily with my own calculations. So assuming I use x for the stochastic variable, I tried defining av using
av[y_ + z_] := av[y] + av[z]
av[c_ y_] := c av[y] /; FreeQ[c, x]
av[c_] := c /; FreeQ[c, x]
Then when I write
D[av[x y], x]
I get
y
which is fine, but when I write
D[av[Exp[-x y]], x]
I get
-E^(-x y) y
instead of
-y av[Exp[-x y]]
as I want; i.e., the av is removed somehow.
I tried using UpValue for teaching Mathematica that it could interchange differentiation and av, but apparently that is not the problem. I might be going about this entirely the wrong way, but I'd be
grateful for any input. Note the builtin Expectation function does not accomplish it either.
8 Replies
Just to avoid misunderstanding, I should add that when applying the differential operator, it should not be with respect to the stochastic variable, but with respect to some parametric variable
characterizing for example a family of stochastic variables. To be concrete, one could consider diffusion where x(t) is the position as a function of time, i.e. a stochastic variable. Then we could
consider for example the average position av[x(t)], or the average velocity av[D[x(t),t]]=D[av[x(t)],t].
No, it's definitely not the identity operator. That would be a special case. It's important to remember that it is an operator, not a function. One example could be
av[f[x]]=Integrate[f[x] p[x], {x,-Infinity,Infinity}]
where the p could be the Gaussian distribution for example. (Note that it is strictly speaking incorrect to write it in this way, because on the left-hand side, x is a stochastic variable, whereas it
is the integration variable on the right-hand side.)
So the differential operator does not act on it, but commutes with it. I still don't understand why Mathematica does not apply my rule, specifically in your second line of code (Mathematica (chain
rule), after the first == ... that's exactly the pattern to which my rule applies, and the chain rule has already been applied once at this stage.
My first thought would be to use a fake derivative - call it inertD.
Then you can probably make it behave however you want w.r.t. av, and at the end replace it with D if you need to
I have a guess as to what's going on.
Your definition of derivative for av disobeys the chain rule. I think Mathematica is first applying the chain rule, then your rule.
We can 'hack' our way around this (kind of ugly)
ClearAttributes[D, Protected]
(* avoid Mathematica's chain rule *)
D[f_[HoldPattern[av][g_]], x_] := Module[{u},
  av[D[g, x]] (D[f[u], u] /. u -> av[g])]
In[1]:= D[Log[av[Exp[-b x]]], b]
Out[1]= -(av[E^(-b x) x]/av[E^(-b x)])
Hi chip.
Thanks for your reply. I think you're right that it has to do with the chain rule, but I don't exactly understand it. When applying the chain rule, it still needs to apply the derivative to the av
operator, and then my rule would apply, wouldn't it?
I guess your hack could be formulated as a upvalue also, which if so probably would be better practice.
@Peter: I didn't want to introduce a new operator, because I would like to take advantage of functions such as Series, which call D.
When applying the chain rule, it still needs to apply the derivative to the av operator, and then my rule would apply, wouldn't it?
When applying the chain rule, Mathematica would only need to evaluate D[av(u), u], which is 1 (df/dx = df/du du/dx).
Now that I look at it again, I think both answers (yours and Mathematica's) are correct. In fact, can't we use both results to conclude av(x) is the identity function?
Your way (special D rule): D[Log[av[f(x)]], x] == D[av[f(x)], x] / av[f(x)] == av[D[f(x), x]] / av[f(x)] == av[f'(x)] / av[f(x)].
Mathematica (chain rule): D[Log[av[f(x)]], x] == D[av[f(x)], x] / av[f(x)] == (D[av(u), u] /. u -> f(x)) D[f(x), x] / av[f(x)] == 1 * f'(x) / av[f(x)].
This means av[f'(x)] == f'(x), and letting f(x) == x^2/2 gives us av(x) == x?
Well, turns out there's still a problem:
In[9]:= D[Log[av[Exp[-b x]]], b]
Out[9]= -((E^(-b x) x)/av[E^(-b x)])
instead of
-(av[(E^(-b x) x)]/av[E^(-b x)])
What's going on?
I think I got it:
In[1]:= av /: D[av[f___], x_] := av[D[f, x]]
In[2]:= av[y_ + z_] := av[y] + av[z]
In[3]:= av[c_ y_] := c av[y] /; FreeQ[c, x]
In[4]:= av[c_] := c /; FreeQ[c, x]
In[5]:= D[av[x y], x]
Out[5]= y
In[6]:= D[av[Exp[-x y]], x]
Out[6]= -y av[E^(-x y)]
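The identity these rules produce (e.g. D[Log[av[Exp[-b x]]], b] == -av[x Exp[-b x]]/av[Exp[-b x]] from Out[1]) can also be sanity-checked numerically outside Mathematica. A hypothetical Python sketch, with av taken as a weighted average over a few made-up discrete sample points:

```python
import math

# av[.] modeled as a weighted average over hypothetical discrete sample points x_i
xs = [0.5, 1.0, 2.0, 3.5]
ws = [0.1, 0.4, 0.3, 0.2]  # probabilities summing to 1

def av(f):
    return sum(w * f(x) for w, x in zip(ws, xs))

def log_av_exp(b):
    # log(av[Exp[-b x]])
    return math.log(av(lambda x: math.exp(-b * x)))

b, h = 0.7, 1e-6
# central finite-difference approximation of D[Log[av[Exp[-b x]]], b]
numeric = (log_av_exp(b + h) - log_av_exp(b - h)) / (2 * h)
# the closed form produced by the av-aware derivative rule
closed = -av(lambda x: x * math.exp(-b * x)) / av(lambda x: math.exp(-b * x))
print(abs(numeric - closed))  # difference is at finite-difference noise level
```

The check works for any discrete weighting, since the rule only relies on av being linear and independent of b.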
Bottom-up Clustering Techniques
This is by far the most widely used approach for speaker clustering, as it lends itself to using the speaker segmentation techniques to define a clustering starting point. It is also referred to as agglomerative clustering and has been used for many years in pattern classification (see for example Duda and Hart (1973)). Normally a distance matrix between all current clusters (every cluster against every other) is computed, and the closest pair is merged iteratively until the stopping criterion is met.
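The generic bottom-up loop can be sketched as follows (a hypothetical Python sketch; the distance function and threshold here are placeholders standing in for the various metrics and stopping criteria surveyed below):

```python
def agglomerate(clusters, dist, stop_threshold):
    """Merge the closest pair of clusters until the smallest
    pairwise distance exceeds stop_threshold."""
    clusters = [list(c) for c in clusters]
    while len(clusters) > 1:
        # distance matrix between all current clusters ("any with any")
        d, i, j = min((dist(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        if d > stop_threshold:        # stopping criterion met
            break
        clusters[i].extend(clusters[j])  # merge the closest pair
        del clusters[j]
    return clusters

# placeholder distance: gap between cluster means (1-D toy data)
mean_gap = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))

segments = [[0.1], [0.2], [5.0], [5.3], [9.9]]
print(agglomerate(segments, mean_gap, stop_threshold=1.0))
# -> [[0.1, 0.2], [5.0, 5.3], [9.9]]
```

Swapping in KL2, GLR, or BIC-based distances and stopping rules recovers the systems discussed in the rest of this section.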
Some of the earliest research on speaker clustering for speech recognition was proposed in Jin et al. (1997), using the Gish distance (Gish et al., 1991) as the distance matrix, with a weight to favor merging of neighbors. As the stopping criterion, the minimization of a penalized version (to avoid over-merging) of the within-cluster dispersion matrix is proposed as
where is the number of clusters considered, is the covariance matrix of cluster , with acoustic segments and indicating the determinant.
Around the same time, in Siegler et al. (1997) the KL2 divergence was used as a distance metric, and a stopping criterion was determined with a merging threshold. It was shown that the KL2 distance works better than the Mahalanobis distance for speaker clustering. Also in Zhou and Hansen (2000) the KL2 metric is used as a cluster distance metric. In this work they first split the speech segments into male/female and perform clustering on each group independently; this reduces computation (the number of cluster-pair combinations is smaller) and gives them better results.
In general, the use of statistics-based distance metrics (not requiring any models to be trained) is limited in speaker clustering, as they implicitly define distances between single mean vectors and covariance matrices from each set, which often falls short of modeling the amount of data available from one speaker. Some authors have adapted these distances and obtained multi-Gaussian equivalents.
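For intuition, here is the single-(univariate-)Gaussian version of the symmetrized KL (KL2) distance these metrics build on (a hypothetical Python sketch; the cited papers use the multivariate form):

```python
import math

def kl_gauss(mu_p, s_p, mu_q, s_q):
    """KL(p || q) for univariate Gaussians p = N(mu_p, s_p^2), q = N(mu_q, s_q^2)."""
    return (math.log(s_q / s_p)
            + (s_p ** 2 + (mu_p - mu_q) ** 2) / (2 * s_q ** 2)
            - 0.5)

def kl2(mu_p, s_p, mu_q, s_q):
    """Symmetrized KL divergence, the single-Gaussian 'KL2' cluster distance."""
    return kl_gauss(mu_p, s_p, mu_q, s_q) + kl_gauss(mu_q, s_q, mu_p, s_p)

print(kl2(0.0, 1.0, 0.0, 1.0))  # identical Gaussians: distance 0.0
print(kl2(0.0, 1.0, 2.0, 1.5))  # distinct Gaussians: strictly positive
```

Symmetrization matters here because plain KL is not a symmetric function of its two arguments, while a cluster distance should be.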
In Rougui et al. (2006) they propose a distance between two GMM models based on the KL distance. Given two models and , with and Gaussian mixtures each, and Gaussian weights and , the distance from
to is
where is one of the Gaussians from the model.
In Beigi et al. (1998) a distance between two GMM models is proposed by using the distances between the individual Gaussian mixtures. A distance matrix between all possible Gaussian pairs in the two models is computed (the distances proposed are the Euclidean, Mahalanobis and KL), and then the weighted minima of each row and column are used to compute the final distance.
In Ben et al. (2004) and Moraru et al. (2005) cluster models are obtained via MAP adaptation from a GMM trained on the whole show. A novel distance between GMM models is derived from the KL2 distance for the particular case where only the means are adapted (and therefore weights and variances are identical in both models). Such a distance is defined as
where and are the mean components for the mean vector for Gaussian , is the variance component for Gaussian and M, D are the number of mixtures and dimension of the GMM models respectively.
In Ben et al. (2004) a threshold is applied to such distance to serve as stopping criterion, while in Moraru et al. (2005) the BIC for the global system is used instead.
Leaving the statistics-based methods behind, in Gauvain et al. (1998) and Barras et al. (2004) a GLR metric with two penalty terms is proposed, penalizing large numbers of segments and clusters in the model, with tuning parameters. Iterative Viterbi decoding and merging iterations find the optimum clustering, which is stopped using the same metric.
Solomonov et al. (1998) also uses GLR, comparing it to KL2 as the distance metric, and iteratively merges clusters until the estimated cluster purity is maximized, defined as the average over all segments and all clusters of the ratio of segments belonging to a given cluster among the closest segments to each segment of that cluster. The same stopping criterion is used in Tsai et al. (2004), where several methods are presented to create a different reference space for the acoustic vectors that better represents similarities between speakers. The reference space defines a speaker space onto which feature vectors are projected, and the cosine measure is used as a distance metric. It is claimed that such projections are more representative of the speakers.
Other research is done using GLR as distance metric, including Siu et al. (1992) for pilot-controller clustering and Jin et al. (2004) for meetings diarization (using BIC as stopping criterion).
The most commonly used distance and stopping criterion is again BIC, which was initially proposed for clustering in Shaobing Chen and Gopalakrishnan (1998) and Chen and Gopalakrishnan (1998). The pair-wise distance matrix is computed at each iteration and the pair with the biggest BIC value is merged. The process finishes when no pair yields a positive BIC value. Some later research (Chen et al. (2002), Tritschler and Gopinath (1999), Tranter and Reynolds (2004), Cettolo and Vescovi (2003) for the Italian language and Meinedo and Neto (2003) for the Portuguese language) proposes modifications to the penalty term and differences in the segmentation setup.
In Sankar et al. (1995) and Heck and Sankar (1997) the symmetric relative entropy distance (Juang and Rabiner, 1985) is used for speaker clustering towards speaker adaptation in ASR. This distance is similar to that of Anguera (2005) and equivalent to that of Malegaonkar et al. (2006), both used for speaker segmentation. It is defined as
where is defined as
An empirically set threshold on the distance is used as a stopping criterion. Later on, the same authors propose in Sankar et al. (1998) a clustering based on a single GMM model trained on the whole show, with the weights adapted to each cluster. The distance used then is a count-weighted entropy change due to merging two clusters (Digalakis et al., 1996).
Barras et al. (2004), Zhu et al. (2005), Zhu et al. (2006) and later Sinha et al. (2005) propose a diarization system making use of speaker identification techniques in the area of speaker modeling. A clustering system initially proposed in Gauvain et al. (1998) is used to determine an initial segmentation in Barras et al. (2004), Zhu et al. (2005) and Zhu et al. (2006), while a standard speaker change detection algorithm is used in Sinha et al. (2005). The systems then use standard agglomerative clustering via BIC, with a penalty value set to obtain more clusters than optimal (i.e., to under-cluster the data). On the speaker diarization part, the system first classifies each cluster for gender and bandwidth (in broadcast news) and uses a Universal Background Model (UBM) and MAP
adaptation to derive speaker models from each cluster. In most cases a local feature warping normalization (Pelecanos and Sridharan, 2001) is applied to the features to reduce non-stationary effects
of the acoustic environment. The speaker models are then compared using a metric between clusters called cross likelihood distance (Reynolds et al., 1998), and defined as
where indicates that the model has been MAP adapted from the UBM model. An empirically set threshold stops the iterative merging process.
The same cross-likelihood metric is used in Nishida and Kawahara (2003) to compare two clusters. In this paper emphasis is given to the selection of the appropriate model when training data is very
small. It proposes a vector quantization (VQ) based method to model small segments, by defining a model called common variance GMM (CVGMM) where Gaussian weights are set uniform and variance is tied
among Gaussians and set to the variance of all models. For each cluster BIC is used to select either GMM or CVGMM as the model to be used.
Other approaches integrate the segmentation with the clustering by using a model-based segmentation/clustering scheme. This is the case in Ajmera et al. (2002), Ajmera and Wooters (2003) and Wooters et al. (2004), where an initial segmentation is used to train speaker models that iteratively decode and retrain on the acoustic data. A threshold-free BIC metric (Ajmera et al., 2003) is used to merge the closest clusters at each iteration and as the stopping criterion.
In Wilcox et al. (1994) a penalized GLR is proposed within a traditional agglomerative clustering approach. The penalty factor favors merging clusters which are close in time. To model the clusters,
a general GMM is built from all the data in the recording and only the weights are adapted to each cluster as in Sankar et al. (1998). A refinement stage composed of iterative Viterbi decoding and EM
training follows the clustering, to redefine segment boundaries, until likelihood converges.
In Moh et al. (2003) a novel approach to speaker clustering is proposed, using speaker triangulation to cluster the speakers. Given a set of clusters and the group of non-overlapping acoustic segments which populate the different subsets/clusters, the first step generates the coordinates vector of each cluster according to each segment (modeled with a full covariance Gaussian model) by computing the likelihood of each cluster against each segment. The similarity between two clusters is then defined as the cross-correlation between such vectors, merging those clusters with higher similarity. This can also be considered a projection of the acoustic data into a speaker space prior to the distance computation.
Mind and Nature Leonardo Review
Mind and Nature: Selected Writings on Philosophy, Mathematics and Physics
by Herman Weyl; edited and with introduction by Peter Pesic
Princeton University Press, Princeton, New Jersey, 2009
272 pp., illus. 43 b/w. Trade, $35.00
ISBN: 978-0-691-13545-8.
Reviewed by Martha Patricia Niño Mojica
In Mind and Nature, Weyl shows the development of what we understand as modern science. In this context, he presents mathematics and physics as factors that open the world and also analyses the
regularity of the motion of the stars and other bodies. For justifying this idea, Weyl explores the work of Galileo Galilei, a pioneer of modern science, who was eager to affirm that the laws of
nature can be explained in mathematical terms. This radical rationalism is very logical if you observe the beauty and astonishing accuracy that accompany some mathematical processes. In addition, the
author defines God as the completed infinite (2009, p.47). For him, God is not merely a mathematician but mathematics itself because mathematics can be seen as the science of the infinite that helps
us explore the finite. In that sense, the essence of mathematics is Mathesis Universalis, or universal knowledge. Galileo described the relation between philosophy and mathematics as philosophy written in the book of nature: a text that no one can read unless he has mastered its code of mathematical figures. This idea about nature is also analyzed through the work of Johannes Kepler, who stated that “nature loves simplicity of unity” (2009, p.47). It would be interesting to contrast this set of ideas with more recent findings, such as the idea of the genome and the codification of living beings through sequences of DNA.
Weyl also highlights the work of Isaac Newton as one of the philosophers interested in the relation of mathematics, philosophy, and theology. Newton's metaphysical notion of absolute space is based on the classical mechanics that he delineated. Weyl also points out the importance of consciousness for imagining and conceptualizing our world. It is curious how Weyl affirms that Newton considered space the sensorium Dei, the divine omnipresence in all things (2009, p.45).
Dante’s Divina Commedia, for Weyl, is not only a great visionary poem but also a description of the cosmos in geometrical terms. In a similar way, the works of Aristotle expose the idea of perfection in nature and in forms such as the sphere. Other thinkers, such as Democritus, Anaxagoras, Albert Einstein, Leonhard Euler, Gottfried Leibniz, Henri Poincaré, Ernst Mach, David Hilbert, Giordano Bruno, Martin Heidegger, and René Descartes, are also discussed. The text presents interesting research on the history and development of the main paradigms of modern physics. Even though Weyl worked mainly as a mathematician, he considered it necessary not only to act and create but also to reflect on those actions in the realm of judgments and insights.
Without rethinking our actions, they will outrun reason, turn into routine, and all will go astray from that moment on. Among the subjects treated in the book are the study of vectors in space, the incorporation of electromagnetism into the structure of space-time, topological reflections about matter and the structure of the universe, God and the universe, symmetry and its role in the theory of relativity, reflections about whether reality and nature can be totally described in mathematical terms, and discussions about causality, determinism, personal freedom and quantum mechanics. The most interesting topics of the book concern electricity and gravitation, man and the foundations of science, and reflections about still unanswered questions in physics.
The book also includes personal photographs and biographical information about Hermann Weyl (1885–1955), a mathematician who worked on the development of quantum physics and general relativity. In this text you can find his most important writings, some of them previously unpublished. He was described by Albert Einstein as one of the greatest mathematicians of the first half of the twentieth century. His work was influenced by Immanuel Kant’s ideality of space and time. In spite of the fact that he is not as famous as Albert Einstein, Weyl’s work influenced Einstein, who incorporated into his general theory what Weyl called relativity of magnitude. In other publications Weyl worked on the point-set topology of surfaces, the principle of gauge invariance, and manifolds. He was also a pioneer of quantum theory. The ideas exposed in the book are interesting. Some biographical affirmations discredit his work, and some parts of the introduction contain a couple of comments about Weyl’s personal life that may discourage the reader. The selection of texts made by the editor is very good.
Leonardo Reviews is a scholarly review service published since 1968 by Leonardo, The International Society for the Arts, Sciences, and Technology.
How far is Pasir Gudang from Jerantut
The road driving distance from Pasir Gudang to Jerantut is 410 km. Depending on the vehicle you choose to travel in, you can calculate the amount of CO2 emissions from your vehicle and assess the environmental impact. Check our Fuel Price Calculator to estimate the trip cost.
Four Years Remaining
Posted by Konstantin 12.03.2009 No Comments
The yachts parked in the port of Palma de Mallorca might very well outnumber the presumably plentiful palm trees in this city. The photo below depicts just a small piece of the 3 kilometer-long
marina, stretched along the Paseo Maritimo coastline.
For the fun of it, let's try counting the yachts on this photo. Even though it is perhaps one of the most useless exercises one might come up with, I find it amusing and not without a relation to a
variety of more serious applications. Naturally, no sane computer scientist would attempt to do the counting manually when he can let a computer do this job for him. Therefore, we'll use a chance
here to exploit the fantastic Scilab Image Processing toolbox.
If we only want to know the number of masts, we don't need the whole image; a single line of the image that crosses all of the masts would suffice. Some pixels of this line will correspond to the sky — a brief examination shows that the RGB colour of these pixels is approximately (0.78, 0.78, 0.78). The remaining pixels will be the masts and the ropes. For each pixel, we can compute how similar this pixel's colour is to the colour of the sky (the simple Euclidean distance will do OK), and thus we shall end up with a single vector, the peaks in which will correspond to the masts we need to count.
// Read image
img = imread("yachts.jpg");
// Get the proper line
img_line = img(505,:,:);
// Subtract sky colour
img_line = sqrt(sum((img_line - 0.78*ones(img_line)).^2, 3));
Let us now do the counting.
1. Geometric approach
A look on the picture above will tell you that a threshold of about 0.2 might be good for discriminating the masts from the background. Once we apply it, there are at least two easy ways to derive
the number of masts.
• Firstly, we could count the number of pixels over the threshold and divide it by four, which seems like roughly the average width of a mast.
answer = sum(img_line > 0.2)/4
The result is ~135.
• Secondly, we could count the number of times the pixel value crosses the 0.2 boundary:
answer = sum(img_line(2:$) > 0.2 & img_line(1:($-1)) <= 0.2);
Here the result will be 158.
2. Spectral approach
The discrete Fourier transform of a vector is a natural way for counting repeating peaks in the signal.
ff = abs(fft(center(img_line)));
plot(0:300, ff(1:301));
The interpretation of the resulting figure should, in theory, be simple. If the photo only contained n uniformly distributed masts, the spectrum would have one clear peak at n. In our case,
unfortunately, things do not seem to be as easy. With some wishful thinking one might see major peaks around 50, 150 and 300. Now taking the previous results into consideration let us presume that
150 is the correct peak. Then 50 could correspond to the 50 "larger" masts on the image, and 300, perhaps, to the ropes.
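The underlying claim (n uniformly spaced peaks give a dominant spectral peak at bin n) is easy to verify on clean synthetic data. A hypothetical Python/NumPy sketch; the real photo is, of course, far noisier:

```python
import numpy as np

n_masts, period, width = 150, 10, 3
length = n_masts * period            # a 1500-sample synthetic "scan line"
signal = np.zeros(length)
for k in range(n_masts):             # 150 rectangular "masts", 3 px wide each
    signal[k * period : k * period + width] = 1.0

spectrum = np.abs(np.fft.fft(signal - signal.mean()))
peak_bin = 1 + int(np.argmax(spectrum[1 : length // 2]))  # skip the DC bin
print(peak_bin)  # -> 150, the number of masts
```

The harmonics at bins 300, 450, ... are also present but weaker, which mirrors the multiple candidate peaks seen in the real spectrum.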
3. Statistical approach
The original image is 1600 pixels wide. If the photo contained n masts, distributed uniformly along its width, a randomly chosen 32-pixel region would contain about n/50 masts on average. Therefore,
let us select some of these random regions and count the number of masts in them manually. After multiplying the resulting numbers by 50 we should obtain estimates of the total number of masts.
rand("seed", 1);
for i=1:9
  m = floor(rand()*(1600-32))+1;
  imshow(img([474:566], [m:(m+32)],:))
end
Here are the estimates I got: 200, 50, 50, 150, 150, 100, 100, 100, 100. The average comes out at 111, with a standard deviation of about 50.
There are serious discrepancies among the results produced by the various approaches. However, if you attempt to count the masts manually, you will quickly discover that there is indeed an inherent degree of uncertainty in the number of visible masts, with the true number lying somewhere between 100 and 150. Consequently, the considered approaches to mast counting were not too far off, after all.
Tags: Fun, Image processing, Scilab, Travel
large-update method
Recently, so-called self-regular barrier functions for primal-dual interior-point methods (IPMs) for linear optimization were introduced. Each such barrier function is determined by its (univariate)
self-regular kernel function. We introduce a new class of kernel functions. The class is defined by some simple conditions on the kernel function and its derivatives. These properties enable us to …
A New and Efficient Large-Update Interior-Point Method for Linear Optimization
Recently, the authors presented a new large-update primal-dual method for Linear Optimization, whose $O(n^{2/3}\log\frac{n}{\varepsilon})$ iteration bound substantially improved the classical bound for such methods, which is $O\left(n\log\frac{n}{\varepsilon}\right)$. In this paper we present an improved analysis of the new method. The analysis uses some new mathematical tools developed before when we considered a whole family of …
Modeling the Marginals and the Dependence separately
When introducing copulas, it is commonly asserted that copulas are interesting because they allow one to model the marginals and the dependence structure separately. The motivation is probably Sklar’s theorem, which says that given some marginal cumulative distribution functions (say $F$ and $G$, in dimension 2) and a copula (denoted $C$), we can generate a multivariate cumulative distribution function with the marginals specified previously, using
$H(x,y)=C(F(x),G(y))$
But this separability might be misleading. Consider the case of a fully parametric model,
$\left\{\begin{array}{l}F=F_{\alpha_0}\in\{F_\alpha;\alpha\in A\}\\G=G_{\beta_0}\in\{G_\beta;\beta\in B\} \\ C=C_{\gamma_0}\in\{C_\gamma;\gamma\in \Gamma\}\end{array}\right.$
Assume that those distributions are continuous, so that we can write the likelihood using densities,
$\mathcal{L}=\prod_{i=1}^n f_{\alpha}(X_i)\cdot g_{\beta}(Y_i)\cdot c_\gamma(F_{\alpha}(X_i),G_{\beta}(Y_i))$ and the log-likelihood is
$\log\mathcal{L}=\underbrace{\sum_{i=1}^n \log f_{\alpha}(X_i)}_{\log\mathcal{L}(\alpha)}+\underbrace{\sum_{i=1}^n \log g_{\beta}(Y_i)}_{\log\mathcal{L}(\beta)}+\underbrace{\sum_{i=1}^n \log c_\gamma(F_{\alpha}(X_i),G_{\beta}(Y_i))}_{\log\mathcal{L}(\gamma;\alpha,\beta)}$
The first part is the log-likelihood if we consider the first marginal (only). The second part is the log-likelihood if we consider the second marginal (only). If the two components are not
independent (i.e. the copula density $c$ is not equal to 1 everywhere) the third part cannot be considered as null, and so, in a general context,
$(\alpha^\star,\beta^\star)\neq (\widehat{\alpha},\widehat{\beta})$
where
$(\alpha^\star,\beta^\star,\gamma^\star)=\text{argmax}\{\log \mathcal{L}\}$
while
$\left\{\begin{array}{l}\widehat{\alpha}=\text{argmax}\{\log \mathcal{L}(\alpha)\}\\ \widehat{\beta}=\text{argmax}\{\log \mathcal{L}(\beta)\}\end{array}\right.$
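To see concretely that the third term carries all of the dependence, here is a hypothetical Python sketch for the Gaussian case (the post itself works in R). It checks that log f(x,y) = log f(x) + log g(y) + log c(F(x), G(y)), and that the copula term vanishes under independence:

```python
import math

def log_phi(x):
    """Log density of the standard normal."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * x * x

def log_phi2(x, y, rho):
    """Log density of the standard bivariate normal with correlation rho."""
    q = (x * x - 2 * rho * x * y + y * y) / (1 - rho * rho)
    return -math.log(2 * math.pi) - 0.5 * math.log(1 - rho * rho) - 0.5 * q

def log_copula(x, y, rho):
    """Log Gaussian copula density at (u, v) = (Phi(x), Phi(y)),
    written directly in terms of the quantiles x and y."""
    return log_phi2(x, y, rho) - log_phi(x) - log_phi(y)

rho, x, y = 0.8, 0.3, -1.1
lhs = log_phi2(x, y, rho)                           # joint log-density
rhs = log_phi(x) + log_phi(y) + log_copula(x, y, rho)
print(abs(lhs - rhs))           # numerically zero
print(log_copula(x, y, 0.0))    # independence: the copula term vanishes
```

Since the copula term depends on the marginal parameters through F and G, maximizing the three terms jointly and maximizing them separately are different problems, which is exactly the point made below.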
In order to illustrate this point, consider a bivariate lognormal distribution (obtained by taking the exponential of a Gaussian vector)
> mu1=1
> mu2=2
> MU=c(mu1,mu2)
> s1=1
> s2=sqrt(2)
> r=.8
> SIGMA=matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
> library(mnormt)
> set.seed(1)
> Z=exp(rmnorm(25,MU,SIGMA))
If we believe that marginals and correlations can be treated separately, we can start with marginal distributions.
> library(MASS)
> (p1=fitdistr(Z[,1],"lognormal"))
meanlog sdlog
1.1686652 0.9309119
(0.1861824) (0.1316508)
> (p2=fitdistr(Z[,2],"lognormal"))
meanlog sdlog
2.2181721 1.1684049
(0.2336810) (0.1652374)
Based on those marginal distributions, define $\tilde{U}_i=F_{\widehat{\alpha}}(X_i)$ and $\tilde{V}_i=G_{\widehat{\beta}}(Y_i)$, and consider the maximum likelihood estimator $\widehat{\gamma}$ of the copula parameter, obtained from this pseudo sample,
$\widehat{\gamma}=\text{argmax}\left\{\sum_{i=1}^n \log c_\gamma(\tilde U_i,\tilde V_i) \right\}$
Numerically, we get (since we consider a Gaussian copula, which is the true copula generated here)
> library(copula)
> Gcop=normalCopula(.3,dim=2)
> U=cbind(plnorm(Z[,1],p1$estimate[1],p1$estimate[2]),
+ plnorm(Z[,2],p2$estimate[1],p2$estimate[2]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
Estimate Std. Error z value Pr(>|z|)
rho.1 0.86530 0.03799 22.77
But clearly, we did not treat the dependence structure separately, since it was a function of marginal distributions,
$\widehat{\gamma}=\text{argmax}\left\{\sum_{i=1}^n \log c_\gamma(F_{\widehat{\alpha}}(X_i),G_{\widehat{\beta}}(Y_i)) \right\}$
If we consider a global optimization problem, then results are different. The joint density can be derived (see e.g. Mostafa & Mahmoud (1964))
> dbivlognorm=function(x,theta){
+ mu1=theta[1]
+ mu2=theta[2]
+ s1=theta[3]
+ s2=theta[4]
+ r=theta[5]
+ a1=(log(x[,1])-mu1)/s1
+ a2=(log(x[,2])-mu2)/s2
+ d=1/(2*pi*s1*s2*sqrt(1-r^2))*1/(x[,1]*x[,2])*
+ exp(-(a1^2-2*r*a1*a2+a2^2)/(2*(1-r^2)))
+ return(d)
+ }
> LogLik=function(theta){
+ return(-sum(log(dbivlognorm(Z,theta))))}
> optim(par=c(0,0,1,1,0),fn=LogLik)$par
[1] 1.1655359 2.2159767 0.9237853 1.1610132 0.8645052
The difference is not huge, but still. The estimators are not identical. From a statistical point of view, we can hardly treat the marginals and the dependence structure separately.
Another point we should keep in mind is that the estimation of the copula parameter depends on the margins, not only through the parameters, but more deeply, through the choice of the marginal
distributions (that might be misspecified). For instance, if we assume that margins are exponentially distributed,
> (p1=fitdistr(Z[,1],"exponential"))
> (p2=fitdistr(Z[,2],"exponential"))
the estimation of the parameter of the Gaussian copula yields
> U=cbind(pexp(Z[,1],p1$estimate[1]),
+ pexp(Z[,2],p2$estimate[1]))
> fitCopula(Gcop,data=U,method="ml")
fitCopula() estimation based on 'maximum likelihood'
and a sample of size 25.
Estimate Std. Error z value Pr(>|z|)
rho.1 0.87421 0.03617 24.17 <2e-16 ***
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The maximized loglikelihood is 15.4
Optimization converged
The problem is that since we misspecify marginal distribution, our pseudo sample is defined on the unit-interval, but there is no chance that we get uniform margins. If we generate a sample of size
500 with the code above,
> x <- U[,1]; y <- U[,2]
> xhist <- hist(x, plot=FALSE) ; yhist <- hist(y, plot=FALSE)
> top <- max(c(xhist$counts, yhist$counts))
> nf <- layout(matrix(c(2,0,1,3),2,2,byrow=TRUE), c(3,1), c(1,3), TRUE)
> par(mar=c(3,3,1,1))
> plot(x, y, xlab="", ylab="",col="red",xlim=0:1,ylim=0:1)
> par(mar=c(0,3,1,1))
> barplot(xhist$counts, axes=FALSE, ylim=c(0, top),
+ space=0,col="light green")
> par(mar=c(3,0,1,1))
> barplot(yhist$counts, axes=FALSE, xlim=c(0, top),
+ space=0, horiz=TRUE,col="light blue")
If we compare with the previous case, when marginal distribution were well-specified, we can clearly see that the dependence structure depends on marginal distributions,
OpenEdition suggests that you cite this post as follows:
Arthur Charpentier (April 1, 2014). Modeling the Marginals and the Dependence separately. Freakonometrics. Retrieved November 8, 2024 from https://doi.org/10.58079/ouux
Computing Tangent Space Basis Vectors for an Arbitrary Mesh
Eric Lengyel • March 15, 2004
Modern bump mapping (also known as normal mapping) requires that tangent plane basis vectors be calculated for each vertex in a mesh. This article presents the theory behind the computation of
per-vertex tangent spaces for an arbitrary triangle mesh and provides source code that implements the proper mathematics.
[Edit: this derivation first appeared in Mathematics for 3D Game Programming & Computer Graphics, 1st ed., 2001. An updated derivation appears in Foundations of Game Engine Development, Volume 2:
Rendering, 2019.]
Mathematical Derivation
We want our tangent space to be aligned such that the x axis corresponds to the u direction in the bump map and the y axis corresponds to the v direction in the bump map. That is, if Q represents a
point inside the triangle, we would like to be able to write
\(\mathbf Q − \mathbf P_0 = (u − u_0)\mathbf T + (v − v_0)\mathbf B,\)
where \(\mathbf P_0\) is the position of one of the vertices of the triangle, and \((u_0, v_0)\) are the texture coordinates at that vertex. The vectors T and B are the tangent and bitangent vectors
aligned to the texture map, and these are what we’d like to calculate.
Suppose that we have a triangle whose vertex positions are given by the points \(\mathbf P_0,\) \(\mathbf P_1,\) and \(\mathbf P_2,\) and whose corresponding texture coordinates are given by \((u_0,
v_0),\) \((u_1, v_1),\) and \((u_2, v_2).\) Our calculations can be made much simpler by working relative to the vertex \(\mathbf P_0,\) so we let
\(\mathbf Q_1 = \mathbf P_1 - \mathbf P_0\)
\(\mathbf Q_2 = \mathbf P_2 - \mathbf P_0\)
\((s_1, t_1) = (u_1 − u_0, v_1 − v_0)\)
\((s_2, t_2) = (u_2 − u_0, v_2 − v_0).\)
We need to solve the following equations for T and B.
\(\mathbf Q_1 = s_1 \mathbf T + t_1 \mathbf B\)
\(\mathbf Q_2 = s_2 \mathbf T + t_2 \mathbf B\)
This is a linear system with six unknowns (three for each T and B) and six equations (the x, y, and z components of the two vector equations). We can write this in matrix form as follows.
\(\begin{bmatrix}(\mathbf Q_1)_x & (\mathbf Q_1)_y & (\mathbf Q_1)_z \\ (\mathbf Q_2)_x & (\mathbf Q_2)_y & (\mathbf Q_2)_z\end{bmatrix} = \begin{bmatrix}s_1 & t_1 \\ s_2 & t_2\end{bmatrix}\begin
{bmatrix}T_x & T_y & T_z \\ B_x & B_y & B_z\end{bmatrix}\)
Multiplying both sides by the inverse of the \((s, t)\) matrix, we have
\(\begin{bmatrix}T_x & T_y & T_z \\ B_x & B_y & B_z\end{bmatrix} = \dfrac{1}{s_1 t_2 - s_2 t_1}\begin{bmatrix}t_2 & -t_1 \\ -s_2 & s_1\end{bmatrix}\begin{bmatrix}(\mathbf Q_1)_x & (\mathbf Q_1)_y &
(\mathbf Q_1)_z \\ (\mathbf Q_2)_x & (\mathbf Q_2)_y & (\mathbf Q_2)_z\end{bmatrix}\)
This gives us the (unnormalized) T and B vectors for the triangle whose vertices are \(\mathbf P_0,\) \(\mathbf P_1,\) and \(\mathbf P_2.\) To find the tangent vectors for a single vertex, we average
the tangents for all triangles sharing that vertex in a manner similar to the way in which vertex normals are commonly calculated. In the case that neighboring triangles have discontinuous texture
mapping, vertices along the border are generally already duplicated since they have different mapping coordinates anyway. We do not average tangents from such triangles because the result would not
accurately represent the orientation of the bump map for either triangle.
Once we have the normal vector N and the tangent vectors T and B for a vertex, we can transform from tangent space into object space using the matrix
\(\begin{bmatrix}T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z\end{bmatrix}.\)
To transform in the opposite direction (from object space to tangent space—what we want to do to the light direction), we can simply use the inverse of this matrix. It is not necessarily true that
the tangent vectors are perpendicular to each other or to the normal vector, so the inverse of this matrix is not generally equal to its transpose. It is safe to assume, however, that the three
vectors will at least be close to orthogonal, so using the Gram-Schmidt algorithm to orthogonalize them should not cause any unacceptable distortions. Using this process, new (still unnormalized)
tangent vectors \(\mathbf{T'}\) and \(\mathbf{B'}\) are given by
\(\mathbf{T'} = \mathbf T - (\mathbf N \cdot \mathbf T)\mathbf N\)
\(\mathbf{B'} = \mathbf B - (\mathbf N \cdot \mathbf B)\mathbf N - (\mathbf{T'} \cdot \mathbf B)\mathbf{T'} / \mathbf{T'}^2.\)
Normalizing these vectors and storing them as the tangent and bitangent for a vertex lets us use the matrix
\(\begin{bmatrix}T'_x & T'_y & T'_z \\ B'_x & B'_y & B'_z \\ N_x & N_y & N_z\end{bmatrix}\)(*)
to transform the direction to light from object space into tangent space. Taking the dot product of the transformed light direction with a sample from the bump map then produces the correct
Lambertian diffuse lighting value.
It is not necessary to store an extra array containing the per-vertex bitangent since the cross product \(\mathbf N \times \mathbf{T'}\) can be used to obtain \(m \mathbf{B'},\) where \(m = \pm 1\)
represents the handedness of the tangent space. The handedness value must be stored per-vertex since the bitangent \(\mathbf{B'}\) obtained from \(\mathbf N \times \mathbf{T'}\) may point in the
wrong direction. The value of m is equal to the determinant of the matrix in Equation (*). You might find it convenient to store the per-vertex tangent vector \(\mathbf{T'}\) as a four-dimensional
entity whose w coordinate holds the value of m. Then the bitangent \(\mathbf{B'}\) can be computed using the formula
\(\mathbf{B'} = T'_w(\mathbf N \times \mathbf{T'}),\)
where the cross product ignores the w coordinate. This works nicely for vertex shaders by avoiding the need to specify an additional array containing the per-vertex m values.
Bitangent versus Binormal
The term binormal is commonly used as the name of the second tangent direction (that is perpendicular to the surface normal and u-aligned tangent direction). This is a misnomer. The term binormal
pops up in the study of curves and completes what is known as a Frenet frame about a particular point on a curve. Curves have a single tangent direction and two orthogonal normal directions, hence
the terms normal and binormal. When discussing a coordinate frame at a point on a surface, there is one normal direction and two tangent directions, which should be called the tangent and bitangent.
Source Code
The code below generates a four-component tangent T in which the handedness of the local coordinate system is stored as \(\pm 1\) in the w-coordinate. The bitangent vector B is then given by \(\mathbf B = T_w(\mathbf N \times \mathbf T).\)
#include "TSVector4D.h"

struct Triangle
{
    unsigned short index[3];
};

void CalculateTangentArray(long vertexCount, const Point3D *vertex,
    const Vector3D *normal, const Point2D *texcoord, long triangleCount,
    const Triangle *triangle, Vector4D *tangent)
{
    Vector3D *tan1 = new Vector3D[vertexCount * 2];
    Vector3D *tan2 = tan1 + vertexCount;
    ZeroMemory(tan1, vertexCount * sizeof(Vector3D) * 2);

    for (long a = 0; a < triangleCount; a++)
    {
        long i1 = triangle->index[0];
        long i2 = triangle->index[1];
        long i3 = triangle->index[2];

        const Point3D& v1 = vertex[i1];
        const Point3D& v2 = vertex[i2];
        const Point3D& v3 = vertex[i3];

        const Point2D& w1 = texcoord[i1];
        const Point2D& w2 = texcoord[i2];
        const Point2D& w3 = texcoord[i3];

        float x1 = v2.x - v1.x;
        float x2 = v3.x - v1.x;
        float y1 = v2.y - v1.y;
        float y2 = v3.y - v1.y;
        float z1 = v2.z - v1.z;
        float z2 = v3.z - v1.z;

        float s1 = w2.x - w1.x;
        float s2 = w3.x - w1.x;
        float t1 = w2.y - w1.y;
        float t2 = w3.y - w1.y;

        float r = 1.0F / (s1 * t2 - s2 * t1);
        Vector3D sdir((t2 * x1 - t1 * x2) * r, (t2 * y1 - t1 * y2) * r,
                (t2 * z1 - t1 * z2) * r);
        Vector3D tdir((s1 * x2 - s2 * x1) * r, (s1 * y2 - s2 * y1) * r,
                (s1 * z2 - s2 * z1) * r);

        tan1[i1] += sdir;
        tan1[i2] += sdir;
        tan1[i3] += sdir;

        tan2[i1] += tdir;
        tan2[i2] += tdir;
        tan2[i3] += tdir;

        // Advance to the next triangle (the index reads above depend on it).
        triangle++;
    }

    for (long a = 0; a < vertexCount; a++)
    {
        const Vector3D& n = normal[a];
        const Vector3D& t = tan1[a];

        // Gram-Schmidt orthogonalize
        tangent[a] = (t - n * Dot(n, t)).Normalize();

        // Calculate handedness
        tangent[a].w = (Dot(Cross(n, t), tan2[a]) < 0.0F) ? -1.0F : 1.0F;
    }

    delete[] tan1;
}
Tag-Teaming with Orac: Bad, Bad Breast Cancer Math in JPANDS
My friend, fellow ScienceBlogger, and BlogFather Orac asked me to take a look at a paper that purportedly shows that abortion is a
causative risk factor for breast cancer, which he posted about
this morning. When the person who motivated me to start what’s turned out to be a shockingly
successful blog asks for something, how could I possibly say no? Especially when it’s such a great example
of the misuse of mathematics for political purposes?
The paper is “The Breast Cancer Epidemic: Modeling and Forecasts Based on Abortion and Other Risk
Factors”, by Patrick S. Carroll, published in the Journal of American Physicians and Surgeons (JPANDS).
Before getting to the meat of the paper, there’s a couple of preliminary things to say about it.
In an ideal world, every time you read a paper, you’d study every bit of it in great, absorbing detail. But in the real world, you can’t do that. There are too many papers; if you tried to give every
paper a full and carefully detailed reading, even if you never stopped to eat and sleep,
you’d be falling further behind every day. So a major skill that you acquire when you learn
to do research is how to triage, and decide how much attention to give to different kinds of papers.
One thing that you should always consider when you set out to look at a paper is look at
its conclusions. In general, there are a few basic kinds of papers. There are papers presenting
entirely new information; there are papers that are adding something new to an established
consensus; there are papers that are just piling more data onto an established consensus; and there
are papers that are refuting an established consensus. The way that you read a paper depends on
what kind of paper it is.
If a paper is just piling on more evidence, you look at the data that it presents – and don’t pay a lot of attention to much else, because they’re rehashing what’s already been said. The only really
interesting thing in a paper like that is the data. So you focus your attention on the data, how it
was gathered and how it was analyzed, and what (if anything) it adds to what we already know.
If a paper adds something new to a consensus, then you give it more careful attention. You’re
still focused primarily on the data, but you also want to carefully look at how the data was
gathered and analyzed, to see if the new information that they’re adding is valid.
The first and last kinds of paper: the ones that present something totally new, and the ones that refute something for which there is a lot of strongly supported data, you read with
much greater care and attention to detail. These are the papers that make the strongest claims,
and which haven’t been carefully looked at by many different people yet, so they require the most
careful attention and analysis. This paper is a member of that last class: it’s claiming to find
a statistical link which many careful studies have not found. So it’s in line for a very
careful reading.
So what’s the source? Well, it’s published in JPANDS. JPANDS is a terrible journal. In fact, the first post on Good Math/Bad Math was a critique of a JPANDS paper that used some of the worst
statistics that I’ve ever seen published. That’s bad – the paper is appearing in a very low-credibility journal with a history of not carefully reviewing statistical analysis. That’s certainly not
to justify ignoring the paper – but the quality of the journal is a valid consideration. A paper about
this topic that appears in a prestigious cancer or epidemiology journal has more credibility than
a paper that appears in a journal known for publishing garbage. It, quite naturally, brings
to mind the question “Why publish this work in a non-MEDLINE indexed, low quality journal?” Like I said, it’s not enough to ignore the paper, but it does raise red flags right away: this is a paper
where you’re going to have to give the data and its analysis a very careful read.
So, on to the paper. What the paper does is select a set of potential risk factors for breast cancer, and then compare the incidence of those risk factors in a group of populations with the incidence of breast cancer in those same populations. That’s a sort-of strange approach. At best, that approach can
show a statistical correlation, but it’s going to be a weak one – because it doesn’t maintain any link
between individuals with risk factors and the incidence of disease. In general, you use
a correlative study like that when you can’t associate risk factors and incidences with specific
individuals. The author does address this point: he says that it’s difficult for epidemiologists to
obtain information about whether a particular woman had an abortion. So that addresses that criticism, but
the fact remains that it’s going to be much harder to establish a causal link rather than a correlative link using this methodology.
To try to build a model, he selects a list of 7 risk factors: abortion, higher age at first live
birth, childlessness, number of children, breastfeeding, hormonal contraceptive use, and hormone
replacement therapy. This list raises some red flags. It omits a large number of well-known risk factors
which could easily outweigh the factors that are included in the list: smoking, alcohol, genetic risk,
race. (Orac has more to say about that.) But what’s also important to notice is that these factors are
not independent. The number of women who breastfeed is, obviously, strongly correlated with the
number who’ve had children. The women who have a large number of children are much more likely to have
their first child at a younger age than the women who had only one or two children. And it ignores
important correlative factors: higher income women tend to have fewer children, later age at first birth,
and higher rates of breastfeeding. This list looks fishy.
But what comes next is where things just totally go off the rails. He takes the 7 risk factors,
and using information from public health services, does a linear regression of risk factor versus cancer incidence over time. If the linear regression doesn’t produce a strong positive correlation,
he throws it away. The fact that this means that he’s asserting that well-known and well-supported
correlations should be discarded as invalid isn’t even mentioned. But what’s worse is, it’s clearly quite deliberate.
On page two, he shows a graph of the data for “mean age of first live birth” plotted against breast
cancer risk. How does he assemble the graph for the linear regression? For each year, he takes the
complete set of women born that year. Then he computes the average age of first birth for all women born
that year, and tries to correlate it with the breast cancer incidence for women born in that year. That’s
ridiculous. It is a completely unacceptable and invalid use of statistics. Anyone who’s
even taken a college freshman course in stats should know that that is absolutely ridiculous. It’s very
deliberately ignoring independence from other variables, in obviously foolish ways. I just don’t even
know how to mock this, because it’s so off-the-wall ridiculous.
There’s another obvious problem with the whole methodology, which pales in comparison to
the dreadful way that they selected data. But I’ll mention it anyway. Linear regression and correlation
coefficient measures how well a linear relationship matches the data. It doesn’t test for
anything else. But there are numerous correlative and/or causal relationships that don’t show a
simple linear relationship. For example, if you look at alcohol consumption plotted against
various diseases, there’s often an initial decrease in risk, which bottoms out and is followed by a large increase in risk. There are often threshold effects, where something doesn’t start
to have an impact until beyond a minimum threshold. And so on. There’s a lot more to things than
just linear correlation. But all that the author considers is linear correlation. He gives no reason
for that, and makes no attempt to justify it. It’s just presented as if it’s beyond question.
Based on those linear regressions, he totally discards everything without a strong linear correlation
as being irrelevant factors that don’t need to be included in the model. That
leaves him with only two factors: fertility (number of live births) and abortion. So then, once
again building on the assumption that linear relationships are the only things that matter, he says
that they can model the breast cancer incidence via a simple linear equation:
Y[i] = a + b[1]x[1i] + b[2]x[2i] + e[i]
In this, Y[i] is the breast cancer incidence in the group of women of age i;
x[1i] is a measure of the number of abortions; x[2i] is a measure of fertility.
They then do another linear regression using this equation to come up with coefficients for
the two measured quantities. The coefficient for fertility is -0.0047, with a 95% confidence interval ranging from -0.0135 to +0.0041. In other words, according to their measure, fertility – the rate of live birth – is not a significant factor in breast cancer rates compared to abortion.
Right there, we can stop looking at the paper. When a mathematical model generates an
incredibly ridiculous result, something which is in direct and blatant contradiction with
the known data, you throw the model right out the window, because it’s worthless. The notion that
abortion as a risk factor for breast cancer completely dwarfs the reduction in risk after
childbirth – when we know that having children causes a dramatic decrease in the risk of
breast cancer – is unquestionably wrong. If it were true, what it would mean is that the
number of cases of breast cancer among women who had no children but had an abortion (which,
from what I can estimate from data from a variety of websites is somewhere around 15%) is
so high that it can completely dwarf the risk reduction among women who did
have children (>80%). If that were the case, it would be incredibly obvious in the statistics
of breast cancer rates – you’d have a small sub-population causing an inordinately huge
portion of the breast cancer rates. We know that things like that are easily visible: that’s how
we discovered the so-called “breast cancer genes” – a small group of women were
dramatically more likely to have breast cancer than the population at large.
So we’ve got a model which doesn’t fit reality. What a real scientist does when this happens
is to say “Damn, I was wrong. Back to the ol’ drawing board”, and try to find a new model that
does fit with reality.
But not this intrepid author. He tries to handwave his way past the fact that his model is
wrong, by saying “The coefficient of fertility is rather small, with the 95% confidence interval straddling zero. Some improvement in breastfeeding may be offsetting fertility decline.” No, sorry,
you can’t say “My mathematical model has absolutely no relation with reality, but that’s probably because one of the factors that I excluded is probably important, and so now I’m going to go on
pretending that the model works.”
The model is wrong. Invalid models do not produce valid results. Stop. Do not pass go. Do not collect $200. Do not get your paper published in a decent journal. Do get laughed at by people who aren’t
clueless jackasses.
At this point, we can see just why this paper appeared in a journal like JPANDS. Because it’s
crap that’s just attempting to justify a political position using incredibly sloppy math; math so
bad that a college freshman should be able to see what’s wrong with it. But for the “reviewers” at JPANDS, apparently a college freshman level of knowledge of statistics isn’t necessary for reviewing
a paper on statistical epidemiology.
0 thoughts on “Tag-Teaming with Orac: Bad, Bad Breast Cancer Math in JPANDS”
1. spudbeach
Like I told the high school physics class I taught: The difference between a scientist and a crank is that a scientist can admit a mistake and throw away a theory.
I guess this paper just gives another example of that.
2. gg
MarkCC wrote: “So what’s the source? Well, it’s published in JPANDS. JPANDS in a terrible journal.”
It’s quite fascinating you mention this, because I just received last week a mass mailing spearheaded by Frederick Seitz, pushing a petition denying the existence of global warming. It included
as proof a January, 2000 op-ed in the Wall Street Journal, and a 2007 paper in – you guessed it – JPANDS.
3. bsci
The JPANDS mission statement includes:
“…a commitment to publishing scholarly articles in defense of the practice of private medicine, the pursuit of integrity in medical research…Political correctness, dogmatism and orthodoxy will be
challenged with logical reasoning, valid data and the scientific method.”
The same link also says they have a new focus and name (from Medical Sentinel) and “We have eliminated the news capsules that tend to be old news by the time a quarterly journal is published,
and reduced the political commentary in favor of more articles of a scientific nature, particularly if they are relevant to contemporary policy debates.”
They do have double-blind peer review so just because you criticize them here doesn’t rule out your chances of getting published. ;-)
They have published on cancer/abortion and vaccine/autism links in the past.
It is the journal of the AAPS (which is a very political organization):
4. Jeb, FCD
I had to stop reading here:
When a woman is nulliparous, an induced abortion has a greater
carcinogenic effect because it leaves breast cells in a state of
interrupted hormonal development in which they are more
Seems to me one round of birth control pills after an abortion should, given the logic in the quote, “reset” the breast cells to their quiescent state.
5. Mark C. Chu-Carroll
“Seems to me” is bad science. There are plenty of things that seem to make sense, but which are wrong. So doing a study like this to examine the question makes sense.
The problem with the paper isn’t that they’re examining that hypothesis. The problem is *how* they examine it.
There’ve been other studies that examine that question, and their results have been, pretty uniformly, that there’s no statistically significant difference.
What this author does is slap together a piss-poor analysis to create the result he wants, without regard for what the data actually says.
6. other bill
Don’t forget Simpson’s Paradox. This is a rather common problem when dealing with aggregated data in epidemiology and economics, and one of its fingerprints is a disconnect between known
individual relationships and what you see in aggregate relationships.
7. other bill
The paper is rather vague on the data collection methods, but note that the plots have varying time axes (1926-1950, 1923-1968, 1926-1954) and the regression is only based on a further subset of
15 data points in table 1 out of 25 in Figure 3.
The response variable is roughly monotone in time (figure 3), so any predictor that is also monotone is going to have a kick-ass correlation coefficient, no causality needed.
8. John Johnson
It gets even worse. This kind of study violates the learn-confirm paradigm by model-building and conclusion-drawing with the same set of data, and no attempt at any sort of validation of the model.
Nor is there any sort of well-known model-building approach, such as stepwise regression, even attempted. (Ok, ok, so this is second year statistics, but still, easily accessible.)
9. PalMD
“I just don’t even know how to mock this”
I’ll help you…”it’s not even wrong”.
10. Jim
Thank you for posting this information. Unfortunately, the paper will be quoted by those groups who want to show “proof” that you should “be fruitful and multiply” and “not have abortions”. I can
respect moral arguments for not having abortions, but the paper doesn’t provide scientific support for it.
11. Rev.Enki
Given the proposed model (leaving the breast cells in a state of “interrupted hormonal development) I can’t see what the difference would be between an abortion and a miscarriage, except that the
latter would generally be far more common, at least during early pregnancy. That wouldn’t necessarily make his argument false (though it’s pretty obvious it is, for other reasons) but it would
suggest that focusing on the politically hot abortion issue would be a bit suspect when the real effect would be considerably more general than that. Why isn’t he out there terrorizing women who
just had miscarriages instead, or as well?
Now *that* is a rhetorical question.
12. Glen
Hey Mark,
“It’s very deliberately ignoring independence from other variables”
Did you mean a different word than “independence” there?
Analysis problems aside, when you get a result that flies in the face of accepted understanding, you have to say “well, either I’m completely wrong, or I’ve really found something worth
investigating”, and the next place to go is to try and figure out where you went wrong (since that’s much more likely).
Only when you can’t find any alternative explanation for what you find do you start looking at what has gone before and try to figure out why it might be wrong.
In either case (your results don’t fit because you’re wrong or they don’t fit because you’re right and everyone else is wrong), you have a lot of further work to do before you can publish.
The first response to a weird result should not be “publish!”.
13. Doug
[Then he computes the average age of first birth for all women born that year, and tries to correlate it with the breast cancer incidence for women born in that year. ]
I’d say a high schooler shouldn’t make this mistake. He wants to imply something about causality from an average? Averages come as mathematical information, not physical causes as basically
everyone knows.
[But all that the author considers is linear correlation. He gives no reason for that, and makes no attempt to justify it. It’s just presented as if it’s beyond question.]
Of course, the math gives him a linear correlation. Math must speak the truth, so the linear correlation must indicate something real. (please notice the sarcasm here.)
[In other words, according to their measure, fertility – the rate of live birth – is not a significant factor in breast cancer rates compared to abortion.]
Never mind that live birth causes massive chemical changes in women’s bodies. Especially, in the few days/week after children leave the womb.
[The notion that abortion as a risk factor for breast cancer completely dwarfs the reduction in risk after childbirth – when we know that having children causes a dramatic decrease in the risk of
breast cancer – is unquestionably wrong. ]
Strange, I pre-thought it would work the other way around. Nonetheless, I did at least think that having children would change the risk in breast cancer… as obviously having children yields
massive chemical and biological and physical changes. How can anyone think otherwise?
14. Joseph Hertzlinger
I just don’t even know how to mock this, because it’s so off-the-wall ridiculous.
The top three states in home ownership are West Virginia, Mississippi, and Alabama. Clearly, poverty is a risk factor for home ownership.
Not absurd enough?
Okay. To return to topic of the paper being discussed, there’s also a negative correlation between UFO sightings and abortion. Clearly, the saucer people are pro-life.
15. other bill
This paper presents a cohort analysis, a rather basic tool for comparing populations. The unit of analysis is the cohort and not the elements comprising the cohort. The meaningfulness of a
predictor (such as years of nulliparity) depends on how common it is to the members of a cohort and the linearity of the effect on the outcome. It is difficult to project cohort results down to
members of the cohort (which is what the author wishes to do) unless the members of the cohort have common exposure, which is not the case here. He’s not aggregating a linear relationship.
Some passing technical notes: (statistical) independence is not required for regression analysis. That’s part of what the inv(X`X)X` term adjusts for in a regression formula. The coefficients are
partial coefficients, adjusted for the other regressors. It also means his marginal tests are irrelevant to the partial contribution of a regressor (the coefficient in the presence of all other
regressors). What he should have looked at was the partial coefficients for his seven predictors. Of course with a sample size of 15, very little is going to be “significant”. I’d guess that he
originally tried that, went “oops” and started hunting for something publishable.
Linear correlation coefficients pick up on all sorts of monotone trends quite nicely. As Mark notes, they botch non-monotone ones.
There is nothing unusual or wrong about looking at cohort data in a scatter plot. That’s how you pick up non-linear relationships. He’s trying to cram three plots into one in the paper (X vs. Y,
X vs Cohort Year, and Y vs Cohort Year). A scatterplot matrix would have been more informative here, as well as highlighting the shifting sample sizes.
16. SLC
As I stated in a comment on Dr. Orac’s blog, the notion that the anti-abortionists have any interest in scientific integrity is piffle. Their only interest is in making abortion illegal and lying
to achieve that objective is considered perfectly acceptable. It’s called lying for Joshua of Nazareth.
1. Philip Larson
You might be taken seriously if you weren’t committed to propaganda yourself.
At a minimum, don’t refer to people using designations they would not accept.
1. MarkCC Post author
You might be taken more seriously if you had any clue that you were replying to a seven year old comment.
1. Philip Larson
Yes, I had noticed that before I replied.
That you are bothered by something that isn’t true is curious.
17. MoBo
Before we talk about who’s being biased and selective, I think it’s worth actually looking into what breast cancer is at the molecular level.
That’s an easy-to-understand, recent paper by one of the giants in breast cancer research – Dr. Russo et al., and a fact sheet by Breast Cancer Prevention Institute:
18. Monado, FCD
Thanks for publishing this. I don’t have even second-year statistics so every little bit helps. Conclusion: the author and the journal have an axe to grind.
I think you mean “JPANDS is a terrible journal.”
One of the big risk factors identified demographically by, of all people, Adele Davies (the author of popular books about nutrition and vitamins half a century ago), was the amount of fat in the
diet before menarche. She correlated breast cancer rates with countries and noted that girls who moved to high-fat countries (e.g. the U.S.) from low-fat countries (e.g. Japan) tended to keep the
good statistics. I did not go to the original papers to see if she was reporting them accurately. But I’d be very interested to see someone follow up on her hypothesis.
Question ID - 54551 | SaraNextGen Top Answer
In the above Question find apparent weight of the object?
a) 3 N b) Zero c) 2 N d) 0.2 N
But as the satellite is a freely falling body, everything inside it is also in free fall, so the apparent weight of the object is zero (option b).
Excluded Middle in Coq
Excluded middle is an important theorem in classical logic. However, in intuitionistic logic, the law of excluded middle doesn't satisfy the constructive principle. The excluded middle law is not provable in Coq, and it is often confused with some other theorems in Coq.
Proof by Contradiction
Proof by contradiction is an important technique in practical proof work. It means: in order to prove $\Phi$, use $\neg \Phi$ as a new given and attempt to deduce a false statement ($\bot$).
Excluded Middle in Classical Logic
These five laws in classical logic could safely be added as axioms to constructive logic without causing any inconsistency. We cannot prove the negation of such an axiom.
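Indeed, while excluded middle itself cannot be derived constructively, its double negation can, which is exactly why its negation can never be proved. Here is a sketch of that fact in Lean 4 syntax (used here for comparison; the analogous proof goes through in Coq):

```lean
-- Assuming ¬(P ∨ ¬P) and deriving False yields ¬¬(P ∨ ¬P),
-- without ever invoking the excluded middle itself.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```

Unfolding ¬, the proof only ever constructs functions into False, so it stays entirely within intuitionistic logic.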
Percentage Change Calculator
You can use our calculator to work out the percentage increase or decrease from one value to another. Or, scroll down to read our explanation of what percentage change is and how to calculate it.
Disclaimer: Whilst every effort has been made in building our calculator tools, we are not to be held liable for any damages or monetary losses arising out of or in connection with their use.
Understanding percentage change
Let's begin by discussing what percentage change is, and we can then go through how to calculate it, with some examples to help.
Percentage change is a fundamental mathematical concept that allows us to measure and compare the relative difference between two values. Performing this calculation can help us see how much
something has increased or decreased in value or quantity over time.
You'll find percentage change figures widely referenced in fields such as finance, economics and statistics. For example, you may find yourself looking at a statement for an investment and wondering
what the percentage increase (in interest) has been compared to your initial investment.
You'll also find occasions in everyday life where calculating percentage change is useful. For example, there may be a sale at your local electrical store, and you might find yourself wanting to
compare the price of an item before and after a discount, to evaluate the discount.
How to calculate percent change
To calculate percentage change, subtract your initial value from your final value to get the difference, and then divide that figure by the initial value. Multiply the result by 100 to get the
percentage: (finalValue − initialValue) / initialValue × 100.
Here's the step-by-step process for calculating percentage change:
1. Determine the initial value (old value) and the final value (new value).
2. Subtract the initial value from the final value: final value - initial value.
3. Divide the resulting figure by the initial value.
4. Multiply the result by 100 to convert to a percentage.
The result from above represents the percentage change between the initial value and the final value. If the result is positive, it indicates an increase, while a negative result signifies a decrease.
Let's have a look at this as a formula...
Percentage change = (finalValue - initialValue) / initialValue × 100
Using this formula, when numbers go up, you can work out the percentage increase. When numbers go down, you can work out the percentage decrease. You need to make sure you do the calculation in the
correct order, using PEMDAS. So, numbers within brackets first.
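If you'd like to double-check the arithmetic in code, here is a small Python sketch of the same formula (the function name is our own choice, not part of any calculator):

```python
def percent_change(initial, final):
    """Percentage change from initial to final.

    A positive result means an increase, a negative one a decrease.
    """
    if initial == 0:
        raise ValueError("initial value must be non-zero")
    return (final - initial) / initial * 100

percent_change(1000, 800)  # laptop discount example: -20.0
percent_change(650, 700)   # rent example: about 7.69
```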
Percentage discount example
Example scenario 1: You are shopping for a new laptop, and you come across a model that initially costs $1,000. However, you notice that it is on sale at a discounted price. You want to calculate the
percentage change in price.
First, let's get our values:
• Initial price: $1,000 (old value).
• Discounted price: $800 (new value).
Now, we can calculate the percentage change:
• Subtract the initial price from the discounted price: $800 - $1,000 = -$200.
• Divide the result by the initial price: -$200 / $1,000 = -0.2.
• Multiply the resulting figure by 100 to convert it into a percentage: -0.2 * 100 = -20%.
The calculated percentage change is -20%, indicating a 20% discount on the original price.
Percentage increase example
Example scenario 2: You find out that your landlord is increasing your rent from $650 to $700 per month and you want to know what the percentage increase is. Let's work it out...
• New number - old number = increase
• $700 (new rent) - $650 (old rent) = $50 increase
• (increase / old number) × 100
• $50 (increase) / $650 (old number) = 0.0769.
• Then × 100 = 7.69% increase
The increase in rent in our example works out at 7.69%.
To check your calculation, you can use the calculator at the top of the page. Simply enter your 650 figure into the first box and 700 into the second.
I hope you found our calculator and article useful. You can learn more about how to calculate percentage change in our article here. And, it's well worth bookmarking our main percentage calculator
for everyday percentage calculations.
Calculator by Alastair Hazell
If you have any problems using our percentage change calculator, please contact us.
Power and Sample Size
How Do I Perform Power and Sample Size Calculations for a One Sample t-Test?
Power and Sample Size – One Sample t-Test – Customer Data
Using the One Sample t-Test, we determined that Customer Types 1 and 3 resulted in "Fail to reject H0: μ = 3.5". A failure to reject H0 does not mean that we have proven the null to be true. The question that we want to consider here is: what was the power of the test? Restated: what was the likelihood that, given Ha: μ ≠ 3.5 was true, we would have rejected H0 and accepted Ha? To answer this, we will use the Power and Sample Size Calculator.
Tip: Typical sample size rules of thumb address confidence interval size and robustness to normality (e.g. n=30). Computing power is more difficult because it involves the magnitude of change in mean
to be detected, so one needs to use the power and sample size calculator.
Please use the following guidelines when using the power and sample size calculator:
Power >= .99 (Beta Risk is <= .01) is considered Very High Power
Power >= .95 and < .99 (Beta Risk is <= .05) is High Power
Power >= .8 and < .95 (Beta Risk is <= .2) is Medium Power. Typically, a power value of .9 to detect a difference of 1 standard deviation is considered adequate for most applications. If the data
collection is difficult and/or expensive, then .8 might be used.
Power >= .5 and < .8 (Beta Risk is <= .5) is Low Power (not recommended).
Power < .5 (Beta Risk is > .5) is Very Low Power (do not use!).
1. Click SigmaXL > Statistical Tools > Power and Sample Size Calculators > 1 Sample t-Test Calculator. We will only consider the statistics from Customer Type 3 here. We will treat the problem as a
two-sided test with Ha: Not Equal To, to be consistent with the original test.
2. Enter 27 in Sample Size (N). The difference to be detected in this case would be the difference between the sample mean and the hypothesized value, i.e. 3.6411 - 3.5 = 0.1411. Enter 0.1411 in
Difference. Leave Power value blank, with Solve For Power selected (default). Given any two values of Power, Sample size, and Difference, SigmaXL will solve for the remaining selected third
value. Enter the sample standard deviation value of 0.6705 in Standard Deviation. Keep Alpha and Ha at the default values as shown:
3. Click OK. The resulting report is shown:
4. A power value of 0.1836 is very poor. It is the probability of detecting the specified difference. Alternatively, the associated Beta risk is 1-0.1836 = 0.8164 which is the probability of failure
to detect such a difference. Typically, we would like to see Power > 0.9 or Beta < 0.1. In order to detect a difference this small we would need to increase the sample size. We could also set the
difference to be detected as a larger value.
5. First we will determine what sample size would be required in order to obtain a Power value of 0.9. Click Recall SigmaXL Dialog menu or press F3 to recall last dialog. Select the Solve For Sample
Size button as shown. It is not necessary to delete the entered sample size of 27; it will be ignored. Enter a Power Value of .9:
6. Click OK. The resulting report is shown:
7. A sample size of 240 would be required to obtain a power value of 0.9. The actual power is rarely the same as the desired power due to the restriction that the sample size must be an integer. The
actual power will always be greater than or equal to the desired power.
8. Now we will determine what the difference would have to be to obtain a Power value of 0.9, given the original sample size of 27. Click Recall SigmaXL Dialog menu or press F3 to recall last
dialog. Select the Solve For Difference button as shown:
9. Click OK. The resulting report is shown:
10. A difference of 0.435 would be required to obtain a Power value of 0.9, given the sample size of 27.
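For readers who want to sanity-check these figures outside SigmaXL, here is a rough Python sketch. It uses a normal approximation, whereas SigmaXL computes the exact value from the noncentral t distribution, so the numbers differ slightly (about 0.19 here versus the exact 0.1836):

```python
from math import sqrt
from statistics import NormalDist

def approx_power_one_sample(diff, sd, n, alpha=0.05):
    """Approximate power of a two-sided one-sample test.

    Normal approximation only; the exact SigmaXL figure comes from
    the noncentral t distribution.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    ncp = abs(diff) / sd * sqrt(n)       # noncentrality parameter
    # Probability the test statistic lands beyond either critical value
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

low = approx_power_one_sample(0.1411, 0.6705, 27)   # about 0.19 (exact: 0.1836)
high = approx_power_one_sample(0.435, 0.6705, 27)   # about 0.9
```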
Power and Sample Size – One Sample t-Test – Graphing the Relationships between Power, Sample Size, and Difference
In order to provide a graphical view of the relationship between Power, Sample Size, and Difference, SigmaXL provides a tool called Power and Sample Size with Worksheets. Similar to the Calculators,
Power and Sample Size with Worksheets allows you to solve for Power (1 - Beta), Sample Size, or Difference (specify two, solve for the third). You must have a worksheet with Power, Sample Size, or
Difference values. Other inputs such as Standard Deviation and Alpha can be included in the worksheet or manually entered.
1. Open the file Sample Size and Difference Worksheet.xls, select the Sample Size & Diff sheet tab. Click SigmaXL > Statistical Tools > Power & Sample Size with Worksheets > 1 Sample t-Test. If
necessary, check Use Entire Data Table. Click Next.
2. Ensure that Solve For Power (1 - Beta) is selected. Select Sample Size (N) and Difference columns as shown. Enter the Standard Deviation value of 1. Enter .05 as the Significance Level value:
Note: By setting Standard Deviation to 1, the Difference values will be a multiple of Standard Deviation.
Click OK. The output report is shown below:
3. To create a graph showing the relationship between Power, Sample Size and Difference, click SigmaXL > Statistical Tools > Power & Sample Size Chart. Check Use Entire Data Table. Click Next.
4. Select Power (1 - Beta), click Y Axis (Y); select Sample Size (N), click X Axis (X1); select Difference, click Group Category (X2). Click Add Title. Enter Power & Sample Size: 1 Sample t-Test:
5. Click OK. The resulting Power & Sample Size Chart is displayed:
COLLECT and MAX
I have been trying to find the right combination of formulas to do this in place. I am trying to report the name of a contact in the same row as a max value. I have attached an example photo to help
describe my situation
In the scenario, I am trying to look up the maximum number in the Parts Column and return the managers name on that row. The formula will be in a sheet summary. In the example above, the max value
would be 300 and the formula should return the contact "Corey Gill"
Any help will be appreciated. I think I am making the solution more complicated than it needs to be.
Best Answer
• @COREY GILL I believe that combining INDEX, MATCH and MAX will get you what you're looking for. This sample formula is using INDEX to evaluate the Manager column, and combining MATCH(MAX) to
determine the highest Parts value to return as the row to return for the INDEX formula, which should get you exactly what you're looking for. There are probably other ways to do this, but this
should work.
=INDEX(Manager:Manager, MATCH(MAX(Parts:Parts), Parts:Parts))
Hope that helps!
• @Alexander Ford Since you were so quick on the last one, maybe you can help me take my formulas a bit further. Below is a similar example but with a corresponding date for each entry.
I have created a formula to find the 30 day rolling average of the number of parts. It looks like this:
=AVG(COLLECT(Parts:Parts, Date:Date, >=TODAY(-30), Parts:Parts, Parts:Parts >= 1))
What I would like to have to complement this 30 day average is to identify the Manager that had the highest number of parts in the last 30 days. In the example above, it would be Corey G as the
manager with the highest number of parts in the last 30 days. Any insight?
• This one took a little hacking but I believe it should work. For the MATCH function, it seems you have to include the 0 at the end of the formula for non-sorted search type. In retrospect, I'm
not sure why this wasn't required for the previous formula in this thread to function correctly, but someone much smarter than myself might be able to provide some feedback on that... :)
=INDEX(Manager:Manager, MATCH(MAX(COLLECT(Parts:Parts, Date:Date, >TODAY(-30))), Parts:Parts, 0))
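• For anyone who wants to prototype this logic outside Smartsheet, the same "max Parts within the last 30 days, return the Manager" lookup can be sketched in Python. The sample rows below are invented for illustration; only the name Corey G comes from this thread:

```python
from datetime import date, timedelta

# Invented sample rows; only "Corey G" appears in the thread itself
rows = [
    {"Manager": "Alice P", "Parts": 120, "Date": date.today() - timedelta(days=5)},
    {"Manager": "Corey G", "Parts": 300, "Date": date.today() - timedelta(days=10)},
    {"Manager": "Dana R",  "Parts": 210, "Date": date.today() - timedelta(days=45)},
]

cutoff = date.today() - timedelta(days=30)
recent = [r for r in rows if r["Date"] > cutoff]   # COLLECT(..., Date:Date, >TODAY(-30))
top = max(recent, key=lambda r: r["Parts"])        # MATCH(MAX(...)) on Parts
top["Manager"]                                     # INDEX(Manager:Manager, ...) -> "Corey G"
```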
Backtracking not working on a Hitori matrix
Why Backtracking Fails on Your Hitori Matrix: A Guide to Debugging
Let's say you're building a Hitori solver using backtracking. You've got your code all set, but it's not working. You're staring at your Hitori matrix, wondering why your backtracking algorithm isn't
finding the right solution. This is a common problem, and often boils down to a misunderstanding of how backtracking should work within the constraints of a Hitori puzzle.
Here's a common scenario:
def solve_hitori(grid):
    for row in range(len(grid)):
        for col in range(len(grid[0])):
            if grid[row][col] != 0:
                # Try marking the cell as filled or empty
                grid[row][col] = 0
                if is_valid(grid):
                    if solve_hitori(grid):
                        return True
                grid[row][col] = 1
    return False

def is_valid(grid):
    # Checks for valid Hitori rules
    # (same number not adjacent horizontally or vertically)
    # (at least one filled cell in each row and column)
    pass
This code attempts a brute-force backtracking solution. However, it misses a crucial point about Hitori: the puzzle requires you to make cells empty, not fill them.
The Problem: Backtracking the Wrong Way
In this example, the code tries to solve the Hitori puzzle by marking cells as either filled or empty. This isn't the correct approach because Hitori requires you to remove numbers, not add them. The
backtracking algorithm should be designed to mark cells as empty, not fill them.
Here's a more accurate analogy: Imagine a maze with some paths blocked. You can't solve it by blindly adding paths; you need to remove the wrong ones to find the right route. Similarly, Hitori
requires you to remove numbers to find the correct solution.
Fixing the Code: Focusing on Empty Cells
To correct the backtracking algorithm, modify it to prioritize making cells empty:
def solve_hitori(grid):
    for row in range(len(grid)):
        for col in range(len(grid[0])):
            if grid[row][col] != 0:
                original = grid[row][col]
                # Try making the cell empty
                grid[row][col] = 0
                if is_valid(grid):
                    if solve_hitori(grid):
                        return True
                # If empty doesn't work, restore the original value
                grid[row][col] = original
    return False

def is_valid(grid):
    # Checks for valid Hitori rules
    # (no value repeated among the remaining cells of a row or column)
    # (no two empty cells adjacent horizontally or vertically)
    pass
This modified code focuses on making cells empty. The is_valid function should ensure that the new empty cell doesn't violate any Hitori rules.
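To make that concrete, here is one hypothetical way is_valid could be written. It checks the standard Hitori constraints (no value repeated among remaining cells of a row or column, no two empty cells adjacent); the connectivity rule is omitted here for brevity:

```python
def is_valid(grid):
    """Check Hitori constraints on a grid where 0 marks an emptied cell.

    Omits the connectivity rule (all remaining cells forming one
    group) to keep the sketch short.
    """
    n, m = len(grid), len(grid[0])
    # No two emptied cells may touch horizontally or vertically
    for r in range(n):
        for c in range(m):
            if grid[r][c] == 0:
                if r + 1 < n and grid[r + 1][c] == 0:
                    return False
                if c + 1 < m and grid[r][c + 1] == 0:
                    return False
    # No value may repeat among the remaining cells of a row
    for r in range(n):
        seen = [v for v in grid[r] if v != 0]
        if len(seen) != len(set(seen)):
            return False
    # ...or of a column
    for c in range(m):
        seen = [grid[r][c] for r in range(n) if grid[r][c] != 0]
        if len(seen) != len(set(seen)):
            return False
    return True
```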
Additional Considerations
• Pruning: To make your backtracking algorithm more efficient, consider adding pruning techniques. For example, if a row or column has only one cell with a specific number, that cell can never cause a conflict and must be kept.
• Constraint Propagation: You can further optimize your solution by using constraint propagation techniques. For example, if two adjacent cells have the same number, you know at least one of them
must be empty.
Understanding Hitori
Hitori, meaning "alone" in Japanese, is a logic puzzle where the goal is to empty (shade) cells so that no number appears more than once in any row or column.
Key Rules of Hitori:
• No repeated numbers: among the cells that remain, a number cannot appear more than once in any row or column.
• No adjacent empty cells: emptied (shaded) cells cannot be adjacent horizontally or vertically.
• Connectivity: the remaining cells must form a single connected group.
By understanding the core principles of Hitori and incorporating them into your backtracking algorithm, you can overcome the challenges of solving this logic puzzle efficiently.
Change of rings
In algebra, given a ring homomorphism ${\displaystyle f:R\to S}$, there are three ways to change the coefficient ring of a module; namely, for a right R-module M and a right S-module N,
• ${\displaystyle f_{!}M=M^{S}}$, the induced module.
• ${\displaystyle f_{*}M=\operatorname {Hom} _{R}(S,M)}$, the coinduced module.
• ${\displaystyle f^{*}N=N_{R}}$, the restriction of scalars.
They are related as adjoint functors:
${\displaystyle f_{!}:{\text{Mod}}_{R}\leftrightarrows {\text{Mod}}_{S}:f^{*}}$
${\displaystyle f^{*}:{\text{Mod}}_{S}\leftrightarrows {\text{Mod}}_{R}:f_{*}.}$
This is related to Shapiro's lemma.
Restriction of scalars
Throughout this section, let ${\displaystyle R}$ and ${\displaystyle S}$ be two rings (they may or may not be commutative, or contain an identity), and let ${\displaystyle f:R\to S}$ be a
homomorphism. Restriction of scalars changes S-modules into R-modules. In algebraic geometry, the term "restriction of scalars" is often used as a synonym for Weil restriction.
Suppose that ${\displaystyle M}$ is a module over ${\displaystyle S}$. Then it can be regarded as a module over ${\displaystyle R}$ where the action of ${\displaystyle R}$ is given via
{\displaystyle {\begin{aligned}M\times R&\longrightarrow M\\(m,r)&\longmapsto m\cdot f(r)\end{aligned}}}
where ${\displaystyle m\cdot f(r)}$ denotes the action defined by the ${\displaystyle S}$-module structure on ${\displaystyle M}$.^[1]
Interpretation as a functor
Restriction of scalars can be viewed as a functor from ${\displaystyle S}$-modules to ${\displaystyle R}$-modules. An ${\displaystyle S}$-homomorphism ${\displaystyle u:M\to N}$ automatically becomes
an ${\displaystyle R}$-homomorphism between the restrictions of ${\displaystyle M}$ and ${\displaystyle N}$. Indeed, if ${\displaystyle m\in M}$ and ${\displaystyle r\in R}$, then
${\displaystyle u(m\cdot r)=u(m\cdot f(r))=u(m)\cdot f(r)=u(m)\cdot r\,}$.
As a functor, restriction of scalars is the right adjoint of the extension of scalars functor.
If ${\displaystyle R}$ is the ring of integers, then this is just the forgetful functor from modules to abelian groups.
Extension of scalars
Extension of scalars changes R-modules into S-modules.
In this definition the rings are assumed to be associative, but not necessarily commutative, or to have an identity. Also, modules are assumed to be right modules. The modifications needed in the
case of left modules are straightforward.
Let ${\displaystyle f:R\to S}$ be a homomorphism between two rings, and let ${\displaystyle M}$ be a module over ${\displaystyle R}$. Consider the tensor product ${\displaystyle M^{S}=M\otimes _{R}S}
$, where ${\displaystyle S}$ is regarded as a left ${\displaystyle R}$-module via ${\displaystyle f}$. Since ${\displaystyle S}$ is also a right module over itself, and the two actions commute, that
is ${\displaystyle r\cdot (s\cdot s')=(r\cdot s)\cdot s'}$ for ${\displaystyle r\in R}$, ${\displaystyle s,s'\in S}$ (in a more formal language, ${\displaystyle S}$ is a ${\displaystyle (R,S)}$-
bimodule), ${\displaystyle M^{S}}$ inherits a right action of ${\displaystyle S}$. It is given by ${\displaystyle (m\otimes s)\cdot s'=m\otimes ss'}$ for ${\displaystyle m\in M}$, ${\displaystyle
s,s'\in S}$. This module is said to be obtained from ${\displaystyle M}$ through extension of scalars.
Informally, extension of scalars is "the tensor product of a ring and a module"; more formally, it is a special case of a tensor product of a bimodule and a module – the tensor product of an R-module
with an ${\displaystyle (R,S)}$ bimodule is an S-module.
One of the simplest examples is complexification, which is extension of scalars from the real numbers to the complex numbers. More generally, given any field extension K < L, one can extend scalars
from K to L. In the language of fields, a module over a field is called a vector space, and thus extension of scalars converts a vector space over K to a vector space over L. This can also be done
for division algebras, as is done in quaternionification (extension from the reals to the quaternions).
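In symbols, the complexification example reads:

```latex
V_{\mathbb{C}} \;=\; V \otimes_{\mathbb{R}} \mathbb{C},
\qquad
\dim_{\mathbb{C}} V_{\mathbb{C}} \;=\; \dim_{\mathbb{R}} V,
```

since any real basis of $V$ becomes a complex basis of $V_{\mathbb{C}}$ under $v \mapsto v \otimes 1$.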
More generally, given a homomorphism from a field or commutative ring R to a ring S, the ring S can be thought of as an associative algebra over R, and thus when one extends scalars on an R-module,
the resulting module can be thought of alternatively as an S-module, or as an R-module with an algebra representation of S (as an R-algebra). For example, the result of complexifying a real vector
space (R = R, S = C) can be interpreted either as a complex vector space (S-module) or as a real vector space with a linear complex structure (algebra representation of S as an R-module).
This generalization is useful even for the study of fields – notably, many algebraic objects associated to a field are not themselves fields, but are instead rings, such as algebras over a field, as
in representation theory. Just as one can extend scalars on vector spaces, one can also extend scalars on group algebras and also on modules over group algebras, i.e., group representations.
Particularly useful is relating how irreducible representations change under extension of scalars – for example, the representation of the cyclic group of order 4, given by rotation of the plane by
90°, is an irreducible 2-dimensional real representation, but on extension of scalars to the complex numbers, it split into 2 complex representations of dimension 1. This corresponds to the fact that
the characteristic polynomial of this operator, ${\displaystyle x^{2}+1,}$ is irreducible of degree 2 over the reals, but factors into 2 factors of degree 1 over the complex numbers – it has no real
eigenvalues, but 2 complex eigenvalues.
Interpretation as a functor
Extension of scalars can be interpreted as a functor from ${\displaystyle R}$-modules to ${\displaystyle S}$-modules. It sends ${\displaystyle M}$ to ${\displaystyle M^{S}}$, as above, and an ${\
displaystyle R}$-homomorphism ${\displaystyle u:M\to N}$ to the ${\displaystyle S}$-homomorphism ${\displaystyle u^{S}:M^{S}\to N^{S}}$ defined by ${\displaystyle u^{S}=u\otimes _{R}{\text{id}}_{S}}$
Co-extension of scalars (coinduced module)
Relation between the extension of scalars and the restriction of scalars
Consider an ${\displaystyle R}$-module ${\displaystyle M}$ and an ${\displaystyle S}$-module ${\displaystyle N}$. Given a homomorphism ${\displaystyle u\in {\text{Hom}}_{R}(M,N_{R})}$, define ${\
displaystyle Fu:M^{S}\to N}$ to be the composition
${\displaystyle M^{S}=M\otimes _{R}S{\xrightarrow {u\otimes {\text{id}}_{S}}}N_{R}\otimes _{R}S\to N}$,
where the last map is ${\displaystyle n\otimes s\mapsto n\cdot s}$. This ${\displaystyle Fu}$ is an ${\displaystyle S}$-homomorphism, and hence ${\displaystyle F:{\text{Hom}}_{R}(M,N_{R})\to {\text
{Hom}}_{S}(M^{S},N)}$ is well-defined, and is a homomorphism (of abelian groups).
In case both ${\displaystyle R}$ and ${\displaystyle S}$ have an identity, there is an inverse homomorphism ${\displaystyle G:{\text{Hom}}_{S}(M^{S},N)\to {\text{Hom}}_{R}(M,N_{R})}$, which is
defined as follows. Let ${\displaystyle v\in {\text{Hom}}_{S}(M^{S},N)}$. Then ${\displaystyle Gv}$ is the composition
${\displaystyle M\to M\otimes _{R}R{\xrightarrow {{\text{id}}_{M}\otimes f}}M\otimes _{R}S{\xrightarrow {v}}N}$,
where the first map is the canonical isomorphism ${\displaystyle m\mapsto m\otimes 1}$.
This construction shows that the groups ${\displaystyle {\text{Hom}}_{S}(M^{S},N)}$ and ${\displaystyle {\text{Hom}}_{R}(M,N_{R})}$ are isomorphic. Actually, this isomorphism depends only on the
homomorphism ${\displaystyle f}$, and so is functorial. In the language of category theory, the extension of scalars functor is left adjoint to the restriction of scalars functor.
References
• Dummit, David (2004). Abstract algebra. Foote, Richard M. (3 ed.). Hoboken, NJ: Wiley. pp. 359–377. ISBN 0471452343. OCLC 248917264.
• J.P. May, Notes on Tor and Ext
• NICOLAS BOURBAKI. Algebra I, Chapter II. LINEAR ALGEBRA.§5. Extension of the ring of scalars;§7. Vector spaces. 1974 by Hermann.
Further reading
^ Dummit 2004, p. 359. | {"url":"https://static.hlt.bme.hu/semantics/external/pages/tenzorszorzatok/en.wikipedia.org/wiki/Extension_of_scalars.html","timestamp":"2024-11-06T02:31:35Z","content_type":"text/html","content_length":"126907","record_id":"<urn:uuid:995ef8d1-8fbe-435a-b805-b0f799d35483>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00045.warc.gz"} |
Dealing With Small Time Values In Excel
Key Takeaway:
• Creating custom formats for small time values in Excel allows for more efficient data presentation, making it easier to understand and work with time data. Utilizing the format cells window and
custom number format codes can greatly improve data presentation.
• Rounding small time values in Excel can be done precisely using the ROUND function, or with improved accuracy using ROUNDDOWN and ROUNDUP functions. The MROUND function can also be employed to
round off time values to the nearest multiple, either by 1 or 0.5.
• To simplify time values, the TRUNC and INT functions can be used to truncate decimal places in small time values or transform small time values to integers, respectively. The FLOOR and CEILING
functions can also be used to round off time values to the nearest multiple, either rounding down or rounding up.
Are you feeling overwhelmed with small time values in Excel? This article will provide you with easy to follow methodologies on how to manage those tiny time values. Learn the best tips and tricks to
simplify your Excel workflows!
Creating Custom Formats for Small Time Values in Excel
Understanding small time values in Excel is crucial, especially if the data is time-sensitive. Let's learn how to create custom formats. One way is the Format Cells window, which lets you modify and customize the display. We'll also explore custom number format codes. These tools make managing small time values in Excel easier, so you can focus on analyzing data without hindrance.
Creating a Custom Format using the Format Cells Window
Creating a custom format using the Format Cells Window can be extremely helpful when dealing with time values in Excel. It simplifies understanding and handling data like elapsed time and duration.
Here’s how:
1. Select the cell range containing the small time values that need to be formatted.
2. Right-click and choose the Format Cells option from the menu.
3. Choose the number tab, if it’s not already selected.
4. Select Custom from the list of Category options.
5. Input your format code in the Type field according to your needs.
6. Click OK to implement it.
Custom formats enhance readability and insight for small time values. Professional Excel users often attest to their necessity in businesses and situations across all industries. They help people
better understand responsibilities and make smarter decisions based on accurate information.
Learning how to customize Excel cells allows users to get more out of the software. Utilizing custom number format codes is a great way to improve data presentation.
Utilizing Custom Number Format Codes to Improve Data Presentation
Customizing small time values in Excel can be tricky since they are presented in hours:minutes:seconds format. But using Custom Number Format Codes can help!
Excel provides different codes for custom number formatting depending on the requirement. For example, you can use square brackets to display elapsed totals beyond the usual clock limits: [hh]:mm:ss shows hours beyond 24, and [mm]:ss shows minutes beyond 60. This flexibility makes the user's work more manageable.
When setting custom format codes, consider user expectations before starting. This makes data appear uniform across different sheets and reduces confusion from standard representations.
Recently, a friend presented stats with hourly logs over four days. He used customized cell formats for each hour between 0-24 even though some values were in minutes. This helped him present data
simply and uniformly, creating a visually appealing document.
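As a cross-check outside Excel, here is a hypothetical Python sketch that renders an Excel-style serial time (a fraction of a day) the way the elapsed-time format [hh]:mm:ss would, i.e. total hours rather than clock hours:

```python
def format_elapsed(days_fraction):
    """Render an Excel serial time (fraction of a day) like the
    custom format [hh]:mm:ss: total hours, not wrapped at 24."""
    total_seconds = round(days_fraction * 24 * 3600)
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

format_elapsed(1.25)  # 1.25 days = 30 hours -> "30:00:00"
format_elapsed(0.5)   # half a day -> "12:00:00"
```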
Rounding small time values is another essential aspect when working with Excel. We’ll explore tips for this in the next section, while minimizing any variations that may affect computations.
Rounding Small Time Values in Excel
Excel and time values? Tricky. No easy way to round decimals beyond seconds. Let’s explore the magical world of Excel time functions! We’ll investigate the benefits and limitations of different
methods. Plus, practical examples to help you make your decision. Ready to dive in?
Utilizing the ROUND Function for Precise Rounding
To round time values in Excel, select the cell or cells containing the time value. Then, click on the “Formulas” tab and select “ROUND” from the Math & Trig drop-down menu. Enter the cell reference
followed by “,2” as arguments for the function. This will make Excel round to two decimal places.
This function isn’t just for small time values; it can be used for any numerical data in Excel. Utilizing it ensures accuracy without sacrificing much detail. To make sure you’re not missing out on
important details, start using this tool! Finally, explore ROUNDDOWN AND ROUNDUP functions for improved accuracy.
Exploring ROUNDDOWN AND ROUNDUP functions for improved accuracy
If you’re working with small time values in Excel, accuracy is key! The ROUNDDOWN and ROUNDUP functions can help you achieve this. Just type “=ROUNDDOWN(cell, num_digits)” or “=ROUNDUP(cell,
num_digits)” into an adjacent cell.
Replace “cell” with the address of the original cell (e.g., A1) and “num_digits” with the number of decimal places you want to keep (e.g., 3).
Check your work by comparing the rounded value to the original time value.
Save your work, and consider automating this process with macros or other Excel tools.
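Outside Excel, ROUNDDOWN truncates toward zero and ROUNDUP rounds away from zero at a given number of decimals. A hypothetical Python sketch (function names are ours, not Excel's; floating-point edge cases can differ slightly from Excel):

```python
import math

def round_down(value, digits):
    """Excel-style ROUNDDOWN: truncate toward zero at `digits` decimals."""
    factor = 10 ** digits
    return math.trunc(value * factor) / factor

def round_up(value, digits):
    """Excel-style ROUNDUP: round away from zero at `digits` decimals."""
    factor = 10 ** digits
    return math.copysign(math.ceil(abs(value) * factor), value) / factor

round_down(2.289, 2)  # -> 2.28
round_up(2.281, 2)    # -> 2.29
```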
Also, the MROUND Function is a great tool for dealing with small time values in Excel. It lets you round numbers up or down, depending on the interval you specify.
Take advantage of Excel features to improve your skills and make your work more accurate!
Using the MROUND Function
As an Excel user, you may have faced a challenge with small time values. Annoying to work with, especially when precision is key. Luckily, the MROUND function works wonders here! We'll look at how to use
it to round off tiny time values to their closest multiple or 0.5.
With this, you’ll save time and banish any errors from your Excel spreadsheets.
Image credits: manycoders.com by David Washington
Rounding off Small Time Values to the nearest Multiple
To round off small time values, select the cell or cells that contain them. Next, go to the “Formulas” tab and click on “Math & Trig” under “Function Library”. Choose “MROUND” from the list and enter
the value you want to round off and the nearest multiple.
Rounding off small time values will ensure consistency and accuracy, especially for payroll and scheduling applications. Note that MROUND rounds half away from zero. Combining it with ROUNDUP or
ROUNDDOWN can give you more control over how it rounds the data.
Lastly, you can also round time values using 0.5 increments. Look out for our guide on how to do this!
Rounding off Small Time Values to the Nearest 0.5
To round off small time values in Excel to the nearest 0.5, follow these steps:
1. Select a cell in your worksheet.
2. Input the formula “=MROUND(time value, 0.5)” (replacing “time value” with the specific value).
3. Press enter.
Let’s take an example: if we have a small time value of 2.28 minutes, typing in MROUND(2.28, 0.5) will give us the result of 2.5 minutes.
When dealing with small time values in Excel, accuracy matters. But rounding them off makes them easier to understand. The MROUND function makes this task a breeze. However, make sure your formatting
settings include fractional numbers or decimal places won’t show up accurately.
Also, when using this technique with dates or times across different worksheets or workbooks, use copy-paste methods instead of converters. This way, data integrity is better maintained, and
computation speeds stay fast.
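The rounding MROUND performs can be sketched in Python as follows (my own illustrative function, covering only the positive-value case the article discusses):

```python
import math

def mround(value, multiple):
    """Like Excel's MROUND: round to the nearest multiple, with halves
    rounded away from zero (sketch for positive inputs only)."""
    if multiple == 0:
        return 0
    return math.floor(value / multiple + 0.5) * multiple
```

This reproduces the article's example: mround(2.28, 0.5) gives 2.5, and rounding 23 to the nearest multiple of 15 gives 30.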
Now on to our next section: “Simplifying Time Values using TRUNC and INT functions”.
Simplifying Time Values using TRUNC and INT functions
Dealing with small time values in Excel can be difficult. If you’ve seen numbers less than 1 when formatting a cell as time, you know what I mean. Good thing is, TRUNC and INT functions help simplify
it. Let’s explore them in detail. We’ll first look at using the TRUNC function to cut off decimal places. Then, we’ll learn how to convert small time values to integers with the INT function.
Image credits: manycoders.com by Harry Washington
Utilizing the TRUNC function to truncate Decimal Places in Small Time Values
TRUNC is a powerful Excel tool to make time values simpler. It’s great for small, decimal-heavy numbers. Follow this 6-step guide to get started:
1. Pick the cell you want to use TRUNC on.
2. Type “TRUNC(“ without quotes in the bar.
3. Select the cell with your time value.
4. Add “,” and then enter the number of decimals you want to keep.
5. Close the parentheses and hit enter.
6. You’ll see the new, simplified value in the cell.
TRUNC makes it easier to read small time values and simplifies complex calculations with small time increments. For example, if you’re managing project deadlines and need to calculate hourly job
hours in minutes – like 15-minute tasks that pay $80/hour – then applying the TRUNC function will help.
Using Excel functions like TRUNC allows users to make the most of Excel. Another powerful tool to transform small-time fractions into integers is the INT Function – let’s dive into that next!
Transforming Small Time Values to Integers with INT function
INT is the way to go when transforming small time values into integers in Excel. Three steps: pick a cell, type “=INT(“, and enter a cell with the original small time value. This creates a whole
number, not a decimal that’s been rounded. The INT function simplifies data processing.
Fun Fact: INT figures out how many times 1 fits into a given numeric value and returns an integer. (Source: Microsoft)
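The difference between TRUNC and INT only shows up for negative numbers, which a short hedged Python sketch makes clear (the `trunc` helper is my own):

```python
import math

def trunc(value, num_digits=0):
    """Like Excel's TRUNC: drop decimal places without rounding."""
    factor = 10 ** num_digits
    return math.trunc(value * factor) / factor

# Excel's INT always rounds down (toward negative infinity), so TRUNC
# and INT agree for positive numbers but differ for negative ones:
# trunc(-4.3) gives -4.0, while math.floor(-4.3) (INT's behavior) gives -5.
```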
For more precise rounding off of small time values, FLOOR and CEILING functions are better than just rounding.
Employing FLOOR and CEILING Functions to Round off Time Values to the nearest multiple
Are you working with time values in Excel? Need to round them to the nearest increment? FLOOR and CEILING functions are here to help! This will be a game-changer for those who frequently work with
small time values.
We’ll start with the FLOOR function to round down small numbers. Then, using the CEILING function to round up small numbers. Trust me – this will save you lots of headaches later on!
Image credits: manycoders.com by David Arnold
Using the FLOOR Function to round down Small Time Values to the nearest multiple
1. Choose the cell or range of cells with your time values.
2. Go to the Formula Bar. Type =FLOOR(cell, multiple). Then press Enter.
3. Replace “cell” with the address of your chosen cell. Replace “multiple” with the increment you want to round down to.
4. FLOOR rounds the value down to the nearest multiple of the second argument.
5. Drag down/Fill Series formula. See results.
This method has many advantages. It avoids errors due to small differences in duration times. It also reduces “jitter” precisely by rounding down short durations. Plus, it increases data accuracy and
enhances data analysis.
Using the CEILING Function to Round up Small Time Values to the nearest multiple
Open Excel.
1. Create a new sheet.
2. Enter the time value in cell A1.
3. In cell B1, type the multiple value you want to round off to. For example, to round off to every 15 minutes, type 0:15 (Excel stores this as the serial value 15/1440, about 0.0104).
4. In cell C1, write =CEILING(A1,B1).
5. Press Enter.
6. Copy the formula down for additional cells.
This technique ensures that time values are aligned with project deadlines. The CEILING function rounds up a number towards a specified multiple. Or you may use the FLOOR, ROUNDUP or ROUNDDOWN
functions instead.
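The round-to-multiple behavior of FLOOR and CEILING can be sketched in Python (helper names are my own):

```python
import math

def floor_to_multiple(value, multiple):
    """Like Excel's FLOOR: round down to the nearest multiple."""
    return math.floor(value / multiple) * multiple

def ceiling_to_multiple(value, multiple):
    """Like Excel's CEILING: round up to the nearest multiple."""
    return math.ceil(value / multiple) * multiple

# Rounding 23 minutes to 15-minute increments:
# floor_to_multiple(23, 15) gives 15; ceiling_to_multiple(23, 15) gives 30.
```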
Small time values require precision. Excel’s functions make calculations easier, avoiding complex manual calculations.
Forbes magazine states that mastering Office applications like Excel can increase employment prospects by over 80%. Developing expertise in these tools should be part of any job seeker’s strategy.
Some Facts About Dealing with Small Time Values in Excel:
• ✅ Excel can display time values down to fractions of a second, but they are stored as decimals. (Source: Excel Easy)
• ✅ When entering time values, use the format [h]:mm:ss to ensure accurate calculations. (Source: Ablebits)
• ✅ To add or subtract time values in Excel, use the SUM or SUMIF functions with the appropriate format. (Source: ExcelJet)
• ✅ Excel has built-in functions like HOUR, MINUTE, and SECOND for extracting time values from cells. (Source: Excel Campus)
• ✅ When dealing with time values across time zones, use the CONVERT function to make accurate calculations. (Source: Spreadsheet Planet)
FAQs about Dealing With Small Time Values In Excel
What are small time values in Excel?
Small time values in Excel are numbers less than 1 that still represent a time value. For example, 0.25 represents 6:00 AM when the cell is formatted as a time.
How can I add small time values in Excel?
To add small time values in Excel, you can simply use the SUM function. For example, to add 0.25 and 0.50 together, you would use the formula =SUM(0.25,0.50) which would result in 0.75.
What is the best way to format small time values in Excel?
The best way to format small time values in Excel is by using the “Time” format. Simply select the cells containing the time values, right-click and select “Format Cells,” and then choose the “Time” category.
How do I convert small time values to minutes?
To convert small time values to minutes in Excel, you can simply multiply the time value by 1440 (the number of minutes in a day). For example, if you have a time value of 0.25 (representing 6:00
AM), you can convert it to minutes using the formula =0.25*1440, which would result in 360.
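The same arithmetic can be checked outside Excel; here is a hedged Python sketch using the standard library, relying only on the fact that a serial time is a fraction of a day:

```python
from datetime import timedelta

serial = 0.25                       # Excel stores times as fractions of a day
minutes = serial * 1440             # 1440 minutes in a day, so 0.25 -> 360.0
as_time = timedelta(days=serial)    # equals six hours, i.e. 6:00 AM
```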
Can I use small time values in calculations with regular time values in Excel?
Yes, small time values can be used in calculations with regular time values in Excel. When both small and regular time values are present in the same formula, Excel will automatically convert the
small time values to their equivalent regular time value.
Are there any shortcuts for entering small time values in Excel?
Yes, one shortcut for entering small time values in Excel is to simply enter the time in 24-hour format. For example, to enter the time 6:00 AM (represented by the small time value 0.25), you can
simply enter the number 6:00 in the cell and Excel will automatically convert it to the small time value.
Banach Space - (Computational Mathematics) - Vocab, Definition, Explanations | Fiveable
Banach Space
from class:
Computational Mathematics
A Banach space is a complete normed vector space, meaning it is a vector space equipped with a norm that allows for the measurement of vector lengths and distances. The completeness of a Banach space
ensures that every Cauchy sequence converges within the space, making it a fundamental concept in functional analysis. This property plays a vital role in the convergence of iterative methods and
their ability to find solutions to linear systems.
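As an illustration of why completeness matters for iterative methods (a hedged sketch of my own, not from this page), fixed-point iteration with a contraction mapping produces a Cauchy sequence of iterates, and completeness of the real line, a Banach space, guarantees that the sequence converges to a solution:

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n). For a contraction on a complete space,
    the iterates form a Cauchy sequence and therefore converge
    (Banach fixed-point theorem)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction near its unique fixed point x = cos(x), about 0.739085
root = fixed_point(math.cos, 1.0)
```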
5 Must Know Facts For Your Next Test
1. In a Banach space, the norm defines the topology, allowing for the analysis of convergence and continuity of functions.
2. Banach spaces are crucial in the study of linear operators, particularly when evaluating their boundedness and continuity.
3. Every finite-dimensional normed vector space is a Banach space because all Cauchy sequences converge in finite dimensions.
4. Common examples of Banach spaces include spaces like $$l^p$$ (spaces of p-summable sequences) and $$L^p$$ (spaces of p-integrable functions).
5. The Hahn-Banach theorem, which extends linear functionals while preserving norms, is an essential result applicable in Banach spaces.
Review Questions
• How does the completeness property of Banach spaces affect the convergence of sequences and iterative methods?
□ The completeness property of Banach spaces ensures that every Cauchy sequence converges within the space, which is crucial for iterative methods used to solve linear systems. If an iterative
method generates a Cauchy sequence, we can be confident that it will converge to a solution within the Banach space. This characteristic allows for reliable applications of various
algorithms, ensuring that solutions can be achieved through iterations without diverging.
• Discuss how norms in Banach spaces contribute to understanding linear operators and their properties.
□ Norms in Banach spaces provide a framework for measuring the size of vectors and the action of linear operators. By defining norms, we can assess whether linear operators are bounded, which
is significant for establishing continuity. This understanding aids in analyzing the stability of solutions derived from iterative methods applied to linear systems. Norms also facilitate
comparisons between different operators and their effectiveness in solving equations.
• Evaluate the significance of Banach spaces in the context of solving real-world problems through numerical methods.
□ Banach spaces are fundamental in applying numerical methods to solve real-world problems because they provide the structure needed for analyzing convergence and stability. By ensuring that
sequences converge within these spaces, we can apply iterative techniques confidently to find approximations of solutions. The completeness property helps ensure that errors diminish over
iterations, making numerical algorithms robust and reliable. Consequently, many practical applications in engineering and science rely on concepts from Banach spaces for effective problem-solving.
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse, this website.
Digital Math Resources
Each collection below is listed with its description and curriculum topics:
• This collection of math clip art on Geometry Concepts contains over 100 resources that provide a visual and interactive way to teach geometric concepts. Math clip art is an invaluable tool for teachers, as it allows them to create visually appealing and informative materials that capture students' attention and reinforce key concepts. This collection is particularly useful for elementary math instruction, offering a wide range of ten frame models that can be easily incorporated into lessons, worksheets, and presentations. Topics: Surface Area, Applications of Transformations, Definition of a Circle, Applications of Polygons, Modeling Shapes, 3-Dimensional Figures, Applications of 3D Geometry, Exploring Coordinate Systems, Coordinate Systems, Applications of Coordinate Geometry, Applications of Points and Lines, Definition of a Quadrilateral, Applications of Triangles, Numerical Expressions, Geometric Constructions with Angles and Planes, Geometric Constructions with Points and Lines, Length, Definition of a Polygon, Definition of a Triangle, Exponential and Logarithmic Functions and Equations, Graphs of Exponential and Logarithmic Functions, Parallel Lines, Perpendicular Lines, Identifying Shapes, Proportions, Applications of Quadrilaterals, and Geometric Constructions.
• This collection aggregates all the math examples around the topic of Triangular Area and Perimeter. There are a total of 40 Math Examples. Topics: Area and Perimeter of Triangles.
• This collection aggregates all the math examples around the topic of Volume. There are a total of 24 Math Examples. Topics: Volume.
• This collection aggregates all the math clip art around the topic of Nets. There are a total of 5 images. Topics: 3-Dimensional Figures.
• This collection aggregates all the math examples around the topic of Surface Area. There are a total of 24 Math Examples. Topics: Surface Area.
• This collection aggregates all the math examples around the topic of Quadrilateral Area and Perimeter. There are a total of 24 Math Examples. Topics: Area and Perimeter of Quadrilaterals.
• This collection aggregates all the math clip art around the topic of Polygons. There are a total of 5 images. Topics: Definition of a Polygon.
• This collection aggregates all the math videos and resources in this series: Geometry Applications Video Series: 3D Geometry. There are a total of 18. Topics: 3-Dimensional Figures, Pyramids, Cylinders, Applications of 3D Geometry, Triangular Prisms, and Rectangular Prisms.
• This collection aggregates all the math videos and resources in this series: Geometry Applications Video Series: Quadrilaterals. There are a total of 12. Topics: Applications of Quadrilaterals and Definition of a Quadrilateral.
• This collection aggregates all the math videos and resources in this series: 3D Geometry Animations. Topics: 3-Dimensional Figures, Cubes, Cones, Triangular Prisms, Pyramids, Cylinders, and Rectangular Prisms.
• This collection aggregates all the math examples around the topic of Polygon Classification. There are a total of 36 Math Examples. Topics: Definition of a Polygon.
• This is a collection of issues of Math in the News that deal with applications of Surface Area and Volume. Topics: Surface Area and Volume.
• This is a collection of issues of Math in the News that deal with applications of 3D geometry. Topics: 3-Dimensional Figures and Applications of 3D Geometry.
• This is a collection of issues of Math in the News that deal with business. Topics: Applications of Exponential and Logarithmic Functions, Data Analysis, and Volume.
• This collection aggregates all the definition image cards around the topic of Geometry vocabulary. There are a total of 58 terms. Topics: Definition of an Angle, Geometric Constructions with Points and Lines, 3-Dimensional Figures, Exploring Coordinate Systems, Definition of a Triangle, Definition of a Point, Definition of a Circle, Definition of Transformations, The Distance Formula, Definition of a Polygon, Definition of a Plane, Definition of a Line, Midpoint Formula, Applications of Polygons, Area and Perimeter of Triangles, Area and Perimeter of Quadrilaterals, Pyramids, Trig Expressions and Identities, Right Triangles, Definition of a Quadrilateral, Applications of Points and Lines, and Proportions.
• This collection aggregates all the Google Earth Voyager Stories. There are a total of 17 stories. Topics: Perpendicular Lines, Parallel Lines, Applications of Points and Lines, Applications of Circles, Cylinders, Applications of Angles and Planes, Applications of Polygons, Pyramids, Triangular Prisms, Rectangular Prisms, 3-Dimensional Figures, Applications of Surface Area and Volume, Rational Functions and Equations, Surface Area, Volume, Applications of Triangles, and Applications of Data.
• This is a collection of Math in the News stories that focus on the topic of Data Analysis. Topics: Data Analysis, Data Gathering, Probability, Percents, and Ratios and Rates.
Each of the following is part of a series of video animations of three-dimensional figures. These animations show different views of these figures: top, side, and bottom. Many of these figures are a standard part of the geometry curriculum and being able to recognize them is important.
• VIDEO: 3D Geometry Animation: Antiprism. Topics: 3-Dimensional Figures and Triangular Prisms.
• VIDEO: 3D Geometry Animation: Cone. Topics: 3-Dimensional Figures and Cones.
• VIDEO: 3D Geometry Animation: Cube. Topics: 3-Dimensional Figures and Cubes.
• 3D Geometry Animation: Cylinder. Topics: 3-Dimensional Figures and Cylinders.
• VIDEO: 3D Geometry Animation: Octahedron. Topics: 3-Dimensional Figures.
• VIDEO: 3D Geometry Animation: Pyramid. Topics: 3-Dimensional Figures and Pyramids.
• VIDEO: 3D Geometry Animation: Rectangular Prism. Topics: 3-Dimensional Figures and Rectangular Prisms.
• VIDEO: 3D Geometry Animation: Tetrahedron. Topics: 3-Dimensional Figures.
Leaflet and Math - Rotation of point in a Map
Hello everybody,
I'm writing in the XoJo forum because I cannot find a solution to my problem on the Internet, and I hope there is someone here who is familiar with Leaflet (link: Leaflet - a JavaScript library for interactive maps - https://leafletjs.com).
I have developed an air navigation program with XoJo. I am almost at the end of the program, and I would hate to give up now over a small issue I cannot solve.
The question I am about to ask is not specific to XoJo: it is about the representation of a rotated line on a Leaflet map.
As you can see in the figure, there are two red lines C->P1 and C->P2, and there are blue lines C->A, C->B, ..., C->E.
The red line C-> P1 is the starting line, the C-P2 line is the finish line. They are input data.
The blue lines are lines I calculated with the following mathematical formula:
// Data input
Cx = 10.247500 '=> C
Cy = 43.557778 '=> C
Ax = 10.395555 '=> P1x
Ay = 43.838056 '=> P1y
Bx = 10.64953 '=> P2x
By = 43.62996 '=> P3y
// Calculate the angle of C-> P2 with the axis
Var z, w, Angle As Double
w = ((Ax-Cx) * (Bx-Cx)) + ((Ay-Cy) * (By-Cy))
z = Sqrt (((Ax-Cx) ^ 2) + ((Ay-Cy) ^ 2)) * Sqrt (((Bx-Cx) ^ 2) + ((By-Cy) ^ 2))
Angle = (ACos (w / z)) * 180 / Pi
Var iStepRotazione As UInteger = 10 'Let's start with 10° of rotation
Var xPoint, yPoint As Double
While iStepRotation <Angle
xPoint = (Ax-Cx) * Cos (iStepRotazione * Pi / 180) + (Ay-Cy) * Sin (iStepRotazione * Pi / 180) + Cx
yPoint = - (Ax-Cx) * Sin (iStepRotation * Pi / 180) + (Ay-Cy) * Cos (iStepRotation * Pi / 180) + Cy
iStepRotazione = iStepRotazione + 10 'Step of 10 °
sReturn = sReturn + Str (yPoint) + "," + Str (xPoint) + ";"
That is, I take point P1 and rotate it by 10° at a time until reaching P2.
With the calculated points, I create the lines (in Leaflet) C->A, C->B, etc.
As you can see in the figure, the rotated points (A, B, ..., F) form smaller lines (the blue lines).
I'm not a great mathematician, but I believe the function is correct.
Having said that, do you think it is a mathematical problem or a Web-Mercator problem?
Do you know anything about it?
Thank you.
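As a side note (my own numerical check, in Python rather than XoJo), the rotation formula quoted above does preserve the vector's Euclidean length in coordinate space, so any on-map shrinking must come from something other than the formula itself:

```python
import math

# Coordinates quoted from the post above
Cx, Cy = 10.247500, 43.557778
Px, Py = 10.395555, 43.838056   # P1

def rotate_about(cx, cy, px, py, deg):
    # Same rotation as the XoJo snippet (clockwise for positive deg)
    r = math.radians(deg)
    dx, dy = px - cx, py - cy
    return (dx * math.cos(r) + dy * math.sin(r) + cx,
            -dx * math.sin(r) + dy * math.cos(r) + cy)

x, y = rotate_about(Cx, Cy, Px, Py, 10)
len_p1 = math.hypot(Px - Cx, Py - Cy)
len_rot = math.hypot(x - Cx, y - Cy)
# len_rot equals len_p1 up to floating-point error
```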
This looks like an issue of the Mercator projection. I'll need to get on a proper computer to be sure, but it looks like those lines are the same actual length on the globe, but appear to shrink
when projected.
They seem to get smaller each time.
I'm curious to know what happens if you go fully 'round the clock'?
Can you confirm that ALL your variables are all doubles? (the dim / var lines are not shown here)
Are you sure that this code is current?
you could rotate by
xEnd = xStart + Cos(Radian) * distance
yEnd = yStart + Sin(Radian) * distance
(Start is C)
this cos/sin is more or less a direction, values from -1 to 1.
you go along with a multiplication.
To help me better, here you find the XoJo source and the HTML file to be able to see what XoJo calculates.
XoJo code for test: Microsoft OneDrive - Access files anywhere. Create docs with free Office Online.
file HTML: Microsoft OneDrive - Access files anywhere. Create docs with free Office Online.
Hi MarkusR
Var Radian As Double = iStepRotazione * PI / 180 ' PI = 3.14159265
xPoint = Ax + Cos(Radian) * ??distance??
yPoint = Ay + Sin(Radian) * ??distance??
Is the distance the length of the segment to be calculated (the blue line) ?
yes length is
length=sqrt(dx * dx + dy * dy)
C to P1
your suggestion doesn't work well (see image below). Thanks for helping me out anyway!
@J_Andrew_Lipscomb and @Jeff_Tullin also think it's the Web Mercator's fault.
Let's wait and see if they can give me a suggestion...
Here is part of the code modified with your function:
//******* I rotate point P1 many times (10°) until I reach point P2
Var iStepRotazione As UInt16 = 10 'Step 10° rotation
Var dx, dy, length As Double
length=sqrt(dx * dx + dy * dy)
Var xPoint,yPoint As Double
While iStepRotazione < Angolo
Var Radian As Double = iStepRotazione * PI / 180
xPoint = Ax + Cos(Radian) * length
yPoint = Ay + Sin(Radian) * length
iStepRotazione = iStepRotazione + 10 'Step di 10°
TextArea1.AddText("L.polyline([ [" + Str(yPoint) + "," + Str(xPoint) + "], [43.557778, 10.247500] ], {color: 'blue'}).addTo(mymap); //C->Calculation.." + EndOfLine)
Instead of Ax/Ay it should be Cx/Cy, but that will only make the blue lines shorter (will not fix the difference in size between them).
By = 43.62996 '=> P3y
Should not this be By = 43.62996 '=> P2y
// Calculate the angle of C-> P2 with the axis
Var z, w, Angle As Double
w = ((Ax-Cx) * (Bx-Cx)) + ((Ay-Cy) * (By-Cy))
z = Sqrt (((Ax-Cx) ^ 2) + ((Ay-Cy) ^ 2)) * Sqrt (((Bx-Cx) ^ 2) + ((By-Cy) ^ 2))
Angle = (ACos (w / z)) * 180 / Pi
I am not sure what this calculation is supposed to be. How is it supposed to be the angle with the axis of C ā P2?
This calculation contains the Ax and Ay values which are related to the point P1 and not P2.
So why are they here when you are trying to calculate the angle of C ā P1?
And when you have the deltaX and the deltaY values, it seems to me that one would use the ATan rather than ACos. ACos forces you to calculate the hypotenuse of the right triangle with deltaX and
deltaY sides which is just extra work.
Var myAngle As Double
myAngle = ATan((By - Cy) / (Bx - Cx)) * 180 / Pi
Angle: 52.00 is the answer I get from your calculation which is incorrect just looking at the graphic.
myAngle: 10.18 which is more reasonable.
I will stop at this point because perhaps I am misinterpreting something. Are you trying to calculate the angle between C-P1 and C-P2? This issue is discussed below. If this is indeed what you want
then your comments in your code need to be cleaned up. If you can confirm that this is where you are going, then I will try and return to the problem and look for additional issues downstream.
62.18 is the angle between C-P2 and the axis. If you want the angle between C-P1 and C-P2 (which would be about the answer you are getting from your calculation), then this code would be a lot "cleaner" in my opinion.
Var angleP1, angleP2, deltaP1_P2 As Double
angleP1 = ATan((Ay - Cy) / (Ax - Cx)) * 180 / Pi
angleP2 = ATan((By - Cy) / (Bx - Cx)) * 180 / Pi
deltaP1_P2 = angleP1 - angleP2
MessageBox("deltaP1_P2: " + deltaP1_P2.ToString)
51.97 is the value for deltaP1_P2
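For what it's worth, the same angles can also be computed with the two-argument arctangent, which avoids the division and handles all quadrants; a hedged Python sketch using the thread's coordinates:

```python
import math

Cx, Cy = 10.247500, 43.557778
Ax, Ay = 10.395555, 43.838056   # P1
Bx, By = 10.64953, 43.62996     # P2

# atan2 takes (dy, dx) and needs no quadrant or divide-by-zero special cases
angle_p1 = math.degrees(math.atan2(Ay - Cy, Ax - Cx))   # about 62.2
angle_p2 = math.degrees(math.atan2(By - Cy, Bx - Cx))   # about 10.2
delta = angle_p1 - angle_p2                             # about 52
```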
I would add just one thing. The problem has nothing to do with Mercator projection. It has a negligible effect over such a small area as your map includes.
Your next problem is going to be that when you calculate the A, B, ..., E points you need a "length" of the vector.
What are you going to use? The length of C-P1 and C-P2 are not identical.
0.317 vs 0.409. So what algorithm are you going to use?
Are you going to use the mean of those two values?
Are you going to gradually increase the length of the vector as you move from A to E?
Isn't this a little closer to what you want?
You have to settle on what "length" is going to be. (See above.)
But then the angle (theAngle) is actually subtracted from the angle of P1 (angleP1) with progressively larger deltas as iStepRotation increases in size.
Var iStepRotation As Double = 10 'Let's start with 10° of rotation
Var xPoint, yPoint As Double
Var theAngle As Double
While iStepRotation < deltaP1_P2
theAngle = angleP1 - iStepRotation
Var Radian As Double = theAngle * PI / 180
xPoint = Cx + Cos(Radian) * length
yPoint = Cy + Sin(Radian) * length
iStepRotation = iStepRotation + 10 'Step di 10°
// TextArea1.AddText("L.polyline([ [" + Str(yPoint) + "," + Str(xPoint) + "], [43.557778, 10.247500] ], {color: 'blue'}).addTo(mymap); //C->Calculation.." + EndOfLine)
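The same sweep can be expressed compactly in Python (a sketch of my own, under the assumption that the C to P1 length is reused for every intermediate vector, which is only one of the possible choices discussed above):

```python
import math

Cx, Cy = 10.247500, 43.557778
Ax, Ay = 10.395555, 43.838056   # P1
Bx, By = 10.64953, 43.62996     # P2

angle_p1 = math.atan2(Ay - Cy, Ax - Cx)
angle_p2 = math.atan2(By - Cy, Bx - Cx)
length = math.hypot(Ax - Cx, Ay - Cy)   # one possible choice of vector length
step = math.radians(10)

points = []
a = angle_p1 - step
while a > angle_p2:
    points.append((Cx + length * math.cos(a), Cy + length * math.sin(a)))
    a -= step
# every intermediate point sits exactly `length` away from C in coordinate space
```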
The red lines look identical in length but when I calculate their lengths using the values provided of
Cx = 10.247500 '=> C
Cy = 43.557778 '=> C
Ax = 10.395555 '=> P1x
Ay = 43.838056 '=> P1y
Bx = 10.64953 '=> P2x
By = 43.62996 '=> P3y
I get different values for the lengths of the red lines. Can you explain this?
Var lengthP1, lengthP2 As Double
lengthP1 = Sqrt((Ax - Cx) ^ 2 + (Ay - Cy) ^ 2)
lengthP2 = Sqrt((Bx - Cx) ^ 2 + (By - Cy) ^ 2)
The distortion of the projection does not turn out to be the issue (itā s pretty small), but I have managed to verify that the original code produces shrinking lines,
I wasn't gone, but I asked a friend who has a math degree to confirm my mathematical routine, and he verified that everything works.
To prove that it is not a calculation problem but a Mercator problem, just do the following test:
rotate the line a full 360°, BUT at the center of the map, at coordinates [0,0] (the origin).
At the center of the map, everything works fine!
Here is how it looks in the center of the map and how it is distorted moving latitude.
This confirms that it is a Web-Mercator projection issue.
Does anyone know the formula or where to find information?
HERE LINK FOR HTML: Microsoft OneDrive - Access files anywhere. Create docs with free Office Online.
HERE LINK FOR XOJO: Microsoft OneDrive - Access files anywhere. Create docs with free Office Online.
Maybe I came up with the solution... Maybe!
Could you convert the following from JavaScript to XoJo for me?
Because I do not understand the condition in the ternary operator:
if (Math.abs(Δλ) > Math.PI) Δλ = Δλ>0 ? -(2*Math.PI-Δλ) : (2*Math.PI+Δλ);
(Math.abs(Δλ) > Math.PI) Δλ = Δλ>0 What condition is it? I don't understand it...
Thank you.
P.S.: Δλ is a variable (as Double)
The javascript:
X = Y>0 ? A : B;
maps exactly to this Xojo:
X = if (Y>0, A, B)
Personally I prefer the javascript syntax.
So here we have:
if (G>H) X = X>0 ? A : B;
which is to say, in Xojo:
if (G>H) then X = if (X>0, A, B)
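So the line normalizes a longitude difference into the shorter way around the globe, i.e. into the range (-π, π]. A hedged Python translation of the same logic (function name is my own):

```python
import math

def wrap_delta_lon(d_lon):
    """Normalize a longitude difference in radians so it represents the
    shorter way around the globe, i.e. lies in (-pi, pi]."""
    if abs(d_lon) > math.pi:
        d_lon = -(2 * math.pi - d_lon) if d_lon > 0 else (2 * math.pi + d_lon)
    return d_lon

# wrap_delta_lon(radians(350)) becomes radians(-10): go 10 degrees west
# instead of 350 degrees east.
```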
1. On this new image, where is the C-P2 line? What does it look like?
2. Where is the ā center of the mapā ? Is it the equator?
Perhaps the misunderstanding comes from the basic fact that the distance of a degree of latitude is approximately 111 kilometers all over the globe but the distance of a degree of longitude varies
from zero at the poles to approximately 111 kilometers at the equator.
If you assume that over most of the globe a degree of longitude equals a degree of latitude in terms of distance, you will end up confused.
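That anisotropy is easy to quantify; a hedged Python sketch of my own at the latitude used in this thread:

```python
import math

lat = 43.557778                      # point C's latitude from the thread
km_per_deg_lat = 111.32              # roughly constant everywhere
km_per_deg_lon = 111.32 * math.cos(math.radians(lat))
# about 80.7 km: a degree of longitude here is only about 72% as long as a
# degree of latitude, so equal-degree vectors have unequal ground lengths
```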
The fact that the surface of the globe is curved creates a problem when trying to map a huge area (like a continent). But it is easy to create a map of a small area (such as that shown by the
original poster) with very little distortion. On such a map, a distance measured with a ruler will be basically the same no matter where on the small map or what direction the ruler is oriented.
If you create your small map by cutting out a small area of a Mercator map of the whole planet and blowing it up, then you will get a lot of distortion if you are mapping something like Iceland. It
will be "stretched" from east to west. But generally, this is not the way you would create a map of Iceland. If you only have to deal with a small area, then a "simple" map is easy to create.
I am not very confident that my arguments are correct.
Cx = 10.247500
Cy = 43.557778
and you are expecting x and y distances to be the same, there will be problems irrespective of map projections.
Segment Trees and Max Queues
Some time ago, I was doing a problem on HackerRank that in introduced me to two new data structures that I want to write about. The problem is called Cross the River.
The premise is this:
You're standing on a shore of a river. You'd like to reach the opposite shore.
The river can be described with two straight lines on the Cartesian plane, describing the shores. The shore you're standing on is $Y=0$ and another one is $Y=H$.
There are some rocks in the river. Each rock is described with its coordinates and the number of points you'll gain in case you step on this rock.
You can choose the starting position arbitrarily on the first shore. Then, you will make jumps. More precisely, you can jump to the position $(X_2,Y_2)$ from the position $(X_1,Y_1)$ in case $\left|Y_2-Y_1\right| \leq dH$, $\left|X_2-X_1\right| \leq dW$ and $Y_2>Y_1$. You can jump only on the rocks and the shores.
What is the maximal sum of scores of all the used rocks you can obtain so that you cross the river, i.e. get to the opposite shore?
No two rocks share the same position, and it is guaranteed that there exists a way to cross the river.
Now, my first instinct was to use dynamic programming. If $Z_i$ is the point value of the rock, and $S_i$ is the max score at rock $i$, then $$ S_i = \begin{cases} Z_i + \max\{S_j : 1 \leq Y_i - Y_j
\leq dH,~|X_i - X_j| \leq dW\} &\text{if rock is reachable} \\ -\infty~\text{otherwise,} \end{cases} $$ where we assume the existence of rocks with $Y$ coordinate $0$ of $0$ point value for all $X.$
Thus, we can sort the rocks by their $Y$ coordinate and visit them in order. However, we run into the problem that if $dW$ and $dH$ are large we may need to check a large number of rocks visited
previously, so this approach is $O(N^2).$
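Before optimizing, the recurrence can be written down directly. Here is an illustrative Python sketch of the $O(N^2)$ approach (my own naming, not the post's code):

```python
def max_cross_score(rocks, H, dH, dW):
    """Naive O(N^2) version of the recurrence above.
    rocks: list of (x, y, score) tuples; the Y=0 shore acts as a score-0 start."""
    NEG = float("-inf")
    rocks = sorted(rocks, key=lambda r: r[1])  # visit rocks in order of Y
    best = []  # best[i] = max cumulative score ending on rock i
    for i, (x, y, z) in enumerate(rocks):
        prev = 0 if y <= dH else NEG  # reachable directly from the Y=0 shore?
        for j in range(i):  # this inner scan is what the segment tree removes
            xj, yj, _ = rocks[j]
            if 1 <= y - yj <= dH and abs(x - xj) <= dW:
                prev = max(prev, best[j])
        best.append(z + prev if prev > NEG else NEG)
    # finish on the opposite shore Y = H
    finish = max((b for (x, y, z), b in zip(rocks, best) if H - y <= dH), default=NEG)
    direct = 0 if H <= dH else NEG  # shore-to-shore jump using no rocks
    return max(finish, direct)
```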
My dynamic programming approach was the right idea, but it needs some improvements. Somehow, we need to speed up the process of looking through the previous rocks. To do this, we do two things:
1. Implement a way to quickly find the max score in a range $[X-dW, X + dW]$
2. Only store the scores of rocks in range $[Y-dH, Y)$
To accomplish these tasks, we use two specialized data structures.
Segment Trees
Segment trees solve the first problem. They provide a way to query a value (such as a maximum or minimum) over a range and update these values in $\log$ time. The key idea is to use a binary tree,
where the nodes correspond to segments instead of indices.
For example suppose that we have $N$ indices $i = 0,1,\ldots, N-1$ with corresponding values $v_i.$ Let $k$ be the smallest integer such that $2^k \geq N.$ The root node of our binary tree will be
the interval $[0,2^k).$ The first left child will be $[0,2^{k-1}),$ and the first right child will be $[2^{k-1},2^k).$ In general, for a node $[a,b)$ with $b - a > 1$, the left child is
$[a,(a+b)/2),$ and the right child is $[(a+b)/2,b).$ Otherwise, if $b - a = 1$, there are no children, and the node is a leaf. For example, if $5 \leq N \leq 8$, our segment tree looks like this.
In general, there are $2^0 + 2^1 + 2^2 + \cdots + 2^k = 2^{k+1} - 1$ nodes needed. $2N - 1 \leq 2^{k+1} - 1 \leq 2^2(N-1) - 1$, so the amount of memory needed is $O(N).$ Here's the code for
constructing the tree.
class MaxSegmentTree {
    private long[] maxes;
    private int size;

    public MaxSegmentTree(int size) {
        int actualSize = 1;
        while (actualSize < size) actualSize *= 2;
        this.size = actualSize;
        // if size is 2^k, we need 2^(k+1) - 1 nodes for all the intervals
        maxes = new long[2*actualSize - 1];
        Arrays.fill(maxes, Long.MIN_VALUE);
    }
}
Now, for each node $[a,b),$ we store a value $\max(v_a,v_{a+1},\ldots,v_{b-1}).$ An update call consists of two parameters, an index $k$ and a new $v_k.$ We would traverse the binary tree until we
reach the node $[k, k+1)$ and update that node. Then, we update the max of each ancestor by taking the max of its left and right child since the segment of child is always contained in the segment of
the parent. In practice, this is done recursively like this.
class MaxSegmentTree {
    public long set(int key, long value) {
        return set(key, value, 0, 0, this.size);
    }

    /**
     * @param node index of node, since the binary tree is implemented with an array
     * @param l lower bound of segment (inclusive)
     * @param r upper bound of segment (exclusive)
     */
    private long set(int key, long value, int node, int l, int r) {
        // if not in range, do not set anything
        if (key < l || key >= r) return maxes[node];
        if (l + 1 == r) {
            // return when you reach a leaf
            maxes[node] = value;
            return value;
        }
        int mid = l + (r-l)/2;
        // left node
        long left = set(key, value, 2*(node + 1) - 1, l, mid);
        // right node
        long right = set(key, value, 2*(node + 1), mid, r);
        maxes[node] = Math.max(left, right);
        return maxes[node];
    }
}
A range max query takes two parameters: the lower bound of the range and the upper bound of the range in the form $[i,j).$ We obtain the max recursively. Let $[l,r)$ be the segment
corresponding to a node. If $[l,r) \subseteq [i,j),$ we return the max associated with $[l,r)$. If $[l,r) \cap [i,j) = \emptyset,$ we ignore this node. Otherwise, $[l,r) \cap [i,j) \neq \emptyset,$
and $\exists k \in [l,r)$ such that $k \not\in [i,j),$ so $l < i < r$ or $l < j < r.$ In this case, we descend to the child nodes. The algorithm looks like this.
class MaxSegmentTree {
    /**
     * @param i from index, inclusive
     * @param j to index, exclusive
     * @return the max value in a segment
     */
    public long max(int i, int j) {
        return max(i, j, 0, 0, this.size);
    }

    private long max(int i, int j, int node, int l, int r) {
        // if completely inside the interval
        if (i <= l && r <= j) return maxes[node];
        // if completely outside the interval
        if (j <= l || i >= r) return Long.MIN_VALUE;
        int mid = l + (r-l)/2;
        long left = max(i, j, 2*(node+1) - 1, l, mid);
        long right = max(i, j, 2*(node+1), mid, r);
        return Math.max(left, right);
    }
}
I prove that this operation is $O(\log_2 N).$ To simplify things, let us assume that $N$ is a power of $2$, so $2^k = N.$ I claim that the worst case is $[i,j) = [1, 2^k - 1).$ Clearly this is true
when $k = 2$ since we'll have to visit all the nodes but $[0,1)$ and $[3,4),$ so we visit $5 = 4k - 3 = 4\log_2 N - 3$ nodes.
Now, for our induction hypothesis we assume that the operation is $O(\log_2 N)$ for $1,2,\ldots, k - 1$. Then, for some $k$, we can assume that $i < 2^{k-1}$ and $j > 2^{k-1}$ since otherwise, we
only descend one half of the tree, and it reduces to the $k - 1$ case. Now, given $[i, j)$ and some node $[l,r)$, we'll stop there if $[i,j) \cap [l,r) = \emptyset$ or $[l,r) \subseteq [i,j).$
Otherwise, we'll descend to the node's children. Now, we have assumed that $i < 2^{k-1} < j,$ so if we're on the left side of the tree, $j > r$ for all such nodes. We're not going to visit any nodes
with $r \leq i,$ we'll stop at nodes with $l \geq i$ and compare their max, and we'll descend into nodes with $l < i < r$. At any given node on the left side, if $[l,r)$ is not a leaf and $l < i <
r$, we'll choose to descend. Let the left child be $[l_l, r_l)$ and the right child be $[l_r,r_r)$. The two child segments are disjoint, so we will only choose to descend one of them since only one
of $l_l < i < r_l$ or $l_r < i < r_r$ can be true. Since $l_l = l < i$, we'll stop only at the right child if $l_r = i.$ If $i$ is not odd, we'll stop before we reach a leaf. Thus, the worst case is
when $i$ is odd.
On the right side, we reach a similar conclusion, where we stop when $r_l = j,$ and so the worst case is when $j$ is odd. To see this visually, here's an example of the query $[1,7)$ when $k = 3.$
Nodes where we visit the children are colored red. Nodes where we compare a max are colored green.
Thus, we'll descend at $2k - 1 = 2\log_2 N - 1$ nodes and compare maxes at $2(k-1) = 2(\log_2 N - 1)$ nodes, so $4\log_2 N - 3$ nodes are visited.
Max Queues
Now, the segment tree contains the max score at each $X$ coordinate, but we want our segment tree to only contain values corresponding to rocks that are within range of our current position. If
our current height is $Y$, we want rocks $j$ if $0 < Y - Y_j \leq dH.$
Recall that we visit the rocks in order of their $Y$ coordinate. Thus, for each $X$ coordinate we add the rock to some data structure when we visit it, and we remove it when it becomes out of range.
Since rocks with smaller $Y$ coordinates become out of range first, this is a first in, first out (FIFO) situation, so we use a queue.
However, when removing a rock, we need to know when to update the segment tree. So, the queue needs to keep track of maxes. We can do this with two queues. The primary queue is a normal queue. The
second queue will contain a monotone decreasing sequence. Upon adding to the queue, we maintain this invariant by removing all the smaller elements. In this way, the head of the queue will always
contain the max element since it would have been removed otherwise. When removing an element from the max queue, if the two heads are equal in value, we remove the head of each queue. Here is the implementation.
class MaxQueue<E extends Comparable<? super E>> extends ArrayDeque<E> {
    // deque holding a non-strictly decreasing subsequence of the elements
    private Deque<E> q;

    public MaxQueue() {
        q = new ArrayDeque<E>();
    }

    public void clear() {
        q.clear();
        super.clear();
    }

    public E poll() {
        if (!super.isEmpty() && q.peek().equals(super.peek())) q.poll();
        return super.poll();
    }

    public E remove() {
        if (!super.isEmpty() && q.peek().equals(super.peek())) q.remove();
        return super.remove();
    }

    public boolean add(E e) {
        // remove all the smaller elements from the back to keep q decreasing
        while (!q.isEmpty() && q.peekLast().compareTo(e) < 0) q.pollLast();
        q.addLast(e);
        return super.add(e);
    }

    public boolean offer(E e) {
        // remove all the smaller elements from the back to keep q decreasing
        while (!q.isEmpty() && q.peekLast().compareTo(e) < 0) q.pollLast();
        q.addLast(e);
        return super.offer(e);
    }

    public E max() {
        return q.element();
    }
}
With these two data structures the solution is pretty short. We keep one segment tree that stores the current max at each $X$ coordinate. For each $X$, we keep a queue to keep track of all possible
maxes. The one tricky part is to make sure that we look at all rocks at a certain height before updating the segment tree since lateral moves are not possible. Each rock is only added and removed
from a queue once, and we can find the max in $\log$ time, so the running time is $O(N\log N)$, where $N$ is the number of rocks. Here's the code.
import java.io.*;
import java.util.*;

public class CrossTheRiver {
    private static final int MAX_X = 100000;

    private static class Rock {
        final int x, y, score;
        Rock(int x, int y, int score) { this.x = x; this.y = y; this.score = score; }
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        PrintWriter out = new PrintWriter(new BufferedWriter(new OutputStreamWriter(System.out)));
        StringTokenizer st = new StringTokenizer(in.readLine());
        int N = Integer.parseInt(st.nextToken()); // rocks
        int H = Integer.parseInt(st.nextToken()); // height
        int dH = Integer.parseInt(st.nextToken()); // max y jump
        int dW = Integer.parseInt(st.nextToken()); // max x jump
        Rock[] rocks = new Rock[N];
        for (int i = 0; i < N; ++i) { // read through rocks
            st = new StringTokenizer(in.readLine());
            int Y = Integer.parseInt(st.nextToken());
            int X = Integer.parseInt(st.nextToken()); // 0 index
            int Z = Integer.parseInt(st.nextToken());
            rocks[i] = new Rock(X, Y, Z);
        }
        Arrays.sort(rocks, (a, b) -> a.y - b.y); // visit rocks in order of their Y coordinate
        long[] cumulativeScore = new long[N];
        MaxSegmentTree sTree = new MaxSegmentTree(MAX_X + 1);
        ArrayList<MaxQueue<Long>> maxX = new ArrayList<MaxQueue<Long>>(MAX_X + 1);
        for (int i = 0; i <= MAX_X; ++i) maxX.add(new MaxQueue<Long>());
        int i = 0; // current rock
        int j = 0; // in range rocks
        while (i < N) {
            int currentY = rocks[i].y;
            while (rocks[j].y < currentY - dH) {
                // clear out rocks that are out of range
                maxX.get(rocks[j].x).poll();
                if (maxX.get(rocks[j].x).isEmpty()) {
                    sTree.set(rocks[j].x, Long.MIN_VALUE);
                } else {
                    sTree.set(rocks[j].x, maxX.get(rocks[j].x).max());
                }
                ++j;
            }
            while (i < N && rocks[i].y == currentY) {
                // get previous max score from segment tree
                long previousScore = sTree.max(rocks[i].x - dW, rocks[i].x + dW + 1);
                if (rocks[i].y <= dH && previousScore < 0) previousScore = 0; // reachable from the shore
                cumulativeScore[i] = previousScore > Long.MIN_VALUE // make sure rock is reachable
                    ? rocks[i].score + previousScore
                    : Long.MIN_VALUE;
                // keep max queue up to date
                maxX.get(rocks[i].x).add(cumulativeScore[i]);
                ++i;
            }
            // now update segment tree, after all rocks at this height are scored
            for (int k = i - 1; k >= 0 && rocks[k].y == currentY; --k) {
                if (cumulativeScore[k] == maxX.get(rocks[k].x).max()) {
                    sTree.set(rocks[k].x, cumulativeScore[k]);
                }
            }
        }
        long maxScore = Long.MIN_VALUE;
        for (i = N - 1; i >= 0 && H - rocks[i].y <= dH; --i) {
            if (maxScore < cumulativeScore[i]) maxScore = cumulativeScore[i];
        }
        out.println(maxScore);
        out.flush();
    }
}
Built-In Training
Train deep learning networks using built-in training functions
After defining the network architecture, you can define training parameters using the trainingOptions function. You can then train the network using the trainnet function. Use the trained network to
predict class labels or numeric responses.
Deep Network Designer Design and visualize deep learning networks
dlnetwork Deep learning neural network
trainingOptions Options for training deep learning neural network
trainnet Train deep learning neural network (Since R2023b)
TrainingInfo Neural network training information (Since R2023b)
show Show training information plot (Since R2023b)
close Close training information plot (Since R2023b)
Learning Rate Schedules
piecewiseLearnRate Piecewise learning rate schedule (Since R2024b)
warmupLearnRate Warm-up learning rate schedule (Since R2024b)
polynomialLearnRate Polynomial learning rate schedule (Since R2024b)
exponentialLearnRate Exponential learning rate schedule (Since R2024b)
cosineLearnRate Cosine learning rate schedule (Since R2024b)
cyclicalLearnRate Cyclical learning rate schedule (Since R2024b)
testnet Test deep learning neural network (Since R2024b)
accuracyMetric Deep learning accuracy metric (Since R2023b)
aucMetric Deep learning area under ROC curve (AUC) metric (Since R2023b)
fScoreMetric Deep learning F-score metric (Since R2023b)
precisionMetric Deep learning precision metric (Since R2023b)
recallMetric Deep learning recall metric (Since R2023b)
rmseMetric Deep learning root mean squared error metric (Since R2023b)
predict Compute deep learning network output for inference
minibatchpredict Mini-batched neural network prediction (Since R2024a)
scores2label Convert prediction scores to labels (Since R2024a)
confusionchart Create confusion matrix chart for classification problem
sortClasses Sort classes of confusion matrix chart
classifyAndUpdateState (Not recommended) Classify data using a trained recurrent neural network and update the network state
Featured Examples
Comparing Heap and Sleep Sorting Algorithms
Published on Saturday, February 24, 2024
Imagine you’re building an app and need to sort a massive list of data – maybe product prices, customer names, or high scores. Choosing the right sorting algorithm can make a huge difference in
performance. Today, we’ll pit two popular contenders against each other: heap and sleep.
Before we dive into the code, let’s briefly explore the basics of both algorithms. If you’re eager to see the action, feel free to jump straight to the code comparison here.
Heap Sort
Heap Sort is a powerful sorting algorithm that’s often used in various applications due to its efficiency and in-place nature.
A Brief History
Heap sort was first described in 1964 by J. W. J. Williams. However, Robert W. Floyd quickly improved upon Williams’ algorithm in the same year, making it possible to sort the array in-place without
requiring extra memory.
How It Works
Heap sort works by first building a max heap from the input array. A max heap is a complete binary tree where the value of each node is greater than or equal to the values of its children. Once the
heap is built, the largest element is at the root.
1. Build Max Heap: Create a max heap from the input array.
2. Extract Maximum: Swap the root element (largest element) with the last element of the heap.
3. Heapify: Restore the max heap property by calling the heapify function on the root node.
4. Repeat: Repeat steps 2 and 3 until the entire array is sorted.
Time Complexity
The time complexity of heap sort is $O(n \log n)$ in both the average and worst-case scenarios. This makes it a very efficient sorting algorithm for large datasets.
Advantages and Disadvantages
Advantages:
• Efficient for large datasets
• In-place sorting, requiring minimal extra memory
• Can be used for priority queues
Disadvantages:
• Can be slightly slower than quicksort in the average case
• Not a stable sort, unlike some other sorting algorithms
When to Use Heap Sort
Heap sort is a good choice for:
• Large datasets: Its $O(n \log n)$ time complexity makes it suitable for sorting large arrays.
• Priority queues: Heap sort can be used to implement priority queues efficiently.
• Applications where space efficiency is important: Heap sort is an in-place algorithm, requiring minimal extra memory.
In conclusion, heap sort is a powerful and efficient sorting algorithm that’s widely used in various applications. Understanding its principles and advantages can help you make informed decisions
when choosing a sorting algorithm for your specific needs.
Sleep Sort
Sleep Sort stands out as a unique and somewhat humorous entry in the world of sorting algorithms. It emerged in 2011 on the anonymous online forum 4chan and gained traction on the popular tech
discussion platform Hacker News.
A Lighthearted Approach
Sleep sort takes a rather unconventional approach to sorting. It works by creating separate threads for each element in the input array. Each thread then “sleeps” for a duration proportional to the
value of its corresponding element. Once a thread wakes up, it adds its element to a final sorted list.
Think of it like this: Imagine sorting a list of tasks by their deadlines. Sleep sort would assign each task a separate worker. The worker for the task with the furthest deadline would sleep the
longest, while the one with the closest deadline would wake up first. In the end, the tasks would be completed (and thus sorted) in order of their deadlines.
Here’s a simplified breakdown of the process:
1. Thread Creation: For each element in the array, a separate thread is created.
2. Sleeping Beauty: Each thread sleeps for a time proportional to its associated element’s value.
3. Wake Up Call: When a thread wakes up, it adds its element to a final sorted list.
4. The Grand Finale: Once all threads finish sleeping, the final list contains the elements in sorted order.
Not Exactly Lightning Speed
While the concept is lighthearted and entertaining, sleep sort is not a champion for efficiency. Its time complexity is a hefty $O(n^2)$, which means its sorting time increases significantly as the
list size grows. This makes it impractical for real-world applications where speed is a critical factor.
A Learning Opportunity
Despite its limitations as a practical tool, sleep sort offers a valuable learning experience. It showcases alternative approaches to sorting and highlights the importance of time complexity when
choosing an algorithm.
In conclusion, sleep sort serves as a reminder that sorting algorithms can be both innovative and entertaining. However, for real-world scenarios, it’s best to stick with established algorithms that
deliver superior performance.
The Clash
We put both algorithms to the test with a battlefield of 3500 random numbers. Now, let’s see who emerges victorious!
Now that we have some data to test on, we want to add the algorithm for the heap sort. This goes as follows.
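A minimal in-place heap sort might look like this (an illustrative Python sketch; the post's original implementation may differ):

```python
def heap_sort(arr):
    """In-place heap sort: build a max heap, then repeatedly move the root to the end."""
    n = len(arr)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree rooted at `root`.
        while (child := 2 * root + 1) < end:
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1  # pick the larger child
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    # 1. Build max heap (heapify from the last parent down to the root).
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # 2-4. Extract the maximum repeatedly and re-heapify the shrunken prefix.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr
```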
And of course the sleep sort as well, otherwise we won’t have anything to compare against.
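The sleep-sort idea can be sketched in a few lines of Python (illustrative only; because it relies on timer granularity, it is unreliable for close-together or large values):

```python
import threading
import time

def sleep_sort(values, scale=0.05):
    """Sleep sort: one thread per value; each sleeps proportionally
    to its value, then appends it to the shared result list."""
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * scale)
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result
```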
Now, let’s test the two against one another.
Delve deeper:
For even more sorting options, explore our collection of sorting algorithms. Want to get your hands dirty with the code? Head over to heap sort VS. sleep sort Implementation.
The Winner
Brace yourselves! The benchmark revealed that the heap sort is a staggering 3048.57x faster than its competitor! That translates to running the heap sort almost 3049 times in the time it takes the
sleep sort to complete once!
The A.I. Nicknames the Winners:
We consulted a top-notch AI to give our champion a superhero nickname. From this day forward, the heap sort shall be known as The Heap Hero! The sleep sort, while valiant, deserves recognition too.
We present to you, The Snooze Button!
The Choice is Yours, Young Padawan
So, does this mean the heap sort is the undisputed king of all sorting algorithms? Not necessarily. Different algorithms have their own strengths and weaknesses. But understanding their efficiency
(which you can learn more about in the Big-O Notation post) helps you choose the best tool for the job!
This vast world of sorting algorithms holds countless possibilities. Who knows, maybe you’ll discover the next champion with lightning speed or memory-saving magic!
This showdown hopefully shed light on the contrasting speeds of heap and sleep sorting algorithms. Stay tuned for more algorithm explorations on the blog.
Non Homogeneous Transformation
Volume 09, Issue 02 (February 2020)
Non Homogeneous Transformation
DOI : 10.17577/IJERTV9IS020033
Download Full-Text PDF Cite this Publication
Mr. T. Srinivasarao , Dr. V. Mallipriya, 2020, Non Homogeneous Transformation, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 09, Issue 02 (February 2020),
• Open Access
• Authors : Mr. T. Srinivasarao , Dr. V. Mallipriya
• Paper ID : IJERTV9IS020033
• Volume & Issue : Volume 09, Issue 02 (February 2020)
• Published (First Online): 11-02-2020
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Non Homogeneous Transformation
University College of Engg. Science & Technology Adikavi Nannaya University
Andhra Pradesh India
Dr.V.Mallipriya Asst.Professor Dept.Of Math.
Adikavi Nannaya University Andhra Pradesh
Abstract: A linear transformation $A: V(F) \to W(F)$ maps $0_V \mapsto 0_W$, which helps to confirm that the range is a subspace of the vector space $W(F)$ and the null space is a subspace of $V(F)$.
So, if the range space is a plane or a $k$-dimensional hyperplane in the $n$-dimensional space, then the null space is a line or an $(n-k)$-dimensional subspace. Further, they both have their point of
intersection at the origin, resulting in orthogonal complement spaces of each other.
Since the row space of the matrix of $A$ is the range space and the column null space is the null space of $A$, if $V(F) = W(F)$, then the row space and the column null space become the orthogonal
complement subspaces of $V(F)$. So, it follows that a linear transformation can map the Euclidean space, a plane, or a line to the zero space, a line, or a plane as its orthogonal complement
spaces respectively.
In the present discussion, restricting the discussion to 3 dimensions, we wish to find a transformation that do not map the spaces through the origin and that identifies any point in the space at a
fixed distance from the given point.
The set of points at a fixed distance from a fixed point make a sphere in $n$-space. In a Euclidean space, the standard basis vectors are the unit vectors along the co-ordinate axes and the
representation of an arbitrary point in the Euclidean space is $(x, y, z)$, which, when transformed into the spherical co-ordinate system (the angles between the co-ordinate axes being right angles),
gives the angle measured from the positive part of the $x$-axis at the origin to the projection of the terminal ray, denoted by $\theta$, and the angle between the projection and the terminal ray
at the origin, denoted by $\phi$. For each unique pair $\theta$ and $\phi$, we get a unique point identified by the transformation from the given point.
If the fixed point in the space is $P(a,b,c)$ and $Q(x, y, z)$ is any point in the space, then the fixed distance
$$\rho = \sqrt{(x-a)^2 + (y-b)^2 + (z-c)^2}, \qquad 0 \leq \theta \leq 2\pi \text{ and } 0 \leq \phi \leq \pi,$$
is the radius of the sphere satisfying the conditions
$$\theta = \tan^{-1}\frac{y-b}{x-a} \quad\text{and}\quad \phi = \cos^{-1}\frac{z-c}{\rho}.$$
Chapter 1:
Definition 1.1: $A: \mathbb{R}^3 \to \mathbb{R}^3$ given by
$$A(x, y, z) = (\rho\cos\theta\sin\phi,\ \rho\sin\theta\sin\phi,\ \rho\cos\phi)$$
is a transformation for a fixed triad $(\rho, \theta, \phi)$ that can transform any point in the Euclidean space to any other point required, depending on the unique triad.
In matrix form,
$$\begin{pmatrix} x(\rho,\theta,\phi) \\ y(\rho,\theta,\phi) \\ z(\rho,\theta,\phi) \end{pmatrix}
= \begin{pmatrix} \rho\cos\theta\sin\phi \\ \rho\sin\theta\sin\phi \\ \rho\cos\phi \end{pmatrix}
= \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix}
\begin{pmatrix} 0 \\ 0 \\ \rho \end{pmatrix}$$
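The spherical-coordinate map of Definition 1.1 factors as a rotation about the $z$-axis composed with a rotation about the $y$-axis applied to $(0,0,\rho)^T$. This Python sketch (added for illustration, with arbitrarily chosen test values) checks that numerically:

```python
import math

def rot_z(t):
    # rotation by angle t about the z-axis
    return [[math.cos(t), -math.sin(t), 0],
            [math.sin(t),  math.cos(t), 0],
            [0, 0, 1]]

def rot_y(p):
    # rotation by angle p about the y-axis
    return [[math.cos(p), 0, math.sin(p)],
            [0, 1, 0],
            [-math.sin(p), 0, math.cos(p)]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

rho, theta, phi = 2.0, 0.7, 1.1  # arbitrary test triad
direct = [rho * math.cos(theta) * math.sin(phi),
          rho * math.sin(theta) * math.sin(phi),
          rho * math.cos(phi)]
factored = matvec(rot_z(theta), matvec(rot_y(phi), [0.0, 0.0, rho]))
```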
Remark 1.2: for each fixed value of $\phi$, taking $0 \leq \theta \leq 2\pi$, a horizontal circle is formed which identifies the locus of the circle; while increasing $\phi$ by a small angle, a new
circle is formed, and the family of these circles will form the entire sphere, upon which every point can be identified with the help of the transformation $A$.
To identify the transformation that uses the spherical co-ordinate system, conveniently shift the rectangular frame of reference $OXYZ$ to $PX'Y'Z'$ such that $OX \,\&\, PX'$, $OY \,\&\, PY'$,
$OZ \,\&\, PZ'$ are respectively parallel. The new rectangular frame will identify the point $Q(x, y, z)$ from $P(a,b,c)$ and all those points which are at the constant distance $PQ$.
Definition 1.3: a transformation $A: \mathbb{R}^3 \to \mathbb{R}^3$ given by
$$A(x, y, z) = (a + \rho\cos\theta\sin\phi,\ b + \rho\sin\theta\sin\phi,\ c + \rho\cos\phi)$$
does not pass through the origin for $(a,b,c) \neq (0,0,0)$.
This transformation can be called a non-homogeneous transformation.
The matrix form of this transformation will be
$$\begin{pmatrix} x(\rho,\theta,\phi) \\ y(\rho,\theta,\phi) \\ z(\rho,\theta,\phi) \end{pmatrix}
= \begin{pmatrix} a + \rho\cos\theta\sin\phi \\ b + \rho\sin\theta\sin\phi \\ c + \rho\cos\phi \end{pmatrix}
= \begin{pmatrix} a \\ b \\ c \end{pmatrix}
+ \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix}
\begin{pmatrix} 0 \\ 0 \\ \rho \end{pmatrix}$$
Chapter 2:
Definition 2.1: the shape of a polygon $P$ is translation invariant, or simply a polygon $P$ is translation invariant, under a transformation if
$$A\big(x(\rho,\theta_k,\phi),\ y(\rho,\theta_k,\phi),\ z(\rho,\theta_k,\phi)\big) = A\big(x(\rho,\theta,\phi),\ y(\rho,\theta,\phi),\ z(\rho,\theta,\phi)\big)$$
For a fixed value of $\phi$, the transformation $A$ acts on a circle. Further, for the fixed value of $\phi$, the transformation rotates the $n$-th roots of unity $\theta_k = \frac{2k\pi}{n}$,
$1 \leq k \leq n$, $n \geq 3$; treating them as the vertices, they form a regular $n$-gon joining them in the order of rotation, which is translation invariant under the transformation $A$.
Definition 2.2: a group $(G, \oplus)$ is a translation group if the operation $\oplus$ is a metric or a geometric operation on it.
Theorem 2.3: a group formed by $\left\{\theta_k = \frac{2k\pi}{n} : 1 \leq k \leq n,\ n \geq 3\right\}$ under addition of angles is a translation group.
Clearly, addition of angles is a metric on a circle.
For each fixed value of $\phi$, $0 \leq \phi \leq \pi$, the non-homogeneous transformation defines a circle (a horizontal circle if it is considered to be in the Euclidean space) upon which, for
particular values of $k$, $1 \leq k \leq n$, the translation group operates under
$$\theta_k \oplus \theta_j = \frac{2k\pi}{n} + \frac{2j\pi}{n} \equiv \frac{2(k+j)\pi}{n} \pmod{2\pi}, \qquad 1 \leq k, j \leq n,\ n \geq 3.$$
See that $0 \leq \frac{2k\pi}{n} \leq 2\pi$ and $0 \leq \frac{2j\pi}{n} \leq 2\pi$ result in $0 \leq \frac{2(k+j)\pi}{n} \bmod 2\pi \leq 2\pi$.
But, if the resultant angle is equal to $2\pi$, then we consider it as $0$, letting $2\pi$ represent the vertex of the $n$-gon upon the positive part of the $x$-axis.
This verifies that the set is closed under $\oplus$.
$\theta_{n-k} = \frac{2(n-k)\pi}{n}$ is the member of the set such that
$$\frac{2k\pi}{n} \oplus \frac{2(n-k)\pi}{n} \equiv 0 \pmod{2\pi},$$
which verifies the identity and the inverse.
Note: the translation is geometrically invariant. However, this is the source of discussion for the permutation groups with instances of dihedral group.
Hierarchy builders and type classes
What do you think of Hierachy Builder (https://github.com/math-comp/hierarchy-builder). Is this something that you could use in your own projects? Is it something that we could recommend blindly to
mathematician newcomers?
I dont know much about it. From what @Robbert Krebbers says, it is basically a generalization of what math-comp/ssreflect are doing?
unfortunately their approach doesn't work for us because we need the hierarchy to interact well with typeclasses, and the unification algorithm that typeclasses use is incompatible with ssreflect
hierarchies. that's okay for them as they use their own apply: tactic everywhere which uses the new(er) unification, and they dont use type classes at all. but we are mixing canonical structures and
typeclasses (for various reasons, one of them being entirely out of our control: Proper instances so we can rewrite -- ssreflect uses normal = everywhere but we really need setoids), and so we have
no choice, things have to work with old unification.
so we had to heavily tweak their approach to make it work "most of the time", and even then about once a month we discover a situation where one of our instances doesnt fire and using apply: instead
makes it work. it is quite painful.
so based on that @Théo Zimmermann I would answer your question about recommending it blindly with "no". there are some major caveats.
OK, thanks! Indeed, the incompatibility with setoid rewrite is quite a bummer given how important the latter is (especially to mathematicians who are used to work with quotients).
correspondingly, https://github.com/coq/coq/issues/6294 is easily #1 on the list of coq bugs that we'd really like to see fixed.^^ (but yes I know it's a really hard problem.)
@Cyril Cohen @Kazuhiko Sakaguchi you may be interested in the points made by Ralf. Can we do something to make HB more setoid friendly? (apart from reimplementing setoids...)
@Ralf Jung is setoids the only issue you have? I imagine it is because of binders in Iris, am I correct? ("newcomer mathematician" -> "needs setoids" seems a quite strong assumption to me, this is
why I'm asking for more details)
BTW: I tried porting the iris hierarchy to the hierarchy builder to see what would happen but a) it really doesn't do type class resolution (so the field types are basically the Set Printing All
output of the original version) and b) after some fields it just started reporting internal errors to me. I might still have the branch somewhere.. I should probably report bugs?
Enrico Tassi said:
Ralf Jung is setoids the only issue you have? I imagine it is because of binders in Iris, am I correct? ("newcomer mathematician" -> "needs setoids" seems a quite strong assumption to me, this is
why I'm asking for more details)
we use typeclasses for a lot of things, not just setoids. setoids is just the thing we control least.^^
not sure what you mean by binders here. iris has tons of objects where Coq's = is just not the right notion of equality, we need something more extensional. on top of that we use step-indexing which
gives rise to a family dist: nat -> T -> T -> Prop of equivalence relations, and we also need good support for rewriting with dist n.
@Janno Please open an issue about your porting experience, it can be interesting to us developing it. HB does nothing about type classes, nothing at all, so it is not clear to me what you expect.
(maybe we should move these messages in the HB stream)
(Is there such a stream?)
There is!
This topic was moved here from #Coq users > Hierarchy builders and type classes by Théo Zimmermann
Enrico Tassi said:
Cyril Cohen Kazuhiko Sakaguchi you may be interested in the points made by Ralf. Can we do something to make HB more setoid friendly? (apart from reimplementing setoids...)
I am not sure there would be an issue with setoid. HB only handles hierarchies themselves, fields can be anything, in particular a setoid equality that you can base instances of Proper on, I think.
Interaction issues between canonical structures and typeclasses appear mainly when you rely on both of them at the same time to infer one object. If it happens with HB, I am curious to see how
@Cyril Cohen
I am not sure there would be an issue with setoid. HB only handles hierarchies themselves, fields can be anything, in particular a setoid equality that you can base instances of Proper on, I think.
as mentioned before, the issue with this is that TC search is unable to apply many lemmas whose statement involves canonical structures, including in some cases the projections of the structures
the only reason ssreflect works at all is that they use apply: instead of apply
this is a long-standing problem in Coq's unification algorithm (https://github.com/coq/coq/issues/6294)
What is the status of the hierarchy builder and type classes? Did anyone experiment? Is there a detailed proposal on what to do?
@Janno Did you ever write down your experience?
@Enrico Tassi @Cyril Cohen @Kazuhiko Sakaguchi you write:
We believe it would take a minor coding effort to retarget HB to another bundling approach.
Is more information available?
We did not try. I still think it would not be too hard, but my priority is to have math-comp be ported to HB. We are getting close, but we are not there yet. For example we managed to compile the odd
order theorem on top of MC+HB, but we still have some performance issues to solve before declaring victory.
If you really want TC support in HB, we happily accept contributions ;-)
We could try to have a look at it, but it would be good to have some guidance on the "minor coding effort".
Last updated: Oct 13 2024 at 01:02 UTC
seminars - On class groups of random number fields
• Date/Time: March 8 (Mon), 4:30 PM - 5:30 PM
March 15 (Mon), 4:30 PM - 5:30 PM
March 22 (Mon), 4:30 PM - 5:30 PM
• Zoom: ID 858 8649 9647 / PW 810093
I will begin by recalling the classical Cohen-Lenstra-Martinet heuristics on the statistical behaviour of class groups of number fields in families. I will then present joint work with Hendrik W.
Lenstra Jr. in which we rephrase the heuristics in terms of Arakelov class groups of number fields, thereby explaining the otherwise somewhat mysterious looking probability weights in the original
heuristics; but also disprove the heuristics in two different ways, and propose corrections.
(meeting information also available at https://researchseminars.org/seminar/ArithmeticMonday)
CS5800: Assignment 4 solution
1. (30 points)
Suppose you are a high-level manager in a software firm and you are managing n software projects.
You are asked to assign m of the programmers in your firm among these n projects. Assume that all of
the programmers are equally competent.
After some careful thought, you have figured out how much benefit i programmers will bring to project j.
View this benefit as a number. Formally put, for each project j, you have computed an array Aj [0..m] where
Aj [i] is the benefit obtained by assigning i programmers to project j. Assume that Aj [i] is nondecreasing
with increasing i. Further make the economically sound assumption that the marginal benefit obtained
by assigning an ith programmer to a project is non-increasing as i increases. Thus, for all j and i ≥ 1,
Aj [i + 1] − Aj [i] ≤ Aj [i] − Aj [i − 1].
(a) Design a greedy algorithm to determine how many programmers you will assign to each project such
that the total benefit obtained over all projects is maximized. Your answer should be in the form of
a sequence of clearly stated steps. Pseudo-code is optional.
(b) Justify the correctness of your algorithm and analyze its efficiency in space and time.
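Since the marginal benefits are non-increasing, a marginal-gain greedy is a natural fit for part (a). The sketch below is one possible solution in Python, not necessarily the official one; the function name and the input convention (A as a list of benefit arrays) are illustrative.

```python
import heapq

def assign_programmers(A, m):
    """Assign m programmers to len(A) projects to maximize total benefit.

    A[j][i] is the benefit of giving i programmers to project j, with
    non-increasing marginal gains. Repeatedly hand the next programmer
    to the project whose current marginal benefit is largest.
    Runs in O(m log n) time and O(n) extra space.
    """
    n = len(A)
    counts = [0] * n
    # Max-heap (negated) of (marginal benefit of the next programmer, project).
    heap = [(-(A[j][1] - A[j][0]), j) for j in range(n) if len(A[j]) > 1]
    heapq.heapify(heap)
    for _ in range(m):
        if not heap:
            break                      # every project is saturated
        _, j = heapq.heappop(heap)
        counts[j] += 1
        i = counts[j]
        if i + 1 < len(A[j]):          # project j can still absorb one more
            heapq.heappush(heap, (-(A[j][i + 1] - A[j][i]), j))
    return counts
```

The concavity assumption is what makes the justification in part (b) go through: an exchange argument converts any optimal assignment into the greedy one without losing benefit.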
2. (30 points)
The Museum of Fine Art is planning to construct a new wing to showcase paintings of contemporary
art. The new wing consists of a single, long corridor. Paintings roughly of size 2×2 feet will be hung along
both walls of this corridor. Their centers are placed along distances x1, x2, …xn from the start of their
respective walls, all at the same height.
An architect is trying to design how to light this corridor. She has to fit linear panels of light (fluorescent
tube lights) above each wall, along it. Each panel of light reliably provides light within a horizontal span
of m feet at the height at which the paintings are hung (same for all panels). Each painting is lit if it is
within the horizontal span of at least one panel. Panels of light are significantly longer than the span of each painting (m ≫ 2), so she is ready to assume that each painting is a dot at its center. Her problem is
to place a minimum number of panels of lights along and above each wall so that each painting is lit.
(a) State the technical problem (i.e. state the actual computational problem without context, in a way
that it can be applied to any other context).
(b) Provide an efficient algorithm to compute the centers of the panels of light along one wall, obeying
the above constraints. Your algorithm should be in the form of a sequence of clear and precise steps.
Pseudo-code is optional.
(c) Justify the correctness of your algorithm and analyze its efficiency in space and time.
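For part (b), assuming a panel of span m centered at c lights every painting in [c - m/2, c + m/2], one standard greedy (sketched here as an illustration, not the official solution) anchors each new panel at the leftmost unlit painting:

```python
def place_panels(x, m):
    """Centers for the fewest span-m light panels covering all points in x.

    Greedy: start a panel's span at the leftmost unlit painting, so the
    panel (centered half a span to its right) lights every painting
    within m feet of that one; repeat. O(n log n) time for the sort.
    """
    xs = sorted(x)
    centers = []
    i, n = 0, len(xs)
    while i < n:
        start = xs[i]                  # leftmost painting not yet lit
        centers.append(start + m / 2)  # this panel covers [start, start + m]
        while i < n and xs[i] <= start + m:
            i += 1                     # skip every painting this panel lights
    return centers
```

The usual exchange argument shows no placement does better: some panel must cover the leftmost unlit painting, and sliding that panel as far right as allowed only lets it cover more.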
3. (30 points)
“Elixir of Life” is a milk bank that provides mother’s milk to newborn babies in their critical first few
months of life. They accept donations from mothers, homogenize and pasteurize it and then package them
into vials of 1, 5, 10, 20 and 50 ounces. Then they supply to local hospitals for a fee to cover their costs
for processing and packaging, which must be done in extremely hygienic conditions. As the milk bank is
sustained only through donations, there are a limited number of vials of each size.
(a) When new parents come to the bank with a prescription of m ounces, the bank must dispense the
amount to them. Note that due to hygiene issues the bank is not allowed to open the packaged vial
to dispense part of an amount. However you may assume that m is an integral number. Design an
algorithm to dispense exactly m ounces using the minimum number of vials. Your answer should
include a short description about why your algorithm returns the optimal answer. Your answer must
be in the form of a sequence of clear and precise steps. Pseudo-code is optional.
(b) Justify the correctness of your algorithm and analyze its efficiency in space and time.
(c) How will you know if dispensing the amount is even possible?
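A caution for parts (a) and (c): with limited stock, the largest-first greedy can fail. For example, to dispense 60 oz from one 50 oz vial and three 20 oz vials, taking the 50 leads to a dead end while 20+20+20 succeeds. One way to handle both minimality and feasibility (an illustrative approach, not necessarily the intended solution) is a bounded coin-change DP:

```python
def min_vials(m, supply):
    """Fewest vials dispensing exactly m ounces, honoring limited stock.

    supply maps vial size (oz) -> number available. Returns the minimum
    vial count, or None if exactly m ounces cannot be dispensed, which
    answers part (c). Each physical vial is a 0/1 item, so this is
    bounded coin change: O(m * total_vials) time, O(m) space.
    """
    INF = float("inf")
    best = [0] + [INF] * m        # best[a] = fewest vials summing to a oz
    for size, count in supply.items():
        for _ in range(count):    # each copy of this vial used at most once
            for amt in range(m, size - 1, -1):
                if best[amt - size] + 1 < best[amt]:
                    best[amt] = best[amt - size] + 1
    return best[m] if best[m] < INF else None
```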
On the natural frequency and vibration mode of composite beam with non-uniform cross-section
In this paper, the vibratory properties and expression of natural modes of laminated composite beam with variable cross-section ratios of elastic modulus and density along the axis of the beam have
been investigated via theoretical analysis. Based on the generalized Hamilton principle, the longitudinal and transverse vibration equations have been deduced by the means of variational method.
Then, the natural frequencies of longitudinal and transverse vibration modes have been obtained using the method of power series, which agree well with finite element simulations. The first-order
natural frequencies of longitudinal and transverse of composite beams are plotted as a function of the elastic modulus or densities difference of two components. With distinct material
characteristics, the effect of shape factor on the first and second order lateral modes of composite beam is also revealed. In addition, the study shows that the boundary conditions impose a strong
effect on the shape factor. The method presented in this paper is not only suitable for the laminated composite beam with variable cross-section, but will also be applicable to more general cases of
composite beams of complex geometry and component in vibration mechanics. This controllable vibration performance achieved in this paper may shed some light on and stimulate new architectural design
of composite engineering structures.
1. Introduction
In mechanical, aerospace and civil structures and vehicles, laminated structural components have received wide applications thanks to their distinct advantages of high strength, stiffness-to-weight
ratios and high bending rigidity [1, 2]. An important example is the widely used steel-concrete components in bridges [3, 4]. For structural applications, the eigenmodes and vibration characteristics
of composite beams are of great importance [5]. Extensive theoretical and numerical approaches were carried out to explore the natural frequencies of laminated structures. Tseng et al. [6] applied
the stiffness analysis method involving shear deformation and rotation inertia to determine the natural frequencies of laminated beams with arbitrary curvatures. Banerjee [7] presented the frequency
equation and mode shapes of composite Timoshenko beams by symbolic computing. Rao et al. [8] proposed a higher-order mixed theory for determining natural frequencies of several laminated simply
supported beams. Chen et al. [9] combined the state space method and the differential quadrature method for freely vibrating laminated beams. An excellent overview of recent advances in straight and
curved composite beam models may be found in [10]. Wu and Chen [11] and Qu et al. [12] compared a few free vibration modes of laminated beams using the shear deformation-based theories. By using the
Rayleigh-Ritz (R-R) method, Gunda et al. [13] studied large amplitude vibration of laminated composite beam with symmetric and asymmetric layup orientations.
Most previous studies focused on composite beams with uniform axial properties, while composite beams with variable cross-sections may be needed to better fit the stress distribution or to achieve a
better structural performance [14-17]. Present study emphasizes the normal mode analysis of composite beam with non-uniform constituent cross-section properties. The attention is limited to composite
beams with a constant overall geometrical section dimension, but the height ratio between the components varies along the axial direction (Fig. 1), the configuration of which is similar to the widely
used and studied scarf joint technique for fabricating composite structures [18-20]. It is noted that the algorithms presented here are also applicable to study other composite beams with more
complex cross-section properties, for example, the jagged joint interface between two components.
Any general structural deformation can be regarded as a superposition of its fundamental normal modes, therefore, the present study emphasizes the normal mode analysis of composite beam with
non-uniform constituent cross-section properties. To focus on the effect of constituent ratio, the attention is limited to composite beams with a constant overall geometrical section dimension, but
the height ratio between the components varies along the axial direction (Fig. 1).
Many previous analytical and computational studies focused on the nonlinear vibration equations, using differential quadrature method (DQM) [21-22], dynamic stiffness method [23], finite element
method (FEM) [24] and finite difference method [25], etc. In this paper, we adopt the variational method to obtain the kinetic equations of composite beams with variable cross-section ratios of
elastic modulus, and to deduce the analytical expression of natural modes of longitudinal and transverse vibrations. Numerical validations are carried out using finite element simulations.
Though this study focuses on a rather simple composite beam with two components attached on their inclined surface (Fig. 1), the algorithms presented here can be easily extended to be applicable to
studying composite beams of complex geometry and multiple components. One example is to study the vibration mode and frequency of the helicopter rotor blades which consist of multiple different
material components and each component’s cross-section may be varying along its length direction. This study is also intended to stimulate new design of composite structures. A widely used composite
beam structure is the composite bridges usually composed of a uniform layer of steel, a uniform layer of concrete and probably a uniform layer of fiber reinforced polymer deck. We believe that a more
appropriate design of the bridge with varying cross-section of each material component (for example, increasing the thickness of the concrete layer in the middle region of the bridge) may be able to
spread the load on the bridge, thus exploiting the inherent advantages of each of its material. Besides, it would be routine to analyze the natural frequency and vibration mode of a number of
composite structures with various joint interface geometries [14, 17] based on the formulas presented here.
2. Modeling and mathematics formulation
Consider a beam of rectangular cross-section, composed of two materials with their ratio changing along the axial direction. This is equivalent to bonding two rectangular beams with variable cross-sections (Fig. 1). Such a configuration may be applicable to the composite components in bridges to allow a better fitting of the bending moment distribution of the bridge due to external loading or
used as a scarf joint for composite structures. It is worth noting that our normal mode analysis algorithms presented below are not limited to this configuration, but can be easily adapted to study
other composite beams with more complex $x$-$y$ cross-section properties. All beams considered here are Bernoulli-Euler beams.
The elastic modulus, cross-sectional area, moment of inertia, length and density of the upper beam are ${E}_{1}$, ${A}_{1}\left(x\right)$, ${I}_{1}\left(x\right)$, $l$, ${\rho }_{1}$, respectively,
while the ones of lower beam are ${E}_{2}$, ${A}_{2}\left(x\right)$, ${I}_{2}\left(x\right)$, $l$, ${\rho }_{2}$. The width of the beam is $b$. Thus, the cross-sectional areas of upper and lower
beams are:
$$A_1(x)=b\,(a_{10}+kx),\qquad A_2(x)=b\,(a_{20}-kx),$$
where $a_{10}+a_{20}=a$ and $k=\left(a_{20}-a_{10}\right)/l$.
The moments of inertia of the two beams about the mid-plane of the section can be derived as:
$$I_1(x)=\frac{b}{3}\left[\left(\frac{a}{2}\right)^3-\left(\frac{a}{2}-a_{10}-kx\right)^3\right],\qquad I_2(x)=\frac{b}{3}\left[\left(\frac{a}{2}\right)^3-\left(\frac{a}{2}-a_{20}+kx\right)^3\right].$$
According to the neutral-plane assumption, the longitudinal (axial) deformation $u$ of the upper and lower beams is the same; a similar conclusion holds for the transverse deformation $w$. Shear deformation is neglected.
The kinetic energy of the system is:
$$T_1=\frac{1}{2}\int_0^l \rho_1 A_1(x)\left(\dot{u}_1^2+\dot{w}_1^2\right)dx,\qquad T_2=\frac{1}{2}\int_0^l \rho_2 A_2(x)\left(\dot{u}_2^2+\dot{w}_2^2\right)dx.$$
The potential energy of the system is:
$$V_1=\frac{1}{2}\int_0^l\left[E_1A_1(x)\left(\frac{\partial u_1}{\partial x}\right)^2+E_1I_1(x)\left(\frac{\partial^2 w_1}{\partial x^2}\right)^2\right]dx,$$
$$V_2=\frac{1}{2}\int_0^l\left[E_2A_2(x)\left(\frac{\partial u_2}{\partial x}\right)^2+E_2I_2(x)\left(\frac{\partial^2 w_2}{\partial x^2}\right)^2\right]dx.$$
Using variational principle, one obtains:
$$\delta\int_{t_0}^{t_1}T\,dt=\int_{t_0}^{t_1}\int_0^l\left[\rho_1A_1(x)+\rho_2A_2(x)\right]\left(-\ddot{u}\,\delta u-\ddot{w}\,\delta w\right)dx\,dt,$$
$$\delta\int_{t_0}^{t_1}V\,dt=\int_{t_0}^{t_1}\int_0^l\left\{-\frac{\partial}{\partial x}\left[\left(E_1A_1(x)+E_2A_2(x)\right)\frac{\partial u}{\partial x}\right]\delta u+\frac{\partial^2}{\partial x^2}\left[\left(E_1I_1(x)+E_2I_2(x)\right)\frac{\partial^2 w}{\partial x^2}\right]\delta w\right\}dx\,dt.$$
Based on the generalized Hamilton principle, the following equations with independent $\delta u$ and $\delta w$ are obtained:
$$\left[\rho_1A_1(x)+\rho_2A_2(x)\right]\frac{\partial^2 u}{\partial t^2}-\left[E_1\frac{\partial A_1(x)}{\partial x}+E_2\frac{\partial A_2(x)}{\partial x}\right]\frac{\partial u}{\partial x}-\left[E_1A_1(x)+E_2A_2(x)\right]\frac{\partial^2 u}{\partial x^2}=0,$$
$$\left[\rho_1A_1(x)+\rho_2A_2(x)\right]\frac{\partial^2 w}{\partial t^2}+\left[E_1I_1(x)+E_2I_2(x)\right]\frac{\partial^4 w}{\partial x^4}+2\left[E_1\frac{\partial I_1(x)}{\partial x}+E_2\frac{\partial I_2(x)}{\partial x}\right]\frac{\partial^3 w}{\partial x^3}+\left[E_1\frac{\partial^2 I_1(x)}{\partial x^2}+E_2\frac{\partial^2 I_2(x)}{\partial x^2}\right]\frac{\partial^2 w}{\partial x^2}=0.$$
Fig. 1. Beams with different cross-sectional areas
3. Mathematical transformation and calculation
3.1. The longitudinal deformation equation
The longitudinal deformation equation is considered first. From Eq. (8), it is assumed that the principal vibration mode of the beam is:
$$u(x,t)=U(x)\,b\sin(\omega t+\phi).$$
Combining Eqs. (8) and (10):
$$U''+\frac{\alpha_2}{\alpha_1+\alpha_2x}\,U'+\frac{\left(\alpha_3+\alpha_4x\right)\omega^2}{\alpha_1+\alpha_2x}\,U=0,$$
where $\alpha_1=(E_1a_{10}+E_2a_{20})b$, $\alpha_2=(E_1-E_2)kb$, $\alpha_3=(\rho_1a_{10}+\rho_2a_{20})b$, $\alpha_4=(\rho_1-\rho_2)kb$. Obviously, $\alpha_2\neq 0$. Assume that:
$$(E_1-E_2)\,k>0.$$
It follows that $\alpha_2>0$.
In Eq. (11), $\alpha_2/(\alpha_1+\alpha_2x)$ and $(\alpha_3+\alpha_4x)\omega^2/(\alpha_1+\alpha_2x)$ are analytic for $x\in[0,l]$, so the equation may be solved using a power series. Suppose that:
$$U=\sum_{n=0}^{\infty}a_nx^n.$$
Substitute Eq. (13) into Eq. (11):
$$\sum_{n=2}^{\infty}\alpha_1n(n-1)a_nx^{n-2}+\sum_{n=2}^{\infty}\alpha_2n(n-1)a_nx^{n-1}+\sum_{n=1}^{\infty}\alpha_2na_nx^{n-1}+\sum_{n=0}^{\infty}\alpha_3\omega^2a_nx^{n}+\sum_{n=0}^{\infty}\alpha_4\omega^2a_nx^{n+1}=0.$$
To obtain the solution, set each power coefficient of $x$ equal to zero, which leads to:
$$a_2=\frac{-\alpha_3\omega^2a_0-\alpha_2a_1}{2\alpha_1},\quad\dots,\quad a_{r+2}=\frac{-\alpha_2(r+1)^2a_{r+1}-\alpha_3\omega^2a_r-\alpha_4\omega^2a_{r-1}}{\alpha_1(r+2)(r+1)},\qquad r\ge 1.$$
$a_0$ and $a_1$ are arbitrary constants.
Assuming that the beam is stationary initially:
$$u(x,0)=0.$$
Substituting Eq. (16) into Eq. (10), one deduces $\phi=0$.
Suppose one end of the beam is fixed while the other is free; this kind of boundary condition is taken as an example in the calculation:
$$U(0)=0,\qquad U'(l)=0.$$
The fundamental modes of the system are:
$$U_i(x)=\sum_{n=1}^{\infty}a_n(\omega_i)\,x^n,\qquad i=1,2,\dots.$$
The frequency equation is:
$$\sum_{n=1}^{\infty}n\,a_n(\omega)\,l^{n-1}=0.$$
The longitudinal response is then:
$$u(x,t)=\sum_{i=1}^{\infty}U_i(x)\,b_i\sin(\omega_it).$$
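To make the procedure concrete, the recurrence Eq. (15) and the fixed-free frequency equation Eq. (19) can be evaluated with a truncated series and a root search. The following sketch is not from the paper; the parameter values, truncation order, and bisection helper are illustrative assumptions. The uniform case ($k=0$) serves as a sanity check against the exact fixed-free rod frequency $\omega_1=(\pi/2)\sqrt{E/\rho}/l$, about 7954 rad/s for the Table 1 parameters:

```python
import math

def series_coeffs(omega, alphas, N=40):
    """Coefficients a_n of U = sum a_n x^n via the recurrence Eq. (15),
    normalized for a fixed-free rod: a_0 = 0 (U(0) = 0), a_1 = 1."""
    al1, al2, al3, al4 = alphas
    a = [0.0, 1.0]
    a.append((-al3 * omega**2 * a[0] - al2 * a[1]) / (2 * al1))
    for r in range(1, N - 2):
        num = (-al2 * (r + 1) ** 2 * a[r + 1]
               - al3 * omega**2 * a[r]
               - al4 * omega**2 * a[r - 1])
        a.append(num / (al1 * (r + 2) * (r + 1)))
    return a

def u_prime_at_l(omega, alphas, l):
    """Left side of the frequency equation Eq. (19): U'(l)."""
    a = series_coeffs(omega, alphas)
    return sum(n * a[n] * l ** (n - 1) for n in range(1, len(a)))

def first_frequency(alphas, l, lo, hi):
    """Smallest root of U'(l) in [lo, hi] by bisection; assumes the
    bracket contains exactly one sign change."""
    flo = u_prime_at_l(lo, alphas, l)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if u_prime_at_l(mid, alphas, l) * flo <= 0:
            hi = mid
        else:
            lo = mid
            flo = u_prime_at_l(lo, alphas, l)
    return 0.5 * (lo + hi)

# Illustrative uniform-beam check (k = 0, so alpha_2 = alpha_4 = 0).
E, rho, b, a10, a20, l = 200e9, 7.8e3, 0.1, 0.05, 0.05, 1.0
alphas = ((E * a10 + E * a20) * b, 0.0, (rho * a10 + rho * a20) * b, 0.0)
w1 = first_frequency(alphas, l, 1e3, 1.2e4)
w1_exact = (math.pi / 2) * math.sqrt(E / rho) / l  # exact rod value, ~7954 rad/s
```

For the varying-cross-section cases one would instead supply nonzero $\alpha_2$, $\alpha_4$ from the definitions below Eq. (11).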
3.2. The transverse deformation equation
The transverse deformation is explored next. The principal vibration mode is assumed as:
$$w(x,t)=Y(x)\,b\sin(\omega t+\phi).$$
Substitute it into Eq. (9):
$$\left(\beta_1x^3+\beta_2x^2+\beta_3x+\beta_4\right)Y^{(4)}+\left(6\beta_1x^2+4\beta_2x+2\beta_3\right)Y'''+\left(6\beta_1x+2\beta_2\right)Y''-\left(\alpha_3+\alpha_4x\right)\omega^2Y=0,$$
where:
$$\alpha_3=(\rho_1a_{10}+\rho_2a_{20})b,\qquad \alpha_4=(\rho_1-\rho_2)kb,$$
$$\beta_1=(E_1-E_2)\frac{bk^3}{3},\qquad \beta_2=bk^2\left[E_1\left(a_{10}-\frac{a}{2}\right)-E_2\left(\frac{a}{2}-a_{20}\right)\right],$$
$$\beta_3=bk\left[E_1\left(a_{10}-\frac{a}{2}\right)^2-E_2\left(\frac{a}{2}-a_{20}\right)^2\right],$$
$$\beta_4=\frac{b}{3}\left\{E_1\left[\left(a_{10}-\frac{a}{2}\right)^3+\left(\frac{a}{2}\right)^3\right]-E_2\left[\left(\frac{a}{2}-a_{20}\right)^3-\left(\frac{a}{2}\right)^3\right]\right\}.$$
Suppose Eq. (22) has a solution of the form (see Appendix for justification):
$$Y=\sum_{n=0}^{\infty}a_nx^n,\qquad 0\le x\le l.$$
Substituting the above equation into Eq. (22) yields:
$$\sum_{n=4}^{\infty}\beta_1n(n-1)(n-2)(n-3)a_nx^{n-1}+\sum_{n=4}^{\infty}\beta_2n(n-1)(n-2)(n-3)a_nx^{n-2}$$
$$+\sum_{n=4}^{\infty}\beta_3n(n-1)(n-2)(n-3)a_nx^{n-3}+\sum_{n=4}^{\infty}\beta_4n(n-1)(n-2)(n-3)a_nx^{n-4}$$
$$+\sum_{n=3}^{\infty}6\beta_1n(n-1)(n-2)a_nx^{n-1}+\sum_{n=3}^{\infty}4\beta_2n(n-1)(n-2)a_nx^{n-2}$$
$$+\sum_{n=3}^{\infty}2\beta_3n(n-1)(n-2)a_nx^{n-3}+\sum_{n=2}^{\infty}6\beta_1n(n-1)a_nx^{n-1}$$
$$+\sum_{n=2}^{\infty}2\beta_2n(n-1)a_nx^{n-2}-\sum_{n=0}^{\infty}\alpha_3\omega^2a_nx^{n}-\sum_{n=0}^{\infty}\alpha_4\omega^2a_nx^{n+1}=0.$$
To obtain the solution, set each power coefficient of $x$ equal to zero, thus:
$$a_4=\frac{\alpha_3\omega^2a_0-4\beta_2a_2-12\beta_3a_3}{24\beta_4},\quad\dots,\quad a_{r+4}=\frac{I_{r+4}}{\beta_4(r+4)(r+3)(r+2)(r+1)},$$
$$I_{r+4}=-\beta_3(r+3)(r+2)^2(r+1)a_{r+3}-\beta_2(r+2)^2(r+1)^2a_{r+2}-\beta_1(r+2)(r+1)^2r\,a_{r+1}+\alpha_3\omega^2a_r+\alpha_4\omega^2a_{r-1},\qquad r\ge 1.$$
$a_0$, $a_1$, $a_2$ and $a_3$ are arbitrary constants. Boundary conditions are imposed next in order to discuss the problem further.
Assuming that the beam is stationary initially:
$$w(x,0)=0.$$
Substituting Eq. (27) into Eq. (21) yields $\phi=0$. Suppose one end of the beam is fixed while the other is free:
$$Y(0)=0,\quad Y'(0)=0,\quad Y''(l)=0,\quad Y'''(l)=0.$$
The main modes of the system are obtained:
$$Y_i(x)=\sum_{n=2}^{\infty}a_n(\omega_i)\,x^n,\qquad i=1,2,\dots.$$
The frequency equation is:
$$\sum_{n=2}^{\infty}n(n-1)a_n(\omega)\,l^{n-2}=\sum_{n=3}^{\infty}n(n-1)(n-2)a_n(\omega)\,l^{n-3}=0.$$
The transverse response is then:
$$w(x,t)=\sum_{i=1}^{\infty}Y_i(x)\,b_i\sin(\omega_it).$$
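The transverse problem can be treated numerically the same way: build two basis solutions of the recurrence Eq. (26) with $(a_2,a_3)=(1,0)$ and $(0,1)$, and search for $\omega$ where the determinant of $Y''(l)$ and $Y'''(l)$ over the two bases vanishes. This sketch is again illustrative (parameters and helper functions are assumptions, not the paper's code); the uniform case should reproduce the classical cantilever value $\omega_1=(1.875)^2\sqrt{EI/(\rho A)}/l^2$:

```python
def transverse_coeffs(omega, c, a2, a3, N=40):
    """Coefficients of Y = sum a_n x^n via Eq. (26), with Y(0) = Y'(0) = 0."""
    al3, al4, b1, b2, b3, b4 = c
    a = [0.0, 0.0, a2, a3]
    a.append((al3 * omega**2 * a[0] - 4 * b2 * a[2] - 12 * b3 * a[3]) / (24 * b4))
    for r in range(1, N - 4):
        num = (-b3 * (r + 3) * (r + 2) ** 2 * (r + 1) * a[r + 3]
               - b2 * (r + 2) ** 2 * (r + 1) ** 2 * a[r + 2]
               - b1 * (r + 2) * (r + 1) ** 2 * r * a[r + 1]
               + al3 * omega**2 * a[r] + al4 * omega**2 * a[r - 1])
        a.append(num / (b4 * (r + 4) * (r + 3) * (r + 2) * (r + 1)))
    return a

def det_free_end(omega, c, l):
    """Vanishes exactly when some combination of the two basis solutions
    satisfies the free-end conditions Y''(l) = Y'''(l) = 0."""
    d2 = lambda a: sum(n * (n - 1) * a[n] * l ** (n - 2) for n in range(2, len(a)))
    d3 = lambda a: sum(n * (n - 1) * (n - 2) * a[n] * l ** (n - 3) for n in range(3, len(a)))
    A = transverse_coeffs(omega, c, 1.0, 0.0)
    B = transverse_coeffs(omega, c, 0.0, 1.0)
    return d2(A) * d3(B) - d2(B) * d3(A)

def first_transverse_frequency(c, l, lo, hi):
    """Smallest root of the determinant in [lo, hi] by bisection."""
    flo = det_free_end(lo, c, l)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if det_free_end(mid, c, l) * flo <= 0:
            hi = mid
        else:
            lo = mid
            flo = det_free_end(lo, c, l)
    return 0.5 * (lo + hi)

# Illustrative uniform-beam check (k = 0, so beta_1 = beta_2 = beta_3 = 0).
E, rho, b, a, l = 200e9, 7.8e3, 0.1, 0.1, 1.0
EI = E * b * a**3 / 12              # equals beta_4 in the uniform case
c = (rho * a * b, 0.0, 0.0, 0.0, 0.0, EI)
w1 = first_transverse_frequency(c, l, 1e2, 9e2)
w1_exact = 1.8751**2 * (EI / (rho * a * b)) ** 0.5 / l**2  # ~514 rad/s
```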
4. Results and discussion
To validate the methods described above, finite element simulations in ABAQUS [26] are employed to compute both the first-order longitudinal and transverse natural modes of beams in six cases. Table 1 lists the relative differences between the analytical results and the finite element results. In the analytical procedure, a seventh-order polynomial is adopted in the equations.
Table 1. Comparison between ABAQUS results and those obtained by the analytical method ($a_{10}=a_{20}=$ 0.05 m)

| Material parameters | Longitudinal: analysis (rad/s) | Longitudinal: Abaqus (rad/s) | Difference | Transverse: analysis (rad/s) | Transverse: Abaqus (rad/s) | Difference |
|---|---|---|---|---|---|---|
| ρ1 = ρ2 = 7.8×10^3 kg/m^3, E1 = E2 = 200×10^9 Pa | 8063.5 | 7953.7 | 1.36 % | 512.7 | 509.7 | 0.59 % |
| ρ1 = ρ2 = 7.8×10^3 kg/m^3, E1 = 200×10^9 Pa, E2 = 150×10^9 Pa | 7542.7 | 7439.7 | 1.37 % | 480.3 | 473.0 | 1.52 % |
| ρ1 = ρ2 = 7.8×10^3 kg/m^3, E1 = 200×10^9 Pa, E2 = 100×10^9 Pa | 6983.2 | 6886.2 | 1.39 % | 436.0 | 422.4 | 3.12 % |
| ρ1 = ρ2 = 8.0×10^3 kg/m^3, E1 = E2 = 200×10^9 Pa | 7962.3 | 7853.7 | 1.36 % | 506.2 | 503.2 | 0.59 % |
| ρ1 = 8.0×10^3 kg/m^3, ρ2 = 6.0×10^3 kg/m^3, E1 = E2 = 200×10^9 Pa | 8512.0 | 8395.5 | 1.36 % | 541.2 | 540.0 | 0.22 % |
| ρ1 = 8.0×10^3 kg/m^3, ρ2 = 4.0×10^3 kg/m^3, E1 = E2 = 200×10^9 Pa | 9194.0 | 9066.4 | 1.38 % | 584.6 | 581.0 | 0.62 % |
4.1. The longitudinal frequency
Table 2 reveals the effect of the shape factor $a_{10}/a_{20}$ on the frequency. As can be seen, the boundary conditions impose a strong effect on the role of the shape factor. If both beam ends are fixed (or both free), the longitudinal frequencies of the composite beam differ little when the shape factors $a_{10}/a_{20}$ are reciprocal. Only with mixed boundary conditions do the frequencies vary noticeably with the shape factor $a_{10}/a_{20}$. In addition, with the same material characteristics and shape size, the fixed-fixed and free-free boundary conditions yield nearly identical longitudinal frequencies.
In Fig. 2(a), the first-order natural frequency of composite beams is examined. The elastic modulus of the upper beam is fixed at $E_1=$ 200×10^9 Pa, while the elastic modulus of the lower beam is decreased (that is, $E_1-E_2$ rises from 0 to 100 (×10^9 Pa)), which lowers the first-order natural frequency of the composite beam. For comparison, a reference beam is made of a uniform material whose elastic modulus is the average of those of the upper and lower beams, giving a reference set of first-order natural frequencies (the blue curve in Fig. 2(a)). As Fig. 2(a) shows, the first-order natural frequency of the composite beam differs considerably from that of the reference uniform beam: the black curve, for the beam composed of two bonded components, departs markedly from the blue curve for the averaged uniform beam. This difference implies nonlinear coupling between the longitudinal modes of the two component beams. The shape gradient also plays an important role in the natural frequency: with the left end fixed and the right end free, the frequency rises as $a_{10}/a_{20}$ increases.
Table 2. Effects of the shape factor on the longitudinal frequency (rad/s) with different boundary conditions

| Material parameters | $a_{10}/a_{20}$ | Fixed in both left and right | Free in both left and right | Fixed in left and free in right |
|---|---|---|---|---|
| ρ1 = ρ2 = 7.8×10^3 kg/m^3, E1 = 200×10^9 Pa, E2 = 100×10^9 Pa | 1/3 | 13710.0 | 13725.0 | 6630.6 |
| | 1/1 | 13761.0 | 13758.0 | 6886.2 |
| | 3/1 | 13710.0 | 13725.0 | 7096.8 |
| ρ1 = 8.0×10^3 kg/m^3, ρ2 = 4.0×10^3 kg/m^3, E1 = E2 = 200×10^9 Pa | 1/3 | 18187.0 | 18187.0 | 8873.6 |
| | 1/1 | 18119.0 | 18115.0 | 9066.4 |
| | 3/1 | 18111.0 | 18188.0 | 9386.3 |
In Fig. 2(b), the first-order natural frequency of the composite beam is plotted as a function of the density difference between the two components. As expected, the first-order natural frequency of the composite beam rises as the density of the lower beam is reduced. The blue reference curve in Fig. 2(b) corresponds to a uniform beam whose density is the average of the upper and lower beams. The reference results clearly differ not only from the composite beam with constant cross-section but also from the composite beam with varying cross-section, and the first-order natural frequency is greater when the size gradient is larger.
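The reference (blue) curves can be cross-checked against the classical closed-form frequencies of a uniform bar with averaged material properties. For a uniform bar of length l with longitudinal wave speed c = sqrt(E/ρ), the first longitudinal angular frequencies are πc/l (fixed at both ends, or free at both ends) and πc/(2l) (fixed-free). The sketch below is not from the paper; it assumes the values in Table 2 are angular frequencies in rad/s, and reproduces the reference values that the $a_{10}/a_{20}$ = 1/1 rows of Table 2 lie close to:

```python
import math

def uniform_bar_omega1(E, rho, l, bc):
    """First longitudinal angular frequency (rad/s) of a uniform bar.

    bc: 'fixed-fixed' or 'free-free' (both give pi*c/l),
        or 'fixed-free' (pi*c/(2l)).
    """
    c = math.sqrt(E / rho)              # longitudinal wave speed
    if bc in ("fixed-fixed", "free-free"):
        return math.pi * c / l
    if bc == "fixed-free":
        return math.pi * c / (2 * l)
    raise ValueError(bc)

# Reference beam of Fig. 2(a): modulus averaged over the two components.
E_avg, rho, l = 0.5 * (200e9 + 100e9), 7.8e3, 1.0
print(uniform_bar_omega1(E_avg, rho, l, "fixed-free"))   # ~6888 rad/s, cf. 6886.2 in Table 2
print(uniform_bar_omega1(E_avg, rho, l, "fixed-fixed"))  # ~13777 rad/s, cf. 13761.0
```

The closeness of these numbers to the $a_{10}/a_{20}$ = 1/1 rows is consistent with the text: with equal end areas, the composite beam behaves much like the averaged uniform reference beam.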
Fig. 2. The first-order longitudinal frequency of beams with different cross-sectional areas; $a_{10} + a_{20} = a =$ 0.1 m, $l =$ 1 m: a) $E_1 =$ 200×10^9 Pa, $\rho_1 = \rho_2 =$ 7.8×10^3 kg/m^3; b) $E_1 = E_2 =$ 200×10^9 Pa, $\rho_1 =$ 8.0×10^3 kg/m^3
4.2. The transverse frequency and mode of vibration
Table 3 reveals the effects of the shape factor on the transverse frequency for four representative boundary conditions combining fixed, simply supported, and free conditions.
If the boundary conditions at both ends of the composite beam are the same, the transverse frequency is identical when the shape factors $a_{10}/a_{20}$ are reciprocals of each other, owing to mirror symmetry. When the left end is fixed and the right end is free, the transverse frequency of the composite beam varies markedly with the shape factor $a_{10}/a_{20}$. Therefore, the impacts of the physical and shape properties on the transverse frequency under left-fixed, right-free boundary conditions are discussed in the following section.
Fig. 3 shows the effect of the physical properties and the constituent ratio on the transverse frequency of the composite beam. For a composite beam with the left end fixed and the right end free, the transverse frequency can be raised by selecting a lower beam with a larger modulus or a smaller density, without changing the upper beam. Alternatively, the transverse frequency can be tuned by changing the shape factor (the constituent ratio along the axial direction) of the composite beam: the bigger the shape factor $a_{10}/a_{20}$, the greater the transverse frequencies of the composite beam.
Table 3. Effects of the shape factor on the transverse frequency with different boundary conditions
Columns: (i) left and right fixed; (ii) left fixed, right simply supported; (iii) left and right freely supported; (iv) left fixed, right free
Case 1: $\rho_1 = \rho_2 =$ 7.8×10^3 kg/m^3, $E_1 =$ 200×10^9 Pa, $E_2 =$ 100×10^9 Pa
  $a_{10}/a_{20}$ = 1/3: 2582.9; 1995.6; 1646.5; 419.8
  $a_{10}/a_{20}$ = 1/1: 3521.3; 2679.8; 2191.4; 551.0
  $a_{10}/a_{20}$ = 3/1: 2566.4; 2010.9; 1650.3; 421.1
Case 2: $\rho_1 =$ 8.0×10^3 kg/m^3, $\rho_2 =$ 4.0×10^3 kg/m^3, $E_1 = E_2 =$ 200×10^9 Pa
  $a_{10}/a_{20}$ = 1/3: 3527.6; 2713.5; 2195.0; 579.3
  $a_{10}/a_{20}$ = 1/1: 2582.9; 2050.0; 1646.5; 428.6
  $a_{10}/a_{20}$ = 3/1: 3521.3; 2740.6; 2191.4; 610.1
Fig. 3. The first-order transverse frequency with $a_{10} + a_{20} = a =$ 0.1 m, $l =$ 1 m
Fig. 4. The mode of vibration of beams with different cross-sectional areas; $E_1 =$ 200×10^9 Pa, $E_2 =$ 1×10^9 Pa, $\rho_1 = \rho_2 =$ 7.8×10^3 kg/m^3, $a_{10} + a_{20} = a =$ 0.1 m, $l =$ 1 m
The reference solutions of a single beam, whose physical characteristics are the averages of those of the upper and lower beams, are also shown in Fig. 3. As the modulus of the lower beam varies, the reference results are close to the transverse frequency of the composite beam with a shape factor of $a_{10}/a_{20} =$ 1/1; likewise, as the density of the lower beam varies, the reference solution is close to the frequency of the composite beam with a shape factor of $a_{10}/a_{20} =$ 1/1. Fig. 4 reveals the effect of the shape factor on the first- and second-order lateral modes of a composite beam with distinct material characteristics. The left boundary is fixed, while the right is free. In this case, the modulus of the upper beam is taken as $E_1 =$ 200×10^9 Pa, while that of the lower beam is $E_2 =$ 1×10^9 Pa, mimicking a metal-polymer composite. For the first mode, the effect of the shape factor $a_{10}/a_{20}$ is small; for the second mode, however, decreasing the shape factor $a_{10}/a_{20}$ causes the composite beam to bend more severely, and the deformation of the composite beam becomes more exaggerated, with a larger curvature.
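For the transverse case, a simple closed form exists only for a uniform Euler-Bernoulli beam; the composite, variable-cross-section beam of the paper requires the power-series solution. As a rough yardstick only (the cross-section geometry behind $a_{10}$ and $a_{20}$ is not fully specified here, so a solid square section of side 0.1 m is an assumption made for illustration), the first cantilever bending frequency of a uniform beam is ω1 = (β1 l)² sqrt(EI/(ρA))/l², with β1 l ≈ 1.8751:

```python
import math

def cantilever_omega1(E, rho, l, b, h):
    """First transverse (bending) angular frequency of a uniform fixed-free
    Euler-Bernoulli beam with a solid rectangular b x h cross-section.
    A reference yardstick only, not the paper's composite-beam solution."""
    A = b * h                 # cross-sectional area
    I = b * h**3 / 12.0       # second moment of area
    beta1_l = 1.8751          # first root of cos(x)*cosh(x) = -1
    return beta1_l**2 * math.sqrt(E * I / (rho * A)) / l**2

# Assumed geometry: 0.1 m x 0.1 m square section (a10 + a20 = a = 0.1 m).
print(cantilever_omega1(200e9, 7.8e3, 1.0, 0.1, 0.1))  # a few hundred rad/s
```

The result is of the same order as the fixed-free column of Table 3, which is all this uniform-beam formula can be expected to match; the composite values differ because of the modulus contrast and the axially varying cross-sections.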
The authors Botong Li and Longlei Dong contributed equally.
5. Conclusions
Using the variational method, the kinetic equations of composite beams are obtained, and analytical expressions for the natural frequencies of the longitudinal and transverse vibrations of composite beams are deduced using the method of power series. Finite element simulation is used to verify the theoretical framework. The effects of the gradient ratio between the components along the axial direction on the natural frequencies of composite beams are revealed, and the first- and second-order transverse modes of composite beams are drawn. Some of the important findings of the paper are:
1) When the elastic modulus of the lower beam is decreased, both the longitudinal and transverse first-order natural frequencies of the composite beam tend to decline, while they rise as the density of the lower beam is reduced.
2) The shape gradient plays an important role in the natural frequency. For example, in the case where the left end is fixed and the right end is free, the first-order longitudinal frequency is higher as the shape factor $a_{10}/a_{20}$ rises, and the effect of the shape factor $a_{10}/a_{20}$ varies according to the boundary conditions. The coupling between the two bonded constituent beams is also significant.
3) With distinct material characteristics, the effect of the shape factor $a_{10}/a_{20}$ on the first-order lateral mode is small; for the second mode, however, decreasing the shape factor $a_{10}/a_{20}$ can induce the composite beam to bend more severely.
Through these findings, we emphasize that both the natural frequency and the vibration mode are controllable over a significantly wide range. Even though the volumes of the two components of the composite beam are equal, by adjusting the elastic modulus or the shape factor, the natural frequency or vibration mode can be changed to avoid, or to achieve, a specific vibration performance. For example, the natural frequency of a bridge should avoid certain ranges to prevent resonance.
About this article
Keywords: Bernoulli-Euler composite beams; natural frequencies; vibration mode; method of power series
The work of X. C. is supported by the National Natural Science Foundation of China (11172231 and 11372241), AFOSR (FA9550-12-1-0159), and ARPA-E (DE-AR0000396). The work of B. L. is supported by the
National Natural Science Foundation of China (11402188), the Fundamental Research Funds for the Central Universities (08143047) (2014gjhz16), and the Natural Science Foundation of Shaanxi
Copyright © 2015 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
[Solved] i. Find a. b. ii. Use the trapezium rule | SolutionInn
i. Find ∫ (e^(2x) + 6)/e^(2x) dx.
ii. Use the trapezium rule with 2 intervals to estimate the value of the integral.
Step by Step Answer:
i. a. ∫ (e^(2x) + 6)/e^(2x) dx = ∫ (1 + 6e^(-2x)) dx = x - 3e^(-2x) + C, where C is the constant of integration.
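For part (ii), the trapezium rule with 2 intervals uses three equally spaced ordinates. The original limits of integration appeared only in an image, so the sketch below assumes the interval [0, 1] purely to illustrate the mechanics, with the integrand rewritten as f(x) = 1 + 6e^(-2x):

```python
import math

def f(x):
    # Integrand rewritten: (e^(2x) + 6)/e^(2x) = 1 + 6*e^(-2x)
    return 1.0 + 6.0 * math.exp(-2.0 * x)

def trapezium(f, a, b, n):
    """Composite trapezium rule with n intervals (n + 1 ordinates)."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

estimate = trapezium(f, 0.0, 1.0, 2)          # ordinates at x = 0, 0.5, 1
exact = (1 - 3 * math.exp(-2)) - (0 - 3)      # from x - 3e^(-2x) at 1 and 0
print(estimate, exact)
```

Since f is convex on this interval, the trapezium estimate overshoots the exact value, which is the usual exam follow-up point for this style of question.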
In mathematics, an operand is the object of a mathematical operation.
In functional analysis, physics, and quantum mechanics, functions and their arguments are often called operators and operands for clarity. An operand is the object or quantity that is operated on.
Example 1
You write the differential operator with the letter D as

    D = d/dx

which stands for "take the derivative of ...", so you get Dx = dx/dx = 1.
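To make the operator/operand distinction concrete in code (an illustration added here, not part of the original note), an operator can be modelled as a higher-order function: D takes a function (its operand) and returns a new function. A numerical sketch using a central difference to approximate d/dx:

```python
def D(f, h=1e-6):
    """Differential operator: maps the operand f to an approximation of f'."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

identity = lambda x: x
print(D(identity)(3.0))         # ~1.0, i.e. Dx = 1
print(D(lambda x: x * x)(3.0))  # ~6.0, i.e. D(x^2) = 2x at x = 3
```

The point is that D consumes a whole function, not a number: the function is the operand of the operator.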
Example 2
In three dimensions, the Laplace operator of a function f(x, y, z) is written as Δf = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z².
Example 3
The Hamilton operator H is used in the Schrödinger equation iħ ∂ψ/∂t = Hψ.
Example 4
The energy operator Ê acts on the wave function ψ(r, t) and is written as Ê = iħ ∂/∂t.
Line segment covering of cells in arrangements
Given a collection L of line segments, we consider its arrangement and study the problem of covering all cells with line segments of L. That is, we want to find a minimum-size set L′ ⊆ L of line segments such that every cell in the arrangement has a segment from L′ on its boundary. We show that the problem is NP-hard, even when all segments are axis-aligned. In fact, the problem remains NP-hard when only the rectangular cells of the arrangement need to be covered. For the latter problem we also show that it is fixed-parameter tractable with respect to the size of the optimal solution. Finally, we provide a linear-time algorithm for the case where the cells of the arrangement are created by recursively subdividing a rectangle using horizontal and vertical cutting segments.
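Since every cell can be identified with the set of segments on its boundary, the covering problem is a hitting-set problem over these boundary sets, which is how vertex-cover-style arguments (both for the hardness and the FPT branching) come into play. A brute-force sketch on a toy instance follows; the segment ids and cell sets below are made up for illustration and do not come from the paper:

```python
from itertools import combinations

def min_segment_cover(segments, cells):
    """Smallest subset of segments hitting the boundary set of every cell.
    Exponential-time brute force; suitable only for toy instances."""
    for k in range(len(segments) + 1):
        for cand in combinations(segments, k):
            chosen = set(cand)
            if all(chosen & cell for cell in cells):
                return chosen
    return None

# Toy arrangement: three cells, each listed by the segments on its boundary.
cells = [{1, 2}, {2, 3}, {4}]
print(min_segment_cover([1, 2, 3, 4], cells))  # {2, 4}
```

Trying subset sizes in increasing order guarantees the first feasible set returned is minimum-size, which mirrors how an FPT branching algorithm bounds its search by the solution size k.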
• Computational geometry
• FPT
• Geometric graph
• NP-hardness
• Vertex cover
The Caduceus of St. Alban: Vertical FM Contest entry: Aug 8th, 2010
Excellent mission. I was just begging for access to those
though. What a disappointment not to be able to go there
Same problem here.
I had to noclip the entrance because it's too small.
I wasn't sure if I was missing some trigger that would change things but I felt that just one AI at ground level would've made the difference. Just take him off wall patrol and put him on the
grounds. I can appreciate giving the players space and time to work out climbing options but I think that at least one ground-level AI would have made this perfect.
It's frustrating to have such a fantastic mission feel like it's missing an element. When I originally requested that the AI patrols be increased I figured that you would place the AI a little
different. I still was unsure if I could really make this suggestion because the mission may have been intended to be more of a find-n-seek ...but I kept yearning to ghost around the ground level in
Please visit TDM's IndieDB site and help promote the mod:
(Yeah, shameless promotion... but traffic is traffic folks...)
I appreciate that people wanted extra AI, but as I've already said, the performance hit would be too great.
Fids, what are the rules about releasing an updated version that just has a few tweaks and AI...?
Same as feildmedic, separate versions (high end machine only etc)
Edited by Bikerdude
I appreciate that people wanted extra Ai, but as Ive already said the performance hit would be too great..
Fair enough. I am grateful for the performance.
I haven't looked at the map file in the editor, but maybe simplifying pathfinding with monsterclip could help the AI performance issues. I don't know enough about how the AI works, but it's worth a try.
And great mission by the way.
I love this map.
Tweaking the AI after release would be great in my book but I see the logic of prohibiting it to some degree. It's a tough one. At the very least if you find that you can tweak it better make sure to
release the better version afterward.
If you were as willing to modify even non-contest maps from community feedback we would all be lucky.
How many commercial game developers actually listen to their audience after release?
As long as you make the changes your way I don't see it as a big artistic compromise.
I don't think fixing missions after the fact should be discouraged. I guarantee there are scores of folks here drooling over the pending re-release of Return to the City.
If you can improve things then do so, we will all be happier (as long as you don't pull a George Lucas and ruin the FM
I haven't looked at the map file in the editor, but maybe simplifying pathfinding with monsterclip could help the AI performance issues. I don't know enough about how the AI works, but it's worth a try.
Im playing around with adding more Ai just to see how bad the performance hit will be..
Maybe more AI on expert and warn players to play on Hard if they have a low end PC.
Yes, very nice. Just been for another stroll around in notarget just for the views. I guess we all have our favourite spots.
Another download mirror at http://www.fidcal.com/darkuser/missions/stalban.pk4
Er you can, remember the notes in the readme - things that look claimable are....
Same problem here.
I had to noclip the entrance because it's too small.
It was very hard to squeeze through but I finally did it after a lot of tries. Strange this didn't appear during beta.
I suggest making the tunnel a bit taller. Thanks for the great mission.
Man, it's architecturally OUTSTANDING!
Task is not so much to see what no one has yet seen but to think what nobody has yet thought about that which everybody see. - E.S.
Great mission you really have improved a lot. Business as usual was already good but the difference with that one is huge when you look at this one now. Loved your skybox, how did you create it?
Overall performance was good on my 3500+, 1GB Ram and 7800 GT 256MB but at some point it started to stutter a lot. A quickload solved that problem. Like most of the people said a bit too easy because
of the AI but yeah I know it's very difficult to keep the performance as high as possible. Had a big laugh at some of your readables and your briefing
Is there supposed to be no blackjack or weapons in inventory at all? Just started this mission and want to make sure that is right before I continue.
System: Mageia Linux Cauldron, aka Mageia 8
Hey Bikerdude. You've done a pretty good job. The visuals were really great. Vertex blending, great lighting and a great choice of textures: Dark and grimy, just how I like it.
I applaud the fact that you tried to fake AO with grime decals around objects. AO is usually only a very subtle shading though, so the used decals are way too intense. Maybe we can do something about
that in the near future...
Anyway, thanks for this FM. Once again, although it lacked a little on gameplay, it was a great explorational and visual experience.
Ah, one more thing. You included Fidcal's key-texture-tweaks into your pk4, right? I think whether or not to use stuff like that should be up to the player to decide. A mapper should never force
gameplay settings on the player. (Yes, that also goes for playershadow, etc.)
Loved your skybox, how did you create it?
I believe it's the basic skybox prefab combined with the new skybox texture shipped with TDM 1.02, which I originally created for "Pandora's Box", so thanks...
I haven't even finished it yet, but it's really awesome!
Where is the exit secret tunnel? I've found the cave but, according to the poor guy's diary, it's a dead end.
BTW, think the ambient exterior light could be slightly reddish, because of the sunset.
It was very hard to squeeze through but I finally did it after a lot of tries. Strange this didn't appear during beta.
My fault: one of my testers asked for some items in the tunnel to be changed, and I forgot to add the arg 'solid 0'. In the update this will all be sorted, including the extra AI.
Where is the exit secret tunnel?
tip - look for someone who is asleep
Man, it's architecturally OUTSTANDING!
FAAAANNNTASTICCC 10/10 the whole way through, Bikerdude
Thanks guys, positive feedback is always a great motivator.
Had a big laugh at some of your readables and your briefing
hehe, Im glad someone gets my humor :-D, I just need to get someone else to proof read my grammar and spelling from now on.
Is there supposed to be no blackjack or weapons in inventory at all? Just started this mission and want to make sure that is right before I continue.
Yeah, I had this also; it's in the map when you look via DR, so I think it may be a possible bug.
Edited by Bikerdude
□ Hey Bikerdude. You've done a pretty good job. The visuals were really great. Vertex blending, great lighting and a great choice of textures: Dark and grimy, just how I like it.
□ I applaud the fact that you tried to fake AO with grime decals around objects. AO is usually only a very subtle shading though, so the used decals are way too intense. Maybe we can do
something about that in the near future...
□ Anyway, thanks for this FM. Once again, although it lacked a little on gameplay, it was a great explorational and visual experience.
□ Ah, one more thing. You included Fidcal's key-texture-tweaks into your pk4, right? I think whether or not to use stuff like that should be up to the player to decide. A mapper should never
force gameplay settings on the player. (Yes, that also goes for playershadow, etc.)
□ I believe it's the basic skybox prefab combined with the new skybox texture shipped with TDM 1.02, which I originally created for "Pandora's Box"
• The vert blending was done for me by Fids
• yeah, it how I get around lights with no shadows and yeah we need way more decal textures in 1.03
• Yeah I'm currently doing an update with more Ai..
• Well I used it coz I couldn't see the keys myself.
• Yeah I was very pleased when I found that skybox, I wanted to use a low world light with shadows to simulate the mood, but I got hammered performance wise.
Btw, why has no one, including Fids himself, commented on the fact that I have used his name in both my FMs...
Had a good time with this one. Very nicely done. Grats!
The AI wasn't very challenging. It seemed that the guards were all a bit deaf and blind, except one*
I did find a couple of issues.
I never needed the key for the inner vault door, even though it was supposed to be not pick-able. Picked it on the first try... it was just the metal crate inside there which was not pick-able.
I guess my thief got fatter drinking ale than the rest of the players did, because I could not get through the tunnel to the cave. No matter how hard I tried, turning and shaking the mouse, toggling
the crouch button, etc. I finally had to noclip into there.
Loved the heliocentric model of the solar system (even if the book by it says it wasn't the heliocentric model). Kudos for that!
*One guard caused me trouble,
up at the water tower (awesome water tower, btw). He attacked me on the elevator, but somehow we switched positions and I sent him down the elevator to the floor below.
I personally liked the key textures a lot... much more like the original Thief games.
Edited by PranQster
Btw, why has no one, including Fids himself, commented on the fact that I have used his name in both my FMs...
I noticed that. Figured someone else would comment.
Did I not comment before? I forget. Anyway, you asked for it! (haha
Yeah funny and I appreciate the nod (so this is not personal
I said in beta I thought the briefing was weak and again, why no reference to the recipe book. I'm a master thief. What do I care about a worthless second hand recipe book? How did I even know? A
couple of sentences would give it meaning.
I said in beta I thought the briefing was weak and again, why no reference to the recipe book. I'm a master thief. What do I care about a worthless second hand recipe book? How did I even know? A
couple of sentences would give it meaning.
Yeah, I don't know where I got the idea for the cook book; I think we can blame Jdude and his suggestions, lol.
In future, (Oldtown) will be grammar/spelling proofread by someone, so that missions aren't let down by a poor grasp of the Queen's English.
Edited by Bikerdude
Yeah, this mission was fun. As many others have already mentioned: great architecture, excellent detailing. And to make this short: this mission is excellent, and nothing I will say now changes that.
I didn't like that the Builders were blind and deaf. I was dancing naked and brightly lit in front of them, singing a song about pagans and they just wouldn't care. I'm glad there was a recipe in the
book referring to drugs, so they were all stoned, I assume. And personally I wouldn't have minded more guards. But then, of course, I would have needed a blackjack. Lacking that, I was looking for
one. It was mentioned that a thief left his gear there. But there was nothing. And it was a pity that I had to noclip through the tunnel to see the spiders (and killing them was forbidden, again -
why is that?). I don't see the point in making a tunnel too narrow to get through it.
And there could've been a strong hint to that secret room with the pagan book.
I could've been faster through the mission, but I was looking at that orrery for many many minutes... that was so cool!
I will be happy to proof read, make suggestions or even ghost rewrite your future readables Biker. I'm sure lots of others could do so too.
BTW, nothing wrong with the recipe book. What was wrong was not giving a reason for it in the briefing. I can think of three possibilities:
1. Rare recipe wanted by a nobleman who dined there once.
2. Cook is making drugs from the mushroom recipe; the recipe is valuable on the black market.
3. Many of your friends were lost to drugs. You hate drugs. Destroy the recipe book.
Solved: Can someone explain to me how do we define what week to place a number for the required
Can someone explain to me how do we define what week to place a number for the required dates?
I understand part A, but the other parts seem confusing.
For example, why do parts B and C both have a required date in week 7?
Why do parts D and E both have a required date in week 6?
Table 14.3. Gross material requirements plan for 50 Awesome speaker kits (A), with order release dates also shown (weeks 1-8; lead times in parentheses):
  A (1 week):  required 50 in week 8;  order release 50 in week 7
  B (2 weeks): required 100 in week 7; order release 100 in week 5
  C (1 week):  required 150 in week 7; order release 150 in week 6
  E (2 weeks): required 200 in week 5 and 300 in week 6; order release 200 in week 3 and 300 in week 4
  F (3 weeks): required 300 in week 6; order release 300 in week 3
  D (1 week):  required 600 in week 3 and 200 in week 5; order release 600 in week 2 and 200 in week 4
  G (2 weeks): required 300 in week 3; order release 300 in week 1
Expert Answer
Note that as per the product structure,
two(2) units of B are fitted in one(1) unit of A
& three(3) units of C are fitted in one(1) unit of A.
Since B. and C are just at the lower hierarchy of ‘A’ in the product structure, the week when the order release of A happens, the required date of B and C happens.
So, since the order release of A happens in week 7, the required date of B, and C is in the same week (i.e. week 7). The numbers are as per the units required i.e. No. of B = 2 x No. of A = 2 x 50 =
100 and No. of C=3 x 50 = 150.
E, and D are more complicated. But the logic is the same as above.
Consider ‘E’ first.
Two(2) units of E are fitted in one(1) unit of B
Since E is just at the lower hierarchy of B in the product structure, the week when the order release of B happens, the required date of E happens.
So, since the order release of B happens in week 5, the required date of E is in the same week (i.e. week 5). The numbers are as per the units required i.e. No. of E = 2 x No. of B = 2 x 100 = 200
So, for making ‘B’ only, we need 200 units of E in week 5 ————————–1
Two(2) units of E are fitted in one(1) unit of C
Since E is just at the lower hierarchy of C in the product structure, the week when the order release of C happens, the required date of E happens.
So, since the order release of C happens in week 6, the required date of E is in the same week (i.e. week 6). The numbers are as per the units required i.e. No. of E = 2 x No. of C = 2 x 150 = 300
So, for making ‘C’ only, we need 300 units of E in week 6 ————————– 2
Combine 1 and 2 to get the overall requirement of E i.e. 200 in week 5 and 300 in week 6
Consider ‘D’ now
Two(2) units of D are fitted in one(1) unit of B
Since D is just at the lower hierarchy of B in the product structure, the week when the order release of B happens, the required date of D happens.
So, since the order release of B happens in week 5, the required date of D is in the same week (i.e. week 5). The numbers are as per the units required i.e. No. of D = 2 x No. of B = 2 x 100 = 200
So, for making ‘B’ only, we need 200 units of D in week 5 ————————–3
Two(2) units of D are fitted in one(1) unit of F
Since D is just at the lower hierarchy of F in the product structure, the week when the order release of F happens, the required date of D happens.
So, since the order release of F happens in week 3, the required date of D is in the same week (i.e. week 3). The numbers are as per the units required i.e. No. of D = 2 x No. of F = 2 x 300 = 600
So, for making ‘F’ only, we need 600 units of D in week 3 ————————–4
Combine 3 and 4 to get the overall requirement of D i.e. 200 in week 5 and 600 in week 3 | {"url":"https://grandpaperwriters.com/solved-can-someone-explain-to-me-how-do-we-define-what-week-to-place-a-number-for-the-required/","timestamp":"2024-11-11T10:02:58Z","content_type":"text/html","content_length":"46124","record_id":"<urn:uuid:f375cb7e-175c-4e23-ac0d-0fb2d20c46bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00145.warc.gz"} |
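The week-offset logic in the answer can be written as a small bill-of-materials explosion: each item's required date equals its parent's order-release date, and its own release date is its required date minus its lead time. The product structure below is reconstructed from the statements in the answer (A contains 2 B and 3 C; B contains 2 D and 2 E; C contains 2 E and 2 F; F contains 2 D and 1 G) and should be treated as an assumption:

```python
from collections import defaultdict

# Reconstructed product structure: parent -> [(child, quantity per parent)].
bom = {"A": [("B", 2), ("C", 3)],
       "B": [("D", 2), ("E", 2)],
       "C": [("E", 2), ("F", 2)],
       "F": [("D", 2), ("G", 1)]}
lead = {"A": 1, "B": 2, "C": 1, "D": 1, "E": 2, "F": 3, "G": 2}  # weeks

def explode(item, qty, due_week, required=None):
    """Accumulate gross requirements: child's required date = parent's release date."""
    if required is None:
        required = defaultdict(lambda: defaultdict(int))
    required[item][due_week] += qty
    release = due_week - lead[item]          # order release = required date - lead time
    for child, per_parent in bom.get(item, []):
        explode(child, qty * per_parent, release, required)
    return required

req = explode("A", 50, 8)                    # 50 kits of A due in week 8
print(dict(req["E"]))  # {5: 200, 6: 300}
print(dict(req["D"]))  # {5: 200, 3: 600}
```

Running the explosion reproduces exactly the E and D rows derived step by step above, which is a useful check that the reconstructed structure is consistent with the answer.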
Theory of Combinatorial Algorithms
Mittagsseminar (in cooperation with J. Lengler, A. Steger, and D. Steurer)
Mittagsseminar Talk Information
Date and Time: Thursday, December 22, 2005, 12:15 pm
Speaker: Florian Walpen
Boosted Sampling: Approximation Algorithms for Stochastic Optimization
In several combinatorial optimization problems, elements are chosen to minimize the total cost of constructing a feasible solution that satisfies the requirements of clients. In the Steiner Tree problem, for
example, edges must be chosen to connect terminals (clients). The authors consider a stochastic version of such a problem where the solution is constructed in two stages: Before the actual
requirements materialize, we can choose elements in a first stage. The actual requirements are then revealed, drawn from a pre-specified probability distribution F; thereupon, some more elements may
be chosen to obtain a feasible solution for the actual requirements. However, in this second (recourse) stage, choosing an element is costlier by a factor of s > 1. The goal is to minimize the first
stage cost plus the expected second stage cost.
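As a toy illustration of this two-stage model (the numbers and element names below are mine, not the authors'): with unit element costs in the first stage and costs inflated by the factor s in the recourse stage, the objective for a fixed first-stage choice can be evaluated directly over a finite scenario distribution.

```python
def two_stage_cost(first_stage, scenarios, s):
    """scenarios: list of (probability, set of required elements)."""
    first_cost = len(first_stage)  # unit cost per element in stage one
    expected_recourse = 0.0
    for prob, required in scenarios:
        missing = required - first_stage       # elements still to be bought
        expected_recourse += prob * s * len(missing)  # costlier by factor s
    return first_cost + expected_recourse

scenarios = [(0.5, {"e1", "e2"}), (0.5, {"e1", "e3"})]
# Buying the always-needed element e1 up front beats deferring everything
# to the recourse stage when s = 3:
print(two_stage_cost({"e1"}, scenarios, s=3))  # 4.0
print(two_stage_cost(set(), scenarios, s=3))   # 6.0
```

The comparison shows the core trade-off the abstract describes: elements likely to be needed are worth buying cheaply in advance, while rarely-needed ones are better left to the (costlier) recourse stage.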
Easy Pete the Cat Step-by-Step Tutorial
Begin the Pete the cat outline by drawing his eyes. Use curved lines to sketch the half-circle shapes.
Draw two partial circles in each eye and shade between them, then add an upside-down triangle for Pete's nose.
Sketch Pete's head with overlapping curved lines for top, triangular ears, sides, and bottom.
Draw the whiskers and body using curved and "L" shaped lines to depict a cat's facial and body features.
Use straight and curved lines to draw leg and belly shapes, completing the whiskers with opposite side curves.
Draw a tennis shoe on Pete's front paw with an inverted triangle tongue and curved and straight lines.
Draw a straight line for the leg's back, add a shoe with an inverted triangle, curved sides and sole, and straight line laces.
Draw curved lines from Pete's back and hips, spiraling to form a tail, and a sneaker on his front foot.
Draw a complete Pete the cat outline with an inverted triangle, curved lines, and short straight lines.
Get the full tutorial with all drawing steps and a video tutorial via the link below. It's FREE!
You too can easily draw Pete the Cat following the simple steps.
Learn how to draw a great-looking Pete the Cat with step-by-step drawing instructions and a video tutorial.
Cool Math Stuff
Today's page for Math Awareness Month is about a recent video that caused some huge debate. I saw the video a month or two ago, and was very intrigued. I showed it to some of my friends, and we were
arguing about the content for quite a while. It also spread rapidly around the math department at Andover, with some teachers bringing it up in their classes.
Take a look at the page and try some of the exercises. You will find the outcomes very interesting and mind-boggling. The concept of infinity is difficult for any human being to grasp, making it tons
of fun to think about.
Comment below what you think of the video. Do you think it is accurate? What do you think the fallacies are? How could this be a part of string theory if it is mathematically flawed?
In math class last week, we were given the following problem:
I then did the math and determined that the limit would be -1/12. I then called my teacher over, and pointed to that answer. Recalling the video, I asked him if I could rewrite that -1/12 as
1+2+3+4+5+6+7+... as my final answer. Thankfully, he got the reference. In addition to being a funny anecdote, the fact that people got the joke shows how wide of an audience this information has
reached and captivated, which is amazing to see.
I explained a bit in my last post that April is Math Awareness Month, as well as linked to the poster on www.mathaware.org. In honor of this occasion, I plan to make my posts this month relevant to
the pages on the website and the mathematicians hosting them.
April 1st was a day on magic squares, and I am honored to have been the host of that page. There is a recent performance of me doing it, tutorials on how to make various magic squares, and different
activities and questions that can further your magic square experience. Click here to see the page.
Leetcode-D44-array-48 Rotate Image & 54 Spiral Matrix
1. Review
47. Permutations II
It's not bad; the idea is generally right. It just took a little debugging: when size == 0 the for loop is never entered, so you need to check for size == 1, append the completed path directly, and then return.
class Solution:
    def permuteUnique(self, nums: List[int]) -> List[List[int]]:
        def dsf(size, nums, path, res):
            if size == 1:  # one element left: complete the permutation
                res.append(path + nums)
                return
            for index in range(size):
                if index != 0 and nums[index] == nums[index - 1]:
                    continue  # skip duplicates (nums is kept sorted)
                dsf(size - 1, nums[:index] + nums[index + 1:], path + [nums[index]], res)
        nums.sort()
        res = []
        dsf(len(nums), nums, [], res)
        return res
48. Rotate image
1. Thought of a method: transpose first, then reflect about the vertical axis through the middle.
2. Look at the answer~
For the j-th element in the i-th row of the matrix, after rotation it appears at the j-th position of the i-th column counting from the end (in 0-based terms, matrix[i][j] moves to matrix[j][n-1-i]).
Try to write the code
3. Note that the rule places the element in the i-th column counting from the end (from the end!). No wonder few people use it directly; it's easier to go the straightforward way: transpose first, then reflect symmetrically. Ha ha, it's quite simple to write~
class Solution:
    def rotate(self, matrix: List[List[int]]) -> None:
        """
        Do not return anything, modify matrix in-place instead.
        """
        n = len(matrix)
        new_matrix = [[0]*n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                new_matrix[j][n - 1 - i] = matrix[i][j]  # row i, col j -> row j, col n-1-i
        matrix[:] = new_matrix  # copy back so the change is in place
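The transpose-then-reflect approach described in point 1 can be sketched as a genuinely in-place alternative that needs no auxiliary matrix (the class name below is mine, for illustration):

```python
class SolutionTranspose:
    def rotate(self, matrix):
        n = len(matrix)
        # Transpose: swap elements above the main diagonal with those below.
        for i in range(n):
            for j in range(i + 1, n):
                matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
        # Reflect each row about the vertical axis.
        for row in matrix:
            row.reverse()

m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
SolutionTranspose().rotate(m)
print(m)  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```

Composing the two reflections (transpose, then horizontal flip) produces exactly the 90-degree clockwise rotation, with O(1) extra space.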
54. Spiral matrix (review tomorrow)
1. Matrix again hahaha
2. I thought of a hash table to record whether each cell has already been visited.
Use recursion to handle the up, down, left and right movement.
3. It's good to think of recursion and try it, but this version doesn't quite work.
While doing this, I also found another problem: a recursive call's return value is not automatically propagated up the call chain, so taking the value returned by clock gives None; calling clock purely for its side effects and then reading path (which is mutated in place) works fine.
class Solution:
    def spiralOrder(self, matrix: List[List[int]]) -> List[int]:
        def clock(begin, l, w, matrix, path):
            # Stop once the remaining ring is empty (this termination test is
            # where the attempt can go wrong on some matrix shapes).
            if begin >= l - begin and begin + 1 >= w - begin and l - begin - 1 <= begin and w - begin - 1 <= begin + 1:
                return
            if begin < l - begin:
                for i in range(begin, l - begin):
                    path += [matrix[begin][i]]
            if begin + 1 < w - begin:
                for j in range(begin + 1, w - begin):
                    path += [matrix[j][l - begin - 1]]
            if l - begin - 1 > begin:
                for s in range(l - begin - 1, begin, -1):
                    path += [matrix[w - begin - 1][s - 1]]
            if w - begin - 1 > begin:
                for k in range(w - begin - 1, begin + 1, -1):
                    path += [matrix[k - 1][begin]]
            clock(begin + 1, l, w, matrix, path)
        path = []
        clock(0, len(matrix[0]), len(matrix), matrix, path)
        return path
3. Look at the answer
The answer is very similar to my original idea. The direction vectors and the loop that cycles direction with % 4 are very nice~
class Solution:
def spiralOrder(self, matrix: List[List[int]]) -> List[int]:
rows, columns = len(matrix), len(matrix[0])
visited = [[False] * columns for _ in range(rows)]
total = rows * columns
order = [0] * total
directions = [[0, 1], [1, 0], [0, -1], [-1, 0]]
row, column = 0, 0
directionIndex = 0
for i in range(total):
order[i] = matrix[row][column]
visited[row][column] = True
nextRow, nextColumn = row + directions[directionIndex][0], column + directions[directionIndex][1]
if not (0 <= nextRow < rows and 0 <= nextColumn < columns and not visited[nextRow][nextColumn]):
directionIndex = (directionIndex + 1) % 4
row += directions[directionIndex][0]
column += directions[directionIndex][1]
return order
4. Try to write it yourself~
To a large extent we still need to read other people's code and summarize it.
(1) Decide when to stop by whether all cells have been traversed (loop exactly total times, recording one cell per iteration).
(2) Don't get the directions wrong: each move changes either the row or the column, and cycling the direction index with % 4 turns the corner.
(3) Use the visited matrix as a record of which cells have already been taken.
(4) Check whether the next cell is out of bounds or already visited; if so, change direction immediately from the current cell; otherwise keep moving in the current direction.
(5) Iterate until every cell has been recorded.
class Solution:
    def spiralOrder(self, matrix: List[List[int]]) -> List[int]:
        rows, cols = len(matrix), len(matrix[0])
        total = rows * cols
        res = [0] * total
        directions = [[0, 1], [1, 0], [0, -1], [-1, 0]]
        direction_index = 0
        visited = [[False] * cols for _ in range(rows)]
        row, col = 0, 0  # easy to forget: start from the top-left cell
        for i in range(total):
            res[i] = matrix[row][col]
            visited[row][col] = True  # mark before moving on
            nextrow, nextcol = row + directions[direction_index][0], col + directions[direction_index][1]
            # The two < 0 checks matter: without them, negative indices
            # silently wrap around to the other side of the matrix.
            if nextrow < 0 or nextrow >= rows or nextcol < 0 or nextcol >= cols or visited[nextrow][nextcol]:
                direction_index = (direction_index + 1) % 4
            row = row + directions[direction_index][0]
            col = col + directions[direction_index][1]
        return res
Staff Research Profiles
Permanent Academic Staff Research Profiles
Dr Stefan Adams
Large deviation theory, probability theory, Brownian motions, statistical mechanics, gradient models, multiscale systems.
Professor Keith Ball
Functional analysis, high-dimensional and discrete geometry, information theory.
Professor Dwight Barkley
Applied and computational mathematics - nonlinear phenomena.
Dr David Bate
Geometric measure theory, real analysis.
Dr Christian Boehning
Algebraic geometry, representation and invariant theory, derived category methods in birational geometry, birational automorphism groups, unramified cohomology and applications of K-theory in
birational geometry.
Dr Ed Brambley
Aeroacoustics (mathematical modelling and computational theory); mathematical modelling of industrial metal forming; fluid dynamics; applied mathematics.
Dr Ferran Brosa Planella
Applied and industrial mathematics, mathematical modelling, asymptotic analysis, scientific computing, lithium-ion batteries, heat and mass transfer, continuum mechanics, moving boundary problems,
dynamical systems
Professor Gavin Brown
Algebraic geometry, especially classification, birational geometry and constructions of varieties (both using computational algebra and not).
Professor Nigel Burroughs
Mathematics applied to cell biology, (biophysical) models of dynamic spatial biological systems, analysis of experimental data using Bayesian model fitting methods (Markov chain Monte Carlo).
Dr Inna (Korchagina) Capdeboscq
Group theory, groups of Lie type, finite simple groups.
Dr Siri Chongchitnan
Cosmology, theoretical astrophysics, mathematics education.
Dr Sam Chow
Diophantine equations, diophantine approximation, analytic number theory, additive combinatorics.
Dr Radu Cimpeanu
Applied mathematics, mathematical modelling, scientific computing, asymptotic analysis, computational fluid dynamics, interfacial flows, wave propagation, industrial mathematics.
Dr Andreas Dedner
Numerical analysis and scientific computing, higher order methods for solving non-linear evolution equations, generic software design for grid based numerical schemes, geophysical flows, radiation
Dr Emanuele Dotto
Algebraic topology, homotopy theory, algebraic K-theory, equivariant homotopy theory.
Professor Bertram During
Applied and computational partial differential equations: modelling, analysis, numerical analysis, optimal control, applications in socio-economics and finance
Dr Louise Dyson
Mathematical modelling of biological systems, especially the epidemiology of neglected tropical diseases and the analysis of biological systems in which noise plays an important role.
Professor Charles Elliott
Partial differential equations and their applications: analysis, geometric PDEs, free boundaries and interfaces, biology, social-sciences, materials, finite elements, numerical analysis,
Dr Adam Epstein
Complex analytic dynamics; Riemann surfaces; value-distribution theory.
Dr Martin Gallauer
Algebraic geometry & algebraic topology, motivic theory, tensor-triangular geometry, homotopy theory, rigid-analytic geometry, modular representation theory.
Professor Vassili Gelfreich
Analysis and dynamical systems.
Dr Agelos Georgakopoulos
Infinite graphs, and their interactions with other fields of mathematics.
Dr Tobias Grafke
Rare events, fluid dynamics and turbulence, large deviation theory, metastability, non-equilibrium statistical mechanics, active matter.
Professor John Greenlees
Algebraic topology, homotopy theory, equivariant cohomology theories, derived categories and commutative algebra.
Dr Adam Harper
Analytic number theory, and connections with probability and combinatorics.
Dr Randa Herzallah
Control theory focusing primarily on developing reliable control strategies for many-body, multi-scale, stochastic systems exhibiting characteristics such as nonlinearity, uncertainty and hysteresis.
Data analytics and machine learning. Systems’ modelling. Signal processing. Decentralized systems’ modelling and control. Quantum system’s modelling and control.
Dr Thomas Hudson
Micromechanics of materials: Crystalline defects, especially dislocations and their evolution; Thermodynamic limits: linking microscopic and macroscopic properties of solids; Metastability and
temperature-driven evolution of defects. Asymptotic methods in the Calculus of Variations, PDE and Stochastic Analysis (Gamma-convergence techniques, Stochastic Homogenization, Large Deviations
Theory). Coarse-graining for dynamical systems (The Mori-Zwanzig formalism).
Professor Matthew Keeling
Mathematical modelling of population dynamics, especially infectious diseases and evolution. I am interested in how heterogeneities impact on population dynamics, in particular spatial structure,
social networks and stochasticity. I study the following diseases: foot-and-mouth disease, bovine TB, influenza, measles, bubonic plague.
Dr Markus Kirkilionis
Mathematical biology, dynamic network models, complex systems, numerical analysis, pattern formation, physiologically structured Population models, (monotone) dynamical systems.
Dr Oleg Kozlovski
Dynamical systems, ergodic theory, mathematical physics, financial mathematics.
Professor David Loeffler
Modular and automorphic forms, Iwasawa theory, and p-adic analysis.
Dr Martin Lotz
Numerical optimization, computational complexity, probabilistic analysis of algorithms, computational geometry and topology, geometric probability and applications to dimension reduction.
Professor Vadim Lozin
Graph theory, combinatorics, discrete mathematics.
Professor Robert MacKay FRS
Dynamical systems theory and applications, complexity science.
Professor Diane Maclagan
Combinatorial and computational commutative algebra and algebraic geometry.
Dr Shreyas Mandre
Partial differential equations, fluid and solid mechanics, asymptotic and perturbation methods, computational methods, engineering science.
Dr Andras Mathe
Geometric measure theory, fractal geometry.
Professor Ian Melbourne
Ergodic theory and dynamical systems; links with stochastic analysis.
Dr Mario Micallef
Partial differential equations; differential geometry.
Dr Richard Montgomery
Extremal and Probabilistic Combinatorics, and connections with other fields.
Dr Joel Moreira
Dynamical systems, ergodic theory and applications to arithmetic Ramsey theory, combinatorics and number theory.
Professor Oleg Pikhurko
Extremal combinatorics and graph theory; random structures; algebraic, analytic and probabilistic methods in discrete mathematics.
Professor Mark Pollicott
Thermodynamic formalism, with applications to geometry, analysis and number theory.
Dr Rohini Ramadas
Combinatorial algebraic geometry, complex dynamics, tropical geometry, moduli spaces
Professor David Rand
Mathematical biology, pure and applied dynamical systems.
Professor Miles Reid FRS
Algebra and geometry, algebraic geometry, classification of varieties, minimal models of 3-folds and higher dimensional algebraic varieties, singularities of 3-folds and higher dimensional algebraic
varieties, orbifolds and their resolution, McKay correspondence.
Professor Magnus Richardson
Theoretical neuroscience, quantitative physiology, stochastics, statistics, machine learning.
Professor Filip Rindler
PDEs, calculus of variations, geometric measure theory.
Professor James Robinson
Partial differential equations in fluid dynamics; embedding properties of finite-dimensional sets; infinite-dimensional dynamical systems.
Dr Kat Rock
Dynamic, mechanistic models of vector-borne diseases. ODE, PDE and stochastic model approaches to directly address applied research or policy questions.
Professor Jose Rodrigo
Analysis, partial differential equations and theoretical fluid mechanics.
Professor Dmitriy Rumynin
Representation theory.
Dr Saul Schleimer
Geometric topology, group theory, and computation.
Dr Marco Schlichting
Algebraic K-theory and higher Grothendieck-Witt groups of schemes; A^1-homotopy theory and motivic cohomology; derived categories, algebraic topology and algebraic geometry.
Professor Felix Schulze
Geometric analysis, partial differential equations and differential geometry.
Dr Cagri Sert
Ergodic theory and dynamical Systems, Lie groups and their discrete subgroups, geometric and probabilistic group theory
Professor Richard Sharp
Ergodic theory, dynamical systems, applications to geometry, combinatorial and geometric group theory, quantum chaos and noncommutative geometry.
Professor Samir Siksek
Arithmetic geometry, rational points, modular curves.
Professor John Smillie
Translation surfaces and complex dynamics in higher dimensions.
Dr Vedran Sohinger
Nonlinear dispersive PDEs, harmonic analysis, and quantum many-body problems.
Professor James Sprittles
Applied mathematics, computational fluid dynamics, interfacial flows, porous media, rarefied gas flow.
Dr Björn Stinner
Modelling of free boundary problems, analysis of nonlinear PDEs, finite element methods.
Uncertainty quantification, inverse problems, probabilistic numerics, data science.
Dr Damiano Testa
Algebraic geometry, number theory.
Dr Florian Theil
Partial differential equations, discrete systems.
Dr Adam Thomas
Algebraic groups, finite groups of Lie type, Lie algebras, representation theory.
Professor Michael Tildesley
Mathematical modelling of infectious diseases. Modelling of control policies in the presence of partial information. I work on a range of diseases such as avian influenza, foot-and-mouth disease,
rabies and bovine tuberculosis.
Professor Peter Topping
Geometric analysis, nonlinear PDE, differential geometry.
Dr Gareth Tracey
Finite group theory, particularly: generation properties of finite groups; properties of almost simple groups; permutation groups and associated combinatorics; and probabilistic group theory.
Dr. Roger Tribe
Probability, in particular interacting particle systems and stochastic partial differential equations.
Professor Daniel Ueltschi
Statistical mechanics, probability theory.
Professor Karen Vogtmann FRS
Geometric group theory, low-dimensional topology, cohomology of groups.
Professor Marie-Therese Wolfram
Partial differential equations, mathematical modeling in socio-economic applications and the life sciences, numerical analysis.
Dr David Wood
Dynamical systems, bifurcations with symmetry, applications to biology and industry.
Professor Oleg Zaboronski
Non-equilibrium statistical mechanics of interacting particle systems, random matrices and integrable systems.
Dr Weiyi Zhang
Symplectic topology, complex geometry and their interactions.
Professor Nikolaos Zygouras
Probability (including integrable probability, random media, SPDEs, statistical mechanics). I am also interested in the interactions of probability with integrable systems, representation theory and
rate of change graph worksheet
Identifying Rate of Change (Graphs) Worksheet Download
Worksheets | Free - Distance Learning, worksheets and more ...
Rate of Change: Graphs | Worksheet | Education.com
Compare Rates Of Change Worksheet
Comparing Two Functions by Rate of Change Practice Worksheet ...
Worksheet: Rate of Change - Slope - Using Tables and Graphs ...
Rate Of Change: Graphs Worksheet
Constant Rate of Change: Graphs, Tables & Word Problems Create the ...
IXL | Rate of change of a linear function: graphs | 8th grade math
Rate Of Change Worksheet With Answers Pdf - Fill Online, Printable ...
Constant Rate of Change (unit rate) in a Graph by McBeee Math worksheets library
Slope and Rate of Change | Teaching algebra, Word problem ...
5.7 - Initial Value and Rate of Change | Linear Relations | MFM1P Math
IXL | Constant rate of change | 7th grade math
Rates of Change | CK-12 Foundation
Rates and Unit Rates Worksheets with Word Problems
rate of change | Math notebook, Maths algebra, Algebra
Constant rate of change graphs worksheets library
Solved Compare Rates of Change Practice Worksheet 1. Two | Chegg.com
Rate of Change Worksheet by Almighty Algebra worksheets library
Quiz & Worksheet - Rate of Change vs. Negative Rate of Change ...
Slope and Rate of Change online exercise for | Live Worksheets
How to Find the Rate of Change in Tables & Graphs - Lesson | Study.com
Slope and Rate of Change
Solved Section: Score: Worksheet 9.3-Rates of Change | Chegg.com
Comparing Rates of Change from Graphs, Tables, and Equations Stations Maze
Constant Rate Of Change Worksheet 7th Grade Pdf At Mollylane ...
Quiz & Worksheet - Average Rate of Change | Study.com
Average and Instantaneous Rate of Change - GeeksforGeeks
Linear Equations in Two Variables and Their Graphs learn online
Difference between APR and APY
It is important for you to know about the interest rates when you want to put your money into the interest bearing investments, finance a loan, get a savings account, checking, or money market
accounts. Banks and other financial institutions often use the terms APY and APR, but many people do not know what these terms really mean, or how they differ. These acronyms are widely used by the
banks, so every individual should know what the difference between the two is, and it can only be done if you know what these terms mean.
Definition of APR and APY
Annual Percentage Rate, or APR, is the simple rate of interest earned by account holders or investors. It is the annual interest rate, which does not take into account the
compounding of interest in a year. When you talk about APR in the context of savings, it actually represents the periodic rate or simply the rate. For example, if you deposit $1,000 in your account
with 10% APR, and the interest is paid only once a year, you will earn $100 of interest after a year. However, you can also make money by earning interest on your interest, and
this is what the APY is all about, because it takes into account the compounding of interest.
Annual Percentage Yield, or APY, is an interest yield that is received by an individual on the account balance he holds for a year as an investment or savings. It is the effective rate of return
earned annually after accounting for intra-year compound interest. For example, continuing the example above, suppose you have $1,000 in your savings account at a 10% rate of interest
that is paid bi-annually. For the first 6 months, you will earn $50 ($1,000*10%/2). For the second half of the year, however, you will now earn interest on the total amount of $1,050,
after adding the $50 earned in the first 6 months. The formula to calculate the APY is:
APY = (1 + r/m)^m - 1
Where ‘m’ is the frequency of compounding in a year, such as quarterly or bi-annually, and ‘r’ is the nominal annual rate of interest. Now, in the second half, the interest earned will be $52.50
($1,050*10%/2), giving total interest earned in a year of $102.50, which is slightly higher than the $100 implied by the simple APR. The APY will now be 10.25% ($102.50/$1,000*100). The higher the
frequency of compounding in a year, the bigger the difference between APR and APY.
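The APY formula is easy to check in code. This is a small sketch (the helper name is mine), using the same numbers as the worked example above:

```python
def apy(r, m):
    """Effective annual yield for nominal rate r compounded m times a year."""
    return (1 + r / m) ** m - 1

rate = apy(0.10, 2)           # 10% APR, compounded bi-annually
print(round(rate, 6))         # 0.1025, i.e. 10.25%
print(round(1000 * rate, 2))  # 102.5 dollars of interest on a $1,000 deposit
```

With m = 1 the function reduces to the simple APR, which is exactly why quoting one number when the other applies can mislead a borrower or saver.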
Borrower’s Approach to APR and APY
When you want to borrow a loan or apply for a mortgage or a credit card, you prefer to have the lowest rate of interest, and in order to get the real picture of the actual cost of the credit, you
need to understand the basic difference between the two. For example, when you apply for a loan, you may choose a lender that is providing the lowest possible rate, but it is highly likely that it
ends up costing you more than you originally thought it would, because the lender will be showing you the APR, and not the APY.
Lender’s Approach to APR and APY
Being a lender, you always look for the highest rate of interest and banks and other financial institutions usually conceal the APR and advertise the APY instead to attract the lenders, since there
is some compounding of interest involved during that financial year.
So, this is how both APR and APY are different from each other. The difference between the two rates can have a significant effect on the financial decisions of borrowers and investors. To summarize
it, you can say that the financial institutions usually highlight the APY in order to attract the investors in case of savings account, and show how high the rate of interest is. Whereas, when you
apply for a credit card or a loan, the APR is highlighted in order to hide the actual cost an individual will be paying. Therefore, whenever you apply for a loan or get a savings account, you should
follow a like-with-like approach: do not compare the APY of one product with the APR of another, since only a like-for-like comparison gives a true picture of which is more suitable for you.
Latest posts by Hira Waqar
In defense of j
Posted on February 19, 2009 by Travis
I’m about to commit an act of mathematical heresy.
I am a complex analyst by trade, which means I am a mathematician who studies the properties and utility of using complex numbers in calculus operations. Complex numbers, you might recall, are
numbers of the form a + b i, where a and b are real numbers and i is the imaginary number; i.e. i = $\sqrt{-1}$. Complex numbers are essential tools of mathematics. Complex numbers provide natural and
beautiful ways to connect arithmetic and geometry, algebra and trigonometry, differentiation and integration, single-variable and multi-variable calculus, number theory and analysis, mathematics with
genetics… you name it.
The problem, however, is that no one outside of trained and dedicated mathematicians seems to know this. I myself have taught Complex Analysis as a class a number of times, and each time I found that
after fifteen weeks of showing students how the use of complex numbers fundamentally unifies almost all of the concepts they’ve learned about in their other math class, how they provide novel and
easy ways to solve problems raised in those other classes, and then how it goes on to produce results of surprising beauty and elegance and utility in its own right…. even after all that, I still
have a nontrivial subset of students who will say “That’s nice and all, but it isn’t real. It’s still all imaginary numbers.”
That stupid phrase imaginary has done more to damage the perception of complex numbers than anything else I can think of. I mean, by its very definition, the word imaginary describes something that
doesn’t exist or, at the very best, something utterly useless. You don’t see this self-destructive naming convention in the other sciences. Physicists don’t call gravity the phantom suck; chemists
don’t mix solutions with the element doesnotexistium; biologists haven’t relabeled the appendix as the useless organ, even though it’s clearly more accurate.
Mathematicians tried to fix this somewhat by changing the name of imaginary numbers to complex numbers, although in hindsight the decision to change the adjective from one that describes something
that does not exist to one that describes something that’s ungodly difficult wasn’t probably the optimal fix. No, I have a better solution.
It’s time to retire Euler’s choice of the letter i for the imaginary number $\sqrt{-1}$.
I’m serious.
The problem with using i is that it stands for a number that is, well, imaginary. It’s a self-defeating prophecy. Students are quite capable of appreciating the use of $\pi$ (pi) in geometry without
necessarily knowing that it is the first letter in the Greek word for “perimeter.” Students are quite capable of appreciating the use of e in calculus and calling it the base of the natural logarithm
without necessarily ever knowing that it’s named for a Swiss mathematician whose name sounds like “Oiler.” So why can’t we give students a chance to appreciate the use of i in algebra… and geometry!
… and calculus!… without first telling them what the letter stands for.
We need to get rid of it.
I’m not suggesting that we go back to the $\sqrt{-1}$ notation of yore — that’s a choice fraught with peril. For a positive real number x, the radical symbol $\sqrt{x}$ refers specifically to the unique non-negative number r such that the product of r with itself is x, and given this definition, we have a number of important rules of radicals, such as
$\big( \sqrt{x} \big) \cdot \big( \sqrt{y} \big) = \sqrt{x \cdot y}$.
However, even if we accept the existence of complex numbers (and I do!), there is no way to properly define $\sqrt{-1}$, since there is no way to single out a unique root of -1: both i and 1/i are square roots of -1, and neither can meaningfully be called non-negative. And even if we could somehow unambiguously define what complex number we meant by $\sqrt{-1}$, it still wouldn’t respect the previously established rules of radicals. As a simple example, if the rules of radicals applied to the root of -1, we could conclude that
$-1 = \big( \sqrt{-1} \big) \big( \sqrt{-1} \big) = \sqrt{(-1)(-1)} = \sqrt{1} = 1.$
No, no, no… I have a better idea.
We should use the letter j instead of i.
Bear with me. I submit three criteria by which j is a better choice.
First: j is a neutral letter. It doesn’t stand for anything in general, and in particular it doesn’t stand for imaginary. Instead, we just say that j denotes the complex unit, a number that
multiplies against itself to yield -1.
Second: the letter j arises naturally from vector calculus. To be specific, the standard modern definition of the complex numbers is essentially to identify the complex number $a + b\,\sqrt{-1}$ with the 2-dimensional vector (a,b) in the Euclidean plane $\mathbb{R}^2$. Vectors already have their own addition, whereas a “complex” vector product can be found by multiplying lengths and adding angles. With these definitions it can be shown that
(a,b)(c,d) = (ac - bd, ad + bc)
for any vectors (a,b) and (c,d) in the plane. In particular, this implies that
(a,b) = (a,0)(1,0) + (b,0)(0,1)
for any real numbers a and b. Since we commonly identify the x-axis of the plane with the real line, it is equally common to identify a complex number realized as the vector (x,0) with the real
number x. In this case, the equation above can be written as
(a,b) = a (1,0) + b (0,1).
In vector calculus, we usually denote the vector (1,0) by i and the vector (0,1) by j. Thus, every complex number/plane vector can be written in the form
(a,b) = a i + b j,
which completely agrees with any student’s calculus intuition (although the calculus version is thinking of these as vector scalings rather than complex vector products).
Of course, the identification above also implies that the vector (1,0) is identified with the real number 1, so the equation above can equivalently be expressed as saying that any complex number can
be written as
(a,b) = a + b j,
which looks exactly like the standard form of a complex number with the vector j standing in for $\sqrt{-1}$. Even more compelling, note that
j^2 = j j = (0,1) (0,1) = (-1, 0) = -1,
so j is indeed a square root of -1. Consequently, it makes sense to use j to denote the complex unit, particularly when we want to think of it as a number rather than a vector.
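The arithmetic above is easy to check by machine. Here is a small Python sketch (mine, not part of the original argument) that implements the Cartesian form of the “multiply lengths, add angles” product and verifies that j = (0,1) squares to -1 while i = (1,0) acts as the multiplicative identity:

```python
# The complex vector product from the post, written in Cartesian form:
# (a, b)(c, d) = (ac - bd, ad + bc)
def vmul(u, w):
    (a, b), (c, d) = u, w
    return (a * c - b * d, a * d + b * c)

i = (1, 0)  # identified with the real number 1
j = (0, 1)  # the proposed complex unit

assert vmul(j, j) == (-1, 0)        # j^2 = -1, as claimed
assert vmul(i, (3, 4)) == (3, 4)    # i is the multiplicative identity
assert vmul((3, 4), i) == (3, 4)
```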
Third: I refers to multiplicative identities. Note that my argument above implied that the real number 1 is identified with the vector i; and if we further follow the font-style convention suggested
above, we would also write 1 as the complex number i.
That may look a little weird, but it ain’t a bad idea. In fact, it’s entirely consistent with another naming convention students are likely to see (at least in part) in vector calculus. In the ring
of square matrices, the unique multiplicative identity is denoted I; that is, it is the unique matrix element such that A I = I A = A for any matrix A. It turns out that a second common construction
of the complex numbers is to identify the complex number $a + b\,\sqrt{-1}$ with the 2 x 2 matrix
$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$
with the usual matrix addition and matrix multiplication. In particular, the complex unit is represented by the matrix
$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$
which could be conceivably denoted by any letter (like J), while the real number 1 is identified with the matrix
$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$
which is the identity matrix, and is universally denoted by I. Hence, identifying 1 with I is in fact required when we use the matrix description of complex numbers (which is crucial in articulating
the connection between complex differentiability and vector-field differentiability).
Thus, I respectfully submit that henceforth we banish the term imaginary and with it, that horrible perpetrator of confusion and distrust, the letter “i.” In its place, let us choose only to speak of
the complex number j, and…
Wait… what? Electrical engineers have been doing this for ages?
Oh. Well, then, false alarm. Stupid idea.
This entry was posted in mathify. Bookmark the permalink.
Unfamiliar formulation of Stokes Problem
• Thread starter the.drizzle
In summary, the problem is that the developers have formulated the Stokes problem in a manner that is unfamiliar to me and I am not able to make it work in the manner I would expect it to.
Hello, I'm trying out the escript Python FEM software package, which is so far rather impressive, if for no other reason than the developers have included a Stokes Flow solver. The problem I'm having, however, is that they have formulated the problem in a manner I have not encountered before, nor can I seem to make it "work" in the manner I would expect it to. In particular, we have, from section 6.1 of the users manual:
We want to calculate the velocity field v and pressure p of an incompressible fluid. They are given as the solution of the Stokes problem
[tex]-\left( \eta \left( v_{i,j} + v_{j,i} \right) \right)_{,j} + p_{,i} = f_i + \sigma_{ij,j}[/tex]
where [itex]f_i[/itex] defines an internal force and [itex]\sigma_{ij}[/itex] is an initial stress. The viscosity may weakly depend on pressure and velocity. If relevant we will use the notation [itex]\eta\left(v,p\right)[/itex] to express this dependency.
My basic problem is that I have not encountered what would normally be the Laplacian on the LHS of the above statement. That is, I would typically expect Stokes problem to be stated as
[tex]\Delta v - \nabla p = f[/tex]
which, components aside, does not seem to be an equivalent statement. Due to my application, the inclusion of the initial condition [itex]\sigma_{ij,j}[/itex] is unimportant, and conservation of mass
([itex]\nabla\cdot v=0[/itex]) is assumed in both cases.
So, can anyone tell me what I'm doing wrong, or where I might find a derivation of the quoted formulation so that I can actually apply it?
See the explanation in section 1.5.
Thanks, but I suppose I should clarify...
The problem I'm having is not one of indices vs. operator, what I'm failing to see is how
[tex]\nabla\cdot\left(\eta\left(\nabla v + \nabla^T v \right)\right)[/tex]
is equivalent (in some sense?) to
[tex]\eta\Delta v[/tex]
That is, I'm assuming that they mean that [itex]\nabla^T[/itex] denotes the adjoint to [itex]\nabla[/itex], but even then that doesn't seem to add up...
the.drizzle said:
Thanks, but I suppose I should clarify...
The problem I'm having is not one of indices vs. operator, what I'm failing to see is how
[tex]\nabla\cdot\left(\eta\left(\nabla v + \nabla^T v \right)\right)[/tex]
is equivalent (in some sense?) to
[tex]\eta\Delta v[/tex]
That is, I'm assuming that they mean that [itex]\nabla^T[/itex] denotes the adjoint to [itex]\nabla[/itex], but even then that doesn't seem to add up...
[itex]\nabla^Tv[/itex] denotes the TRANSPOSE of [itex]\nabla v[/itex]
If you sum them both and divide by 2, you get a symmetrical tensor called the "rate of strain tensor"; let's call it ε.
For an incompressible flow ([itex]\nabla · v = 0[/itex]) the law that relates the "viscous stress tensor σ" (I think this one is also called the deviatoric stress tensor) to the "rate of strain tensor ε" is:
σ= 2η·ε
Now, in the equation of conservation of momentum, σ doesn't appear as such, but through its divergence. If you calculate its divergence (or just look it up, Navier-Poisson's law), you get to the following:
[itex]\nabla · σ = - \nabla \times (η\nabla \times v)[/itex]
Since η is constant you can get it out of the curl expression. Applying this property of operators you finally get to the laplacian of v
[tex]\nabla \times \nabla \times \vec{v} = \nabla (\nabla \cdot \vec{v}) - \nabla^2 \vec{v} [/tex]
Hope I could clarify!
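The underlying identity — that ∇·(∇v + ∇ᵀv) equals ∇²v + ∇(∇·v) componentwise — can also be sanity-checked numerically. The following Python sketch (mine, not from the thread and independent of escript) compares both sides with central differences for an arbitrary polynomial velocity field:

```python
h = 1e-3  # step size for central differences

def v(p):
    # an arbitrary smooth polynomial velocity field (illustrative only)
    x, y, z = p
    return (x * x * y, y * z * z, x * z + y * y)

def d(f, j, p):
    # central-difference partial derivative of the scalar function f along axis j
    q1, q2 = list(p), list(p)
    q1[j] += h
    q2[j] -= h
    return (f(q1) - f(q2)) / (2.0 * h)

vi = [lambda p, k=k: v(p)[k] for k in range(3)]          # components v_i
div_v = lambda p: sum(d(vi[k], k, p) for k in range(3))  # nabla . v

p0 = [0.7, -1.2, 0.4]

# Left-hand side (constant eta = 1):  d_j ( d_j v_i + d_i v_j )
lhs = [sum(d(lambda q, i=i, j=j: d(vi[i], j, q) + d(vi[j], i, q), j, p0)
           for j in range(3))
       for i in range(3)]

# Right-hand side:  Laplacian(v_i) + d_i (div v)
rhs = [sum(d(lambda q, i=i, j=j: d(vi[i], j, q), j, p0) for j in range(3))
       + d(div_v, i, p0)
       for i in range(3)]

# The two sides agree; for an incompressible field (div v = 0) the second
# term on the right vanishes and only the Laplacian remains.
assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-4
```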
Brilliant, thank you!
FAQ: Unfamiliar formulation of Stokes Problem
1. What is the Stokes Problem?
The Stokes Problem is a mathematical model used to describe the motion of a viscous, incompressible fluid. It was first formulated by Sir George Stokes in the 19th century and is commonly used in
fluid dynamics research.
2. What is an unfamiliar formulation of the Stokes Problem?
An unfamiliar formulation of the Stokes Problem may refer to a different way of expressing or solving the equations that make up the model. This could include using different mathematical techniques
or incorporating additional variables.
3. Why is studying the Stokes Problem important?
Studying the Stokes Problem is important because it can help us better understand the behavior of fluids in various scenarios, such as in engineering applications or in natural phenomena. It is also
a fundamental model in fluid dynamics and can be used as a basis for more complex problems.
4. How is the Stokes Problem solved?
The Stokes Problem is typically solved using numerical methods, such as finite element analysis or finite difference methods. These involve breaking down the equations into smaller, discrete parts
and using algorithms to solve them iteratively.
5. What are some real-world applications of the Stokes Problem?
The Stokes Problem has many real-world applications, including analyzing the flow of fluids in pipes, predicting weather patterns, and understanding the behavior of blood flow in the human body. It
is also used in the design of various engineering systems, such as pumps and turbines.
Catalog Entries
Select the Course Number to get further detail on the course. Select the desired Schedule Type to find available classes for the course.
MATH 203 - Linear Algebra
Prerequisite: MATH 106 or equivalent. (See Math Flowchart, Page 139.) Introduces linear algebra with emphasis on interpretation and the development of computational techniques. Topics include systems of equations; matrices as used for the interpretation of vector spaces; subspaces, independence, bases, dimension, inner product, outer product, and orthogonal and orthonormal sets; and the transformation of matrices, matrix operations, inverses, conditions for invertibility, and determinants and their properties. The characteristic equation and its eigenvalues are used for problem solving and the development of linear transformations.
3.000 Credit hours
3.000 Lecture hours
0.000 Lab hours
0.000 Other hours
Levels: Undergraduate
Schedule Types: Hybrid (Synchronous), Individual Study, Lecture, On-line Study (Synchronous), On-line Study (Asynchronous)
Science/Math/Technology Division
Math Department
Course Attributes:
Gen Ed Mathematics, Liberal Arts Elective, Math Elective, Math/Science Elective, GE:Math&Quantitative Reasoning
Math Colloquia - Conformal field theory in mathematics
Since Belavin, Polyakov, and Zamolodchikov introduced conformal field theory as an operator algebra formalism which relates some conformally invariant critical clusters in two-dimensional lattice
models to the representation theory of the Virasoro algebra, it has been applied in string theory and condensed matter physics. In mathematics, it inspired the development of algebraic theories such as Virasoro representation theory and the theory of vertex algebras. After reviewing its development and presenting its rigorous model in the context of probability theory and complex analysis, I
discuss its application to the theory of Schramm-Loewner evolution.
C.8.17.12 Wide Field Ophthalmic Photography 3D Coordinates Module
Include Table 10-5 “General Anatomy Mandatory Macro Attributes”. The concept code for Anatomic Region Sequence (0008,2218) shall be (81745001, SCT, "Eye"), and DCID 244 “Laterality” shall be used for Anatomic Region Modifier Sequence (0008,2220). Only a single Item shall be included in Anatomic Region Modifier Sequence (0008,2220).
Transformation Method Code Sequence (0022,1512) 1 Method used to map the 2D pixel image data in this SOP Instance to the 3D Cartesian coordinates in the Two Dimensional to Three Dimensional Map Sequence (0022,1518).
Only a single Item shall be included in this Sequence.
See Section C.8.17.12.1.1 for further explanation.
>Include Table 8.8-1 “Code Sequence Macro Attributes” DCID 4245 “Wide Field Ophthalmic Photography Transformation Method”
Transformation Algorithm Sequence (0022,1513) 1 Software algorithm that performed the mapping.
Only a single Item shall be included in this Sequence.
>Include Table 10-19 “Algorithm Identification Macro Attributes”
Ophthalmic Axial Length (0022,1019) 1 The axial length measurement used when performing the 2D pixel image mapping into 3D Cartesian coordinates, in mm.
Ophthalmic Axial Length (0022,1515) 1 The method used to obtain the Ophthalmic Axial Length.
Enumerated values:
Ophthalmic FOV (0022,1517) 3 The field of view used to capture the ophthalmic image, in degrees. The field of view is the maximum image size displayed on the image plane, expressed as
the angle subtended at the exit pupil of the eye by the maximum dimension 2r (where r equals the radius).
Two Dimensional to Three Dimensional Map Sequence (0022,1518) 1 A sparsely sampled map of 2D image pixels (with sub-pixel resolution) to 3D coordinates.
Each frame shall be referenced once and only once in this Sequence in Referenced Frame Numbers (0040,A136).
One or more Items shall be included in this Sequence.
>Referenced Frame Number (0008,1160) 1 References one or more frames within this SOP Instance to which this Sequence Item applies. The first frame shall be denoted as frame number one.
>Number of Map Points (0022,1530) 1 The number of points in the map. Shall include one or more points.
>Two Dimensional to Three Dimensional Map Data (0022,1531) 1 See Section C.8.17.12.1.2 for further explanation.
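The Standard leaves the actual mapping to the method identified in Transformation Method Code Sequence (0022,1512). Purely as an illustrative sketch (not a DICOM-defined algorithm; the function, model, and numbers below are assumptions), a mapping of this kind might project image pixels onto a sphere whose diameter is the Ophthalmic Axial Length, using the Ophthalmic FOV to convert pixel offsets into angles:

```python
import math

def map_2d_to_3d(u, v, width, height, fov_deg, axial_length_mm):
    """Illustrative 2D pixel -> 3D Cartesian mapping (NOT the DICOM-defined
    transform). The eye is modelled as a sphere of diameter axial_length_mm
    centred at the origin, with the image centre at the posterior pole."""
    r = axial_length_mm / 2.0
    half_fov = math.radians(fov_deg) / 2.0
    theta_u = (2.0 * u / width - 1.0) * half_fov   # horizontal angle
    theta_v = (2.0 * v / height - 1.0) * half_fov  # vertical angle
    x = r * math.sin(theta_u) * math.cos(theta_v)
    y = r * math.sin(theta_v)
    z = -r * math.cos(theta_u) * math.cos(theta_v)
    return (x, y, z)

# Centre pixel of a 1000 x 1000 image, 200-degree FOV, 24 mm axial length:
p = map_2d_to_3d(500, 500, 1000, 1000, 200.0, 24.0)
assert abs(p[0]) < 1e-9 and abs(p[1]) < 1e-9 and abs(p[2] + 12.0) < 1e-9

# Every mapped point lies on the sphere of radius axial_length / 2:
q = map_2d_to_3d(800, 200, 1000, 1000, 200.0, 24.0)
assert abs(math.dist(q, (0, 0, 0)) - 12.0) < 1e-9
```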
Application of Vectors
There are two types of mathematical quantities used to describe the nature of things in terms of magnitude and direction: vectors and scalars.
Definition of Vector
Vectors are geometrical entities with magnitude and direction. The length of a vector signifies its magnitude, and it is depicted as a line with an arrow pointing in the vector's direction. Vectors are used to express physical quantities like displacement, velocity, and acceleration. In addition, the discovery of electromagnetic induction in the late nineteenth century ushered in the use of vectors. Two vectors are the same if their magnitude and direction are the same. This means that if we translate a vector to a new point (without rotating it), the vector we finish up with is the same as the one we started with. Force vectors and velocity vectors are two instances of vectors. Each points in the direction of the force or the motion, and the vector's magnitude represents the force's intensity or the corresponding speed.
Applications of Vector Analysis:
Air traffic controllers use vectors to track flights, meteorologists use them to describe wind conditions, and computer programmers use them to create virtual worlds. In this lesson, we’ll go over
three vector applications that are often utilized in physics: work, torque, and magnetic force.
Work:
When a force is applied to an item or system, the term work is used to describe the energy that is added to or withdrawn from it. Work is maximum when the applied force is parallel to the motion of
the item, and no work is done when the force is applied perpendicular to the motion, according to experiment. As a result, the dot product of the force vector and the displacement vector may be used
to characterize the work done by a force.
Magnetic Force:
When a charged particle moves perpendicular to the magnetic field, the magnetic force on the particle is greatest, and when the particle moves parallel to the field, the magnetic force is zero. As a
result, the magnetic force may be expressed as the cross-product of the field strength vector and the velocity vector of the particle: F=qv x B, where F is the force acting on the particle, q is the
particle’s charge, v is the particle’s velocity, and B is the magnetic field’s vector. The force will be measured in newtons, the metric base-unit of force, if the velocity is measured in m/s and the
magnetic field is measured in tesla.
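A short Python sketch of F = qv × B (with made-up values) shows the same behaviour: zero force for motion parallel to the field, and maximal force of magnitude q|v||B| for perpendicular motion:

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def magnetic_force(q, v, B):
    # F = q v x B  (q in coulombs, v in m/s, B in tesla -> F in newtons)
    return tuple(q * c for c in cross(v, B))

q = 1.6e-19               # charge of a proton, C
B = (0.0, 0.0, 2.0)       # a hypothetical 2 T field along z

# v parallel to B: no magnetic force at all
assert magnetic_force(q, (0.0, 0.0, 1e5), B) == (0.0, 0.0, 0.0)

# v perpendicular to B: |F| = q |v| |B|
F = magnetic_force(q, (1e5, 0.0, 0.0), B)
assert abs(abs(F[1]) - q * 1e5 * 2.0) < 1e-26
assert F[0] == 0.0 and F[2] == 0.0
```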
Torque:
When you raise a baseball off a tabletop, you are exerting a force on the entire thing. When you press down on a doorknob, the door will spin on its hinges. Torque is a word used by scientists to describe the force-like quality that affects an object's rotation. The torque may be expressed as the cross-product of the lever arm and the force vector, where the lever arm is a radially outward vector pointing from the axis of rotation to the point where the force is applied to the object: T = r x F, where T is the torque, r is the lever arm, and F is the applied force.
Vectors are used to describe rotational motion:
In physics, we frequently need to explain rotating motion. If an item rotates, the following must be specified:
• The spinning axis around which the thing revolves
• The rotational direction of the item around that axis.
• The rotational speed of the item
To explain this form of rotating motion, we develop a new type of vector called a “axial vector.” The direction of the vector is chosen to be co-linear with the axis of rotation, and the magnitude of
the vector is chosen to indicate the rotational speed of the item. As a result, we only have two options for vector direction. Consider the wheels of a car driving away from you. Because the axis of
rotation is the axis of the wheel, we know that the angular velocity vector (which describes the wheel’s rotation) must point to the left or right.
Another right-hand rule is used to determine the vector’s direction. To distinguish it from the right hand rule for the cross product, we’ll call it “the right hand rule for axial vectors.” When you
curl your fingers in the direction of rotation and use the right hand rule for axial vectors, the vector points in the direction of your thumb. The wheels of the automobile travelling away from you
will turn so that the closest point to you moves up and the furthest point moves down. The rotation vector points to the left, according to the right hand rule.
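The right-hand-rule bookkeeping can be checked with a cross product: the velocity of a point on the rim is v = ω × r. In the Python sketch below (the axes and numbers are my own choices, not from the article), an angular-velocity vector pointing left indeed makes the point of the receding wheel nearest you move up:

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Axes (illustrative): +x = your right, +y = away from you, +z = up.
# The wheel spins about its axle, which lies along x, so omega is axial.
omega = (-10.0, 0.0, 0.0)    # rad/s, pointing LEFT per the right-hand rule

R = 0.3                      # wheel radius in metres (hypothetical)
nearest = (0.0, -R, 0.0)     # point on the rim closest to you
farthest = (0.0, R, 0.0)     # point on the rim farthest from you

v_near = cross(omega, nearest)   # velocity of the nearest point
v_far = cross(omega, farthest)   # velocity of the farthest point

assert v_near[2] > 0   # closest point moves up
assert v_far[2] < 0    # farthest point moves down
```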
Vectors in quantum mechanics:
It is easy to see the importance of linear algebra for physicists, because quantum mechanics relies on it entirely. Time-domain (state-space) control theory and tensor stresses in materials also depend on it. In quantum physics, the state of a physical system is represented as a vector in a complex vector space; this vector space is called the state space of the system. The symbol | ⟩, known as a ket, represents such a physical state of a quantum system. This form is known as the Dirac notation, and it is widely used in quantum physics. A ket can alternatively be called a state vector, ket vector, or just state.
A vector is a quantity that has both magnitude and direction. Vectors are useful in a variety of situations, including those involving force or velocity. In physics, vectors are commonly employed to determine displacement, velocity, and acceleration. Vectors can be pictured as arrows that combine a magnitude and a direction.
Computation of diffusive shock acceleration using stochastic differential equations
The present work considers diffusive shock acceleration at non-relativistic shocks using a system of stochastic differential equations (SDE) equivalent to the Fokker-Planck equation. We compute
approximate solutions of the transport of cosmic particles at shock fronts with an SDE numerical scheme. Only the first-order Fermi process is considered. The momentum gain is given by implicit calculations of the fluid velocity gradients using a linear interpolation between two consecutive time steps. We first validate our procedure in the case of single-shock acceleration and retrieve previous analytical derivations of the spectral index for different values of the Péclet number. The spectral steepening effect due to synchrotron losses is also reproduced. A comparative discussion of implicit and explicit schemes for different shock thicknesses shows that implicit calculations extend the range of applicability of SDE schemes to infinitely thin 1D shocks. The method is then applied
to multiple shock acceleration, which can be relevant for blazar jets and accretion disks and for galactic centre sources. We only consider a system of identical shocks whose free parameters are the distance between two consecutive shocks, the synchrotron loss time, and the escape time of the particles. The stationary distribution reproduces quite well the flat differential logarithmic energy distribution produced by the multiple-shock effect, and also the piling-up effect due to synchrotron losses at the momentum where they balance the acceleration rate. At higher momenta particle losses dominate and the spectrum drops. The competition between acceleration and loss effects leads to a pile-up shaped distribution which appears to be effective only in a restricted range of inter-shock distances of ~ 10-100 diffusion lengths. We finally compute the optically thin synchrotron spectrum produced by such a periodic pattern, which can explain flat and/or inverted spectra observed in flat-spectrum radio quasars and in the galactic centre.
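To illustrate the flavour of the SDE approach (this is a generic sketch under my own assumptions, not the authors' scheme), a Fokker-Planck equation with constant diffusion coefficient D and drift u corresponds to the Itô SDE dx = u dt + √(2D) dW, which an explicit Euler-Maruyama scheme integrates step by step:

```python
import math, random

def euler_maruyama(x0, drift, D, dt, n_steps, rng):
    # Explicit Euler-Maruyama integration of the Ito SDE
    #   dx = drift(x) dt + sqrt(2 D) dW
    x = x0
    sigma = math.sqrt(2.0 * D * dt)
    for _ in range(n_steps):
        x += drift(x) * dt + sigma * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(12345)
D, dt, n_steps = 0.5, 1e-3, 1000   # total integration time t = 1.0

# Pure diffusion (zero drift): the variance <x^2> should grow like 2 D t
samples = [euler_maruyama(0.0, lambda x: 0.0, D, dt, n_steps, rng)
           for _ in range(2000)]
var = sum(x * x for x in samples) / len(samples)
assert abs(var - 2.0 * D * 1.0) < 0.15   # 2 D t = 1.0 within sampling error
```

An implicit variant, as discussed in the abstract, would instead evaluate the drift (here, the fluid velocity gradient) at the new position, which is what extends the scheme to very thin shocks.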
Astronomy and Astrophysics
Pub Date:
July 1999
□ ACCELERATION OF PARTICLES;
□ SHOCK WAVES;
□ RADIATION MECHANISMS: NON-THERMAL;
□ GALAXY: CENTER;
□ GALAXIES: BL LACERTAE OBJECTS: GENERAL;
□ Astrophysics
accepted in A&A
How to desugar Haskell code
Haskell's core language is very small, and most Haskell code desugars to either:
• lambdas / function application,
• algebraic data types / case expressions,
• recursive let bindings,
• type classes and specialization, or:
• Foreign function calls
Once you understand those concepts you have a foundation for understanding everything else within the language. As a result, the language feels very small and consistent.
I'll illustrate how many higher-level features desugar to the same set of lower-level primitives.
if is equivalent to a case statement:
if b then e1 else e2
-- ... is equivalent to:
case b of
    True  -> e1
    False -> e2
This works because Bools are defined within the language:
data Bool = False | True
Multi-argument lambdas
Lambdas of multiple arguments are equivalent to nested lambdas of single arguments:
\x y z -> e
-- ... is equivalent to:
\x -> \y -> \z -> e
Functions are equivalent to lambdas:
f x y z = e
-- ... is equivalent to:
f = \x y z -> e
-- ... which in turn desugars to:
f = \x -> \y -> \z -> e
As a result, all functions of multiple arguments are really just nested functions of one argument each. This trick is known as "currying".
Infix functions
You can write functions of at least two arguments in infix form using backticks:
x `f` y
-- ... desugars to:
f x y
Operators are just infix functions of two arguments that don't need backticks. You can write them in prefix form by surrounding them with parentheses:
x + y
-- ... desugars to:
(+) x y
The compiler distinguishes operators from functions by reserving a special set of punctuation characters exclusively for operators.
Operator parameters
The parentheses trick for operators works in other contexts, too. You can bind parameters using operator-like names if you surround them with parentheses:
let f (%) x y = x % y
in f (*) 1 2
-- ... desugars to:
(\(%) x y -> x % y) (*) 1 2
-- ... reduces to:
1 * 2
Operator sections
You can partially apply operators to just one argument using a section:
(1 +)
-- desugars to:
\x -> 1 + x
This works the other way, too:
(+ 1)
-- desugars to:
\x -> x + 1
This also works with infix functions surrounded by backticks:
(`f` 1)
-- desugars to:
\x -> x `f` 1
-- desugars to:
\x -> f x 1
Pattern matching
Pattern matching on constructors desugars to case statements:
f (Left l) = eL
f (Right r) = eR
-- ... desugars to:
f x = case x of
    Left  l -> eL
    Right r -> eR
Pattern matching on numeric or string literals desugars to equality tests:
f 0 = e0
f _ = e1
-- ... desugars to:
f x = if x == 0 then e0 else e1
-- ... desugars to:
f x = case x == 0 of
    True  -> e0
    False -> e1
Non-recursive let / where
Non-recursive lets are equivalent to lambdas:
let x = y in z
-- ... is equivalent to:
(\x -> z) y
Same thing for where, which is identical in purpose to let:
z where x = y
-- ... is equivalent to:
(\x -> z) y
Actually, that's not quite true, because of let generalization, but it's close to the truth.
Recursive let / where cannot be desugared like this and should be treated as a primitive.
Top-level functions
Multiple top-level functions can be thought of as one big recursive let binding:
f0 x0 = e0
f1 x1 = e1
main = e2
-- ... is equivalent to:
main = let f0 x0 = e0
           f1 x1 = e1
       in  e2
In practice, Haskell does not desugar them like this, but it's a useful mental model.
Importing modules just adds more top-level functions. Importing modules has no side effects (unlike some languages), unless you use Template Haskell.
Type classes desugar to records of functions under the hood where the compiler implicitly threads the records throughout the code for you.
class Monoid m where
    mappend :: m -> m -> m
    mempty  :: m

instance Monoid Int where
    mappend = (+)
    mempty  = 0
f :: Monoid m => m -> m
f x = mappend x x
-- ... desugars to:
data Monoid m = Monoid
    { mappend :: m -> m -> m
    , mempty  :: m
    }

intMonoid :: Monoid Int
intMonoid = Monoid
    { mappend = (+)
    , mempty  = 0
    }
f :: Monoid m -> m -> m
f (Monoid p z) x = p x x
... and specializing a function to a particular type class just supplies the function with the appropriate record:
g :: Int -> Int
g = f
-- ... desugars to:
g = f intMonoid
Two-line do notation
A two-line do block desugars to the infix (>>=) operator:
do x <- m
   e
-- ... desugars to:
m >>= (\x ->
    e )
One-line do notation
For a one-line do block, you can just remove the do:
main = do putStrLn "Hello, world!"
-- ... desugars to:
main = putStrLn "Hello, world!"
Multi-line do notation
do notation of more than two lines is equivalent to multiple nested dos:
do x <- mx
   y <- my
   z
-- ... is equivalent to:
do x <- mx
   do y <- my
      z
-- ... desugars to:
mx >>= (\x ->
    my >>= (\y ->
        z ))
let in do notation
Non-recursive let in a do block desugars to a lambda:
do let x = y
   z
-- ... desugars to:
(\x -> z) y
The ghci interactive REPL is analogous to one big do block (with lots and lots of caveats):
$ ghci
>>> str <- getLine
>>> let str' = str ++ "!"
>>> putStrLn str'
-- ... is equivalent to the following Haskell program:
main = do
    str <- getLine
    let str' = str ++ "!"
    putStrLn str'
-- ... desugars to:
main = do
    str <- getLine
    do let str' = str ++ "!"
       putStrLn str'
-- ... desugars to:
main =
    getLine >>= (\str ->
        do let str' = str ++ "!"
           putStrLn str' )
-- ... desugars to:
main =
    getLine >>= (\str ->
        (\str' -> putStrLn str') (str ++ "!") )
-- ... reduces to:
main =
    getLine >>= (\str ->
        putStrLn (str ++ "!") )
List comprehensions
List comprehensions are equivalent to do notation:
[ (x, y) | x <- mx, y <- my ]
-- ... is equivalent to:
do x <- mx
   y <- my
   return (x, y)
-- ... desugars to:
mx >>= (\x -> my >>= \y -> return (x, y))
-- ... specialization to lists:
concatMap (\x -> concatMap (\y -> [(x, y)]) my) mx
The real desugared code is actually more efficient, but still equivalent.
The MonadComprehensions language extension generalizes list comprehension syntax to work with any Monad. For example, you can write an IO comprehension:
>>> :set -XMonadComprehensions
>>> [ (str1, str2) | str1 <- getLine, str2 <- getLine ]
Line1
Line2
("Line1", "Line2")
Numeric literals
Integer literals are polymorphic by default and desugar to a call to fromIntegral on a concrete Integer:
1 :: Num a => a
-- desugars to:
fromInteger (1 :: Integer)
Floating point literals behave the same way, except they desugar to fromRational:
1.2 :: Fractional a => a
-- desugars to:
fromRational (1.2 :: Rational)
You can think of IO and all foreign function calls as analogous to building up a syntax tree describing all planned side effects:
main = do
    str <- getLine
    putStrLn str
    return 1
-- ... is analogous to:
data IO r
    = PutStrLn String (IO r)
    | GetLine  (String -> IO r)
    | Return   r
instance Monad IO where
    (PutStrLn str io) >>= f = PutStrLn str (io >>= f)
    (GetLine k      ) >>= f = GetLine (\str -> k str >>= f)
    Return r          >>= f = f r
main = do
    str <- getLine
    putStrLn str
    return 1
-- ... desugars and reduces to:
main =
    GetLine (\str ->
    PutStrLn str (
    Return 1 ))
This mental model is actually very different from how IO is implemented under the hood, but it works well for building an initial intuition for IO.
For example, one intuition you can immediately draw from this mental model is that order of evaluation in Haskell has no effect on order of IO effects, since evaluating the syntax tree does not
actually interpret or run the tree. The only way you can actually animate the syntax tree is to define it to be equal to main.
I haven't covered everything, but hopefully that gives some idea of how Haskell translates higher-level abstractions into lower-level abstractions. By keeping the core language small, Haskell can
ensure that language features play nicely with each other.
Note that Haskell also has a rich ecosystem of experimental extensions, too. Many of these are experimental because each new extension must be vetted to understand how it interacts with existing
language features.
7 comments:
1. “Non-recursive lets are equivalent to lambdas:” This is true superficially, but intermediate languages (like Core) still have non-recursive lets, as they have to be treated quite differently by
an optimizing compiler: Not much is known about a lambda-abstracted variable, but a let-bound value is known completely.
1. You're right. I added a caveat mentioning that this is not completely true because of let generalization.
2. Interesting! This is pretty close to 'free theorem equivalence' - I noticed your Morte could likely encode a free groupoid. These should cover a lot of it, and be sure to read the comments in
each of the first two:
3. if b then e1 else e2
-- ... is equivalent to:
case b of
    True  -> e1
    False -> e2
4. Thanks to deniok@lj:
Prelude> (\True y -> ()) False `seq` 5
5
Prelude> (\True -> \y -> ()) False `seq` 5
*** Exception: <interactive>:3:2-18: Non-exhaustive patterns in lambda
1. s/deniok/deni-ok/
5. Desugaring the do notation is not as simple as shown here because of `fail`. See https://wiki.haskell.org/MonadFail_Proposal for details. Here is the relevant excerpt:
Currently, the <- symbol is unconditionally desugared as follows:
do pat <- computation
   more
-- ... desugars to:
let f pat = more
    f _   = fail "..."
in  computation >>= f
From Math to Python: Black-Scholes-Merton model, part 2 - Diggers
The goal of this article is to present the well-known expressions of Black-Scholes-Merton model. Indeed, unlike other models, Black and Scholes originally managed to obtain closed-form expressions
for options, Greeks and other derivatives with simple assumptions. These terms will be re-explained later.
The expressions presented in this article can be derived from the PDE that we proved in the previous article, which you can find here: https://diggers-consulting.com/finance-de-marche/
As a reminder, here is the Black-Scholes-Merton model PDE:
Black-Scholes-Merton: Partial Differential Equation
Closed-form expressions
First and foremost, the Black-Scholes-Merton model provides expressions for call and put options. You can find a refresher about those two financial contracts in the first article of the series: https://
Here are the expressions under BSM model:
Call & Put expressions under Black-Scholes-Merton model
The variable x is the stock price, K the strike price, r the risk-free rate, a the dividend yield, and N the cumulative distribution function of the standard normal distribution.
Also, we have:
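The auxiliary terms referenced here appear as an image in the original article; for reference, the standard definitions (with τ = T − t the time to maturity) are:

```latex
d_1 = \frac{\ln(x/K) + \left(r - a + \frac{\sigma^2}{2}\right)\tau}{\sigma\sqrt{\tau}},
\qquad
d_2 = d_1 - \sigma\sqrt{\tau}
```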
The last section aims to prove that the call expression presented above is indeed a solution of the partial differential equation. It is quite a technical proof that requires some mathematical
background. Therefore, it is not the most essential part of this article, and you can see it as a bonus!
Practical example
A concrete example will give the reader a clear idea of the notion of pricing and the use of those expressions.
For instance, let’s consider Apple stock (AAPL). On the 8th of August 2022, imagine an investor wants to buy a call option on 100 shares. The current price of the stock is S[AAPL] = 166$. The investor
wants a short-term maturity (September 2nd 2022) and a strike price K = 170$.
First, we can check the quote (meaning the real market price) on Yahoo Finance. If you go to the AAPL ticker, you can find all the details you need about the stock. We are interested in the
price of a call option:
Yahoo Finance: stock description
Then, we just need to look for our financial instrument:
Call option quote of our example
At last, we can compare this price with the one obtained with the Black-Scholes-Merton model. The parameters considered are a risk-free rate of 2%, an implied volatility of 22% and τ = 20/250.
The price obtained is 2.46, which is actually pretty close to the quoted price.
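The example above can be reproduced with a few lines of Python. This is a minimal sketch assuming zero dividend yield, not the article's library code; function names are illustrative.

```python
from math import erf, exp, log, sqrt


def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def bsm_call(x: float, K: float, r: float, sigma: float, tau: float,
             a: float = 0.0) -> float:
    """Black-Scholes-Merton European call price (a = continuous dividend yield)."""
    d1 = (log(x / K) + (r - a + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return x * exp(-a * tau) * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)


# Parameters from the AAPL example: S = 166, K = 170, r = 2%, sigma = 22%, tau = 20/250
price = bsm_call(166.0, 170.0, 0.02, 0.22, 20 / 250)
print(round(price, 2))
```

The value lands close to the article's 2.46; the small residual gap comes from rounding of the inputs and from whether a dividend yield is included.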
The Greeks
Obtaining the price of an option is a good first step, but it is then important to be able to hedge any position on this contract. There are different ways to hedge, but here the focus is on a set
of metrics called the Greeks.
According to Investopedia, the variables used to assess risk in the options market are commonly referred to as the Greeks. The most common are Delta, Gamma and Vega,
which are first partial derivatives of the option pricing model. Let’s recall their definitions:
• Delta represents the change in the value of an option in relation to the movement in the market price of the underlying asset.
• Gamma is the rate of change between an option’s delta and the underlying asset’s price.
• Vega represents the rate of change between an option’s value and the underlying asset’s implied volatility.
Let’s go back to the previous example to genuinely understand the relevance of the Greeks for hedging purposes. They can be easily computed. In this example we have Δ = 0.36, so the call position on
100 shares carries a delta of +36: to be Delta-neutral, the investor needs to short 36 shares of the underlying.
The Greeks under Black-Scholes-Merton model (V is the value of an option)
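The closed-form Greeks can be sketched directly from the d₁ term. Again a hedged illustration assuming no dividends, not the article's BlackScholesAnalytics implementation:

```python
from math import erf, exp, log, pi, sqrt


def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def norm_pdf(x: float) -> float:
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)


def call_greeks(x, K, r, sigma, tau):
    """Delta, Gamma and Vega of a European call (no dividends assumed)."""
    d1 = (log(x / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    delta = norm_cdf(d1)                              # dV/dx
    gamma = norm_pdf(d1) / (x * sigma * sqrt(tau))    # d^2V/dx^2
    vega = x * norm_pdf(d1) * sqrt(tau)               # dV/dsigma, per unit of vol
    return delta, gamma, vega


# Same AAPL example as before
delta, gamma, vega = call_greeks(166.0, 170.0, 0.02, 0.22, 20 / 250)
```

With the example's inputs, delta comes out near the 0.36 quoted in the text.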
Link with the library
The Black-Scholes-Merton model is always a good way to start a pricing library, as it gives a good first indication of where an option's price stands.
The combination of Monte-Carlo methods and analytical expressions such as those provided by the BSM model is a real strength of our library. The main reason is that it makes it possible to verify
that the functions are well implemented. It also raises an interesting trade-off between time and accuracy: analytical expressions are fast and exact where they apply, while Monte-Carlo methods are
slower but more general.
Example of functions of the BlackScholesAnalytics class
Limitations of the model
The Black-Scholes-Merton model is based on several assumptions. Some of them are reasonable: assuming the market is perfectly liquid, or modeling the stock with a log-normal distribution
(geometric Brownian motion), are far from being bad approximations for a financial model.
However, this model can only price European-style options, which is a serious drawback since most of the options traded on the market are American-style. In the Black-Scholes world, there are also
no taxes or transaction fees, which is not realistic.
But more importantly, the key limitations of this model are the assumptions made on interest rates and volatility. Both are assumed to be constant, which is far from capturing reality.
When approaching the question of volatility, there is a phenomenon that needs to be considered: the volatility smile.
The smile shows that the options that are furthest in the money (ITM) or out of the money (OTM) have the highest implied volatility, while options with strike prices at the money (ATM) or near the
money have the lowest.
Volatility smile phenomenon
There is a real debate among traders about which volatility should be used in the Black-Scholes-Merton model. Indeed, the volatility smile is not predicted by the model, which instead predicts a
flat implied volatility curve when plotted against varying strike prices.
Based on the model, it would be expected that the implied volatility would be the same for all options expiring on the same date with the same underlying asset, regardless of the strike price. Yet,
in the real world, it is not the case.
Therefore, some would say that historical volatility should be used, as it is unique. Nonetheless, it gives poor results for options with a long expiration.
On the other hand, implied volatility can be computed easily with the Black-Scholes-Merton model, even though it may not be relevant. Here is a function of our library (BlackScholesAnalytics class)
that computes it:
Implied volatility under Black-Scholes-Merton model
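The library function itself is only shown as an image, but the idea can be sketched: since the BSM call price is monotonically increasing in sigma, implied volatility can be recovered by bisection. A hypothetical implementation, not the library's:

```python
from math import erf, exp, log, sqrt


def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def bsm_call(x, K, r, sigma, tau):
    """BSM European call price, no dividends."""
    d1 = (log(x / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    return x * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d1 - sigma * sqrt(tau))


def implied_vol(price, x, K, r, tau, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection search: the call price is strictly increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(x, K, r, mid, tau) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)


# Round trip: price an option at sigma = 22%, then recover that volatility
target = bsm_call(166.0, 170.0, 0.02, 0.22, 20 / 250)
iv = implied_vol(target, 166.0, 170.0, 0.02, 20 / 250)
```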
Interest Rates
Obviously, the Black-Scholes-Merton model gives a poor representation of interest rates, as it assumes them constant. The concept of "risk-free" does not exist in real life, even though the 10Y
government yield is probably the closest proxy for it.
There are different ways to model interest rates. In the library, the main focus has been put on the Vasicek model and its improved version, the Hull-White model.
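As a taste of what such a model looks like, the Vasicek short-rate dynamics dr = κ(θ − r)dt + σ dW can be simulated with a simple Euler scheme. A sketch with illustrative parameter values, not taken from the library:

```python
import random
from math import sqrt


def vasicek_path(r0, kappa, theta, sigma, T, n_steps, rng):
    """Euler discretization of dr = kappa*(theta - r)*dt + sigma*dW.

    The rate is pulled back toward the long-run mean theta at speed kappa.
    """
    dt = T / n_steps
    r = r0
    path = [r0]
    for _ in range(n_steps):
        r += kappa * (theta - r) * dt + sigma * sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path


# Illustrative parameters: r0 = 2%, mean 3%, reversion speed 0.8, vol 1%
rng = random.Random(42)
path = vasicek_path(0.02, 0.8, 0.03, 0.01, 5.0, 500, rng)
```

Unlike geometric Brownian motion, this process mean-reverts, which is why it is a more natural starting point for rates than for stocks.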
What’s next ?
This article presented the well-known closed-form expressions of the Black-Scholes-Merton model and showed that the model gives a good first approximation. Nevertheless, it also highlighted
volatility as a key parameter in options pricing.
Thus, volatility and the issues this parameter raises need to be handled. The next article will be about stochastic volatility models, where volatility is assumed to be random and therefore
displays its own dynamics. To do so, we will study the Heston model!
How to get the call option price under the Black-Scholes-Merton model? Change of probability measure!
Previously in this article, the expressions for call and put options were presented. With a few computations, it is easy to see that they are indeed solutions of the Black-Scholes-Merton PDE.
But how can we obtain these expressions? Where do they come from?
This is the goal of this section which is quite technical and maybe more adapted for people working or interested in Quantitative Finance.
Martingales theory
Martingale theory is key to understanding the overall proof of the call expression under the Black-Scholes-Merton model. A whole course could be devoted to this notion, so in summary: a
martingale is a sequence of random variables for which, at time t, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values.
More details can be found on the following link: https://people.maths.bris.ac.uk/~mb13434/mart_thy_notes.pdf
Risk-Neutral Pricing formula
The goal is not to prove the continuous-time risk-neutral pricing formula but rather to use it. Still, for those interested, let’s recall the important steps that lead to this essential formula.
First of all, a discount process D(t) must be introduced such that dD(t) = -R(t)D(t)dt.
Then, the differential of the discounted stock price process can be written as:
Thanks to Girsanov’s Theorem, the probability measure P-tilde can be introduced such that we may rewrite:
The measure defined in Girsanov’s theorem is the risk-neutral measure because it is equivalent to the original measure P and it turns the discounted stock price D(t)S(t) into a martingale.
Therefore, we have:
In continuous-time, the change from the actual measure P to the risk-neutral measure P-tilde changes the mean rate of return of the stock but not the volatility.
Afterwards, the value of the portfolio process under the Risk-Neutral Measure must be investigated.
Using Itô’s product rule, we get:
Hence, the discounted value of the portfolio is a martingale under the risk-neutral probability measure.
This means that:
where V is the payoff of the derivative security.
This is the continuous-time risk-neutral pricing formula:
Risk-Neutral Pricing Formula
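The formula can be checked numerically: simulating the terminal stock price under the risk-neutral measure and discounting the average payoff should reproduce the closed-form call price. A sketch assuming constant r and σ, as in the BSM setting:

```python
import random
from math import erf, exp, log, sqrt


def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def bsm_call(x, K, r, sigma, tau):
    """Closed-form BSM call price, no dividends."""
    d1 = (log(x / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    return x * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d1 - sigma * sqrt(tau))


def mc_call(x, K, r, sigma, tau, n_paths, seed=0):
    """Risk-neutral Monte-Carlo price: discounted mean of (S_T - K)+ where
    S_T = x * exp((r - sigma^2/2) * tau + sigma * sqrt(tau) * Z), Z ~ N(0, 1)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * tau
    vol = sigma * sqrt(tau)
    payoff_sum = sum(
        max(x * exp(drift + vol * rng.gauss(0.0, 1.0)) - K, 0.0)
        for _ in range(n_paths)
    )
    return exp(-r * tau) * payoff_sum / n_paths


analytic = bsm_call(166.0, 170.0, 0.02, 0.22, 20 / 250)
estimate = mc_call(166.0, 170.0, 0.02, 0.22, 20 / 250, 200_000)
```

The two prices agree to within Monte-Carlo noise, which is exactly the consistency check the library relies on.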
Call expression under BSM model
Consider a stock, modeled as a generalized geometric Brownian motion, that pays dividends over time at a rate A(t) per unit time, where A(t) is a non-negative process. Dividends paid by a stock
reduce its value, so the model for the stock price can be written as follows:
With constant coefficients σ, r, and a, this leads to:
According to the risk-neutral pricing formula, the price at time t of a European call expiring at time T with strike K is:
We can compute c(t,x) using the Independence Lemma. Then, the only thing remaining is computations !
At last, we make the change of variable z = y+σ√τ in the integral, which leads us to the following formula:
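The resulting formula appears as an image in the original; the standard closed-form result it refers to (with dividend yield a, and d₁, d₂ as defined earlier) is:

```latex
c(t, x) = x\, e^{-a\tau} N(d_1) - K e^{-r\tau} N(d_2)
```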
The expression for a put option can be obtained by applying the same method or by using Put-call parity.
Credits: Photo by Tyler Prahm on Unsplash