content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Bucket Sort Algorithm
Sorting algorithms are a big part of coding interviews for software engineers. If you’re preparing for an interview at a tech company, brushing up on all sorting algorithms is a must.
In this article, we’ll help you revisit the bucket sort algorithm:
• What Is Bucket Sort?
• How Does Bucket Sort Work?
• Bucket Sort Algorithm
• Bucket Sort Pseudocode
• Bucket Sort Code
• Bucket Sort Complexities
• FAQs on Bucket Sort
What Is Bucket Sort?
Bucket sort is a sorting algorithm that divides the elements into several groups called buckets. Once these elements are scattered into buckets, then sorting is done within each bucket. Finally,
these sorted elements from each bucket are gathered, in the right order, to get the sorted output.
In short, bucket sort follows what we can call a scatter-sort-gather method.
How Does Bucket Sort Work?
The first order of business is creating the buckets. The number of buckets depends on the kind of data. For example:
If the elements are real numbers between 0 and 1 (0 and 1 not included):
• We can conveniently organize them in 10 buckets.
• 1st bucket stores all numbers in the range [0, 0.1), 2nd bucket stores the range [0.1,0.2), and so on.
• Which buckets the edge cases (0.1, 0.2, and so on) go to can be easily calculated by a clear numerical relationship that we will form between bucket index and elements.
For example, say that bucket index is the integer part of number*10. Then for 0.1, its bucket index will be 1.
If the elements are between 0 and 1000:
• There can still be 10 buckets.
• Buckets can hold values from the ranges [0 to 100), [100 to 200), and so on, with the last bucket being [900 to 1000), maintaining an interval of 100 numbers.
• If we want to include 0 and 1000, we can extend the range of the last bucket by 1 to include 1000, and 0 is already included in the initial range distribution.
Numbers in the decimal system don’t need to be scattered in exactly 10 buckets. In the second example, the number of buckets could be 5 — the range size would become 200 instead of 100, and we would
keep all numbers between 0 and 199 in the first bucket, 200 to 399 in the second bucket, and so on.
After creating the buckets, we decide on a specific range of elements that could belong in each bucket. The range of any bucket should be a continuous interval, and these intervals should span the
input domain. We should also keep in mind that the ranges must not overlap.
Then, based on the range each element belongs to, we put it in its corresponding bucket.
After the elements are in their respective buckets, we sort each bucket’s contents using any fitting sorting algorithm. In some cases, we can also use the same bucket sort algorithm called
Finally, we take out the sorted elements from the buckets, in order, and join them to get all the elements in their sorted order.
Bucket Sort Algorithm
1. Create B* buckets, each initialized with zero values.
2. Assign the range of elements that each bucket can hold.
3. Scatter: Find the range each element belongs to and put each element into the bucket meant for its range
4. Sort: Iterate through all buckets and sort elements within each bucket.
5. Gather the sorted elements from each bucket.
While gathering from the sorted buckets in the right order and joining them, we know that we have sorted the elements fully since bucket-level sorting (i.e., inter-bucket sorting) was already done
when we put elements in the bucket!
*Note: The optimal value of the number of (B) and the range of buckets for a given dataset can be decided keeping the following goals in mind:
1. It should be easy to create a relationship between bucket index and numbers such that a range of numbers falls into each bucket.
2. If the nature of the distribution of input elements is known, the number of buckets and range should be such that there’s a roughly uniform distribution of elements in each range.
3. The range of buckets should span the complete range of the input and divide inputs into buckets based on non-overlapping ranges of discrete intervals.
4. The range of buckets should not overlap.
5. There’s a tradeoff between space and time complexity. Larger number of buckets means fewer elements per bucket and, therefore, faster sorting. However, to avoid unnecessary space usage, the fewer
the buckets, the better the space complexity.
Bucket Sort Pseudocode
Create B empty buckets
For each element (i=0 to array size-1)
Put each element into the bucket meant for its range.
For each bucket (i=0 to B-1)
Sort elements within each bucket
Concatenate all the sorted elements from each bucket
Output the sorted elements
end bucketSort()
Here, the iteration is done from 0 to B-1 for the B buckets. We decide the range of elements that can go into each bucket by establishing a relationship between the bucket’s index and elements (so
that a range of elements falls into each bucket).
Bucket Sort Code
Here is an example of an array of numbers between 0 and 1 being sorted using bucket sort.
We declare the function bucketSort(float numbers[], int size) and give it two function parameters: the array of numbers to be sorted and the size of the array. We then decide the number of buckets to
be created — for numbers in the decimal system, we usually take 10, as there are 10 digits from 0 to 9.
• First, we create the 10 buckets.
• Then, we decide the range of elements that can go into each bucket.
• Once the range of elements that can go into each bucket is decided, we take the array elements one by one and put them in their respective buckets.
• Next, we go through each bucket and sort the contents of it.
• Finally, the sorted elements from each bucket are collected, in order, and printed out.
#include <iostream>
#include <bits/stdc++.h>
using namespace std;
// Here numbers[] is the array we need to sort and size is the size of
// the array numbers[]
void bucketSort(float numbers[], int size)
// Creating 10 buckets, since it's the decimal number system (0-9
// digits).
int base = 10;
vector<float> bucket[base];
// Deciding which numbers go into which buckets and putting them there.
for (int i=0;i<size;i++)
// Since numbers are between 0 and 1, we multiply them by 10,
// the base, and take the integer part of the result as the index.
// Range of each bucketIndex is decided. When we multiply decimals
// between 0 and 1 with the base 10, we move the Most Significant
// Digit (MSD) to the left of the decimal. Since the index is of
// type int, it takes only this MSD, which corresponds to the bucket
// index. This is why buckets have numbers sorted according to the
// MSD.
// Since numbers are distributed evenly enough over the given range,
// the following relationship between buckets and input numbers is
// ideal for this case of decimals between 0 and 1 to determine
// the range of each bucket:
int bucketIndex=base*numbers[i];
// We now know which numbers[i] should go in which bucket index.
// Eg- 0.45 will become 0.45*10=4.5, but the bucket index is int,
// so it will take 4. So 4.5 now belongs to the bucket index 4,
// which is the 5th bucket since we start the index from 0.
// Iterating through individual buckets.
for (int b=0;b<base;b++)
// Sorting the contents of individual buckets. sort() sorts the
// elements in ascending order.
// Preparing to store sorted output in order.
int sortedIndex = 0;
for(int i=0;i<base;i++)
// Iterating through sorted elements of each bucket and storing
// them in order.
for(int b=0;b<bucket[i].size();b++)
// Sorted numbers are gathered from buckets.
numbers[sortedIndex]= bucket[i][b];
int main()
float arr[]= {0.45, 0.5, 0.76, 0.75, 0.24, 0.2, 0.1, 0.68};
bucketSort(arr, 8);
for (int i = 0; i<8; i++) {
cout << arr[i] << " ";
return 0;
Output Explained
Input: 0.45, 0.5, 0.76, 0.75, 0.24, 0.2, 0.1, 0.68
1. Scatter: The elements are entered in their respective buckets, in the order in which they occur in the original unsorted array input. In this case, the entering starts with bucket 4 for 0.45,
which occurs first in the array input.
2. Sort: At this stage, each bucket is sorted individually.
3. Gather: Finally, the elements are collected from the sorted buckets, in the right order, to get the final sorted output.
Output: 0.1 0.2 0.24 0.45 0.5 0.68 0.75 0.76
Other Input Types
If the numbers were integers between 0 and 100:
• The bucketing system can be such that the integer part of number/10 decides which bucket it belongs to
• The expression in that case would be: int BucketIndex= Numbers[i]/10;)
• This makes the buckets [0,10), [10,20), and so on, till [90,100)
If the elements were strings instead of numbers:
• We could make bucket index 0 to 25 for alphabets from a to z
• First characters that match with that bucket index could go in that bucket
Similarly, you can form an optimal relationship between bucket index and numbers for various cases.
Interesting observations:
- Counting sort is equivalent to bucket sort with bucket size 1
- Bucket sort with two buckets is equivalent to quicksort with pivot element always set to the middle value
- Top-down implementation of radix sort is bucket sort with both the range of values and the number of buckets being a power of 2 or the radix of the numbers that we are going to sort
Bucket Sort Complexity
Time Complexity
The time complexity of bucket sort depends on the internal sorting algorithm used. Here’s why:
• When there are too many numbers in the same range, it means that the elements are not uniformly distributed.
• In this case, the numbers will likely be placed in the same bucket, or some buckets will have way more elements than others. For example, among buckets B0 to B9, B7 has 90% of the elements.
• This means that our internal sorting algorithm will be doing most of the sorting work, making the performance of bucket sort dependent on the internal sorting algorithm used to sort the elements
within each bucket.
If prior information on the distribution of elements is absent, the standard library sort may be used.
Worst Case:
In general, bucket sort uses insertion sort for sorting the elements inside each bucket. Insertion sort performs really well when the count of elements that need sorting is small. The buckets are
usually set up in a way that each bucket contains a small number of elements, so we can take advantage of this quality.
However, in the worst-case scenario, all elements will end up in a single bucket despite our best efforts. And when that single bucket is sorted using insertion sort, it takes O(n2) time to perform
it on all n elements. Therefore, the worst-case runtime of Bucket Sort is O(n2).
Best Case:
The best-case complexity occurs when the elements are uniformly distributed — the number of elements in each bucket is almost equal.
The complexity can be even better if the numbers within each bucket are already sorted, and our sorting algorithm knows how to take advantage of that.
We can use any algorithm that best fits the data. For example, if insertion sort is used in this case, the best case time complexity will be O(n+b) since the best case for insertion sort occurs when
the data is already sorted.
Average Case:
Let’s calculate the expected number of numbers in each bucket. It is intuitive that if the ranges of the buckets are of equal size, then in the average case, the expected number of numbers in each
bucket would be the same for every bucket. Let that expected number be E.
The sum of the expected number of numbers for all of the buckets should be equal to n. Mathematically, E * n = b must hold. So, E = n / b.
Now, if we apply insertion sort to the numbers, then:
• Sorting a single bucket would take O(n2/b2) time
• Sorting b buckets would take O(n2/b2 * b) = O(n2/b) time
Choosing a value of “b” in the order of n would result in an expected time complexity of
O(n2/n) = O(n)
Space Complexity
The space required for bucket sort depends on the size of the input and the number of buckets created. Therefore, the space complexity of bucket sort is O(nb), where b is the number of buckets.
Bucket Sort FAQs
Question 1: When is bucket sort the most useful?
Bucket sort is mainly useful when the input is uniformly distributed over a range — so no one bucket has most of the elements and most buckets are not empty. It is often used to sort uniformly
distributed floating point values. One reason for this is that the range of each bucket can easily be determined.
Another reason is that it's easy to determine a relationship between the bucket index and the number, such that increasing bucket indexes corresponds to increasing range. It is also easy to divide
floating point numbers into discrete ranges.
Question 2: When should bucket sort be avoided?
When all or most values fall in just a few buckets, it might be wiser to directly go for a sorting algorithm that works best for that type of data.
Question 3: Is bucket sort a stable sorting algorithm?
Bucket sort is stable if the internal sorting algorithm used to sort it is also stable. Merge sort and insertion sort are examples of stable sorting algorithms that can be used internally. Quicksort
is an example of unstable sorting algorithms. (A sorting algorithm is said to be stable if for any two elements with the same key, the relative order of the two elements is preserved in the sorted
output as it is in the original input.)
Question 4: Is bucket sort an in-place sorting algorithm?
Since the inputs are sorted by placing them into several buckets, the sorting is not happening in-place. Bucket sort is not an in-place sorting algorithm.
Question 5: Can the ranges of individual buckets be of different intervals/sizes?
Yes. The intervals for each bucket do not always need to be of the same size. However, if a uniform enough distribution of elements is assumed and the range of input is known, equal intervals will
help ensure elements are distributed more uniformly within buckets.
Are You Ready to Nail Your Next Coding Interview?
Sorting algorithms interview questions feature in almost every coding interview for software developers. If you’re looking for guidance and help to nail these questions and more, sign up for our free
As pioneers in the field of technical interview prep, we have trained thousands of software engineers to crack the toughest coding interviews and land jobs at their dream companies, such as Google,
Facebook, Apple, Netflix, Amazon, and more!
Article contributed by Tanya Shrivastava | {"url":"https://www.interviewkickstart.com/blogs/learn/bucket-sort-algorithm","timestamp":"2024-11-04T00:41:09Z","content_type":"text/html","content_length":"155236","record_id":"<urn:uuid:eb3838f2-fdf5-433c-9495-feacb6ec3b6c>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00740.warc.gz"} |
Burg Method
Power spectral density estimate using Burg method
DSP System Toolbox / Estimation / Power Spectrum Estimation
The Burg Method block estimates the power spectral density (PSD) of the input frame using the Burg method. This method fits an autoregressive (AR) model to the signal by minimizing (least squares)
the forward and backward prediction errors. The block minimizes the errors by constraining the AR parameters to satisfy the Levinson-Durbin recursion.
The block computes the spectrum from the FFT of the estimated AR model parameters.
Input — Input
column vector | unoriented vector
Specify the input as a column vector or an unoriented vector. This input represents a frame of consecutive time samples from a single-channel signal.
Data Types: single | double
Output — Power spectral density estimate
column vector
Power spectral density estimate of the signal at N[fft] equally spaced frequency points, returned as a column vector. The frequency points are in the range [0,F[s]), where F[s] is the sampling rate
of the signal.
Data Types: single | double
Inherit estimation order from input dimensions — Inherit estimation order from input dimensions
off (default) | on
When you select the Inherit estimation order from input dimensions parameter, the order of the all-pole model (estimation order) is one less than the input frame size. Otherwise, the Estimation order
parameter determines the model order. The block computes the spectrum from the FFT of the estimated AR model parameters.
Estimation order — Order of AR model
6 (default) | nonnegative integer
Specify the estimation order of the AR model as a nonnegative integer.
To enable this parameter, clear the Inherit estimation order from input dimensions parameter.
Inherit FFT length from estimation order — Inherit FFT length from estimation order
off (default) | on
When you select this parameter, the FFT length N[fft] is one greater than the estimation order. To specify the number of points on which to perform the FFT, clear the Inherit FFT length from
estimation order parameter. You can then specify the FFT length as a power of 2 using the FFT length parameter. The block zero-pads or wraps the input to N[fft] before computing the FFT.
FFT length — FFT length
256 (default) | positive integer greater than or equal to 2
Enter the number of data points N[fft] on which to perform the FFT as a positive integer greater than or equal to 2. When N[fft] is larger than the input frame size, the block zero-pads each frame as
needed. When N[fft] is smaller than the input frame size, the block wraps each frame as needed.
To enable this parameter, clear the Inherit FFT length from input dimensions parameter.
Inherit sample time from input — Inherit sample time from input
on (default) | off
When you select the Inherit sample time from input parameter, the block computes the frequency data from the sample period of the input signal. For the block to produce a valid output, the following
conditions must hold:
• The input to the block is an original signal with no samples added or deleted (by insertion of zeros, for example).
• The sample period of the time-domain signal in the simulation equals the sample period of the original time series.
If these conditions do not hold, clear the Inherit sample time from input parameter. You can then specify a sample time using the Sample time of original time series parameter.
Sample time of original time series — Sample time of original time-domain signal
1 (default) | positive scalar
Specify the sample time of the original time-domain signal as a positive scalar.
To enable this parameter, clear the Inherit sample time from input parameter.
Block Characteristics
Data Types double | single
Multidimensional Signals No
Variable-Size Signals No
More About
Compare Power Spectral Density Estimation Methods
The Burg Method and Yule-Walker Method blocks return similar results for large frame sizes.
This table compares the features of the Burg Method block to the Covariance Method, Modified Covariance Method, and the Yule-Walker Method blocks.
Burg Covariance Modified Covariance Yule-Walker
Does not apply window to data Does not apply window to data Does not apply window to data Applies window to data
Characteristics Minimizes the forward and backward prediction errors in the Minimizes the forward prediction Minimizes the forward and backward Minimizes the forward prediction error in the
least squares sense, with the AR coefficients constrained to error in the least squares sense prediction errors in the least least squares sense (also called autocorrelation
satisfy the L-D recursion squares sense method)
Better resolution than Yule-Walker High resolution for short data Performs as well as other methods for large data
High resolution for short data records for short data records (more records records
accurate estimates)
Advantages Able to extract frequencies from
Able to extract frequencies from data consisting of p or more pure
Always produces a stable model data consisting of p or more pure sinusoids Always produces a stable model
Does not suffer spectral
Peak locations highly dependent on initial phase Can produce unstable models Can produce unstable models Performs relatively poorly for short data records
Can suffer spectral line-splitting for sinusoids in noise, or Peak locations slightly dependent
Disadvantages when order is very large Frequency bias for estimates of on initial phase Frequency bias for estimates of sinusoids in
sinusoids in noise noise
Frequency bias for estimates of sinusoids in noise Minor frequency bias for estimates
of sinusoids in noise
Conditions for Order must be less than or equal to Order must be less than or equal Because of the biased estimate, the
Nonsingularity 1/2 the input frame size to 2/3 the input frame size autocorrelation matrix is guaranteed to be
positive-definite, hence nonsingular
[1] Kay, S. M. Modern Spectral Estimation: Theory and Application. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[2] Orfanidis, S. J. Introduction to Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[3] Orfanidis, S. J. Optimum Signal Processing: An Introduction. 2nd ed. New York, NY: Macmillan, 1985.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Usage notes and limitations:
• When the FFT length is not a power of 2, the executable generated from this block relies on prebuilt dynamic library files (.dll files) included with MATLAB^®. Use the packNGo function to package
the code generated from this block and all the relevant files in a compressed zip file. Using this zip file, you can relocate, unpack, and rebuild your project in another development environment
where MATLAB is not installed. For more details, see How To Run a Generated Executable Outside MATLAB.
• When the FFT length is a power of 2, you can generate standalone C and C++ code from this block.
Version History
Introduced before R2006a | {"url":"https://kr.mathworks.com/help/dsp/ref/burgmethod.html","timestamp":"2024-11-05T17:25:38Z","content_type":"text/html","content_length":"91608","record_id":"<urn:uuid:53c1d0a5-164f-4006-90ab-5aeb0197da98>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00489.warc.gz"} |
Numerical Modelling of Beach Profile Evolution with and without an Artificial Reef
College of Civil Engineering, Tongji University, Shanghai 200092, China
State Key Laboratory of Marine Geology, Tongji University, Shanghai 200092, China
The Lyell Centre for Earth and Marine Science and Technology, Institute for Infrastructure and Environment, Heriot-Watt University, Edinburgh EH14 4AS, UK
Authors to whom correspondence should be addressed.
Submission received: 29 September 2023 / Revised: 23 October 2023 / Accepted: 27 October 2023 / Published: 2 November 2023
With the recent development from grey infrastructures to green infrastructures, artificial reefs become more popular in coastal protection projects. To investigate the responses of beach profile
evolution to the presence of an artificial reef, a non-hydrostatic model is established. Both hydrodynamic and morphodynamic evolution for the beach with and without an artificial reef are compared
under regular wave conditions. In addition, the protected beach profile evolution by an artificial reef is discussed under irregular wave conditions. Three key parameters in non-hydrostatic
simulation are considered for sensitivity analysis, including maximum wave steepness criterium (maxbrsteep), water depth factor (depthscale), and equilibrium sediment concentration factor (sedcal).
The numerical results under regular wave conditions indicate that the artificial reef enhances wave attenuation by inducing wave breaking. In addition, the artificial reef reduces local flow velocity
and offshore sediment transport by 51%, therefore decrease the total erosion by 53%. Over the artificial reef, wave skewness and asymmetry go through a drastic change. Under irregular wave
conditions, short waves contribute to the wave energy mainly and reflection-induced standing wave effects decline considerably. It demonstrates that the artificial reef can protect the beach from
regular and irregular waves by reducing erosion and offshore transport of suspended sediments. Moreover, in the wave breaking area, the increase of maximum wave steepness criterium may give arise to
the wave height. The morphological evolution is more sensitive to water depth factor than equilibrium sediment concentration factor, because the former is a controlling factor for beach profile
characteristics while the latter forms the sandbar varying irregularly in shape.
1. Introduction
Coastal erosion and recession threaten public and property safety and restrict coastal developments. To address these issues, governments spare no effort to protect the coastlines with various
coastal defence techniques and structures. Hard engineering solutions can provide direct defences for eroded coasts, but such structures alter the sediment transport and morphological evolution
irreversibly [
]. Moreover, these constructions may damage the natural habits and cause the ecological degradation [
]. In recent decades, soft engineering solutions have been used to harmonize hydrodynamic conditions and ecological environments in coastline protections and restorations. Therefore, coastal
protections are subject to a great transformation from grey infrastructures to green infrastructures [
]. Among the soft engineering solutions, beach nourishment has become a mature technology after a centurial development since 1920s [
]. Meanwhile, the extended applications of this approach, such as sandbars and sand engines, have gained popularity in practices [
]. Nowadays, artificial reefs play an important role in beach nourishment for its potential in ecosystem service [
]. When it comes to morphological evolution under the coastal protection by soft engineering solutions, nourished beaches and artificial reefs are treated as an integral system in general.
Nevertheless, the structural functions of artificial reefs ought to be considered, because nourished beaches and artificial reefs plays different roles in coastal protections. In particular, beach
nourishment is utilized to feed the beach, whereas artificial reefs attenuate waves to mitigate erosions.
Traditional artificial reefs can decelerate flow and provide shelters for local communities to inhabit. For box-type artificial reefs, cut-opening ratio dominates the upwelling and back eddy [
], and the aspect ratio of reef arrangements affects the internal turbulence [
]. Incident angle, spacing distance and upper surface rugosity of untraditional artificial reefs may alter the wake region as well [
]. Furthermore, artificial reefs can protect the coast from excessive erosions indirectly. To better understand the hydrodynamic and morphodynamic responses to artificial reefs, physical models and
laboratory-scale numerical models have been adopted. Based on a series of experiments on flow structure and sediment transport around artificial reefs under pure current conditions, Shu et al.
revealed that sediments can hardly suspend in the area occupied by artificial reefs, while sediments between the artificial reefs are more likely to move [
]. Through laboratory experiments and numerical modelling, Zhang found that the edge scours along box-type artificial reefs are more intensive than the bed scours inside, and depositions occur in the
wake region mainly [
], which is consistent with the experimental results by Tang et al. [
]. Previous studies focused more on local impacts of artificial reefs, with little attentions to wave actions on the sheltered beaches. However, wave-induced erosion is a predominant hazard for sandy
beaches [
Unlike laboratory-scale numerical models, field-scale numerical models are capable to assess the regional influences of artificial reefs. Wu’s numerical results on the sediment transport in an
artificial reef area in Zhoushan, China, demonstrated that artificial reefs have limited effects on sediment movement and bed scour under current actions [
]. Guilherme et al. investigated the morphological evolution around the multi-purpose artificial reefs along the Gold Coast of Australia using numerical modelling [
]. They found that littoral sediment transport causes great accretions in the wave shadow area, and the artificial reefs can trap the sediment and stabilize the sandbar. Similarly, the artificial
reefs along Rhode Island, USA was found to dissipate wave energy and mitigate current conditions as well [
]. According to the experimental results by Yang et al., the combination of a gravel dam and artificial reefs can maximize the protection for eroded beaches [
]. Kuang et al. and Ma et al. investigated the cross-shore hydrodynamic and morphodynamic evolution under regular waves and irregular waves [
]. They conducted a series of experiments to distinguish the protection of nourished beach, artificial sandbar and artificial reef, with special attentions to the impacts of artificial reefs on wave
attenuation and dissipation. Meanwhile, the artificial reefs may reform the scarp on a nourished beach. Artificial reefs act as an important component in coastal protection projects. Therefore,
investigations on nourished beach profile evolution with and without artificial reef are practically important. Based on the experimental results by Ma et al., it is necessary to establish a
numerical model to get more detailed hydrodynamic, sediment transport and morphodynamic results which is hard to measure in the physical experiment to improve the understanding of the hydrodynamic
and morphodynamic responses to artificial reefs.
In the present study, a reduced two-layer non-hydrostatic model is established by XBeach to investigate nourished beach profile evolution under regular wave conditions. To pinpoint the driving
mechanisms of flow structure, wave propagation, sediment transport and morphological evolution, numerical results of the beach with and without an artificial reef are compared. In addition, the
effects of irregular waves are taken into considerations. Furthermore, sensitive analysis is conducted to examine the influences of three key parameters on the problem. The main objective is to
reveal the impacts of artificial reefs on beach profile evolution and assess the performance of a reduced two-layer non-hydrostatic model in simulating wave-dominated beaches in the field.
2. Methodology
2.1. Numerical Model
To investigate beach profile evolution under the protection of an artificial reef, a non-hydrostatic model is established by XBeach [
]. To resolve the wave dispersive issue of non-hydrostatic models, the reduced two-layer mode is utilized to improve the non-hydrostatic simulations [
]. The details of reduced two-layer non-hydrostatic models are specified as follows. More information can be referred online (
(accessed on 26 September 2023)).
In the non-hydrostatic XBeach model, the depth-averaged normalized dynamic pressure is derived in a similar fashion as the one-layer version of the SWASH model [
], so that the one-dimensional non-linear shallow water equations are given by
$∂ η ∂ t + ∂ h u ∂ x = 0$
$∂ u ∂ t + u ∂ u ∂ x = − g ∂ η ∂ x − 1 h ∫ − d η ∂ q ∂ x d z + ν h ∂ 2 u ∂ x 2 − c f u u h$
is the water level;
is the still water depth,
is the total water depth,
$h = η + d$
represents the velocity in x-direction;
is the gravity constant;
$ν h$
is the horizontal viscosity;
is the dynamic pressure;
is the normalized dynamic pressure by the water density
$q = p / ρ$
$c f$
is the dimensionless friction coefficient. The depth averaged dynamic pressure is computed assuming a linear vertical distribution of dynamic pressure with zero value at the surface.
$∫ − d η ∂ q ∂ x d z = ∂ ∂ x ∫ − d η q d z − q z = η ∂ η ∂ x + q z = − d ∂ − d ∂ x = 1 2 ∂ h q z = − d ∂ x + q z = − d ∂ − d ∂ x = 1 2 h ∂ q z = − d ∂ x + 1 2 q z = − d ∂ η − d ∂ x = 1 2 h ∂ q b ∂ x
+ 1 2 q b ∂ η − d ∂ x$
$q b$
is the normalized dynamic pressure at the bed. Stelling and Zijlema [
] applied the Keller-box method [
] to obtain the pressure gradient in the vertical, and then
$q b$
can be described as,
$q b = − h 2 ( ∂ q ∂ z z = η + ∂ q ∂ z z = − d )$
In order to compute the normalized dynamic pressure at the bed, the contributions of advective and diffusive terms to the vertical momentum balance are assumed to be negligible.
$∂ w s ∂ t + ∂ q ∂ z z = η = 0 ∂ w b ∂ t + ∂ q ∂ z z = − d = 0$
$w s$
is the vertical velocity at the surface and
$w b$
is the vertical velocity at the bed. Then
$q b$
can be described as,
$q b = h 2 ( ∂ w s ∂ t + ∂ w b ∂ t )$
The vertical velocity at the bed is set by the kinematic boundary condition.
Thus, the local continuity equation equation is given by
$∂ u ∂ x + w s − w b h = 0$
The reduced two-layer model assumes that the non-hydrostatic pressure is constant in the lower layer. This means that the non-hydrostatic pressure at the bottom has the same value as the
non-hydrostatic pressure between the layers. To simplify the reduced layer, the layer velocities are converted into a depth-averaged velocity
and a velocity difference
$Δ u$
$u l o w e r u u p p e r = 1 1 − α 1 − α u Δ u$
$u Δ u = α 1 − α 1 − 1 u l o w e r u u p p e r$
$u l o w e r$
$u u p p e r$
represents the velocity in the lower and upper layer;
is the layer distribution,
$α = h l o w e r / h$
$h l o w e r$
is the water depth of the lower layer. Therefore, the momentum equations are given by [
$∂ h u ∂ t + g h ∂ η ∂ x + ∂ ∂ x h u 2 + 1 + α 2 ∂ h q b ∂ x − q b ∂ d ∂ x = 0$
$h ∂ Δ u ∂ t + ∂ h u Δ u ∂ x + 1 2 ∂ h q b ∂ x + q b ( 1 − α ) ∂ η ∂ x = − τ b − ν 2 Δ u α ( 1 − α ) h$
$∂ h w s ∂ t + ∂ h u w s ∂ x − q b ( 1 − α ) = 0$
$τ b$
represents the bed shear stress;
is the kinematic viscosity. Due to the simplified non-hydrostatic pressure in the lower layer, the vertical velocity between the layers is neglected. Thus, only the continuity equation for the upper
layer is required,
$( 1 + α ) ∂ h u ∂ x + α − α 2 ∂ h Δ u ∂ x + 2 w s − u s ∂ η ∂ x − u z α ∂ z α ∂ x = 0$
$z α$
is the height of interface,
$z α = α h − d$
$u s$
represents the vertical velocity at the surface;
$u z α$
represents the velocity in x-direction at the surface. Based on the global continuity equation (Equation (1)), the water elevation can be determined.
The numerical modelling in the present study is set up based on the experiments on beach profile evolutions by Ma et al. [
] in a wave flume. The flume is 50 m in length, 0.8 m in width and 1.2 m in depth.
Figure 1
shows the beach with and without an artificial reef after beach nourishment. The beach slope is 1:10. The water depth is 0.5 m. The artificial reef is 1.8 m long and 0.3 m high. The median grain size
of model sand is 0.17 mm. Based on the experimental set-up, two types of beach profiles are established by one-dimensional grid system. The grid is 22 m in length, including the buffer area of 10 m
before the origin x = 0, and the study area of 12 m. The resolution of the buffer area is 0.05 m, and the grid is refined in the study area to 0.02 m. There are 762 grid cells in total.
The model is driven by wave, flow and tide at the offshore boundary. The wave boundary condition is specified as constants under regular wave conditions, and defined by JONSWAP spectrum to generate
irregular wave conditions. For lateral wave boundaries, Neumann boundary conditions are activated, where the longshore water level gradient is 0. Absorbing-generating boundary conditions are used for
flow in weakly-reflective conditions. The lateral flow boundaries are set as no flux walls. The tide boundary is set as uniform water level of 0.5 m.
2.2. Parameter Setting
The simulation lasts 1320 s, including 1320 timesteps. As the cut-off values, the water depth factor (depthscale) is set as 50, the maximum wave steepness criterium (maxbrsteep) is 0.4 and the
equilibrium sediment concentration factor (sedcal) is 1. The bed friction coefficient is 0.01. The median grain size D[50] is 0.00017 m. The porosity and sediment density are set as 0.4 and 1430 kg/m
^3, respectively. The morphological acceleration factor is 1 as default. The critical avalanching slope under water and above water are 0.2 and 30, respectively. The beach slope is 1:10. The wave
height and wave period under regular waves conditions are 0.1 m and 1.57 s. For comparison, the significant wave height and peak wave period under irregular wave conditions are 0.1 m and 1.57 s as
2.3. Validation
The model predicted wave surface and morphology are validated against the experimental results by Ma et al. [
], for the beach without the artificial reef under regular wave conditions.
Figure 2
shows the validation of the predicted wave surface profile time series by the non-hydrostatic model for a duration of 10 s starting at 60 s after the test began. After wave propagation from the
offshore area wave W9 to the wave breaking area W1, the regular sine waves shape has changed to steepening crests and widening troughs. At W9, the foreslope of crest is steep and the backslope of
crest is mild, so the wave surface is jagged, which illustrates that both wave skewness and asymmetry intensify during the propagation.
The beach profile at in the end of non-hydrostatic simulation is validated in
Figure 3
. The morphology has become a sandbar-trough-scarp profile. The wave-induced ero-sion creates a scarp at the waterline and causes suspended sediments transport through backwash. Then a sandbar occurs
at the same position of wave breaking point due to the accretions. Meanwhile, a trough on the landward side of the sandbar is formed by plunging waves. The simulated results agree well with the
experimental results in terms of the characteristics of beach profile.
model is applied to evaluate the performance of the non-hydrostatic model [
]. The
value is calculated by
$d = 1 − ∑ i = 1 n O i − P i 2 ∑ i = 1 n P i − O ¯ + O i − O ¯ 2$
are the simulation and observation data, respectively;
$O ¯$
is the average of the observation data;
is the number of observation data. The
value ranges from 0 to 1, where 0.65–1.0, 0.5–0.65, 0.2–0.5, 0–0.2 stands for excellent; very good; good; and poor model-data agreement.
Table 1
shows the excellent performance of the non-hydrostatic XBeach model in simulations of hydrodynamic and morphodynamic evolution.
3. Results
The protecting effects of artificial reefs are investigated by hydrodynamic characteristics, sediment transport, and beach profile evolution under regular wave conditions. The numerical results for
the bare beach and the protected beach are compared for qualitative and quantitative analysis.
3.1. Hydrodynamic Characteristics
The wave surfaces profile along the two beach profiles at 60 s is shown in
Figure 4
. As shown in
Figure 4
a, regular waves are present in the offshore area (x = 0–5.2 m). Then wave crest steepens and wave trough widens in the shallow water (x > 5.2 m). Also, the foreslope of wave profile becomes steep
while the backslope becomes mild. After wave breaking, waves reach the swash zone, where uprush and backwash occur alternatively, so the waterline fluctuates up and down. For the beach with the
artificial reef (
Figure 4
b), the artificial reef induces wave breaking earlier (x = 2.7–4.5 m) and breaks the regular waves into the double-crest waves. With the increase of water depth after the artificial reef, the
double-crest waves further develop, where the front crest is smaller than the back crest. Under this circumstance, the wave foreslope becomes mild, while the wave backslope becomes steep. Due to
shallow water effects, the front crest fully grows and once its foreslope reaches the critical value (
= 0.4), the fore crest breaks first. In the swash area, the waterline also fluctuates up and down due to alternating uprush and backwash.
Wave skewness and wave asymmetry evolve with the wave surface profile due to wave transformation over the beach and the artificial reef. For the beach without the artificial reef (
Figure 5
a), both wave skewness and wave asymmetry are close to 0 in the offshore area (x = 0–5.2 m). As water depth decreases, wave skewness rises to the peak value over the sandbar at x = 8.2 m. Meanwhile,
wave asymmetry is a negative value, which decreases rapidly for shallow water effects, and reach the peak over the trough at x = 8.4 m. Then both wave skewness and wave asymmetry maintain their peak
values with oscillations. The oscillations may come from the wave reflection and the associated standing waves. Wave skewness represents the asymmetry of velocity, while wave asymmetry stands for the
asymmetry of acceleration. Thus, there is a quarter phase difference between their oscillations. In
Figure 5
b, the artificial reef causes further wave reflection and associated standing waves, where wave skewness and wave asymmetry show sine-like or cosine-like oscillations. During the wave propagation,
the water depth decreases at the artificial reef top (x = 2.7–4.5 m), and then recovers before the beach. In this condition, wave skewness rises first, and then decreases over the recovered water
depth on the landward side of the artificial reef (x > 4.5 m). After that, wave skewness increases again due to shallow water effects over the beach. The cross-shore variation of wave asymmetry is
almost contrary to wave skewness. Thus, the foreslope and backslope of wave crest vary significantly during the wave propagation.
The cross-shore evolution trend of wave skewness and asymmetry over the artificial reef is similar to that over a low-crested structure (LCS) using a 2-D RANS-VOF model and laboratory observations by
Zou & Peng [
] and Peng et al. [
]. It was found that wave reflection increases/decreases the magnitude of wave skewness and wave asymmetry on the incident/transmission side of LCS. According to the bispectral analysis by Zou & Peng
], the observed wave shape evolution over a LCS can be attributed to the changes in the interplay of sum and difference nonlinear wave-wave interactions. Besides the cross-shore wave height
transformation, mean flow, wave skewness and asymmetry are the major drivers for beach morphological change with and without low crest structure such as sand bar, natural and artificial reef and
breakwater [
Time average velocity, upper layer and lower layer flow velocity and their difference are shown in
Figure 6
. For the beach without the artificial reef (
Figure 6
a), time average velocity in offshore area is close to 0, so waves remain sinusoidal. Standing waves due to wave reflection lead to sine-like or cosine-like oscillations in upper and lower velocity.
In the shallow water area (x > 5.2 m), time average velocity is negative and its absolute value rises first and then decreases. Moreover, the lower layer flow is more intensive than the upper layer
flow, and both of them have an offshore trend.
Figure 6
b illustrate that the artificial reef alters the local flow structures considerably. On seaward side of the artificial reef (x < 2.7 m), time average velocity is 0, but upper and lower velocity and
their difference are characterized with oscillations due to wave reflection. On the artificial reef top (x = 2.7–4.5 m), the time average velocity is relatively steady, while the upper and lower
velocity vary irregularly. In the shallow water area (x > 5.2 m), time average velocity rises first and then declines, where the negative offshore flow is much more intensive in the lower layer than
the upper layer.
3.2. Sediment Transport
As shown in
Figure 7
, for both beach profiles, the suspended load transport rates are negative, while the bedload transport rates are positive, i.e., suspended load transport is offshore and bedload transport is
onshore. In
Figure 7
a, the maximum suspended load transport rate is 7.3 × 10
/s, but the maximum bedload transport rate is 4.9 × 10
/s which is much less than the suspended load transport. Thus, the suspended load transport plays a predominant role in beach profile evolution. In
Figure 7
b, the maximum suspended load transport rate decreases to 3.6 × 10
/s in the presence of the artificial reef, i.e., the artificial reef reduces the offshore flow and suspended load transport by 51%.
3.3. Morphological Evolution
Figure 8
shows the initial and the final morphology and their difference. The final morphology is characterized by a sandbar-trough-scarp profile. The wave-induced erosion generates a scarp at the waterline.
The suspended sediments transport offshore with the backwash. At the wave breaking point, the sediment deposition forms a sandbar. Then plunging waves contribute to the trough on the landward side of
the sandbar. In fact, the variation of bed level is related to the cross-shore transport gradient. As shown in
Figure 8
a, the critical deposition point of final morphology occurs at the position (x = 7.3 m) where the maximum suspended load transport rate occurs, whose cross-shore gradient is 0. Meanwhile, the sandbar
crest is located at the position (x = 6.7 m) where the maximum cross-shore gradient of suspended load transport rate is. Under this circumstance, it is evident that suspended load dominates the beach
profile evolution. The maximum erosion depth occurs at the scarp, which decreases from 0.08 m to 0.07 m by 13% for additional protection by the artificial reef. Moreover, the artificial reef reduces
the total erosion amount per unit width from 0.17 m
to 0.08 m
by 53%.
4. Discussion
Based on the non-hydrostatic simulation under irregular wave conditions, hydrodynamic and morphodynamic evolution of the protected beach are discussed. In addition, sensitivity analysis is utilized
to figure out the impacts of three key model parameters on non-hydrostatic simulation using XBeach model.
4.1. Irregular Wave Effects
The beach profile evolution under irregular wave conditions is related to cross-shore significant wave height, wave skewness, wave asymmetry, flow structure, sediment transport and bed level changes.
Figure 9
shows the numerical results in non-hydrostatic simulation for the beach with the artificial reef under irregular wave conditions. In
Figure 9
a, total significant wave height corresponds to short wave significant wave height, because short waves (0.15–3.16 Hz) contribute to the wave energy mainly, and a small part of energy is transferred
to long waves (0.01–0.15 Hz) because of wave breaking. The artificial reef results in standing waves due to wave reflection, whose influences decline with the increasing distance to the artificial
reef. Such characteristics are consistent with the theoretical and measured values by Goda et al. [
]. On the artificial reef top (x = 2.7–4.5 m), total significant wave height and the significant wave height of short waves declines rapidly, so the artificial reef has a significant effect on wave
attenuations. Then shallow water result in an increase in wave height, but the significant wave height declines due to wave breaking. Simultaneously, the increased long waves are resulted from wave
breaking through roller-induced radiation stress. As shown in
Figure 9
b, the cross-shore wave skewness and asymmetry are similar to those under regular wave conditions (
Figure 5
b). However, the oscillation is reduced under the irregular wave conditions because the standing wave effects induced by the wave reflection decline considerably.
The time average velocity, upper and lower velocity are smoother under irregular wave conditions (
Figure 9
c). The time average velocity in the offshore area is 0, while the upper and lower velocity oscillate due to the wave reflection by the artificial reef, and varies with the distance to the artificial
reef. On the artificial reef top (x = 2.7–4.5 m), the time average velocity is negative, where both the upper and lower flow are offshore. Then the time average velocity recovers to 0, but then
becomes negative in the shallow water area. Similar to the phenomena under regular wave conditions (
Figure 6
b), the lower offshore flow is more intensive than the upper offshore flow.
The cross-shore sediment transport rates are similar to those under regular wave conditions (
Figure 7
b), i.e., the suspended load transport dominates the beach profile evolution. In
Figure 9
d, the maximum suspended load transport rate is around 3.4 × 10
/s. Therefore, the artificial reef reduces the offshore sediment transport effectively under both regular and irregular waves conditions. The final morphology in
Figure 9
e is characterized by a terrace-scarp profile. Compared with the morphology under regular wave conditions, the moving breaking point of irregular waves can hardly form the fixed trough and sandbar
observed for regular waves anymore. The total erosion amount per unit width is 0.08 m
under irregular wave conditions. Therefore, the artificial reef provides positive and persistent protection from excessive erosions.
4.2. Sensitivity Analysis
Maximum wave steepness criterium (maxbrsteep), water depth factor (depthscale), and equilibrium sediment concentration factor (sedcal) are considered for sensitivity analysis of the model.
Due to shallow water effects, the foreslope of wave crest is steepened. Once it reaches
, wave breaks. In the non-hydrostatic simulation, assuming the pressure distribution of foreslope is hydrostatic, then the foreslope is like a vertical wave surface by breaking. Once the foreslope
steepness is below the half of
, non-hydrostatic term is included.
Figure 10
shows the wave surface profile time series at W1 and W9 by the non-hydrostatic simulation using different values of
. The default value of
is 0.4, and the suggested range is from 0.3 to 0.8. For sensitivity analysis, five values of
are set for simulations ranging from 0.2 to 1.0 with a uniform step by 0.2. In the offshore area (W9),
can hardly affect the wave surface. In the wave breaking area (W1), the increase of
may increase the wave height.
The water depth factor (
) consists of
(threshold water depth above which cells are considered wet),
(threshold water depth above which Stokes drift is included),
(water depth at which is switched from critical avalanching slope under water to critical avalanching slope above water) and
(maximum bed level change due to avalanching).
Figure 11
shows the final morphology and position of scarp by different values of
in non-hydrostatic simulation. The default value of
is 1, and the suggested range is between 1 and 200. Then thirteen values of
are set for sensitivity analysis by 1, 2, 3, 4, 7, 10, 30, 50, 70, 90, 110, 130 and 150.
Figure 11
illustrates that with the increase of
, the scarp moves onshore. Meanwhile, the sandbar is formed but the trough varies irregularly. According to the fittings by least square method, a relationship is established between the position of
scarp (
) and
With the limits of water level and uprush height, the scarp no longer moves onshore for increasing depthscale at the position (x = 10.11 m).
Equilibrium sediment concentration factor (
) represents an equilibrium state of sediment erosion and deposition. If
is high, more sediments would be suspended to compensate for the gap of practical sediment concentration.
Figure 12
shows the final morphology by different values of
in non-hydrostatic simulation. Six values of
are set for sensitivity analysis by 0.1, 0.5, 1, 2, 3, 4, respectively. As shown in
Figure 12
, increasing
extends the erosion area, where the sandbar moves offshore and the scarp moves onshore. However, the sandbar is formed irregularly in terms of the shape. Such results demonstrate that the beach
profile evolution is more sensitive to
5. Conclusions
To investigate the impacts of artificial reefs on beach profile evolution, a non-hydrostatic model based on XBeach is established. Same as the previous experiment by Ma et al. [
], the beach with and without the artificial reef are examined. Hydrodynamic and morphodynamic responses to the artificial reef are focused on regular wave conditions. In addition, effects of the
artificial reef under irregular wave conditions are compared with those of regular wave conditions. Sensitivity analysis is conducted regarding three key model parameters in non-hydrostatic
simulation, i.e., maximum wave steepness criterium (
), water depth factor (
), and equilibrium sediment concentration factor (
The conclusions are as follows:
• The artificial reef causes wave reflection and wave breaking further offshore, therefore, effectively attenuate waves;
• The intensive offshore flow plays a dominant role in suspended load transport, and the artificial reef decelerates local flow and reduces the offshore sediment transport by 51%;
• Regular waves transform the initial plane beach into a sandbar-trough-scarp profile, where the artificial reef reduces the total erosion amount per unit width by 53%;
• Over the artificial reef, wave skewness and asymmetry undergo a drastic change;
• Under irregular wave conditions, short waves contribute to the wave energy mainly. Meanwhile, standing wave effects due to wave reflection by the artificial reef decline considerably;
• Irregular waves transform the initial plane beach to a terrace-scarp profile, where the artificial reef shows good performances in protecting beach from excessive erosions under both regular and
irregular wave conditions;
• In wave breaking area, the increase of maximum wave steepness criterium (maxbrsteep) may increase the wave height. With increasing water depth factor (depthscale), the scarp extends onshore until
x = 10.11 m due to the limit of water level and uprush height. Increasing equilibrium sediment concentration factor (sedcal) extends the erosion area, but form the sandbar irregularly varying
irregularly in shape.
Based on the non-hydrostatic simulation, the protecting effects of artificial reefs have been investigated alone. Additional protection measures should be considered for the coast which still suffers
from erosions after beach nourishment. Moreover, climate changes, such as sea level rise and recurrent wind storms, have been threatening sandy beaches around the world, which leads to shoreline
recessions considerably [
]. The non-hydrostatic model can be applied for the reliable simulation of beach responses to sea level rise and storm surges [
]. Furthermore, erosion hotspots indicate the erosive characteristics of sandy beaches predominately [
], so it is feasible to use the non-hydrostatic model to identify erosion hotspots during the beach profile evolution. The numerical results may help optimize local restoration effort to mitigate
erosion at the hotspots. To overcome the model sensitivities to the maximum wave steepness criterium (
), one- and two- phase Reynolds Averaged Navier–Stokes solver (RANS) and Volume of Fluid (VOF) surface capturing scheme (RANS-VOF) may be used to directly resolve the breaking point and the
overturning breaking wave surface profile and the resulting turbulence and morphological changes with and without an artificial reef [
]. With the further optimization of the non-hydrostatic model, a wider range of wave conditions and arrangements of multiple artificial constructions will be taken into accounts for further
investigations in near future.
Author Contributions
Conceptualization, C.K. and J.F.; methodology, X.H.; software, H.L.; validation, C.K., J.F. and X.H.; formal analysis, H.L.; investigation, J.F.; resources, R.Q.; data curation, H.L.;
writing—original draft preparation, C.K. and X.H.; writing—review and editing, J.F. and Q.Z.; supervision, R.Q. and Q.Z.; project administration, C.K.; funding acquisition, C.K. All authors have read
and agreed to the published version of the manuscript.
This research was funded by the National Key Research and Development Project of China (Grant No. 2022YFC3106205) and the National Natural Science Foundation of China (Grant Nos. 41976159 and
41776098). Professor Qingping Zou has been supported by the Natural Environment Research Council of UK (Grant No. NE/V006088/1) and Scottish Government (CRW2022_02).
Data Availability Statement
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to reasons related to national security.
Conflicts of Interest
The funders had no role in the design of the study, in the collection, analysis, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.
1. Vallam, S.; Annamalaisamy, S.S.; Ramesh, S.B. Sustainable hard and soft measures for coastal protection—Case studies along the Indian Coast. Mar. Georesour. Geotechnol. 2022, 40, 600–615. [Google
2. Ryzhakov, P.; Hermosilla, F.; Ubach, P.-A.; Oñate, E. Adaptive breakwaters with inflatable elements for coastal protection. Preliminary numerical estimation of their performance. Ocean Eng. 2022,
251, 110818. [Google Scholar] [CrossRef]
3. Celli, D.; Li, Y.; Ong, C.M.; Di Risio, M. The role of submerged berms on the momentary liquefaction around conventional rubble mound breakwaters. Appl. Ocean Res. 2019, 85, 1–11. [Google Scholar
4. Semeoshenkova, V.; Newton, A. Overview of erosion and beach quality issues in three Southern European countries: Portugal, Spain and Italy. Ocean Coast. Manag. 2015, 118, 12–21. [Google Scholar]
5. Cantasano, N.; Boccalaro, F.; Ietto, F. Assessing of detached breakwaters and beach nourishment environmental impacts in Italy: A review. Environ. Monit. Assess. 2022, 195, 127. [Google Scholar]
6. Ma, Z.; Melville, D.S.; Liu, J.; Chen, Y.; Yang, H.; Ren, W.; Zhang, Z.; Piersma, T.; Li, B. Rethinking China’s new great wall. Science 2014, 346, 912–914. [Google Scholar] [CrossRef]
7. Qi, H.S.; Liu, G.; Cai, F.; Zhu, J.; Liu, J.H.; Lei, G.; He, Y.Y.; Zheng, J.X.; Cao, H.M. Development trend and prospect of beach nourishment technology. J. Appl. Oceanogr. 2021, 40, 111–125. [
Google Scholar]
8. Schoonees, T.; Mancheño, G.A.; Scheres, B.; Bouma, T.J.; Silva, R.; Schlurmann, T.; Schüttrumpf, H. Hard structures for coastal protection, towards greener designs. Estuaries Coasts 2019, 42,
1709–1729. [Google Scholar]
9. Elko, N.; Briggs, T.R.; Benedet, L.; Robertson, Q.; Thomson, G.; Webb, B.M.; Garvey, K. A century of U.S. beach nourishment. Ocean Coast. Manag. 2021, 199, 105406. [Google Scholar] [CrossRef]
10. de Vriend, H.J.; van Koningsveld, M.; Aarninkhof, S.G.J.; de Vries, M.B.; Baptist, M.J. Sustainable hydraulic engineering through building with nature. J. Hydro-Environ. Res. 2015, 9, 159–171. [
Google Scholar] [CrossRef]
11. Dornhelm, R.B. The Coney Island public beach and boardwalk improvement of 1923. In Urban Beaches: Balancing Public Rights and Private Development; ASCE: Preston, VA, USA, 2003; pp. 52–63. [Google
12. Roest, B.; de Vries, S.; de Schipper, M.; Aarninkhof, S. Observed changes of a mega feeder nourishment in a coastal cell: Five years of Sand Engine morphodynamics. J. Mar. Sci. Eng. 2021, 9, 37.
[Google Scholar]
13. Aleixo, C.P.; Mendes, T.S.; Braz, S.T. Beach nourishment practice in mainland Portugal (1950–2017): Overview and retrospective. Ocean Coastal Manag. 2020, 192, 105211. [Google Scholar]
14. Hamm, L.; Capobianco, M.; Dette, H.H.; Lechugad, A.; Spanhoffe, R.; Stive, M.J.F. A summary of European experience with shore nourishment. Coast. Eng. 2002, 47, 237–264. [Google Scholar]
15. Bitan, M.; Zviely, D. Sand Beach Nourishment: Experience from the Mediterranean Coast of Israel. J. Mar. Sci. Eng. 2020, 8, 273. [Google Scholar] [CrossRef]
16. Ali, A.; Abdullah, M.R.; Safuan, C.D.M.; Afiq-Firdaus, A.M.; Bachok, Z.; Akhir, M.F.M.; Latif, R.; Muhamad, A.; Seng, T.H.; Roslee, A.; et al. Side-scan sonar coupled with scuba diving
observation for enhanced monitoring of benthic artificial reefs along the coast of Terengganu, Peninsular Malaysia. J. Mar. Sci. Eng. 2022, 10, 1309. [Google Scholar]
17. David da Costa, I.; Luís da Silva, S.J.; Costa, L.L.; Lima, J.S.; Zalmon, I.R. Reproductive potential and production role of artificial reefs—Southeastern Brazil. Estuar. Coast. Shelf Sci. 2022,
265, 107710. [Google Scholar]
18. Wang, G.; Wan, R.; Wang, X.X.; Zhao, F.F.; Lan, X.Z.; Cheng, H.; Tang, W.Y.; Guan, Q.L. Study on the influence of cut-opening ratio, cut-opening shape, and cut-opening number on the flow field of
a cubic artificial reef. Ocean Eng. 2018, 162, 341–352. [Google Scholar]
19. Nie, Z.; Zhu, L.; Xie, W.; Zhang, J.; Wang, J.; Jiang, Z.; Liang, Z. Research on the influence of cut-opening factors on flow field effect of artificial reef. Ocean Eng. 2022, 249, 110890. [
Google Scholar] [CrossRef]
20. Tang, Y.; Yang, W.; Sun, L.; Zhao, F.; Long, X.; Wang, G. Studies on factors influencing hydrodynamic characteristics of plates used in artificial reefs. J. Ocean Univ. China 2019, 18, 193–202. [
Google Scholar]
21. Zheng, Y.; Kuang, C.; Zhang, J.; Gu, J.; Chen, K.; Liu, X. Current and turbulence characteristics of perforated box-type artificial reefs in a constant water depth. Ocean Eng. 2022, 244, 110359.
[Google Scholar]
22. Maslov, D.; Pereira, E.; Duarte, D.; Miranda, T.; Ferreira, V.; Tieppo, M.; Cruz, F.; Johnson, J. Numerical analysis of the flow field and cross section design implications in a multifunctional
artificial reef. Ocean Eng. 2023, 272, 113817. [Google Scholar]
23. Zhang, J.; Zhu, L.; Liang, Z.; Sun, L.; Nie, Z.; Wang, J.; Xie, W.; Jiang, Z. Numerical study of efficiency indices to evaluate the effect of layout mode of artificial reef unit on flow field. J.
Mar. Sci. Eng. 2021, 9, 770. [Google Scholar]
24. Xue, D.; Wang, C.; Huang, T.; Pan, Y.; Zhang, N.; Zhang, L. Flow field effects and physical stability of pyramidal artificial reef with different slope angles. Ocean Eng. 2023, 283, 115059. [
Google Scholar]
25. Zhou, P.; Gao, Y.; Zheng, S. Three-dimensional numerical simulation on flow behavior behind trapezoidal artificial reefs. Ocean Eng. 2022, 266, 112899. [Google Scholar]
26. Jung, S.; Na, W.-B.; Kim, D. Rugosity and blocking indices of artificial reefs and their correlations with wake volume. Ocean Eng. 2022, 261, 112204. [Google Scholar]
27. Shu, A.; Wang, M.; Qin, J.; Wang, S.; Zhu, F. The characteristics for flow field distribution and sediment incipient movement around the typical artificial reefs area in Bohai Bay. SHUILI XUEBAO
2020, 51, 1223–1233. [Google Scholar]
28. Shu, A.; Qin, J.; Sun, T.; Yang, W.; Wang, M.; Zhu, J. Discussion on water and sediment dynamic characteristics and layout optimization of typical artificial reefs in Liaodong Bay of Bohai Sea.
SHUILI XUEBAO 2022, 53, 43–53. [Google Scholar]
29. Zhang, Q. Effects of Different Structures on Flow Resistance and Experiment on Local Scour of Artificial Reef. Master’s Thesis, Shanghai Ocean University, Shanghai, China, 13 May 2022. [Google
30. Tang, Y.; Wei, S.; Yang, M.; Wang, X.; Zhao, F. Experimental investigation of local scour around artificial reefs in steady currents. J. Ocean Univ. China 2022, 21, 445–456. [Google Scholar]
31. Luijendijk, A.; Hagenaars, G.; Ranasinghe, R.; Baart, F.; Donchyts, G.; Aarninkhof, S. The state of the world’s beaches. Sci. Rep. 2018, 8, 6641. [Google Scholar]
32. Eelsalu, M.; Parnell, K.E.; Soomere, T. Sandy beach evolution in the low-energy microtidal Baltic Sea: Attribution of changes to hydrometerological forcing. Geomorphology 2022, 414, 108383. [
Google Scholar]
33. Wu, X. Numerical Modelling of Sediment Transport in an Artificial Reef Area. Ph.D. Thesis, Shanghai Ocean University, Shanghai, China, 20 May 2019. [Google Scholar]
34. Vieira da Silva, G.; Hamilton, D.; Strauss, D.; Murray, T.; Tomlinson, R. Sediment pathways and morphodynamic response to a multi-purpose artificial reef—New insights. Coast. Eng. 2022, 171,
104027. [Google Scholar] [CrossRef]
35. Schuh, E.; Grilli, A.R.; Groetsch, F.; Grilli, S.T.; Crowley, D.; Ginis, I.; Stempel, P. Assessing the morphodynamic response of a New England beach-barrier system to an artificial reef. Coast.
Eng. 2023, 184, 104355. [Google Scholar]
36. Yang, L.P.; Yang, S.P.; Zhang, Z.Y.; Zhu, J.L.; Shi, B. Experimental study on wave dissipation and beach protection by gravel dam and porous square reef: Taking Beidaihe West Beach as the
example. Coastal Eng. 2022, 41, 223–232. [Google Scholar]
37. Kuang, C.; Ma, Y.; Han, X.; Pan, S.; Zhu, L. Experimental observation on beach evolution process with presence of artificial submerged sand bar and reef. J. Mar. Sci. Eng. 2020, 8, 1019. [Google
Scholar] [CrossRef]
38. Ma, Y.; Kuang, C.; Han, X.; Niu, H.; Zheng, Y.; Shen, C. Experimental study on the influence of an artificial reef on cross-shore morphodynamic processes of a wave-dominated beach. Water 2020, 12
, 2947. [Google Scholar] [CrossRef]
39. Gharagozlou, A.; Dietrich, J.C.; Karanci, A.; Luettich, R.A.; Overton, M.F. Storm-driven erosion and inundation of barrier islands from dune to region-scales. Coast. Eng. 2020, 158, 103674. [
Google Scholar] [CrossRef]
40. Roelvink, D.; McCall, R.; Mehvar, S.; Nederhoff, K.; Dastgheib, A. Improving predictions of swash dynamics in Xbeach: The role of groupiness and incident-band runup. Coast. Eng. 2018, 134,
103–123. [Google Scholar] [CrossRef]
41. Roelvink, D.; Reniers, A.; van Dongeren, A.; de Vries, J.V.T.; McCall, R.; Lescinski, J. Modelling storm impacts on beaches, dunes and barrier islands. Coast. Eng. 2009, 56, 1133–1152. [Google
Scholar] [CrossRef]
42. Cui, H.; Pietrzak, J.D.; Stelling, G.S. Optimal dispersion with minimized Poisson equations for non-hydrostatic free surface flows. Ocean Model. 2014, 81, 1–12. [Google Scholar]
43. de Ridder, M.P.; Smit, P.B.; van Dongeren, A.; McCall, R.; Nederhoff, K.; Reniers, A.J.H.M. Efficient two-layer non-hydrostatic wave model with accurate dispersive behaviour. Coast. Eng. 2021,
164, 103808. [Google Scholar] [CrossRef]
44. Zijlema, M.; Stelling, G.; Smit, P. SWASH: An operational public domain code for simulating wave fields and rapidly varied flows in coastal waters. Coast. Eng. 2011, 58, 992–1012. [Google Scholar
] [CrossRef]
45. Stelling, G.; Zijlema, M. An accurate and efficient finite-difference algorithm for non-hydrostatic free-surface flow with applica-tion to wave propagation. Int. J. Numer. Methods Fluids 2003, 43
, 1–23. [Google Scholar] [CrossRef]
46. Lam, D.C.L.; Simpson, R.B. Centered differencing and the box scheme for diffusion convection problems. J. Comput. Phys. 1976, 22, 486–500. [Google Scholar] [CrossRef]
47. Willmott, C.J. On the validation of models. Phys. Geogr. 1981, 2, 184–194. [Google Scholar] [CrossRef]
48. Zou, Q.; Peng, Z. Evolution of wave shape over a low-crested structure. Coastal Eng. 2011, 58, 478–488. [Google Scholar] [CrossRef]
49. Peng, Z.; Zou, Q.; Reeve, D.E.; Wang, B. Parameterisation and transformation of wave asymmetries over a low-crested breakwater. Coastal Eng. 2009, 56, 1123–1132. [Google Scholar] [CrossRef]
50. Peng, Z.; Zou, Q.; Lin, P. A partial cell technique for modeling the morphological change and scour. Coastal Eng. 2018, 131, 88–105. [Google Scholar] [CrossRef]
51. Ruessink, B.V.; Van Den Berg, T.J.J.; Van Rijn, L.C. Modeling sediment transport beneath skewed asymmetric waves above a plane bed. J. Geophys. Res. Oceans 2009, 114, 1–14. [Google Scholar] [
52. Gonzalez-Rodriguez, D.; Madsen, O.S. Seabed shear stress and bedload transport due to asymmetric and skewed waves. Coastal Eng. 2007, 54, 914–929. [Google Scholar] [CrossRef]
53. Hoefel, F.; Elgar, S. Wave-induced sediment transport and sandbar migration. Science 2003, 299, 1885–1887. [Google Scholar] [CrossRef]
54. Goda, Y.; Suzuki, Y. Estimation of incident and reflected waves in random wave experiments. Coastal Eng. 1976, 1976, 828–845. [Google Scholar] [CrossRef]
55. Dastgheib, A.; Martinez, C.; Udo, K.; Ranasinghe, R. Climate change driven shoreline change at Hasaki Beach Japan: A novel application of the Probabilistic Coastline Recession (PCR) model. Coast.
Eng. 2021, 172, 104079. [Google Scholar] [CrossRef]
56. Bonaldo, D.; Bucchignani, E.; Pomaro, A.; Ricchi, A.; Sclavo, M.; Carniel, S. Wind waves in the Adriatic Sea under a severe climate change scenario and implications for the coasts. Int. J. Clim.
2020, 40, 5389–5406. [Google Scholar] [CrossRef]
57. Forgiarini, A.P.P.; de Figueiredo, S.A.; Calliari, L.J.; Goulart, E.S.; Marques, W.; Trombetta, T.B.; Oleinik, P.H.; Guimaraes, R.C.; Arigony-Neto, J.; Salame, C.C. Quantifying the geomorphologic
and urbanization influence on coastal retreat under sea level rise. Estuar. Coast. Shelf Sci. 2019, 230, 106437. [Google Scholar] [CrossRef]
58. Thepsiriamnuay, H.; Pumijumnong, N. Modelling Assessment of Sandy Beaches Erosion in Thailand. Environ. Nat. Resour. J. 2018, 17, 71–86. [Google Scholar] [CrossRef]
59. Vousdoukas, M.I.; Ranasinghe, R.; Mentaschi, L.; Plomaritis, T.A.; Athanasiou, P.; Luijendijk, A.; Feyen, L. Sandy coastlines under threat of erosion. Nat. Clim. Chang. 2020, 10, 260–263. [Google
Scholar] [CrossRef]
60. Bagheri, M.; Zaiton Ibrahim, Z.; Bin Mansor, S.; Abd Manaf, L.; Badarulzaman, N.; Vaghefi, N. Shoreline change analysis and erosion prediction using historical data of Kuala Terengganu, Malaysia.
Environ. Earth Sci. 2019, 78, 1–21. [Google Scholar] [CrossRef]
61. Frohlich, M.F.; Smith, T.F.; Fidelman, P.; Baldwin, C.; Jacobson, C.; Carter, R.B. Legal barriers to adaptive coastal management at a coastal erosion hotspot in Florianópolis, Brazil. Mar. Policy
2021, 127, 104436. [Google Scholar] [CrossRef]
62. Yu, J.; Ding, Y.; Zhang, L.; Liu, P.; Fan, R. Erosion hotspot identified along the sandy coast of Shanwei: Characteristics and origin. Acta Oceanol. Sin. 2023, 42, 91–102. [Google Scholar] [
63. Bakhtyar, R.; Razmi, A.M.; Barry, D.A.; Yeganeh-Bakhtiary, A.; Zou, Q. Air-water two-phase flow modeling of turbulent surf and swash zone wave motions. Adv. Water Resour. 2010, 33, 1560–1574. [
Google Scholar] [CrossRef]
64. Lara, J.L.; Garcia, N.; Losada, I.J. RANS modelling applied to random wave interaction with submerged permeable structures. Coastal Eng. 2006, 53, 395–417. [Google Scholar] [CrossRef]
Figure 1.
Experiment set-up for the beach (
) without and (
) with an artificial reef [
Figure 2.
Validations of the predicted wave surface profile time series by non-hydrostatic model XBeach against the experiment [
Figure 3.
Validation of predicted beach profile by non-hydrostatic model against the experiment by Ma et al. [
Figure 4. Predicted wave surface profile and beach profile at 60 s for the beach (a) without and (b) with the artificial reef.
Figure 5. Predicted cross-shore profile of wave skewness and asymmetry for the beach (a) without and (b) with the artificial reef.
Figure 6. Time average velocity, upper and lower velocity and their difference for the beach (a) without and (b) with the artificial reef.
Figure 7. Suspended load and bedload transport rate for the beach (a) without and (b) with the artificial reef.
Figure 8. Initial and final morphology and their difference for the beach (a) without and (b) with the artificial reef.
Figure 9. (a) Total significant wave height, significant wave height of short waves and long waves; (b) wave skewness and asymmetry; (c) time average velocity, upper and lower velocity and their
difference; (d) suspended load and bedload transport rate; (e) initial and final morphology and their difference for the beach with the artificial reef under irregular waves.
Figure 10. Wave surface profile time series at wave gauge (a) W1 the wave breaking area and (b) W9 in the offshore area by the non-hydrostatic simulation using different values of maximum wave
steepness criterium, maxbrsteep.
Figure 11. (a) Final morphology; (b) position of the scarp; (c) position of the scarp by non-hydrostatic simulation using different values of water depth factor (depthscale).
Figure 12. Final morphology by non-hydrostatic simulation using different values of equilibrium sediment concentration factor (sedcal).
Item Position d Evaluation
Wave surface W1 0.9421 Excellent
Wave surface W2 0.9917 Excellent
Wave surface W3 0.9925 Excellent
Wave surface W4 0.9936 Excellent
Wave surface W5 0.9905 Excellent
Wave surface W6 0.9942 Excellent
Wave surface W7 0.9921 Excellent
Wave surface W8 0.9966 Excellent
Wave surface W9 0.9955 Excellent
Beach profile 0.9953 Excellent
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Kuang, C.; Fan, J.; Han, X.; Li, H.; Qin, R.; Zou, Q. Numerical Modelling of Beach Profile Evolution with and without an Artificial Reef. Water 2023, 15, 3832. https://doi.org/10.3390/w15213832
AMA Style
Kuang C, Fan J, Han X, Li H, Qin R, Zou Q. Numerical Modelling of Beach Profile Evolution with and without an Artificial Reef. Water. 2023; 15(21):3832. https://doi.org/10.3390/w15213832
Chicago/Turabian Style
Kuang, Cuiping, Jiadong Fan, Xuejian Han, Hongyi Li, Rufu Qin, and Qingping Zou. 2023. "Numerical Modelling of Beach Profile Evolution with and without an Artificial Reef" Water 15, no. 21: 3832.
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2073-4441/15/21/3832?utm_campaign=releaseissue_waterutm_medium=emailutm_source=releaseissueutm_term=titlelink151","timestamp":"2024-11-12T05:24:42Z","content_type":"text/html","content_length":"500424","record_id":"<urn:uuid:284e5ca3-e534-477f-ad00-ec8454afc7f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00589.warc.gz"} |
Angular Rate Control in the HL-20 Autopilot
This is Part 2 of the example series on design and tuning of the flight control system for the HL-20 vehicle. This part deals with closing the inner loops controlling the body angular rates.
Control Architecture
Open the HL-20 model with its flight control system.
This 6-DOF model is adapted from NASA HL-20 Lifting Body Airframe (Aerospace Blockset). The model is configured to simulate the final approach to the landing site. The "Guidance System" generates the
glideslope trajectory and corresponding roll, angle of attack (alpha), and sideslip angle (beta) commands. The "Flight Control System" is tasked with adjusting the control surfaces to track these
commands. The "Controller" block inside the "Flight Control System" is a variant subsystem with different autopilot configurations.
The "Baseline" and "Classical" controllers use a classic cascaded-loop architecture with three inner P-only loops to control the angular rates p,q,r, and three outer PI loops to control the angular
positions phi,alpha,beta. The six proportional gains and three integral gains are all scheduled as a function of alpha and beta. The "Baseline" variant contains the baseline design featured in NASA
HL-20 Lifting Body Airframe (Aerospace Blockset). Parts 2 and 3 of this series use the "Classical" variant to walk through the tuning process. The active variant is controlled by the workspace
variable CTYPE. Set its value to 2 to activate the "Classical" variant of the controller.
% Select "Classical" variant of controller
CTYPE = 2;
% call model update to make sure only active variant signals are analyzed during linearization
set_param('csthl20_control', 'SimulationCommand', 'update');
Note that this variant uses a mix of lookup tables and MATLAB Function blocks to schedule the autopilot gains.
Setup for Controller Tuning
In Part 1 of this series (Trimming and Linearization of the HL-20 Airframe), we obtained linearized models of the "HL20 Airframe" and "Controls Selector" blocks for 40 different aircraft orientations
(40 different pairs of (alpha,beta) values). Load these arrays of linearized models.
load csthl20_TrimData G7 CS
8x5 array of state-space models.
Each model has 34 outputs, 9 inputs, and 7 states.
8x5 array of state-space models.
Each model has 6 outputs, 5 inputs, and 0 states.
The slTuner interface is a convenient way to obtain linearized models of "csthl20_control" that are suitable for control system design and analysis. Through this interface you can designate the
signals and points of interest in the model and specify which blocks you want to tune.
ST0 = slTuner('csthl20_control');
ST0.Ts = 0; % ask for continuous-time linearizations
Here the points of interest include the angular and rate demands, the corresponding responses, and the deflections da,de,dr.
AP = {'da;de;dr'
'HL20 Airframe/pqr'
'Controller/Classical/Demands' % angular demands
Since we already obtained linearized models of the "HL20 Airframe" and "Controls Selector" blocks as a function of (alpha,beta), the simplest way to linearize the entire model "csthl20_control" is to
replace each nonlinear component by a family of linear models. This is called "block substitution" and is often the most effective way to linearize complex models at multiple operating conditions.
% Replace "HL20 Airframe" block by 8-by-5 array of linearized models G7
BlockSub1 = struct('Name','csthl20_control/HL20 Airframe','Value',G7);
% Replace "Controls Selector" by CS
BlockSub2 = struct('Name','csthl20_control/Flight Control System/Controls Selector','Value',CS);
% Replace "Actuators" by direct feedthrough (ignore saturations and second-order actuator dynamics)
BlockSub3 = struct('Name','csthl20_control/Actuators','Value',eye(6));
ST0.BlockSubstitutions = [BlockSub1 ; BlockSub2 ; BlockSub3];
You are now ready for the control design part.
Closing the Inner Loops
Begin with the three inner loops controlling the angular rates p,q,r. To get oriented, plot the open-loop transfer function from deflections (da,de,dr) to angular rates (p,q,r). With the slTuner
interface, you can query the model for any transfer function of interest.
% NOTE: The second 'da;de;dr' opens all feedback loops at the plant input
Gpqr = getIOTransfer(ST0,'da;de;dr','pqr','da;de;dr');
bode(Gpqr(1,1),Gpqr(2,2),Gpqr(3,3),{1e-1,1e3}), grid
legend('da to p','de to q','dr to r')
ans =
Legend (da to p, de to q, dr to r) with properties:
String: {'da to p' 'de to q' 'dr to r'}
Location: 'northeast'
Orientation: 'vertical'
FontSize: 8.1000
Position: [0.7795 0.8188 0.1750 0.1278]
Units: 'normalized'
Use GET to show all properties
This Bode plot suggests that the diagonal terms behave as integrators (up to the sign) beyond 5 rad/s. This justifies using proportional-only control. Consistent with the baseline design, set the
target bandwidth for the p,q,r loops to 30, 22.5, and 37.5 rad/s, respectively. The gains Kp, Kq, Kr for each (alpha,beta) value are readily obtained from the plant frequency response at these
frequencies, and the phase plots indicate that Kp should be positive (negative feedback) and Kq, Kr should be negative (positive feedback).
% Compute Kp,Kq,Kr for each (alpha,beta) condition. Resulting arrays
% have size [1 1 8 5]
Kp = 1./abs(evalfr(Gpqr(1,1),30i));
Kq = -1./abs(evalfr(Gpqr(2,2),22.5i));
Kr = -1./abs(evalfr(Gpqr(3,3),37.5i));
bode(Gpqr(1,1)*Kp,Gpqr(2,2)*Kq,Gpqr(3,3)*Kr,{1e-1,1e3}), grid
legend('da to p','de to q','dr to r')
ans =
Legend (da to p, de to q, dr to r) with properties:
String: {'da to p' 'de to q' 'dr to r'}
Location: 'northeast'
Orientation: 'vertical'
FontSize: 8.1000
Position: [0.7795 0.8188 0.1750 0.1278]
Units: 'normalized'
Use GET to show all properties
To conclude the inner-loop design, push these gain values to the corresponding lookup tables in the Simulink® model and refresh the slTuner object.
MWS = get_param('csthl20_control','ModelWorkspace');
Next you need to tune the outer loops controlling roll, angle of attack, and sideslip angle. Part 3 of this series (Attitude Control in the HL-20 Autopilot - SISO Design) shows how to tune a classic
SISO architecture and Part 4 (Attitude Control in the HL-20 Autopilot - MIMO Design) looks into the benefits of a MIMO architecture.
Related Topics | {"url":"https://ww2.mathworks.cn/help/control/ug/HL20RateControlExample.html","timestamp":"2024-11-12T18:38:00Z","content_type":"text/html","content_length":"81652","record_id":"<urn:uuid:0d88db59-ec49-4666-9212-50122858070e>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00785.warc.gz"} |
This function is not yet fully documented. This is a transcript of the text-formatted help.
bblaplacian Create a discrete Laplace-operator
BB=bblaplacian(DIM1) creates a black-box operator which computes the
discrete Laplace-operator on a hypercube with dimension given in DIM1.
The resulting hypercube has the same dimension as the input, and is computed
adduming all values outside the hypercube is zero.
The finite-difference stencil is [1,-2,1] for 1D-problems. This is applied
to each dimension of the hypercube and summed.
Example: Poisson's equation with Dirichlet boundary condition zero of a
rectangular domain with 100-by-200 internal nodes, amounts to solving
BB*X=F(:), where F contains the function values on the internal nodes, and
See also , , bbconvn. | {"url":"https://xtra.nru.dk/bbtools/help/toolbox/bbtools/bblaplacian.html","timestamp":"2024-11-09T03:29:40Z","content_type":"text/html","content_length":"3351","record_id":"<urn:uuid:2ee2141b-f012-4ca3-8e14-8f69b566d192>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00312.warc.gz"} |
Question 4 and 5, Exercise 5.1
Solutions of Question 4 and 5 of Exercise 5.1 of Unit 05: Polynomials. This is unit of Model Textbook of Mathematics for Class XI published by National Book Foundation (NBF) as Federal Textbook
Board, Islamabad, Pakistan.
Question 4
If $4 y^{3}-4 y^{2}+10+2 y$ is completely divisible by any of its factor such that the quotient is $4 y^{2}-8 y+10$, then find other factor.
Question 5
Find the value of ' $q$ ' if $x^{3}+q x^{2}-7 x+6$ is exactly divisible by $(x+1)$.
Let $p(x)=x^{3}+q x^{2}-7 x+6$ and $x-c=x+1$ $\implies c=-1$.
By factor theorem $x+1$ is factor of $p(x)$ iff $p(-1)=0$.
This gives \begin{align*} &(-1)^3+q(-1)^2-7(-1)+6=0 \\ -&1+q+7+6=0\\ &q+12=0\\ &q=-12 \end{align*}
Go to | {"url":"https://www.mathcity.org/math-11-nbf/sol/unit05/ex5-1-p3","timestamp":"2024-11-06T08:13:54Z","content_type":"application/xhtml+xml","content_length":"23587","record_id":"<urn:uuid:a92b8758-bf7d-4337-b6b9-915e70df7237>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00265.warc.gz"} |
Handreiking Polymorphic Pseudonimization Notation
The specifications on this page are a copy of the one provided in the "Uniforme Set van Eisen" (USvE) version 1.0. Please note that the Afsprakenstelsel Elektronische Toegangsdiensten adheres to
these USvE specifications, and a copy is included here for information only. The copy may have been modified to reflect Afsprakenstelsel Elektronische Toegangsdiensten terminology and references.
The most recent version of the technical BSNk specifications are available on Logius | BSNk PP documentatie or on request (beheerorganisatie BSNk through servicecentrum@logius.nl). The information
below is for information only.
Polymorphic Pseudonymization uses a number of cryptographic structures. These crypto-structures are denoted in the Interface Specifications as follows:
Pseudonym Notation Pseudonym (English) Notation Explanation
Polymorf PP@MU Polymorphic Pseudonym PP@MU PP denotes the pseudonym is a Polymorphic Pseudonym. The abbrevation following the '@' symbol denotes the relying party the
Pseudoniem Polymorphic Pseudonym is unique to.
Polymorfe PI@MU Polymorphic Identity PI@MU PI denotes the pseudonym is a Polymorphic Identity. The abbrevation following the '@' symbol denotes the relying party the Polymorphic
Identiteit Pseudonym is unique to.
Versleuteld VP@DV Encrypted Pseudonym EP@DV VP in Dutch or EP in English denotes the pseudonym is an Encrypted Pseudonym. The abbrevation following the '@' symbol denotes the
Pseudoniem relying party the Encrypted Pseudonym is unique to.
Versleutelde VI@DV Encrypted Identity EI@DV VI in Dutch or EI in English denotes the pseudonym is an Encrypted Identity. The abbrevation following the '@' symbol denotes the
Identiteit relying party the Encryped Pseudonym is unique to.
persistent P@DV persistent Pseudonym P@DV P denotes the pseudonym is a persistent Pseudonym. The abbrevation following the '@' symbol denotes the relying party the persistent
Pseudoniem Pseudonym is unique to.
The Identity in polymorphic pseudonymization refers to the identity obtained after decryption of an Encrypted Pseudonym by a receiving
Identiteit - Identity - party. This Identity equals the root identifying attribute used to generate the PIP that Polymorph Pseudonyms are based on, for
example the BSN. Instead of referring to the decrypted Encrypted Pseudonym as Identity, the root identifying attribute is used
Sleutelmateriaal DV-keys The Dienstverlener (DV) specific keymaterial necessary to decrypt Encrypted Pseudonyms.
PolymorphicSchemePublicKeySet Polymorphic Pseudonymization scheme general public keys.
This paragraph describes the technical format of polymorphic identities and pseudonyms and related key formats.
Polymorphic identities and pseudonyms in the scheme are based on cryptographic properties of elliptic curves.
Usages of Polymorphic Pseudonymization
• Activation
□ Polymorphic Identity or Pseudonym
□ Encrypted Identity or Pseudonym
• Usage (transformation and decryption)
□ Encrypted Identity or Pseudonym
□ Identity or Pseudonym
Format for Polymorphic Identity or Pseudonym
A Polymorphic Identity or Pseudonym is a combination of points on an elliptic curve. In order for the Identity or Pseudonym to be properly usable in the scheme, some additional information is needed.
This information is necessary for practical management and secure implementation of Identity or Pseudonym in the Scheme and consists of elements like versioning (for key management) and recipient.
The syntax for expressing an Identity or Pseudonym with this information is listed below.
Values of the notations below SHALL be represented as (the base64 encoding of) the DER-encoded structure in ASN.1 notation.
Polymorphic Identity or Pseudonym
A Polymorphic Identity or Pseudonym consists of 3 points on an elliptic curve. Polymorphic Identity or Pseudonym are provided via the Interface spec BSNk: activate. They are used via the interface
Interface spec BSNk: transform. The notation for a complete Polymorphic Identity or Pseudonym is as follows:
Polymorphic Identity or Pseudonym ASN.1 notation
PolymorphicIdentity ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-polymorphic-identity),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
creator IA5String,
recipient IA5String,
recipientKeySetVersion INTEGER,
points SEQUENCE (SIZE (3)) OF ECPoint
PolymorphicPseudonym ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-polymorphic-pseudonym),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
creator IA5String,
recipient IA5String,
recipientKeySetVersion INTEGER,
type INTEGER,
points SEQUENCE (SIZE (3)) OF ECPoint
Herein the schemeVersion indicates the version of the cryptographic scheme and this syntax and SHALL start at 1. The schemeKeyVersion is a version that SHALL start at 1 and represents the effective
set of long term scheme master keys (PP-M, PD-M, etc...). The schemeKeyVersion defines the elliptic curve used in the scheme as well. The creator SHALL contain the entityID (OIN) of the creator, and
the recipient SHALL contain the entityID (OIN) of the recipient. The recipientKeySetVersion holds the version number for the set of recipient's keys for Polymorphic Identities and Pseudonyms (PA-Di).
Note: In schemeVersion 1 the recipientKeySetVersion for MUs and ADs is a sequence starting at 1. Type defines the identity type the Pseudonym is derived of, e.g. from a BSN or an eIDAS Uniqueness
Identifier. This field is not necessary in identity based forms as here the identity type will become clear as part of decryption of the final structure, i.e. the Encrypted Identity. The values
currently defined are the ASCII value of 'B' (0x42) for BSN based and 'E' (0x45) when based on a eIDAS uniqueness identifier. ECPoint is identical to ECPoint as defined in BSI TR 03111 and ANSI X9.62
(2005). Here two encodings are specified, compressed and compressed. Both encodings are allowed, with a preference for uncompressed encoding.
A Polymorphic Identity of Polymorphic Pseudonym can be signed for integrity protection:
SignedPolymorphicIdentity ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-polymorphic-identity-signed),
signedPI SEQUENCE {
polymorphicIdentity PolymorphicIdentity,
auditElement OCTET STRING,
signingKeyVersion INTEGER
signatureValue ECDSA-Signature
SignedPolymorphicPseudonym ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-polymorphic-pseudonym-signed),
signedPP SEQUENCE {
polymorphicPseudonym PolymorphicPseudonym,
auditElement OCTET STRING,
signingKeyVersion INTEGER
signatureValue ECDSA-Signature
An auditElement holds an audit value consisting of an identifier for the creator, a timestamp and a sequence number from that creator. This auditElement is 16 bytes (32-bit creator, 32-bit timestamp
and 64-bit sequence-number). The creator identifies the party providing the Polymorphic/Encrypted Identity or Pseudonym and the unique device used. The timestamp and sequence number can be used in
case of a compromise or dispute, so that mitigating measure or resolution can be accomplished.
Note: the timestamp is 32-bit in seconds since 1 jan 1970 UTC. The auditElement is encrypted under a key only retrievable by the supervisor of the scheme, which is provided to the supervisior by the
keymanagement role.
The signatureValue can be used to assert the authenticity of the (polymorphic/encrypted) Identity or Pseudonym. The signature is applied to the byte sequence of the complete DER-encoded signed
sequence (e.g. signedPP in a SignedPolymorphicPseudonym). The public key for verification can be retrieved using the creator from the structure covered under the signature and the signingKeyVersion.
-- ECPoint is described in ANSI X9.62 (2005), annex E.6.
-- In particular, encoding from point to octet string and
-- from octet string to a point is defined in annex A.5.7
-- and A.5.8 of ANSI X9.62.
ECPoint ::= OCTET STRING
ECDSA-Signature ::= SEQUENCE {
signatureType OBJECT IDENTIFIER (ecdsa-with-SHA384),
signatureValue EC-Sig-Value
-- EC-Sig-Value is identitical to BSI TR 03111 ECDSA-Sig-Value.
-- which is identical to ECDSA-Sig-Value defined in RFC5480 as well.
EC-Sig-Value ::= SEQUENCE {
r INTEGER,
s INTEGER
ecdsa-with-SHA384 OBJECT IDENTIFIER ::= {
iso(1) member-body(2) us(840) ansi-X9-62(10045) signatures(4)
ecdsa-with-SHA2(3) 3 }
id-BSNk-scheme-nl OBJECT IDENTIFIER ::= { joint-iso-itu-t(2) country(16) nl(528) nederlandse-organisatie(1) nederlandse-overheid(1003) ..... TODO }
id-BSNk-identifiers OBJECT IDENTIFIER ::= { id-BSNk-scheme-nl 1 }
id-BSNk-polymorphics OBJECT IDENTIFIER ::= { id-BSNk-identifiers 1 }
id-BSNk-polymorphic-identity OBJECT IDENTIFIER ::= { id-BSNk-polymorphics 1 }
id-BSNk-polymorphic-pseudonym OBJECT IDENTIFIER ::= { id-BSNk-polymorphics 2 }
id-BSNk-polymorphic-identity-signed OBJECT IDENTIFIER ::= { id-BSNk-polymorphics 3 }
id-BSNk-polymorphic-pseudonym-signed OBJECT IDENTIFIER ::= { id-BSNk-polymorphics 4 }
PIP – PPCA optimized
For privacy enhanced implementation, Polymorphic Identities and Pseudonyms can be implemented on a smartcard. This is called a PP-card application, or PPCA. A Polymorphic Identity and a Polymorphic
Pseudonym can be combined to 5 points on an elliptic curve rather than six, for optimization in a smartcard implementation. The PPCA-optimized PIP version of Polymorphic Identities or Pseudonyms are
provided in Interface spec BSNk: activate.
The combined notation for an Polymorphic Identity and Pseudonym is as follows:
Polymorphic Identity and Pseudonym (PIP) ASN.1 notation
PIP ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-polymorphic-pip),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
creator IA5String,
recipient IA5String,
recipientKeySetVersion INTEGER,
type INTEGER,
points SEQUENCE (SIZE (5)) OF ECPoint
The first, second and fourth ECPoint in a PIP correspond to those of a PI. Similarly, the first, third and fifth correspond to those of a PP. In this fashion one can extract a PI and PP from a PIP.
There also exists a signed version of a PIP:
SignedPIP::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-polymorphic-pip-signed),
signedPIP SEQUENCE {
pip PIP,
auditElement OCTET STRING,
signingKeyVersion INTEGER
signatureValue ECDSA-Signature
Which follows the same concepts as described for a Polymorphic Identity or Polymorphic Pseudonym.
id-BSNk-polymorphic-pip OBJECT IDENTIFIER ::= { id-BSNk-polymorphics 5 }
id-BSNk-polymorphic-pip-signed OBJECT IDENTIFIER ::= { id-BSNk-polymorphics 6 }
Encrypted Identity or Pseudonym
An Encrypted Identity or Pseudonym consists of 3 points on an elliptic curve. The notation for a complete Encrypted Identity and an Encrypted Pseudonym is as follows:
Encrypted pseudoID ASN.1 notation
EncryptedIdentity ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-encrypted-identity),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
creator IA5String,
recipient IA5String,
recipientKeySetVersion INTEGER,
points SEQUENCE (SIZE (3)) OF ECPoint
EncryptedPseudonym ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-encrypted-pseudonym),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
creator IA5String,
recipient IA5String,
recipientKeySetVersion INTEGER,
diversifier IA5String OPTIONAL,
type INTEGER,
points SEQUENCE (SIZE (3)) OF ECPoint
SignedEncryptedIdentity ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-encrypted-identity-signed),
signedEI SEQUENCE {
encryptedIdentity EncryptedIdentity,
auditElement OCTET STRING
signatureValue EC-Schnorr-Signature
SignedEncryptedPseudonym ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-encrypted-pseudonym-signed),
signedEP SEQUENCE {
encryptedPseudonym EncryptedPseudonym,
auditElement OCTET STRING
signatureValue EC-Schnorr-Signature
DirectEncryptedPseudonym ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-encrypted-direct-pseudonym),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
creator IA5String,
recipient IA5String,
recipientKeySetVersion INTEGER,
type INTEGER,
points SEQUENCE (SIZE (3)) OF ECPoint
SignedDirectEncryptedPseudonym ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-encrypted-direct-pseudonym-signed),
signedDEP SEQUENCE {
directEncryptedPseudonym DirectEncryptedPseudonym,
auditElement OCTET STRING
signatureValue EC-Schnorr-Signature
The fields correspond to the same fields in a Polymorphic Identity or Pseudonym. The recipientKeySetVersion holds the version number for the set of recipient's keys for Identities and Pseudonyms
(PD-Di, PC-Di and PI-Di).
Note: In schemeVersion 1 the recipientKeySetVersion for DVs is a value of 8 decimal digits corresponding with the issue date (notBefore) of the certificate, in the format YYYYMMDD, used to request
the PEM file at the party generating the keys within the scheme.
A DirectEncryptedPseudonym is identitical to an EncryptedPseudonym, although an additonal processing step is needed before decryption. This form is only applicable for reporting from
BSNk_registration to CIF.
EC-Schnorr-Signature ::= SEQUENCE {
signatureType OBJECT IDENTIFIER (ecschnorr-plain-SHA384),
signatureValue EC-Sig-Value
bsi-de OBJECT IDENTIFIER ::= {
itu-t(0) identified-organization(4) etsi(0)
reserved(127) etsi-identified-organization(0) 7
id-ecc OBJECT IDENTIFIER ::= { bsi-de algorithms(1) 1 }
ecschnorr-plain-signatures OBJECT IDENTIFIER ::= { id-ecc signatures(4) 3 }
ecschnorr-plain-SHA384 OBJECT IDENTIFIER ::= { ecschnorr-plain-signatures 3 }
The auditElement is similar to the auditElement of a Polymorphic Identity or Pseudonym. The signature is a Schnorr signature for efficiency.
id-BSNk-encrypted OBJECT IDENTIFIER ::= { id-BSNk-identifiers 2 }
id-BSNk-encrypted-identity OBJECT IDENTIFIER ::= { id-BSNk-encrypted 1 }
id-BSNk-encrypted-pseudonym OBJECT IDENTIFIER ::= { id-BSNk-encrypted 2 }
id-BSNk-encrypted-identity-signed OBJECT IDENTIFIER ::= { id-BSNk-encrypted 3 }
id-BSNk-encrypted-pseudonym-signed OBJECT IDENTIFIER ::= { id-BSNk-encrypted 4 }
id-BSNk-encrypted-direct-pseudonym OBJECT IDENTIFIER ::= { id-BSNk-encrypted 5 }
id-BSNk-encrypted-direct-pseudonym-signed OBJECT IDENTIFIER ::= { id-BSNk-encrypted 6 }
Identity or Pseudonym
Finally, an Encrypted Identity or Pseudonym can be decrypted into a Identity or Pseudonym respectively, consisting of (the X coordinate of) 1 point on an elliptic curve. The Identity or Pseudonym is
not directly used in any of the interfaces, but is the RECOMMENDED representation of a Identity or Pseudonym for a relying party to use after decryption of a Encrypted Identity or Pseudonym.
Decrypted pseudoID ASN.1 notation
Identity ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-decrypted-identifier),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
recipient IA5String,
type INTEGER,
identityValue IA5String
Pseudonym ::= SEQUENCE {
notationIdentifier OBJECT IDENTIFIER (id-BSNk-decrypted-pseudonym),
schemeVersion INTEGER,
schemeKeyVersion INTEGER,
recipient IA5String,
recipientKeySetVersion INTEGER,
diversifier IA5String,
type INTEGER,
pseudonymValue IA5String
In case of an Identity, the identity can be extracted from the X coordinate of the EllipticCurvePoint of the Identity. In schemeVersion 1, the X coordinate, after conversion from a number to a
bytearray, contains an encoded identity padded using OAEP as defined in Section 7.1 of RFC8017 (PKCS#1 v2.2). Here the following parameters are chosen:
• The place of n (RSA modulus) is taken by the order of curve q; length in bytes of q is denoted by k as in PKCS #1, i.e. equal to 40 for the Brainpool320r1 curve used in version 1 of the scheme.
• Hash function is SHA384 truncated to first 10 bytes, i.e. hLen = 10.
• Message length mLen = to k – 2hLen – 2 (PKCS #1 only requires ≤), i.e. equal to 18.
• MGF1 as defined in PKCS #1 is used as Mask Generation Function.
• Optional Label is empty string.
The decoded identity (18 bytes) consists of a prefix of three bytes and the identity (e.g. BSN). The prefix consists of a version, a type and a length of the identifier. All not used bytes are zero.
That is, 15 bytes is the longest size supported for an identifier in version 1.
In case of a Pseudonym, the identifying, persistent pseudonym of a user is the EllipticCurvePoint of the Pseudonym. The RECOMMENDED representation of a Pseudonym used in a DV registration, consists
of the recipientKeySetVersion (decimal string of length 8) of the closing key with the uncompressed EllipticCurvePoint appended. If two such representations are equal the pseudonyms correspond to the
same person. However, we can only deduce that two pseudonyms do not correspond to the same person if the pseudonymValue differ while all other values are equal. We note that the
recipientKeySetVersion of the closing key can be different from the recipientKeySetVersions of the EI and EP decryption key.For each decrypted pseudonym the DV shall archive the additional fields
decrypted from the Encrypted Pseudonym.
id-BSNk-decrypted OBJECT IDENTIFIER ::= { id-BSNk-identifiers 3 }
id-BSNk-decrypted-identifier OBJECT IDENTIFIER ::= { id-BSNk-decrypted 1 }
id-BSNk-decrypted-pseudonym OBJECT IDENTIFIER ::= { id-BSNk-decrypted 2 }
Key formats
Polymorphic pseudonimization uses various keys. These keys have been versioned, see the syntax above.
Keys for relying parties are provided using the notation described in DV-key format.
Several of the scheme-wide keys are public, and can be used to use the polymorphism or verify signatures. These keys are defined in Metadata and under the role PPSteutelSet in RoleDescriptors
non-Participants. For these public keys the brainpool P320r1 curve is used, which is a named curve defined as
-- Brainpool curves and the TeleTrust namespace are defined in BSI TR-03111
ecStdCurvesAndGeneration OBJECT IDENTIFIER ::= {
iso(1) identified-organization(3) teletrust(36) algorithm(3)
signature-algorithm(3) ecSign(2) ecStdCurvesAndGeneration(8)
ellipticCurve OBJECT IDENTIFIER ::= { ecStdCurvesAndGeneration 1 }
versionOne OBJECT IDENTIFIER ::= { ellipticCurve 1 }
brainpoolP320r1 OBJECT IDENTIFIER ::= { versionOne 9 } | {"url":"https://afsprakenstelsel.etoegang.nl/Startpagina/v3/handreiking-polymorphic-pseudonimization-notation","timestamp":"2024-11-12T23:46:00Z","content_type":"text/html","content_length":"54189","record_id":"<urn:uuid:5dfd26ba-d2f3-4fb6-ae95-f84d483e51ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00388.warc.gz"} |
Logic Seminar
OSU Math Logic Seminar
The contact person for the seminar and mailing list is Chris Miller.
The seminar receives some financial support from the Mathematics Research Institute.
The seminar is nominally on Tuesdays, 13:50--14:45, in Dulles Hall 024 (behind the Math Tower, in the basement).
Oct 22
Chris Miller (OSU). A tameness result for expansions of (R,<,+,Z)
Abstract. I will present a tameness result for the expansion of (R,<,+,Z) by all bounded sets of reals (of any arity) whose closures are countable and have finite Cantor-Bendixson rank. Loosely, the
definable sets are as well behaved as one could reasonably expect. (This is a special case of some ongoing work with Masato Fujita.)
Oct 8
Leo Jimenez (OSU). Internality of autonomous differential equations
Abstract. When solving a differential equation, one sometimes finds that solutions can be expressed using a finite number of fixed, particular solutions, and some complex numbers. As an example, the
set of solutions of a linear differential equation is a finite-dimensional complex vector space. A model-theoretic incarnation of this phenomenon is internality to the constants in a differentially
closed field of characteristic zero. In this talk, I will define what this means, and discuss some recent progress, joint with Christine Eagles, on finding concrete methods to determine whether or
not the solution set of a differential equation is internal. A corollary of our method also gives a criteria for solutions to be Liouvillian: I will show a concrete application to Lotka-Volterra
Oct 1
Francis Wagner (OSU). Malnormal subgroups of finitely presented groups
Introduced by Sapir in the late 1990s, the `S-machine' is a computational model which resembles a multi-tape, non-deterministic Turing machine. This model was carefully conceived in order to be both
computationally robust and interpretable as a multiple HNN extension of a free group. As such, S-machines have proved to be a remarkable tool in the study of groups. I will discuss a generalization
of the S-machine which yields the following refinement of Higman's embedding theorem: Every finitely generated recursively presented group may be quasi-isometrically embedded as a malnormal subgroup
of a finitely presented group; moreover, the decidability of the Word problem is preserved by this embedding.
Sept 24
Michael Bersudsky (OSU). Equidistribution of polynomially bounded o-minimal curves in homogenous spaces
Abstract. Given a real algebraic group G and an o-minimal structure on the real field, one can naturally define subsets of G that are definable in this structure. Peterzil and Starchenko recently
showed that when G is the group of upper-triangular matrices and L is a lattice in G, the closure of the image in G/L of a definable set in G is the closed image of a potentially larger definable set
in G. Moreover, they showed that if a curve in G is definable in a polynomially bounded o-minimal structure and its image in G/L is dense, then the curve is uniformly distributed in G/L. In this
talk, I will present recent joint work with Nimish Shah and Hao Xing, where we extend these results for curves definable in a polynomially bounded o-minimal structure in a general real algebraic
group G and a general lattice L in G, under a 'non-contraction' condition on the curves. This work builds upon Shah's earlier technique for polynomial curves in homogeneous spaces, ‘tangency at
infinity’ property of o-minimal curves shown by Peterzil and Steinhorn, and Ratner's groundbreaking theorems. A key innovation in our analysis is a proof of a certain growth property for families of
polynomially bounded o-minimal functions.
Sept 17
Chris Miller (OSU). A brief introduction to polynomially bounded o-minimality.
Abstract. I will present some basic results about polynomially bounded o-minimal expansions of the real field that might be needed for some subsequent seminars. | {"url":"https://research.math.osu.edu/logicseminar/","timestamp":"2024-11-08T18:30:22Z","content_type":"text/html","content_length":"5936","record_id":"<urn:uuid:cd3f57c0-534e-4aea-8f0e-504bb9cc2efc>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00043.warc.gz"} |
Torsional vibration analysis of crank train with low friction losses
High level of mechanical efficiency is exacted from internal-combustion engines. The reduction of friction losses of crankshaft main bearings can significantly contribute to the enhancement of this
efficiency. For this purpose, an innovative design of a crankshaft is developed. The potential of computational modelling during the development of this innovative crank train is described in the
article. The dynamics of the whole crank train is solved by using a multi-body system software, where flexible finite-element bodies along with hydrodynamic bearings are incorporated. Regarding the
simulation results, attention is paid to the torsional vibration and its analysis, including concept design of a torsional damper, because a reduction of friction losses is associated with the
improvement of torsional vibration in this case.
1. Introduction
Torsional vibration is common to internal-combustion engine crankshafts. The crank train generates alternating torque due to alternating combustion pressure in conjunction with the alternating effect
of reciprocating parts inertia. This torque brings the elastic crankshaft in vibration about the axis of rotation. The torsional vibration can cause cracking and crankshaft failure; and therefore,
this crankshaft loading is very dangerous.
A lot of effort has been made to simulate and measure the torsional vibration of the power train with the use of internal-combustion engines. Simulations focused only on this type of loading are
often based on the so-called physical model of linear torsional vibration system, where all the rotating and the reciprocating parts are reduced to the crankshaft rotation axis, under the condition
of potential energy and mean value of kinetic energy of the real system and also the physical model [1, 2].
A similar computational model can effectively be used also for the dynamics solution of the whole vehicle power train [3, 4].
The impact of a torsional damper, which is able to reduce the torsional vibration, can also be investigated by this physical model of torsional system utilizing discs, elastic and damping effects [1,
5, 6], however it is possible to research the influence of a damper on the crank train by finite element analysis [7].
More complex computational models of internal-combustion engine dynamics, based on the Multi-Body System principles and containing flexible bodies, allow extensive analyses of engine vibrations,
including the torsional vibration of the crank train [8].
Since the computational model should be verified by an experiment, rotational laser vibrometers are used for the measurement of the torsional vibration of some appropriate parts. The principles of
the measurement are described in [9].
In this paper, torsional vibration is investigated in the case of innovative crank train of a 1.6 litre naturally aspirated spark-ignition engine. The engine has four cylinders in an in-line
configuration and reduced friction losses. The reduction of crank train friction losses is achieved by a reduced number of crankshaft main bearings from 5 to 3.
The new crankshaft with 3 main pins is derived from the standard one with 5 main pins mounted into the production engine. The missing main pins and adjacent crank webs are substituted with welded
sheet-metal webs, which results in mass and inertia moments reduction, see Fig. 1, where pipes between crank pins enable a central feeding of crank bearings with lubricating oil.
Fig. 1An exploded view of the 3-main-bearing crankshaft
The sheet-metal web assemblies are laser welded with the flanges of crank pins, because laser welding brings low thermal loading of the weld, which was also experimentally validated during this
project. Current design of the 3-main-bearing crankshaft results from previous design, computational, and technology studies [10, 11].
These studies show that potential for savings of power losses of crankshaft main bearing reaches around 33 % for whole engine speed range and different load in comparison with the standard
5-main-bearing crankshaft. Simulations of crankshaft friction losses are validated also by measurement of motored standard engine with deactivated main bearing number 2 and 4 under miscellaneous
lubricating oil conditions but under the same conditions for both variants.
Another advantage of the laser welded 3-main-bearing crankshaft is the lower mass – a reduction of slightly more than 12 % – even though this crankshaft is made of steel, while the standard one is a
ductile iron casting.
However not only friction losses and mass are affected by the new crankshaft design, but also changes in system dynamic response must be taken into account. Therefore the state-of-the-art
computational methods are employed in order to investigate behavior of the crank train under dynamic conditions.
2. Simulations of crank train dynamics
For the solution of engine dynamics a complex computational model is used in the present study. The model is also known as a virtual engine and it is solved in time domain. This enables to involve
various physical problems including different non-linear features.
The described computational model is assembled and solved in Multi-Body System (MBS) environment based on ADAMS programming language. During the assembly process of the model either ADAMS commands
can be directly utilized or user-written FORTRAN or C++ subroutines can be incorporated [8, 11, 12].
Since the virtual engine allows to solve the dynamics of the entire engine, similarly to real engines it includes all of its sub-systems. The included sub-system can be a crank train, a valve train,
a timing drive, an oil pump, an auxiliary drive, a rubber damper and others. In order to analyze torsional vibration of the crank train, only the crank train module is utilized.
This module, Fig. 2, is composed of rigid bodies, flexible bodies and interconnections between them [13, 14].
As rigid are modeled those bodies whose deformation is not relevant for this type of simulation, nevertheless their inertia effects are considerable, therefore are defined by their location of center
of gravity, mass, and inertia tensor. These bodies are:
• Piston group,
• Connecting rod assembly,
• Dynamometer rotor.
Fig. 2The MBS computational model – the crank train as the main module of the virtual engine
The Craig-Bampton method is used for reducing the size of finite element models of these engine parts:
• Crankshaft,
• Flywheel,
• Crankshaft pulley,
• Crank train sump,
• Cylinder head,
• Engine block,
• Gear case.
Two rigid discs joint via a force element with torsional stiffness and damping represent a dumb-bell shaft connecting the flywheel with the dynamometer rotor (not shown in Fig. 2). The stiffness and
damping values are adjusted on the basis of measurement of the whole crank train speed oscillation.
A non-linear hydrodynamic journal bearing model according to [15] is used for the interaction model between the crankshaft and the engine block and between the crankshaft and the connecting rod. The
model contains the influence of oil grooves, while oil viscosity class SAE 5W-30 is considered during the simulation.
A general kinematic constraint is used for body which effect is important for dynamics of the system; however, which is not the subject matter of the simulation, for instance a constraint between a
piston and a cylinder liner.
In order to excite the virtual engine, in-cylinder high-pressure measurement of the standard engine is carried out and results, after statistical processing according to [16], are used during the
simulation. Simulation of the engine dynamics covers the whole operating speed range, thus from 1000 rpm up to 6200 rpm.
3. Torsional vibration
A periodic signal of a quantity in time domain can be processed by the harmonic analysis in complex domain according to:
${Q}_{c}=\frac{2}{n}\sum _{j=0}^{n-1}{Q}_{j}{e}^{i\left(k2\pi \frac{j}{n}\right)},$
where ${Q}_{c}$ is a complex number associating amplitude and phase angle of the $k$th harmonic order of quantity ${Q}_{j}$ defined in discrete time step $j$ for $n$ steps [17, 18].
Results of the signal processing obtained by this method at four-stroke engines contain also half-order harmonics (4.5th, 5.5th et cetera). This is caused by the fact that whole working cycle of a
four-stroke internal-combustion engine takes two crankshaft revolutions (720°) while the fundamental frequency for engine dynamics is crankshaft rotational frequency (360°) [11]. Thus:
where $\kappa$ is harmonic order regarding engine speed. This is illustrated by harmonic analysis in time domain in Fig. 3, showing the amplitude, the phase angle, and the multiple of fundamental
frequency for first few harmonic components of a cylinder unit torque including effects of gas and inertia forces.
Fig. 3Harmonic analysis of torque of a cylinder unit in time domain
The crankshaft pulley angular displacement is chosen for description and comparison of the crank train torsional vibration due to its complexity, the crank train speed oscillation and crankshaft
torsional static and alternating deformation are therefore included into this quantity.
The computational model of the crank train dynamics is verified by means of experiment on an engine with the standard crankshaft. The crankshaft pulley angular displacement is measured by rotational
laser vibrometer POLYTEC 4000 Series, and the comparison of the experiment and the simulation results is shown in Fig. 4.
Fig. 4Validation of the MBS model – harmonic analysis of the angular displacement of the standard crankshaft’s pulley
Harmonic analysis of a crankshaft pulley angular displacement of the standard crank train obtained as a result of the simulation is shown in Fig. 5.
The 6th harmonic component takes a big share on crank train torsional vibration during its resonance reaching at 5250 rpm. The 8th harmonic component achieves its resonance at 3900 rpm, however the
resonance amplitude is smaller compared to the 6th one. The 10th harmonic component shows some resonance too, but its impact on the total shaft oscillation is considerably smaller. The mentioned
harmonic components bring crankshaft on alternating torsional deformation while lower harmonic components, reflecting resonances at lower engine speed (the 2nd one or the 4th one), show whole crank
train speed oscillation – torsional vibration knot point lies between the flywheel and the dynamometer rotor.
The results of crank train dynamics simulation show that the so-called major orders take the highest part in crank train torsional vibration due to their identical phase angle at all cylinder units
of the engine. For the studied in-line four-cylinder engine, these orders are integer multiples of 2. The synthesis of all harmonic orders corresponds to half of the peak-to-peak value from the
periodic torsional oscillation.
The same dynamics simulation is carried out for crank train with 3-main-bearing crankshaft and results of harmonic analysis of the pulley’s angular displacement are presented in the Fig. 6.
Significant changes in crankshaft design decrease the torsional natural frequency of the system.
Fig. 5Harmonic analysis of a pulley angular displacement of the standard crankshaft
Fig. 6Harmonic analysis of a pulley angular displacement of the new crankshaft
Lower natural torsional frequency can be noticed, for example, at resonance speed of the 8th harmonic component which is reached at 3600 rpm (minus 300 rpm as compared to the standard crank train).
This is caused mainly by enlarged counterweighs of the new crankshaft significantly reducing load of the middle main bearing however also decreasing torsional natural frequency of the crank train.
Nevertheless, the small number of main bearings, where the crankshaft is supported by the engine block, brings further deviations in the crank train dynamics behavior. Amplitudes of the harmonic
component resonances are significantly increased in the case of major harmonic orders and also other harmonic orders.
Crankshaft material has inherent damping determined by the experimental modal analysis for the need of simulation. However, as the simulation results show, “the sharpness” of resonance curves is
caused, in particular, by the lower external damping of crank train torsional vibration by means of the crankshaft main bearings.
Shown torsional vibration of the new crankshaft, Fig. 6, cannot be admitted owing to likely crankshaft failure, which can be proved by only static structural analysis, if boundary conditions
corresponding to the dynamic simulation are considered. Hence the next step for its reduction must be done.
4. Concept design of a torsional vibration damper
Engines of class as presented used to be equipped by a tuned rubber torsional vibration damper featured by good effect and low-costs [20]. The damper consists of a rubber band interposed between an
inertia ring and a crankshaft pulley and its function, in sum, rests in retuning of the torsional system and putting more damping into it.
4.1. Parameters of a torsional vibration damper
For dimensioning of the damper, the equivalent moment of inertia of crank train must be determined. If the crank train is reduced into discrete discs jointed by immaterial elastic shaft, as shown in
Fig. 6, the crank train equivalent moment of inertia can be calculated as:
${I}_{ef}=\sum _{j=1}^{m}{I}_{j}{a}_{j}^{2},$
where ${a}_{j}$ is the relative twist for appropriate natural frequency of the $j$th disc having ${I}_{j}$ moment of inertia and the meaning of variables is further evident from Fig. 7.
Fig. 7The crank train discretization into individual discs and elastic line corresponding to the first natural frequency
In the case of the innovative crank train, the stiffness of elastic shaft sections, ${c}_{j}$, can be obtained by the finite element method if suitable boundary conditions are included [18].
Then relative size of the damper can be suggested as:
$\mu =\frac{{I}_{d}}{{I}_{ef}},$
where ${I}_{d}$ is moment of inertia of the damper inertia ring.
Fig. 8A scheme of reduced crank train with a torsional damper
The damper relative tuning can be defined as:
$w=\frac{{\mathrm{\Omega }}_{d}}{\mathrm{\Omega }}=\frac{\sqrt{\frac{{c}_{d}}{{I}_{d}}}}{\sqrt{\frac{c}{{I}_{ef}}}},$
where $\mathrm{\Omega }$ is torsional natural frequency of the crank train, calculated via model described in Fig. 7 and verified by the virtual engine, $c$ is torsional stiffness according to Fig.
8, ${\mathrm{\Omega }}_{d}$ is torsional natural frequency of the damper and ${c}_{d}$ is torsional stiffness of the damper rubber band.
Relative damping is described by the ratio:
$\gamma =\frac{{b}_{d}}{2{I}_{d}\mathrm{\Omega }},$
where ${b}_{d}$ is rubber band viscous damping, limited by rubber material properties.
For initial parametric studies of damper should be defined also relative amplitude:
$\xi =\frac{\phi }{{\phi }_{stat}},$
where $\phi$ means dynamic amplitude of crankshaft pulley and ${\phi }_{stat}$ is a static amplitude, and relative frequency:
$\eta =\frac{\omega }{\mathrm{\Omega }}.$
In which $\omega$ means excitation frequency.
According to Fig. 8, which describes torsional system including the reduced crank train and parallel model of torsional damper and with the use of parameters defined by Eqs. (4)-(8), the equation for
relationship between relative amplitude and damper parameters can be derived:
$\xi =\sqrt{\frac{4{\gamma }^{2}{\eta }^{2}+{\left({\eta }^{2}-{w}^{2}\right)}^{2}}{4{\gamma }^{2}{\eta }^{2}{\left(1-{\eta }^{2}-\mu {\eta }^{2}\right)}^{2}+{\left[\mu {w}^{2}{\eta }^{2}-\left({\eta
}^{2}-1\right)\left({\eta }^{2}-{w}^{2}\right)\right]}^{2}}}.$
This equation allows to investigate the influence of damper relative parameters on the resonance curves of harmonic components of the crank train torsional vibration and is very useful for
time-effective parametric studies.
Resonance curves, presented in Fig. 9, show the influence of damper relative size on relative amplitude for given relative tuning and the same damping ratio. The range of damper parameters, in
practice, is limited by the build-up area, and material properties of a rubber band, especially the damping properties.
Fig. 9Comparison of the torsional damper effect on resonance curve
4.2. Effect of a torsional vibration damper
The effect of the torsional vibration damper is verified by the virtual engine as well. Since real rubbers show hysteretic damping properties rather than viscous, Wiechert rheological model of a
rubber band is used [21, 22].
The influence of the torsional vibration damper upon a crankshaft pulley torsional vibration is shown in Fig. 10 where the notes used mean:
• Without TD: the new crank train without the torsional damper,
• Standard TD: the new crank train with torsional damper intended for the standard crank train,
• Standard TD with softer rubber: the new crank train with torsional damper intended for the standard crank train but with lower torsional stiffness of the rubber band,
• New torsional damper: the new crank train equipped with torsional damper with optimized parameters,
• Standard without TD: the standard crank train without a torsional damper.
Since the standard crank train can operate reliably without the use of a torsional damper (verified by a measurement), it is evident that the new crank train equipped by the optimized torsional
damper should operate as well.
Because manufacturing of torsional vibration dampers is characterised by rubber properties variance, limits of rubber band stiffness are suggested and verified. A half peak-to-peak value of relative
angular displacement of a crankshaft pulley for upper, lower, and nominal rubber band stiffness is shown in Fig. 11.
Fig. 10Comparison of the torsional damper effect on a crankshaft pulley angular displacement (½ peak-to-peak value)
Fig. 11The influence of rubber band stiffness limits upon a crankshaft pulley angular displacement (½ peak-to-peak value)
5. Conclusions
Modern internal-combustion engines have to meet strict requirements on fuel efficiency which can be reached, among others, by low friction losses of a crank train. However, lowering of friction
losses is often attached to the decline in system dynamic behavior. The described innovative crank train represents an extreme case of this behavior. Nevertheless, modern computational methods can
bring solutions of the mentioned challenges and can shift manufacturing of expensive prototypes to later phase of the development process.
Presented technique for analysis of a crank train torsional vibration and concept design of a torsional damper is universally applicable for general crank train of a two-stroke and a four-stroke
Further steps of this project will be: a detail design of a torsional damper focused on temperature endurance of the rubber band, an analysis of crankshaft fatigue life, and a measurement of the real
crank train prototype.
• Filipović I., Bibić D., Blažević A., Milašinović A., Pecar A. Preliminary selection of basic parameters of different torsional vibration dampers intended for use in medium speed diesel engines.
Transactions of Famena, Vol. 36, Issue 3, 2012, p. 79-88.
• Östman F., Toivonen H. T. Model-based torsional vibration control of internal combustion engines. IET Control Theory and Applications, Vol. 2, Issue 11, 2008, p. 1024-1032.
• Shengping F., Shengbo L., Ning L., Mishchenko E. Dynamic optimization of tracked vehicle power train based on torsional vibration analysis. Advances in Mechanical Engineering, Vol. 8, Issue 5,
2016, p. 1-12.
• Kučera P., Píštěk V. Longitudinal and lateral dynamics of a commercial vehicle in Simulink software. Proceedings of the International Conference on Transport Means, 2015, p. 458-461.
• Mendes A. S., Meirelles P. S., Zampieri D. E. Analysis of torsional vibration in internal combustion engines: Modelling and experimental validation. Proceedings of the Institution of Mechanical
Engineers, Part K, Journal of Multi-body Dynamics, Vol. 222, Issue 2, 2008, p. 155-178.
• Matyja T., Lazarz B. Selection of torsional vibration damper based on the results of simulation. Journal of Vibroengineering, Vol. 17, Issue 8, 2015, p. 4069-4077.
• Wu F. T., Cheng C. C. Design and analysis of a speed-dependent torsional vibration absorber. Proceedings of the Institution of Mechanical Engineers, Part D, Journal of Automobile Engineering,
Vol. 220, Issue 6, 2006, p. 763-774.
• Novotný P., Píštěk V. New efficient methods for powertrain vibration analysis. Proceedings of the Institution of Mechanical Engineers, Part D, Journal of Automobile Engineering, Vol. 224, Issue
5, 2010, p. 611-629.
• Xiang L., Yang S., Gan C. Torsional vibration measurements on rotating shaft system using laser doppler vibrometer. Optics and Lasers in Engineering, Vol. 50, Issue 11, 2012, p. 1596-1601.
• Drápal L., Novotný P., Maršálek O., Raffai P., Píštěk V. A conceptual study of cranktrain with low friction losses. Journal of Middle European Construction and Design of Cars, Vol. 11, Issue 2,
2013, p. 6-11.
• Drápal L., Novotný P., Píštěk V. Dynamic Simulation of Progressive Crank Train. Advances in Intelligent Systems and Computing Advanced Mechatronics Solution. Springer, Basel, 2016, p. 207-215.
• Novotný P. Virtual Engine – A Tool for Powertrain Development. Inaugural Dissertation, Brno University of Technology, Czech Republic, 2009.
• Rebbert M. Simulation der Kurbewellendynamik unter Berücksichtigung der hydrodynamischen Lagerung zur Lösung motorakusticher Fragen. Ph.D. Dissertation, Rheinisch-Westfälischen Technischen
Hochschule, Aachen, Germany, 2000, p. 110.
• Ortjohann T., Rebbert M., Masssen F., Robers M. 3D-durability analysis of crankshaft via coupled dynamic simulation including modal reduction. SAE Technical Paper, 2006.
• Butenschön H. J. Das Hydrodynamische, Zylindrische Gleitlager Endlicher Breite Unter Instationärer Belastung. Ph.D. Dissertation, Universität Karlsruhe, Germany, p. 1976, p. 219.
• Heywood J. B. Internal Combustion Engine Fundamentals. 1st Edition. McGraw-Hill, New York, 1988, p. 930.
• Nestorides E. J. A Handbook on Torsional Vibration. Cambridge University Press, Cambridge, 1958, p. 664.
• Hafner K. E., Maass H. Torsionsschingungen in der Verbrennungs-kraftmaschine. Springer, Vienna, New York, 1985, p. 434.
• Craig R. R., Kurdila A. J. Fundamentals of Structural Dynamics. John Willey and Sons, New Jersey, 2006, p. 728.
• Heisler H. Advanced Engine Technology. 1st Edition, Arnold, Oxford Great Britain, 2002, p. 794.
• ADAMS/Engine Help. Version MD Adams R3. MSC. SOFTWARE, MSC Software Corporation, Newport Beach, CA, 2008.
• Brinson H. F., Brinson L. C. Polymer Engineering Science and Viscoelasticity: An Introduction. Second Edition, Springer, New York, 2015, p. 482.
About this article
Mechanical vibrations and applications
torsional vibration
torsional damper
crank train
multi-body system
The research leading to these results has received funding from the MEYS under National Sustainability Programme I (Project LO1202).
Copyright © 2017 JVE International Ltd.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/17876","timestamp":"2024-11-11T22:25:42Z","content_type":"text/html","content_length":"150429","record_id":"<urn:uuid:db33873a-d6ae-4d0a-961e-ff92bb30f461>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00459.warc.gz"} |
Conditional Restricted Boltzmann Machines for Structured Output Prediction
Conditional Restricted Boltzmann Machines for Structured Output Prediction. Mnih, V., Larochelle, H., & Hinton, G. E. CoRR, 2012. Link Pdf bibtex
@Article{ dblp1597062,
title = {Conditional Restricted Boltzmann Machines for Structured
Output Prediction},
author = {Volodymyr Mnih and Hugo Larochelle and Geoffrey E.
author_short = {Mnih, V. and Larochelle, H. and Hinton, G. E.},
bibtype = {article},
type = {article},
year = {2012},
key = {dblp1597062},
id = {dblp1597062},
biburl = {http://www.dblp.org/rec/bibtex/journals/corr/abs-1202-3748}
url_link = {http://arxiv.org/abs/1202.3748},
journal = {CoRR},
volume = {abs/1202.3748},
text = {CoRR abs/1202.3748 (2012)},
url_pdf = {absps/uai_crbms.pdf}
Downloads: 0
["Mnih, V.","Larochelle, H.","Hinton, G. E."],"bibbaseid":"mnih-larochelle-hinton-conditionalrestrictedboltzmannmachinesforstructuredoutputprediction-2012","bibdata":
{"bibtype":"article","type":"article","title":"Conditional Restricted Boltzmann Machines for Structured Output Prediction","author":[{"firstnames":["Volodymyr"],"propositions":[],"lastnames":
[]}],"author_short":["Mnih, V.","Larochelle, H.","Hinton, G. E."],"year":"2012","key":"dblp1597062","id":"dblp1597062","biburl":"http://www.dblp.org/rec/bibtex/journals/corr/
abs-1202-3748","url_link":"http://arxiv.org/abs/1202.3748","journal":"CoRR","volume":"abs/1202.3748","text":"CoRR abs/1202.3748 (2012)","url_pdf":"absps/uai_crbms.pdf","bibtex":"@Article{\t
dblp1597062,\n title\t\t= {Conditional Restricted Boltzmann Machines for Structured\n\t\t Output Prediction},\n author\t= {Volodymyr Mnih and Hugo Larochelle and Geoffrey E.\n\t\t Hinton},\n
author_short\t= {Mnih, V. and Larochelle, H. and Hinton, G. E.},\n bibtype\t= {article},\n type\t\t= {article},\n year\t\t= {2012},\n key\t\t= {dblp1597062},\n id\t\t= {dblp1597062},\n biburl\t=
{http://www.dblp.org/rec/bibtex/journals/corr/abs-1202-3748}\n\t\t ,\n url_link\t= {http://arxiv.org/abs/1202.3748},\n journal\t= {CoRR},\n volume\t= {abs/1202.3748},\n text\t\t= {CoRR abs/1202.3748
(2012)},\n url_pdf\t= {absps/uai_crbms.pdf}\n}\n\n","bibbaseid":"mnih-larochelle-hinton-conditionalrestrictedboltzmannmachinesforstructuredoutputprediction-2012","role":"author","urls":{"
link":"http://arxiv.org/abs/1202.3748"," pdf":"www.cs.toronto.edu/~fritz/absps/uai_crbms.pdf"},"metadata":{"authorlinks":{"hinton, g":"https://bibbase.org/show?bib=www.cs.toronto.edu/~fritz/
["conditional","restricted","boltzmann","machines","structured","output","prediction","mnih","larochelle","hinton"],"title":"Conditional Restricted Boltzmann Machines for Structured Output | {"url":"https://bibbase.org/network/publication/mnih-larochelle-hinton-conditionalrestrictedboltzmannmachinesforstructuredoutputprediction-2012","timestamp":"2024-11-12T20:30:53Z","content_type":"text/html","content_length":"15216","record_id":"<urn:uuid:fc1957ba-cdfc-4c14-8b67-f4b9adc07e18>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00219.warc.gz"} |
Tips for the Equation Editor in Instructure Canvas
I’ve been banging on Instructure Canvas for a week now, and I have to say that I’m incredibly impressed in general. Â I don’t think I’ve ever said anything nice about a Learning Management System
(LMS), but I have openly confessed my love for Canvas several times in the last week [more on that to come]. Â Last week was all about learning to use canvas with my TALDAÂ guinea pigs. Â This week
is all about figuring out how to do MATH in Canvas.
NOTE: The Canvas discussion boards include avatars of students plus their names. I’ve blurred those out to protect the innocent (instructors pretending to be students).
The equation editor built in to the WYSIWYG editor in canvas is actually quite good. Â I’ve had math instructors trying to break it all day, and we haven’t taken it down yet. Â However, I did find
that a cheat sheet for the Canvas Equation Editor would be helpful. Â So I’ve made one. Â You can find it under Resources / Handouts on this site.
You can add equations in announcements, on pages, in modules, and in discussion boards. Â So can students. Â By the way, you can also (supposedly) paste a LaTeX equation into the editor directly,
but I have not verified that one myself. Â Typing LaTeX commands right into the editor does not seem to work so well.
Hopefully we’ll figure out easy ways to add graphs using the WYSIWYG editor tomorrow. Â Although you can embed on the instructor side, the students are unable to get to the HTML code to embed images,
so we have to find a workaround for that. Â Thirty smart math instructors in a room should be able to figure this one out!
About Author | {"url":"https://edgeoflearning.com/tips-for-the-equation-editor-in-instructure-canvas/","timestamp":"2024-11-04T21:57:16Z","content_type":"text/html","content_length":"193679","record_id":"<urn:uuid:95b2cda8-5405-4abd-ba09-bce26c0e8960>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00766.warc.gz"} |
Unscramble GAG
How Many Words are in GAG Unscramble?
By unscrambling letters gag, our Word Unscrambler aka Scrabble Word Finder easily found 2 playable words in virtually every word scramble game!
Letter / Tile Values for GAG
Below are the values for each of the letters/tiles in Scrabble. The letters in gag combine for a total of 5 points (not including bonus squares)
What do the Letters gag Unscrambled Mean?
The unscrambled words with the most letters from GAG word or letters are below along with the definitions.
• gag (v. t.) - To stop the mouth of, by thrusting sometimes in, so as to hinder speaking; hence, to silence by authority or by violence; not to allow freedom of speech to. | {"url":"https://www.scrabblewordfind.com/unscramble-gag","timestamp":"2024-11-11T04:58:55Z","content_type":"text/html","content_length":"34793","record_id":"<urn:uuid:ed915dbe-40ba-475d-866a-a25dadb63ecf>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00156.warc.gz"} |
Online College Exams - Hire Someone To Take My Exam | Pay Me To DO Your Online Examination
are reading this, you are probably either a student who is interested in how to take the time out to learn about Differential Equations Class and want to know more about this wonderful and
interesting class or you are simply interested in learning how to get paid to do an online college exam. Differential Equation Class is really easy to understand because it does not have to be
complicated at all. 1 day to think about taking in considering thinking about working together with others and also thinking about how each student will understand the concept of the differentials.
You can get paid to do university exam if you are willing to do the research and study on your own. Many schools offer different courses and programs to their students that help them improve their
understanding of differentials. It does not necessarily mean that the student will need to go to a specific college or university in order to take the course, but he or she will need to go to a
school that offers a Differential Equation Class. This is actually a class that can be taken online or in the traditional classroom setting.
When you do Differentials Class at a school, you will not only be able to understand differentials as they appear on the graphs, charts, or other forms. You will also be able to understand what these
differentials mean. In order to understand the differentials better, you will be able to make a chart and graph about how many differentials are required for the equation to work. This is the basic
understanding of the Differentials Equation Class. Once the students can have a better understanding of what Differentials Equation Class is about, they can then go out and apply the information they
learned about the Differentials Equation Class to an actual job and to try to get paid to do an online college exam.
However, it should be noted that although there are different levels of understanding that one needs to be able to pass a college exam. If the student cannot understand the differentials on paper,
then it will be hard to apply the information they learned about the Differentials Equation Class to the actual exam question. Therefore, the student needs to know how to apply what they learned in a
certain class to the actual exam questions.
Differentials Equation Class is not a very difficult class to understand once you understand the basics of it. The difference between Differentials Equation Class and algebra class is the fact that
you will learn the concept of differentials using the graphing formula, which can be found on the differentials tab and then you can use this same tab to explain why this works and also how to make a
chart or graph so that the student can learn to make a chart or graph on their own to relate these differentials to an actual equation.
Differentials Equation Class also gives a good overview on what a differential equation is. It explains why it is important to understand differentials in terms of what causes them and what you have
to do to make the problem work. Differentials Equation Class also gives students the tools needed to be able to make a chart or graph on their own and to understand how to apply the knowledge they
have learned from class and the Differentials Equation Class to the actual exam questions on their exam.
In order to get paid to do an online college exam, a student will not have to do any more than what is necessary to make a chart or graph of a graph and explain what is happening. It is up to the
student to do the legwork involved with the job. This can take a lot of time, but the end result can be a good pay check.
There are many other ways that a student can work to make money doing a job. They can work for an employer by doing clerical work and help out around the office. This type of work will be less time
consuming and may not require any special knowledge in order for the student to be successful at doing a job as an employee.
Online College Exams | {"url":"https://hireforexamination.com/online-college-exams","timestamp":"2024-11-02T04:41:36Z","content_type":"text/html","content_length":"86283","record_id":"<urn:uuid:c025aced-6f32-44d7-89c6-e07a4cdefc1d>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00728.warc.gz"} |
Chennai Mathematical Institute
Understanding Relationships Between Quantum and Classical Complexity Classes: Separations, Collapses, and Closures
Rahul Tripathi
University of Rochester.
Over the past decade, quantum computing has emerged as a major contender to classical computing with far-reaching implications and challenges for the real world. Efficient quantum algorithms for
problems such as the discrete logarithm and factoring have led researchers to rethink the foundations of classical cryptographic schemes.
Quantum complexity theory is the study of the computational power and the limitations of quantum computing. A major theme in quantum complexity theory is to understand the relationships between
quantum and classical complexity classes. In this talk, I will illuminate this relationship by means of separations, collapses, and closure results involving quantum and classical counting classes.
The most central quantum complexity classes EQP, BQP, and NQP (the quantum analogs of P, BPP, and NP), are known to be related to classical counting complexity classes. We prove that standard
(relativizable) proof techniques cannot improve the best-known classical bounds for EQP and BQP significantly. For relationships between certain quantum and counting classes, we prove stronger
results: No standard (relativizable) proof technique can show that every infinite set in one class has a nontrivial approximation in another class (even under the nondemanding approximation notion of
merely having an infinite subset belonging to the other class). These results show the limitations of a broad class of proof techniques in resolving questions on relationship between quantum and
classical complexity classes. We also obtain interesting consequences, in terms of the complexity of the polynomial hierarchy, of hypotheses involving the quantum complexity classes EQP, BQP, and
Though we will touch on this just briefly in the talk, our analysis leads to the resolution of important questions in classical complexity theory as well. The foremost among these is that, resolving
a question open since the seminal 1994 counting class paper of Fenner, Fortnow, and Kurtz, we prove that the counting classes WPP and LWPP are not uniformly gap-definable.
This talk is based on joint work with Holger Spakowski and Mayur Thakur. | {"url":"https://www.cmi.ac.in/activities/show-abstract.php?absyear=2004&absref=27&abstype=sem","timestamp":"2024-11-11T23:38:37Z","content_type":"text/html","content_length":"8807","record_id":"<urn:uuid:d371ce9d-7eb2-40fa-9347-405dc6d049b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00364.warc.gz"} |
Oliver Labs: Cubic Surface Models and their Historical and Mathematical Background
In the 19^th century, cubic surfaces - defined by an implicit equation of degree three in three variables - were among the first interesting examples in the development of modern algebraic geometry.
A well-known result by Arthur Cayley and George Salmon is that any smooth cubic contains exactly 27 straight lines. Other prominent facts are the classification of all cubic surfaces w.r.t. their
singularities by Ludwig Schläfli, and Alfred Clebsch's birational map between the plane and such surfaces where six points play an essential role.
The talk will present both the historical and the mathematical background of classical hand-crafted and also recent 3d-printed cubic surface models. Some of their fascinating features such as the
movement of the straight lines as the surfaces vary may very well be visualized using interactive software. In 2011 and 2014, the speaker created two versions of a complete series of more than 45
types of 3d-printed cubic surface models. Copies of these are now part of several university collections such as those at Lisbon, Strasbourg, Dresden, and Mainz, as well as at the IHP at Paris. He
will bring some examples of these sculptures with him in order to illustrate facts which may better be appreciated when seeing and touching a real object.
Time & Location
Jan 31, 2019 | 05:15 PM
HS 001/Arnimallee 3
(Tea/coffee will be served from 16:45 in room 006/A3.) | {"url":"https://www.mi.fu-berlin.de/en/math/groups/ag-c/dates/31_01_2019_Labs.html","timestamp":"2024-11-10T06:29:16Z","content_type":"text/html","content_length":"25074","record_id":"<urn:uuid:91e0d360-8685-4e1a-927f-79f1dcc3b679>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00731.warc.gz"} |
Simon Says - Get started with geometry blocks
Get started with geometry blocks!
Let's make a fast start with those geometry blocks in the geometry library.Most of the geometry blocks in this library has a style param which could accept other moving style blocks to create
complicated patterns.
Now Let's take a look at some patterns drawn by combinations of those geometry blocks. | {"url":"https://turtlestitch.snapontop.org/turtlestitch-coded-embroidery/get-started-with-geometry-blocks","timestamp":"2024-11-09T23:16:33Z","content_type":"text/html","content_length":"92840","record_id":"<urn:uuid:bbcf591b-95db-4eb7-b0fd-2c9a6b18c914>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00010.warc.gz"} |
Type IIB Holographic Superfluid Flows
Published for SISSA by Springer
Received[: December 15, 2010] Revised[: January 26, 2011] Accepted[: February 6, 2011] Published: March 1, 2011
Type IIB holographic superfluid flows
Daniel Are´an,a,b Matteo Bertolini,a,b Chethan Krishnan,b and Tom´aˇs Proch´azkab a
International Centre for Theoretical Physics (ICTP) Strada Costiera 11, I 34014 Trieste, Italy
SISSA and INFN — Sezione di Trieste Via Bonomea 265, I 34136 Trieste, Italy
E-mail: [email protected],[email protected],[email protected],
Abstract:[We construct fully backreacted holographic superfluid flow solutions in a ] five-dimensional theory that arises as a consistent truncation of low energy type IIB string theory. We construct
a black hole with scalar and vector hair in this theory, and study the phase diagram. As expected, the superfluid phase ceases to exist for high enough superfluid velocity, but we show that the phase
transition between normal and superfluid phases is always second order. We also analyze the zero temperature limit of these solutions. Interestingly, we find evidence that the emergent IR conformal
symmetry of the zero-temperature domain wall is broken at high enough velocity.
Keywords: [AdS-CFT Correspondence, Holography and condensed matter physics] (AdS/CMT), Black Holes in String Theory, Black Holes
1 Introduction 1
1.1 Summary of results 2
2 The IIB set up 4
3 Hairy black hole solution 5
4 Superfluid flow phase transition 8
5 Zero temperature limit 11
A Asymptotic relations 16
B On-shell action and counter-terms 16
C Superfluid fraction 18
D The hairless solution: Reissner-Nordstrom 20
1 Introduction
In the last few years there has been an intense effort to model superconductor/superfluid phase transitions using the AdS/CFT correspondence. The basic observation that makes this industry possible
is the fact that at finite charge density and at sufficiently low tem-peratures, an AdS black hole in the presence of a charged scalar field is unstable to the formation of hair [1]. Using the basic
AdS/CFT dictionary [2–4], this gets easily interpreted as a superfluid-like phase transition in the dual field theory, cf. Weinberg [5].
Much of the work on holographic superconductors is done in the context of phenomeno-logical models, along the lines of the proposal originally presented in [6]. This is based on the minimal set-up of
a charged massive scalar minimally coupled to Einstein-Maxwell theory. While many interesting results can be obtained within this minimal framework (see [7–9] for reviews and references), such a
bottom-up approach has some intrinsic lim-itations. Since the hope is that holographic constructions may eventually shed some light on some basic properties of high-Tcsuperconductors, it would be
desirable to have a micro-scopic understanding of the underlying theory. This is something that phenomenological models, by definition, cannot offer. Secondly, they do not guarantee the existence of
a quan-tum critical point in the phase diagram, which is instead expected to control the physics of high-Tc superconductors. Indeed, the phenomenological models that one typically works with have no
potentials but the mass term. However, it is expected that to have an emer-gent conformal symmetry in the infrared in the zero temperature limit, one should have
potentials that allow symmetry-breaking minima [10]. Recently, some progress has been
made in this respect and several microscopic embeddings of holographic superconductors have been proposed in the framework of type IIB string theory [11], M-theory [12], and D7-brane models [13,14].
In these models, the potentials quite generically allow symmetry breaking vacua.
Most studies have also been performed in the probe approximation, which is a large-charge limit in which the backreaction of the matter fields on the gravitational field is negligible. While many
interesting results can be obtained with such a simplified setup when the temperatures are near the phase transition, the analysis becomes less and less reliable at very low temperatures, where the
backreaction is non-negligible. This prevents exploration of interesting low temperature phenomena: in particular, understanding the ground state of holographic superconductors is outside the regime
of applicability of the probe limit. Therefore, it is useful to realize holographic constructions where the backreac-tion is taken into account. Progress in this direcbackreac-tion began with [15],
where a (numerical) backreacted solution for the phenomenological model of [6] was presented.
In trying to explore the phase diagram of holographic superconductors, an interesting direction was pursued in [16,17] where the original holographic superfluid was studied in the presence of a
non-vanishing superfluid velocity (aka superfluid flow). Holographically this needs a non-trivial profile for a spatial component of the gauge field, besides the ever-present temporal component. The
latter corresponds to a charge density and is necessary to have a phase transition in the first place (see [18] for a recent alternative proposal). Two interesting results obtained in [16,17] were to
show the existence of a critical velocity above which the superfluid phase ceases to exist, as expected for physical superfluids, and the existence of a tricritical point in the velocity vs.
temperature diagram where the order of the phase transition changes from second to first. Moreover, it was noticed in [19,20] that these solutions can be efficiently compared to 2+1-dimensional
superconducting thin films or wires. They behave very much like superfluids, in that an applied external magnetic field does not get expelled as if the gauge field were not dynamical. The
four-dimensional gravitational model of [16,17] was further analyzed from this latter viewpoint in [20], where the system was in fact studied at fixed current rather than at fixed velocity. This
choice allowed new checks, and remarkable agreement with some peculiar properties of real-life superconducting films (see [21]) was found.
All solutions presented in [16, 17, 20] have been obtained in the probe approxima-tion. Hence, while being able to confront phenomena near or right below the critical temperature, not much could be
said about the low temperature regime of such superfluid flows. This problem was addressed more recently in [22], where the backreaction of the phenomenological four-dimensional model of [16,17] was
1.1 Summary of results
In this paper we take some concrete steps forward in the above program on superfluid flows: we focus our attention on models with known microscopic embedding and symmetry breaking vacua, and work at
the backreacted level. Specifically, we will describe a holo-graphic superfluid flow in four dimensions by means of a fully backreacted solution of a
five-dimensional gravitational system whose action arises as a consistent truncation of type
IIB string theory [11]. The effective theory is essentially Einstein-Maxwell theory with a Chern-Simons term, interacting with a complex charged scalar with a non-trivial potential. It can be
obtained upon compactification of type IIB theory on an AdS5× Y geometry, Y being a Sasaki-Einstein manifold. Using the numerical solutions that we find, we analyze several aspects of the rich phase
diagram of this system. In particular, we present the plots of the scalar condensate against temperature and its dependence on the superfluid velocity, analyze the nature of the phase transition
computing the free energy difference between the superconducting and the normal phase, and give some predictions on the zero temperature limit.
As one would expect on physical grounds, we observe that for high enough velocity the system stops superconducting. Interestingly, we find that for all velocities we have investi-gated the phase
transition in these type IIB constructions is always second order. Hence, we do not find the tricritical point which characterizes the phase diagram of models with large charges. The same behavior
was observed in the phenomenological but backreacted AdS4 model of [22] for low values of the scalar charge (in fact, only for the case q = 1 in their notation). The persistence of the second order
phase transition has been observed also in the unbackreacted case for large masses of the scalar in five dimensions [23]. We will have some more comments on this in section 4.
One of the advantages of having a fully backreacted model is that one can also in-vestigate the low temperature limit. In the zero velocity case, it is known [10, 24] that the type IIB hairy black
hole solution tends to a domain wall with an emergent conformal symmetry in the deep IR. (This is in contrast with the phenomenological model of [25] where the potential has only a mass term and no
symmetry breaking minima, and the zero temperature limit generically does not lead to an IR AdS geometry.) When the velocity is turned on and it is high enough, we find evidence that the solution
stops being AdS in the IR. This suggests that beyond some critical velocity the IR conformality is lost. Along the way, we also discuss the importance of the frame comoving with the superfluid flow
in these results.
The rest of the paper is organized as follows. In section2we present the truncated type IIB five-dimensional action and the equations of motion. Our ansatz for the relevant fields, and the procedure
we pursue to obtain numerical solutions with the desired features are discussed in section3. Using these solutions, in section4we study the phase diagram of the superfluid flow. In particular, we
analyze the nature of the phase transition as a function of the superfluid velocity. In doing so, we compute the free energy of the superfluid phase and compare it to that of the normal phase (which
is described, holographically, by a Reissner-Nordstrom black hole with no scalar hair). Finally, in section5[we study the T → 0 limit of]
some geometrical quantities like the Ricci scalar and the Riemann tensor squared. We also study the variation of the superfluid fraction as the temperature is lowered. These analyses allow us to
explore the nature of the ground state of holographic type IIB superfluid flows. The appendices contain more technical material which might help the reader in following our analytical and numerical
computations more closely.
2 The IIB set up
In [11] a consistent truncation of type IIB supergravity was presented, which has the structure of an Einstein-Maxwell (plus Chern-Simons) system in five dimensions coupled to a charged scalar field
with a non-trivial potential. The action reads
SIIB = Z d5x√[−g] " R −L 2 3 FabF ab[+]1 4 2L 3 3 ǫabcdeFabFcdAe+ −1 2 (∂aψ)
2[+ sinh]2[ψ(∂][aθ − 2Aa][)]2[−]
6 L2 cosh 2 ψ 2 (5 − cosh ψ) !# . (2.1)
Here, ǫ01234 [= 1/]√[−g, and we have written the charged (complex) scalar by splitting the] phase and the modulus in the form ψeiθ. For later convenience we recall that the Abelian gauge field A is
dual to an R-symmetry in the boundary field theory [11] and the scalar field has R-charge R = 2.
The matter equations of motion are 1 √ −g∂a 4 3L 2√ −gFab− [27]8 L3√[−gǫ]abcdeFcdAe + + 2 27L 3[ǫ]pqrsb[FpqFrs][+ 2 sinh]2[ψ(∂]b θ − 2Ab) = 0 , (2.2) 1 √ −g∂a( √ −g∂aψ) −1 2sinh 2ψ(∂bθ − 2Ab) 2[+] +
3 2L2
sinh ψ(5 − cosh ψ) − 2 cosh2 ψ 2
sinh ψ
= 0 . (2.3) The Einstein equations can be written as
Rab− 1 2gabR − 2 3L 2[F] acFbc− gab 4 F cd[F] cd + −1[2]Ξab+ 1 4gabΞ a a − 3 2L2gabcosh 2 ψ 2 (5 − cosh ψ) = 0 , (2.4) with Ξab≡ ∂aψ∂bψ + sinh2ψ(∂aθ − 2Aa)(∂bθ − 2Ab) .
It is convenient to use the gauge invariance to shift away the angle θ and also write the various expressions in terms of covariant derivatives. This basically means that we set θ to zero in the
above equations and use
∇a 4[3]L2Fa[b][−] 8 27L 3[ǫ]a cde b FcdAe + 2 27L 3[ǫ]pqrs bFpqFrs− 4 sinh2ψ Ab = 0 , (2.5) ∇a∇aψ − 2 sinh 2ψ(AbAb) + 3
sinh ψ(5 − cosh ψ) − 2 cosh2 ψ[2]
sinh ψ
as the matter equations of motion. The leading terms in the scalar potential take the form
V (ψ) = −[L]12[2] −3ψ 2
2L2 + . . . (2.7)
which have the immediate interpretation as the AdS cosmological constant and the scalar mass term. Typically, in a minimal phenomenological model the scalar potential has just the above two terms.
Higher order terms affect mostly the very low temperature regime where the condensate becomes larger and thus the type IIB model can become substantially different from the minimal one. There are
then two reasons as to why one should try and work out a fully backreacted solution for this type IIB model. The first is that the scalar has charge R = 2 and hence the probe approximation, which is
a large charge scaling limit, is potentially inappropriate already at temperatures near the critical temperature. The second is that a backreacted solution would let one study the system in a regime
(i.e. very low temperatures) where, as just noticed, the differences of the action (2.1) with respect to that of a phenomenological model, are more apparent.
Note that the scalar mass is m2 [= −3. In d = 4, this mass is in the range where] the leading fall-off at the boundary, which is O(1/r), corresponds to a non normalizable mode. So, using the AdS/CFT
map, we will interpret it as the source of the dual field theory operator O. The subleading fall-off is O(1/r3[) and corresponds to a condensate for] O (whose dimension will therefore be ∆ = 3). It
is evident from the value of the R-charge and this fall-off that ∆ = 3|R|/2 and O is therefore a chiral primary [11].
3 Hairy black hole solution
We want to construct a fully backreacted hairy black hole solution, holographically de-scribing a superfluid flow. To achieve this we must keep the metric (also) unfixed and find a self-consistent
solution for the metric, the gauge field and the scalar. To have a charged scalar condense, we need to turn on both the scalar and the time component of the gauge field in the bulk [1]. Moreover, to
obtain a non-vanishing superfluid flow, we should break the isotropy in the boundary directions that was present in the original holo-graphic superconductor construction of [11]. Indeed, the
superfluid velocity in (say) the x-direction is captured by the leading fall-off of the bulk gauge field component Ax at the boundary, which should therefore have a non-trivial bulk profile.
Altogether, this means that we need ψ, At and Ax to be non-trivial. Since we would like to work with ordinary as opposed to partial differential equations, we look for an ansatz where these are
functions purely of the holographic direction r: fortunately, this turns out to be enough to obtain a solution. Consistency of the Einstein equations then demands that we choose a metric ansatz of
the form
ds2 [= −]r 2[f (r)] L2 dt 2[+]L2h(r)2 r2[f (r)] dr 2 −2C(r)r 2 L2dtdx+ r2 L2B(r)dx 2[+]r2 L2dy 2[+]r2 L2dz 2[. (3.1)]
The metric contains four independent functions, f (r), h(r), C(r) and B(r). Together with the ansatz for the gauge field and the scalar
this will give rise to a set of seven independent equations for seven unknowns. Our ansatz
here is essentially of the same form as the one in [22], albeit in one more dimension. This can be demonstrated by going over to an Eddington-Finkelstein form and working in a frame where the normal
fluid considered in [22] is at rest.
Let us first notice that with this choice of ansatz the terms in the equations of mo-tion (2.5) arising from the ǫabcde [piece all vanish. A second important fact is that there] are several scaling
symmetries one should be aware of. In particular, the ambiguity in the units at the boundary for the time t and the distance along x translate to the following two scaling symmetries of the resulting
t → t/a , f → a2f , [h → a h ,] [C → a C ,] At→ a At, (3.3) x → x/b , B → b2B , [C → b C ,] Ax→ b Ax . (3.4) These are symmetries of the action and therefore of the equations of motion. Two further
scaling symmetries of the system that we will use are
(r, t, x, y, z, L) → α(r, t, x, y, z, L) , (At, Ax) → (At, Ax)/α , (3.5) r → βr , (t, x, y, z) → (t, x, y, z)/β , (At, Ax[) → β(A]t, Ax) . (3.6) The first scaling changes the metric by a factor α2
[and leaves the gauge field invariant, but] its effect is to scale the action (2.1) by an overall constant factor α2[, therefore leaving the] equations of motion unaffected. The second scaling is the
usual holographic renormalization group operation in AdS, and it is easily seen that the metric, gauge field and the equations of motion are left invariant. Using the symmetries (3.5) and (3.6) we
can scale the horizon radius rH and the AdS scale L to unity. We will assume this has been done in what follows, unless stated otherwise.
The strategy we pursue to construct (numerically) our solution is as follows. First, using our ansatz, one can massage the equations of motion and end up with first order differential equations for f
and h and second order differential equations for B, C, At, Ax and ψ. All in all we have then two first order and five second order equations resulting in twelve degrees of freedom. Therefore, to fix
a solution we need twelve pieces of data.
We start by considering the fields (3.1)–(3.2) near the horizon (r = rH) and expand their several components Φ in a Taylor series as
Φ = ΦH[0] + ΦH[1] [(r − rH]) + . . . . (3.7) Requiring regularity of the solution at the horizon amounts to setting some specific coeffi-cients to zero. To linear order in (r − rH), the expansion at
the horizon takes the form
f = f[1]H[(r − rH]) + . . . (3.8)
h = hH[0] + hH[1] [(r − rH]) + . . . (3.9) B = B[0]H + BH[1] [(r − rH]) + . . . (3.10)
C = C[1]H[(r − rH]) + . . . (3.11)
Ax = AH
+ AH
[x,1(r − r]
H) + . . . (3.13)
ψ = ψ[0]H + ψH[1] [(r − rH]) + . . . . (3.14) That is, demanding regularity is tantamount to setting f[0]H, C[0]H and φH[0] to zero. Imposing now the equations of motion has the effect of putting
further constraints on many coeffi-cients, which all end up being determined by a small set of independent horizon data. It turns out that the coefficients can all be determined in terms of six
independent data
(hH[0] , B[0]H, C[1]H, AH[t,1], AH[x,0], ψH[0] ) . (3.15) This means that the solutions that we will find by integrating from the horizon will be a six-parameter family. All other coefficients are
functions of these ones. One such relation which will be useful later is
f[1]H = (hH[0] )2 9 4 + 2 cosh ψ H 0 − cosh(2ψ[0]H) 4 −2(A H t,1)2 9 . (3.16)
The next step is to integrate the solution from the horizon out to the boundary (r → ∞), starting with the free horizon data (3.15), trying a suitable ansatz for the asymptotics of the fields at the
boundary. In fact, the asymptotic expansion in five dimensions is subtle because, as already noticed, the mass of the scalar is such that there is a non-normalizable mode. To accommodate a generic
solution obtained by integration from the horizon, we therefore need to turn on the non-normalizable mode of the scalar as well at the boundary. The non-normalizable mode triggers further logarithmic
terms in the asymptotic expansion, so we need to keep track of them as well. It turns out that a combined series expansion in both 1/rn and log r/rm
Φ = ∞ X n=0 Φn 1 rn + ∞ X m=0 Φl[m] log r rm , (3.17) works nicely.
Using a shooting technique we select, out of all possible solutions, those which match our physical requirements. In particular, we ask that the space be asymptotically AdS and that the source term
for the field theory operator dual to the scalar field be vanishing, since we want the U(1) breaking to be spontaneous.
We have found that the following asymptotic expansion solves the equations of mo-tion,1 while being general enough to match the curves arising from the integration from
What we do is to plug this expansion into the EoMs and demand that the result be zero order-by-order. We find that either this is satisfied identically or that the resulting relations can be
interpreted as the definitions of higher order terms in the expansion.
the horizon f = h2
+f4 r4 + f
l r4 log r + . . . , h = h0+ h2 r2 + h4 r4 + hl
r4 log r + . . . , (3.18) B = B0+ B4 r4 + B
l r4 log r + . . . , C = C0+ C4 r4 + C
l r4 log r + . . . , (3.19) At= At,0+At,2 r2 + Al
r2 log r + . . . , Ax= Ax,0+ Ax,2 r2 + Al
r2 log r + . . . , (3.20) ψ = ψ1 r + ψ3 r3 + ψl 3 r3 log r + . . . . (3.21)
Of course, not all of the above coefficients are independent. We relegate the explicit expressions for the dependent ones to appendix A. We merely note that when the non normalizable mode ψ1 is set
to zero, the expressions are such that all the logarithmic pieces vanish as expected. It can also be seen that the independent parameters at the boundary can be taken to be
(h0, f4, B0, B4, C0, C4, At,0, At,2, Ax,0, Ax,2, ψ1, ψ3) . (3.22) To get asymptotically AdS solutions, we must set B0, h0 to 1 and C0, ψ1 to zero. The scaling symmetries can be used to accomplish the
first two conditions, whereas we need to shoot for the last two. We are therefore left with eight independent boundary data. They are
(f4, B4, C4, At,0, At,2, Ax,0, Ax,2, ψ3) . (3.23) We see here that the physical requirements we impose at the boundary do not fix as many integration constants of the ODE system, as the regularity
conditions at the horizon does. This means that for the solutions that we obtain, there are hidden relations between the boundary data. Concretely, since there are only six independent pieces of
horizon data this gives us relations between the above eight variables, which will then be used to study the phase diagram of the boundary theory.
An important quantity for studying the thermodynamics of the system is of course the superfluid temperature. This corresponds to the black hole Hawking temperature, T. From the structure of the
metric (3.1) we easily get
T = r 2 Hf′(rH) 4π L2[h(r] H) , (3.24)
which is then also determined in terms of our horizon data. After some simple algebra, recalling we have set rH = 1 and using the horizon relation (3.16), we get
T = 1 4π " hH[0] 9 4+ 2 cosh ψ H 0 − cosh(2ψ[0]H) 4 −2(A H t,1)2 9hH[0] # . (3.25)
4 Superfluid flow phase transition
We plot the result for the condensate versus the temperature in figure1, for different values of the superfluid velocity. For the rest of the paper, we will introduce the notation
µ ≡ At,0, [hOi ≡]√2 ψ3, ξ ≡ Ax,0
0.01 0.02 0.03 0.04
0.06 0.0 0.2 0.4 0.6 0.8
Figure 1. Condensate plots for various values of the velocity ξ = 0, 0.1, 0.33, 0.4, 0.5 (from right to left). The zero velocity case, ξ = 0, which we report for ease of comparison, precisely agrees
with existing results in the literature [11].
where µ is the field theory chemical potential, O the (condensing) chiral primary operator, and ξ the superfluid velocity in units of the chemical potential. When we work in an ensemble with fixed
chemical potential, the meaningful (dimensionless) quantities relevant for the condensate plot are
T µ and
µ3 . (4.2)
In constructing the plots, we have also rescaled by the (velocity-dependent) factorp1 − ξ2[,] which is nothing but the relativistic boost factor.
From the form of the curves in figure 1, it is evident that there is a phase transition to a hairy black hole at low temperatures. As expected, the critical temperature decreases as the velocity is
increased. For instance, for ξ = 1/2 (which is the highest velocity we have investigated) we observe that Tc(ξ = 1/2) = 0.067 Tc(ξ = 0). It is clear from the condensate plot that the superfluid phase
cannot exist for velocities that are much higher than this.
One can compare the free energy of the normal phase (which corresponds to a Reissner-Nordstrom black hole with no hair) and the hairy/superfluid phase to see that the superfluid phase is favored when
it exists. We collect some details of the free energy computation in appendix D, while figure2 contains the free energy comparison between the superfluid phase and the normal phase at the same value
of T /µ. In terms of Srendefined in eq. (B.11), the precise quantities we plot are
Sren µ4[Vol] 4 ≡ Ω µ4 vs. T µ . (4.3)
The plot demonstrates that the phase transition stays second order for all values of the velocity, up to our numerical precision. This should be contrasted to the unbackreacted
0.00 0.01 0.02 0.03 0.04
0.06 0.26 0.24 0.22 0.20 0.18 0.16
Figure 2. Free energy plots for various velocities ξ = 0, 0.1, 0.33, 0.4, 0.5 (dashed lines from bottom to top). The RN-AdS black hole is also presented for comparison (solid line). The plots show
that for any velocities the phase transition is second order. The apparent overlap of the ξ = 0.5 curve with the normal phase, is an artifact of the resolution of the figure.
cases previously considered in the literature, where the phase transition typically changes to first order for high enough values of the velocity [16, 17, 20]. In [22] a backreacted superfluid in
AdS4 was considered and it was found that for low enough values of the charge of the scalar field, the phase transition remained second order. Our type IIB system seems to be analogous to this latter
scenario: the (R-)charge of the scalar in our case is fixed by the IIB construction to be 2 and it is plausible that this is a low enough value so that the transition remains second order all
In [23], the phases of the (unbackreacted) superfluid for various values of the masses of the scalar field in AdS5 were investigated and it was found that for high enough mass, there is always a
second order transition close to the normal phase. Since the probe limit is a large charge limit, we should expect a similar structure also in the backreacted case when the charge is large. That is,
when the charge and the mass are both large, we should expect a persistent second order transition. In our IIB case, we are exploring the opposite limit, namely low (R-)charge and low mass (since the
charge and mass are related for chiral primaries). Again, we find that the second order transition exists irrespective of the velocity. Based on these observations, it is tempting to make the
suggestion that whenever the mass and charge are scaled together in some appropriate way, the second order transition persists for all velocities. Of course, to make and/or establish a precise
statement along these lines will require a much more thorough exploration of the masses and charges of the scalars than we have undertaken here. Moreover, as already noticed, the persistence of the
second order transition was also found in the AdS4 case for small charges and small mass [22], while it was found not to exist for any value of the mass in the probe limit [23]. So it is clear that
the appropriate statement, if it exists, will have to be dimension-dependent.
5 Zero temperature limit
One of the advantages of having a fully backreacted solution is that one can reliably go to the zero temperature limit. At zero velocity, the zero temperature solution is expected to be described by
a domain wall, corresponding to the symmetry-breaking vacuum of the scalar potential that restores conformal symmetry in the IR. Such domain wall solution was constructed in [24], and conjectured to
correspond to the ground state of the type IIB holographic superconductor. Since we have here fully backreacted solutions at non-zero velocity, a natural question one would like to answer is whether
and how such IR behavior gets modified when the superfluid flows.
As a warm-up, and for later comparison, let us first consider the static case. A prelim-inary check one can perform is to see whether for ξ = 0 our condensate value tends to the condensate value
found in [24]. This is indeed the case: for the lowest temperature point (T /µ = 3.05 · 10−4), our condensate in the normalizations of [24]
hOiDW ≡ ψ3
(2µ/√3)3 (5.1)
is ≈ 0.3215, which is close enough to the zero-temperature value of ≈ 0.322 found in [24]. Even without explicitly constructing the domain wall solution, one can find evidence for its existence by
investigating the horizon values of the curvature scalars R and RabcdRabcd. This strategy was adopted in [12] for superconductors in M-theory, and it was found that these curvature scalars on the
horizon go to the AdS4 values expected from a domain wall solution with a symmetry-breaking minimum in the IR. We can do the same computation here, and we do find evidence that the solution has an
emergent AdS5 in the IR with the correct length scale. Note that the IR AdS scale, as determined by the symmetry-breaking vacuum [24] is L′ = 23[3]/2 where we have set L = 1 in the UV. Using the fact
that the Ricci scalar for AdS5 is −20/L2, we find that the predicted value is −22.5 in the IR. A similar computation using the RabcdRabcd = 40/L4 shows that in the zero temperature limit we should
get the value 50.625. We plot the results for both curvature scalars in figure3 The plots clearly demonstrate that at low temperatures the curvatures indeed stabilize to the expected domain wall
values in the infrared.
The behavior of RabcdRabcd deserves a closer look, however. A distinctive feature of the present five-dimensional case, as compared to the four-dimensional model of [12], is that RabcdRabcd
stabilizes to the domain wall value close to the horizon, but it starts increasing as the radius is further reduced. At the horizon its value is (of course) finite, but is well on its way to the
divergence at the singularity inside the horizon.2 Note that in order to make the connection with the domain wall, what we really need is the emergence of an AdS5 throat of the correct length scale
at zero temperature, and our plots give evidence for that. Figure 4 reports the behavior of RabcdRabcd zooming in near the horizon region
This sharp ascent in the curvature scalars close to the horizon is not a peculiarity of the broken phase: it is also there in the normal phase. For instance, RabcdRabcd, whose expression for the
normal phase
Reissner-Nordstrom black hole we report in eq. (D.4), has a similar sharp ascent at the horizon, while remaining finite there. On the other hand, the AdS4case is somewhat special in that the Ricci
scalar is a constant in
0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 -25 -24 -23 -22 -21 -20 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 40 50 60 70 80 90
Figure 3. Ricci scalar R and RabcdRabcd as a function of the radial coordinate near the horizon, at zero superfluid velocity. The horizontal dashed lines mark the corresponding values of R and RabcdR
for the UV and IR AdS geometries.
0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 50.6 50.8 51.0 51.2 51.4 51.6 51.8
Figure 4. Behavior of RabcdR abcd
near the horizon for different temperatures, from left to right: T /µ = 3.05 · 10−4(black), 1.55 · 10−3(red), 3.33 · 10−3(blue). The dashed vertical lines correspond to the corresponding horizon
radii. The stabilized (i.e. domain wall) value RabcdRabcd = 50.625 is indicated by the dashed horizontal line.
for different temperatures. Happily, as the temperature is lowered the stabilized region of the plot gets closer to the horizon and asymptotes to the expected AdS5 value of 50.625.
Let us now consider the cases with velocity, ξ 6= 0. We report in figure5 the plot for the Ricci scalar vs. radius for different superfluid velocities (including the zero-velocity case, to ease the
comparison) and in figure 6 that for RabcdRabcd. The presence of a new scale means that there is a possibility that the emergent conformal symmetry in the IR is broken. While for low velocities our
plots suggest that the same IR fixed point as the static case is recovered, interestingly enough, we find that for high enough velocities the conformal symmetry of the solution is indeed broken and
the curvature scalars diverge without any stabilization whatsoever. This is analogous to the phenomenological models with no quantum critical point in the IR. The conclusion seems to be that the
solutions do not stabilize to the conformal quantum critical point when the velocity is high enough.
0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 -25 -24 -23 -22 -21 -20 -25 -24 -23 -22 -21 -20 -25 -24 -23 -22 -21 -20 -25 -24 -23 -22 -21 -20 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 0.2 0.4
0.6 0.8 1.0 1.2 1.4
Figure 5. Ricci scalar R as a function of the radial coordinate near the horizon for a low temper-ature. The horizontal dashed lines indicate the corresponding values of R for the UV and IR AdS
geometries. 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 40 50 60 70 80 90 40 50 60 70 80 90 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 40 50 60 70 80 90 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 0.2 0.4 0.6 0.8 1.0 1.2 1.4 40 50 60
70 80 90 Figure 6. RabcdR abcd
as a function of the radial coordinate near the horizon for a low temperature. The horizontal dashed lines indicate the corresponding values of RabcdRabcd for the UV and IR AdS geometries. The
stabilization to the IR value, when it happens, holds till very close to the horizon.
While we have not performed an exhaustive scan of velocities in this paper, it would be interesting to see for what precise value of the velocity this qualitative change happens, and study the
precise nature of the phases and phase transitions, if any, there. From our
0.01 0.02 0.03 0.04
0.06 0.0 0.2 0.4 0.6 0.8 1.0
Figure 7. Plots of the superfluid fraction vs. temperature for various values of the velocity ξ = 0.1, 0.33, 0.4, 0.5 (from right to left).
analysis, it appears that the regime where this transition happens is between ξ = 0.33 and ξ = 0.4. Related to this is the observation that the condensate
µp1 − ξ2 (5.2)
that we plotted earlier, tends to the same value at the horizon for all values of the velocity, for small enough velocity. This is again indicative of a quantum phase transition: there is a change in
the nature of the solution as we tune an order parameter at zero temperature. The results we find are consistent with the idea that the phase structure in the temperature-velocity plane is determined
by the quantum critical point. It is intriguing that the relevant condensate seems to be measured in units of chemical potential as seen in a frame comoving with the superfluid flow. For a timelike
vector, which for us is the superfluid velocity 4-vector, the time component in the rest (i.e., comoving) frame is nothing but its norm. Therefore, since we want to plot a scalar quantity for the
dimensionless condensate, this is the natural choice. But unlike in the case of an ordinary fluid where the fluid velocity can be interpreted as arising from a boost of a static black hole, here the
anisotropic part of the metric does not seem to have such a simple interpretation in the bulk. We intend to come back to some of these questions in the near future.
Another quantity of interest3 [in understanding the zero temperature limit is the ]
su-perfluid fraction ζ. It corresponds to the ratio between the charge density of the susu-perfluid flow and the total charge density of the system. In appendix C, following [22], we elab-orate on
the interpretation of the boundary theory in terms of a two-fluid model and compute the expression of the superfluid fraction in terms of the fall-offs of the bulk fields, eqs. (3.18)–(3.21). The
result is
ζ = −A[A]x,2C4 t,2B4
. (5.3)
This quantity is interesting because from the curves in figures 1 and 2 of [22] we see that for
the AdS4 case its behavior near zero temperature captures some interesting aspects of the nature of the phase transitions. More specifically, together with our results in this paper (see figure 7[),
we are lead to conjecture that ζ → 1 at zero temperature for all velocities]
where the rescaled condensate value at zero temperature tends to its value at zero velocity. From the evidence presented in [22] one could think that the zero temperature limit of the superfluid
fraction is correlated with the existence or not of a first order phase transition at high enough velocity. However, in our case we have an explicit situation where we see a consistently second order
phase transition where the limiting value of the condensate at zero temperature changes qualitatively as we tune the velocity. Remarkably, we find that ζ → 1, only in those cases where the zero
temperature condensate value [µ]hOi√1/3
takes its corresponding value at zero velocity. Since this condensate value captures the existence or not of the (anisotropic) domain wall, the natural conjecture is that ζ = 1, for the domain wall
when it exists. Notice that ζ → 1 is what one would expect for the ground state of a superfluid flow. What we have basically demonstrated then is that three quantities (namely the curvature scalar
(s), the rescaled condensate and the superfluid fraction) undergo a qualitative change at the same velocity, as we tune the velocity. We believe this is strong evidence for the existence/
non-existence of the domain wall as we go through that velocity.
Despite the evidence we have presented, it should be borne in mind that the preser-vation of the conformal symmetry for low velocities is not fully established. Unlike in the zero velocity domain
wall examples discussed in the literature, we have not constructed an explicit solution that has emergent conformal symmetry in the IR in the cases with (low) velocity. However, the fact that the
curvature scalars and the condensate (5.2) stabilize to their respective zero velocity values (within our numerical precision), is an indication that this might indeed be the case. One another caveat
that we emphasize here is that the perturbative stability of these consistent truncations in the zero temperature limit is not settled. In particular, when the Sasaki-Einstein manifold is a sphere,
instabilities are known to exist in the zero temperature domain wall solution [26,27].4 It is possible that for a more complicated choice of Sasaki-Einstein space (which is indeed what we need to
have here anyway, in order to let the scalar chiral primary we focus on to be the operator responsible for the black hole phase transition [11]) the five-dimensional theory is stable. It is also
interesting that the simple stringy consistent truncations do give rise to scalar potentials with symmetry breaking vacua, resulting in an emergent conformal symmetry in the IR at zero temperature.
This is precisely what one expects in the zero temperature limit of a high-Tc superconductor, which is believed to be governed by a quantum critical point. So our expectation is that in the
(unlikely?) event that no Sasaki-Einstein trun-cation can be made stable, these models should still capture some generic features of a holographic superfluid with emergent conformal symmetry in the
A related instability was recently shown to exist also in M-theory [28] for a similar consistent truncation for the ground state of a 2+1 dimensional superconducting system [24,29].
We would like to thank Silviu Pufu and Julian Sonner for email correspondence and use-ful comments at different stages of this work. We also acknowledge helpuse-ful discussions and/or correspondence
with Nikolay Bobev, Jarah Evslin, Jerome Gauntlett, Chris Her-zog, Giuseppe Policastro and Ho-Ung Yee. D.A., M.B. and C.K. would like to thank the organizers of the ESI Programme on AdS Holography
and the Quark-Gluon Plasma in Vienna, where part of this work has been done, for hospitality and financial support. C.K. thanks the International Solvay Institutes, Brussels for hospitality during
parts of this work. D.A. thanks the FRont Of Galician Speaking scientists for unconditional support. A Asymptotic relations
In this appendix, we present the relations defining the dependent coefficients in the asymp-totic expansion in the IIB case
f4= 1 48C2 0 96C0C4h20+ 48B4h04+ 96C02h0h4+ 96B0h30h4+ C02h20ψ14+ B0h40ψ41+ + 24C[0]2h2[0]ψ1ψ3+ 24B0h4[0]ψ1ψ3[− 12h]4[0]L4ψ2[1]A2[x,0]+ + 24C0h20L4ψ12Ax,0At,0+ 12B0h20L4ψ21A2t,0 , (A.1) f[4]l = 1 3C2
0 − C02h20ψ14− B0h40ψ14+ 3B0h20L4ψ12A2t,0+ + C[0]2(h2[0]ψ4[1][− 3L]4ψ[1]2A2[t,0]) + B0h20(h20ψ14− 3L4ψ12A2t,0) , (A.2) h2 = − h0ψ[1]2 12 , h l 4= h2[0]ψ[1]4[−3L]4ψ2[1]A2[t,0] 6h0 , B[4]l= L4ψ[1]2A2
[x,0], C[4]l[= −L]4ψ[1]2Ax,0ψ0, (A.3) ψ[3]l [= −]2(C 2 0ψ13+ B0h20ψ31+ 3h20L4ψ1A2x,0− 6C0L4ψ1Ax,0At,0− 3B0L4ψ1A2t,0) 3(C[0]2+ B0h2[0]) , (A.4) Al[t,2] [= −]3ψ 2 1At,0 2 , A l x,2 = − 3ψ[1]2Ax,0 2 .
These are the general expressions when ψ1 6= 0. Our primary interest will be to shoot for the case ψ1 = 0, in which case all of the coefficients above vanish identically, except
f4C02− 2C0C4h20− B4h40
2h0(C[0]2+ B0h2[0]) . (A.6)
In particular, all the logarithmic terms vanish and we end up with a usual asymptotic expansion in 1/r, as expected. Note also that in asymptotically AdS solutions, C0 = 0 as well. Moreover, when
there is no superfluid velocity and the isotropy is not broken, B4 = 0 and therefore we end up getting h4 = 0. This last result is useful in making comparisons with the holographic superconductor
case investigated in [11].
B On-shell action and counter-terms
In order to compute the free energy, we need the on-shell action for the type IIB system. As we show below it turns out that, remarkably, the on-shell action can be written purely
as a boundary piece, and be easily evaluated. However, this boundary term is divergent:
to cancel it we need to introduce boundary counter-terms. In what follows, we describe both these steps.
For the ansatz that we work with, it can be checked directly that, despite the compli-cations of the equations of motion, the following relations hold
L0− R = 2L 2 r2 Tyy =
r2 Tzz. (B.1)
Here T stands for the stress tensor arising from our IIB Lagrangian, L0 is defined via SIIB=
d5x√[−gL]0, (B.2)
and R is the Ricci scalar. Notice that these relations only depend on our ansatz, i.e. they are true before we use the equations of motion. Going on-shell, we replace Tyy and Tzz by Eyy and Ezz,
where E denotes the Einstein tensor Eab ≡ Rab− 1[2]gabR. Together with the relation
Ea[a][= −]3
2R (B.3)
that is valid in five dimensions, this implies that √ −gL0=√[−g] L 2 r2(Eyy+ Ezz) − 2 3E a a . (B.4)
The right-hand-side depends only on the metric functions and can be evaluated explicitly for our ansatz. Direct computation reveals that it can be written as a total differential so that the
(on-shell) action takes the form
SIIB,OS= −vol4 Z ∞ rH dr 2rf (r) L2[h(r)]2 √ −g ′ , where √[−g =]r 3[h(r)] L3 s C(r)2 f (r) +B(r) , (B.5) and the prime denotes the derivative with respect to r. Because of the presence of f , this
expression is zero at the horizon and that end of the integral is safe. But it clearly gets contributions from the boundary, where it diverges as r4 [and we need to regulate it with] appropriate
The counter-terms6 for the gravitational part of the action in asymptotically AdS spaces can be looked up in [30]. Along with these we also have to add counter-terms for the scalar part. The final
form of these terms in our notations and conventions can be written as Sct= 2 Z d4x√[−γ][K −] 3 L + Z d4x√[−γ]|ψ| 2 L . (B.6)
The sign convention for the extrinsic curvature is chosen so that with the outward pointing normal na,
Kab≡ 1
2(∇anb+ ∇bna) . (B.7)
We loosely refer to the Gibbons-Hawking term also as a counter-term, even though strictly speaking it is a boundary term necessary to make the variational problem well-defined.
Note that the general gravitational counter-term discussed in [30] involves a boundary Ricci
scalar as well: but this does not contribute for us, because our boundary becomes flat as we take it to infinity. The various quantities (including the scalar extrinsic curvature) can be computed by
cutting off the spacetime at some finite r = r0, then taking the limit r0 → ∞ for the quantity SIIB,OS+ Sctat the end of the computation. If we define the boundary at r = r[0, then the outward normal
to the surface Φ(t, r, x, y, z) ≡ r − r0] = 0 is na ∼ ∇aΦ, and after normalizing7 so that gabnanb= 1, we get
na= 0,
rpf(r), 0, 0, 0 !
. (B.8)
Since we need only the scalar extrinsic curvature, we don’t need to introduce 4-D coordi-nates on the boundary and can compute it directly in the bulk coordicoordi-nates as
K = gab[∇a]nb =
f1/2 8C2+ rf B′+ 2rCC′+ 8Bf + rBf′
2L(C2[+ Bf )h] . (B.9)
So the final form of the counter-term action is Sct= Vol4 lim r→∞ " r4[f]1/2 [8C]2[+rf B]′[+2rCC]′[+8Bf +rBf]′ L5[hpC]2[+ Bf] − r4 L4 p C2[+Bf]6 L− ψ2 L # . (B.10) With the addition of this piece,
the renormalized action SIIB,OS+ Sct no longer has the r4 divergence and is finite. The net result is
Sren= vol4 lim r→∞ " r4[f]1/2 [8C]2[+ rf B]′[+ 2rCC]′[+ 8Bf + rBf]′ L5[hpC]2[+ Bf] + −r 4 L5 p C2[+ Bf (6 − ψ]2 ) −2r 4[f (r)] L5[h(r)] s C(r)2 f (r) + B(r) # . (B.11) It is interesting to note that
since we are always working with solutions with ψ1 = 0, the scalar piece can in fact be omitted if one desires.
C Superfluid fraction
In this section we present some details of the definition and computation of the superfluid fraction ζ for our solutions. We start with the renormalized action from the previous appendix and compute
the boundary stress tensor and the boundary current by varying with respect to the boundary metric and the boundary components of the vector potential.
Tµν = [√]1 −γ δS δγµν , Jµ= 1 √ −γ δS δAµ, (C.1)
where now S = SIIB+Sctwith SIIBdefined by (2.1) and Sctdefined by (B.6). In particular, the relations above are not tied to our ansatz. To compute the boundary stress tensor and
current, we need to introduce coordinates on the boundary, and we will use Greek indices
for them. After doing the variations, using our ansatz and going on shell on the bulk, the resulting stress tensor and current vanish in the strict r → ∞ limit. This is consistent with the fact that
they should be finite since we are using the renormalized action to compute them. The more interesting quantity is the boundary fluid stress tensor and the fluid current, which are defined in AdS5
Tµν = lim r→∞r 2[Tµν] [,] [J] µ= lim r→∞r 2[Jµ][.] [(C.2)]
We are using units where 16πG = 1 = L in this section. Suppressing the details and re-stricting to our ansatz these quantities can be explicitly computed in terms of the boundary fall-offs of eqs.
(3.18)–(3.21) to be Tµν = 3f4− B4 4C4 0 0 4C4 f4− 3B4 0 0 0 0 B4+ f4 0 0 0 0 B4+ f4 , Jµ= 4 3 At,2 Ax,2 0 0 . (C.3)
Now we follow the interpretation of [22] for these quantities in terms of a two-fluid model on the boundary, where one component is an ordinary (ideal) fluid and the other is a superfluid. First we
can write these quantities suggestively in terms of uµ = (−1, 0, 0, 0) and nµ= (0, 1, 0, 0) as Tµν = (ǫ + P ) uµuν + P ηµν− 4B4nµnν − 8C4u(µnν), Jµ= ρ uµ− Jsnµ, (C.4) where P ≡ f4+ B4 , ǫ ≡ 3f4− B4,
ρ ≡ − 4 3At,2 , Js≡ − 4 3Ax,2. (C.5) Note that what we have done is merely to rewrite the expressions covariantly in terms of the vectors uµand nµ. Another way to state the same thing is that (for
example) the most general symmetric second rank tensor constructed from uµand nµwill have to be a linear combination of ηµν, uµuν, u(µnν) and nµnν.
The two fluid model can be defined by the stress tensor
Tµν = (ǫ0+ P0) uµuν+ P0ηµν+ µ ρsvµvν , Jµ= ρnuµ+ ρsvµ, (C.6) where the subscripts n and s stand for the normal and superfluid components of the charge density, with the total charge density ρ =
ρs+ρn. Aside from the various thermodynamical state variables (whose precise interpretations will not be important to us, see [22]), we have also introduced the superfluid velocity vµ that satisfies
the constraint (“Josephson equation”)
uµvµ= −1. (C.7)
The superfluid fraction is defined as
ζ = ρs
Our stress tensor (C.4) can be brought to the two-fluid form by defining vµas
vµ= uµ+ B4 C4
nµ. (C.9)
This automatically satisfies vµuµ = −1 as a consequence of uµuµ = −1, and nµuµ = 0. Rewriting our stress and current tensors (C.4) in these new variables we get the two-fluid form (C.6):
Tµν = (ǫ + P + 4 C[4]2/B4) uµuν+ P ηµν[− (4 C][4]2/B4) vµvν , (C.10) Jµ = (ρ + JsC4/B4) uµ− (JsC4/B4) vµ. (C.11) Reading off the superfluid fraction from this, we find that
ζ = −(JsC4/B4)
ρ = −
At,2B4 , (C.12)
where we have written the final result in terms of the fall-offs obtained directly from the solutions. This is the form we use for making the plots in figure7.
D The hairless solution: Reissner-Nordstrom
In understanding the phase structure, it is important to keep in mind that we are interested in comparing the free energy of the hairy black hole solution to that of Reissner-Nordstrom. In the five
dimensional IIB case, the Reissner-Nordstrom metric [11] can be given in terms of our ansatz (3.1) by f (r) = 1 −[r]14 1 + 4µ 2 9 +4µ 2 9r6 , At= µ 1 −[r]12 , (D.1) h = 1 , B = 1 , C = 0 , Ax= 0 , ψ
= 0 . (D.2) In this notation, the curvature invariants studied in section 5 take the form
R = −20f − r 10f′+ rf′′ , (D.3)
RabcdRabcd = 40f2+ 4r f 10f′+ r f′′ + r222(f′)2+ 8r f′f′′+ r2(f′′)2 . (D.4) All these expressions are obtained after all the necessary rescalings: we have set 16πG = L = rH = 1. The Hawking
temperature now takes the form
TH = 1 − 2µ 2[/9]
π , (D.5)
as can be determined by the periodicity of the Euclidean section. The renormalized on-shell action that we determined before takes a simple form for this solution:
Sren= 1 + 4µ2/9 . (D.6)
We will compare the free energies of the hairy and hairless cases at the same T /µ to determine which one is the favored phase.
[1] S.S. Gubser, Breaking an Abelian gauge symmetry near a black hole horizon,
Phys. Rev. D 78 (2008) 065034[arXiv:0801.2977] [SPIRES].
[2] J.M. Maldacena, The large-N limit of superconformal field theories and supergravity, Adv.
Theor. Math. Phys. 2 (1998) 231] [hep-th/9711200[Int. J. Theor. Phys. 38 (1999) 1113]
[3] S.S. Gubser, I.R. Klebanov and A.M. Polyakov, Gauge theory correlators from non-critical
string theory,Phys. Lett. B 428 (1998) 105[hep-th/9802109] [SPIRES].
[4] E. Witten, Anti-de Sitter space and holography, Adv. Theor. Math. Phys. 2 (1998) 253 [hep-th/9802150] [SPIRES].
[5] S. Weinberg, The quantum theory of fields, Vol II, Cambridge University Press, Cambridge, U.K. (1996).
[6] S.A. Hartnoll, C.P. Herzog and G.T. Horowitz, Building a holographic superconductor,
Phys. Rev. Lett. 101 (2008) 031601[arXiv:0803.3295] [SPIRES].
[7] S.A. Hartnoll, Lectures on holographic methods for condensed matter physics,
Class. Quant. Grav. 26 (2009) 224002[arXiv:0903.3246] [SPIRES]. [8] C.P. Herzog, Lectures on holographic superfluidity and superconductivity,
J. Phys. A 42 (2009) 343001[arXiv:0904.1975] [SPIRES].
[9] G.T. Horowitz, Introduction to holographic superconductors,arXiv:1002.1722[SPIRES]. [10] S.S. Gubser and F.D. Rocha, The gravity dual to a quantum critical point with spontaneous
symmetry breaking,Phys. Rev. Lett. 102 (2009) 061601[arXiv:0807.1737] [SPIRES].
[11] S.S. Gubser, C.P. Herzog, S.S. Pufu and T. Tesileanu, Superconductors from Superstrings,
Phys. Rev. Lett. 103 (2009) 141601[arXiv:0907.3510] [SPIRES].
[12] J.P. Gauntlett, J. Sonner and T. Wiseman, Holographic superconductivity in M-theory,
Phys. Rev. Lett. 103 (2009) 151601[arXiv:0907.3796] [SPIRES].
[13] M. Ammon, J. Erdmenger, M. Kaminski and P. Kerner, Superconductivity from gauge/gravity
duality with flavor,Phys. Lett. B 680 (2009) 516[arXiv:0810.2316] [SPIRES].
[14] M. Ammon, J. Erdmenger, M. Kaminski and P. Kerner, Flavor superconductivity from
gauge/gravity duality,JHEP 10 (2009) 067[arXiv:0903.1864] [SPIRES].
[15] S.A. Hartnoll, C.P. Herzog and G.T. Horowitz, Holographic superconductors,
JHEP 12 (2008) 015[arXiv:0810.1563] [SPIRES].
[16] P. Basu, A. Mukherjee and H.-H. Shieh, Supercurrent: vector hair for an AdS black hole,
Phys. Rev. D 79 (2009) 045010[arXiv:0809.4494] [SPIRES].
[17] C.P. Herzog, P.K. Kovtun and D.T. Son, Holographic model of superfluidity,
Phys. Rev. D 79 (2009) 066002[arXiv:0809.4870] [SPIRES].
[18] T. Faulkner, G.T. Horowitz and M.M. Roberts, Holographic quantum criticality from
multi-trace deformations,arXiv:1008.1581[SPIRES].
[19] V. Keranen, E. Keski-Vakkuri, S. Nowling and K.P. Yogendran, Inhomogeneous structures in
holographic superfluids: II. Vortices,Phys. Rev. D 81 (2010) 126012[arXiv:0912.4280]
[20] D. Arean, M. Bertolini, J. Evslin and T. Prochazka, On holographic superconductors with DC
current,JHEP 07 (2010) 060[arXiv:1003.5661] [SPIRES].
[21] M. Tinkham, Introduction to superconductivity, second edition, Dover publications, New York, U.S.A. (1996).
[22] J. Sonner and B. Withers, A gravity derivation of the Tisza-Landau model in AdS/CFT,
Phys. Rev. D 82 (2010) 026001[arXiv:1004.2707] [SPIRES].
[23] D. Arean, P. Basu and C. Krishnan, The many phases of holographic superfluids,
JHEP 10 (2010) 006[arXiv:1006.5165] [SPIRES].
[24] S.S. Gubser, S.S. Pufu and F.D. Rocha, Quantum critical superconductors in string theory
and M-theory,Phys. Lett. B 683 (2010) 201[arXiv:0908.0011] [SPIRES].
[25] G.T. Horowitz and M.M. Roberts, Zero temperature limit of holographic superconductors,
JHEP 11 (2009) 015[arXiv:0908.3677] [SPIRES].
[26] L. Girardello, M. Petrini, M. Porrati and A. Zaffaroni, The supergravity dual of N = 1 super
Yang-Mills theory,Nucl. Phys. B 569 (2000) 451[hep-th/9909047] [SPIRES].
[27] J. Distler and F. Zamora, Chiral symmetry breaking in the AdS/CFT correspondence,
JHEP 05 (2000) 005[hep-th/9911040] [SPIRES].
[28] N. Bobev, N. Halmagyi, K. Pilch and N.P. Warner, Supergravity instabilities of
non-supersymmetric quantum critical points,Class. Quant. Grav. 27 (2010) 235013
[arXiv:1006.2546] [SPIRES].
[29] J.P. Gauntlett, J. Sonner and T. Wiseman, Quantum criticality and holographic
superconductors in M-theory,JHEP 02 (2010) 060[arXiv:0912.0512] [SPIRES].
[30] V. Balasubramanian and P. Kraus, A stress tensor for anti-de Sitter gravity, | {"url":"https://123dok.org/document/wyexx27q-type-iib-holographic-superfluid-flows.html","timestamp":"2024-11-04T17:45:07Z","content_type":"text/html","content_length":"198641","record_id":"<urn:uuid:7b851782-d2ea-4ad9-9bba-83228d3856df>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00344.warc.gz"} |
DSS DMLTest Prior Estimate of theta dispersion
Last seen 4 weeks ago
I want to calculate the prior estimation of the dispersion parameter theta but having some problems. Any help highly appreciated.
What the paper tells me
In the DSS Bayesian hierarchical model, the biological variation among replicates of a group is captured by the beta distribution. The parameters of the beta distribution are mu (mean methylation)
and theta (dispersion). Thereby theta is relative to the group mean.
The prior on theta is thought as a log-normal distribution theta = log-normal (m_0j, r_0j^2). The mean (m_0j) and variance (r_0j^2) can be estimated from the data. To do so, a method of moments (MOM)
estimator is applied to each CpG site in order to estimate the dispersion parameters. For the MOM estimator I used a Parameter Estimation for the Beta Distribution (https://scholarsarchive.byu.edu/
cgi/viewcontent.cgi?article=2613&context=etd; alpha 2.19, beta 2.20). The distribution of log(theta) should be normal distributed with parameter m_0j, r_0j^2.
What I don't get
What mean/variance do I plug into the MOM estimator for beta and alpha? For the mean, do I use the ratio of methylated/total reads per CpG? Or do I use the group mean e.g. the total number of
methylated reads divided by the total number of reads per sample. Next, I am wondering how to calculate the variance? The mean methylation is a continuis variable but what values do I substract? I
used the number of methylated reads (1) and number of unmethylated reads (o) to calculate (X - mean(X)) ^2. But that seems awfully wrong.
Second, do I understand correctly, that the MOM estimates are used as prior theta and the log-normal distribution is just a "visual" conformation that this is possible?
Good by cruel world
I hope you can help me. I tried to get all the information from the paper https://academic.oup.com/nar/article-lookup/doi/10.1093/nar/gku154 but it is my first approach to understand a Bayesian
Hierarchical Model and right now I need some push in the right direction
[To get my own prior which I can update, I guess ;) ] | {"url":"https://support.bioconductor.org/p/107818/","timestamp":"2024-11-04T04:52:20Z","content_type":"text/html","content_length":"16110","record_id":"<urn:uuid:97c19527-7ddc-44ec-95f1-41b4f7a70b4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00562.warc.gz"} |
Moments about ICR, continued
Dear Biomch-L readers,
Now that Herman Woltring has shown us the mathematical
relationships between various definitions of joint
moments/forces, as well as the 3-dimensional generalization, I
want to point out a difference between two views on dynamic
analysis. In my view this is important to clarify the discussion.
Herman is deliberately limiting the discussion to *net* joint
kinetics, i.e. the model consists of rigid links with one force
and one moment transmitted by each joint. These variables are
calculated, plus sometimes the joint powers (moment x angular velocity).
The analysis essentially stops there, and individual muscles are
not part of the model. This is probably a good method for clinical
gait analysis, because no detailed information on muscle lines of
action is needed, and no assumptions on load sharing of muscles have to
be made. However, these joint forces and moments are not physical
quantities but mathematical abstractions: they do not exist at all
anywhere in the system. When I say 'not exist', I mean that there
is no anatomical structure loaded by (= deformed as a function of)
either the force or the moment. Is that not a good definition of
'physical existence' of a force: that it produces a deformation
somewhere that has a one-to-one relationship to the force? Hmm,
you could even say that force is then also a mathematical
abstraction, and that only stresses 'exist'. But let's accept the
concept of force (muscle force, ligament force, contact force...)
for now.
Another way to look at 'net kinetics' analysis is as a
transformation of the original kinematic, kinetic and
anthropomorphic measurements, intended to facilitate the
(clinical) quantification of 'gait quality' or the recognition of
certain abnormalities. A mechanical interpretation of the
resulting 'net kinetics' variables is not the real purpose of the
analysis (apart from the fact that, strictly speaking, it is not
even allowed - see above). Looking at it this way, I must agree
with Ian Stokes that it does not really matter which reference
point is used to calculate the joint moment. Just as long as you
use the same reference point when comparing results, and the
variables obtained still contain useful information. In fact,
using a fixed point is to preferred above the elusive ICR. The
ICR can only be estimated when the kinematic data are of
sufficient quality, and even then requires sophisticated filtering
and analysis methods. For such a 'net kinetic' analysis it might
be more reliable to use the lateral epicondyle as reference at the
knee, rather than the ICR, because it can be marked directly and
measured by the measuring system. The choice of reference point
does require standardization however, to avoid problems when
comparing published results.
Many biomechanicians however, *are* interested in real muscle
forces and real joint forces, and try to estimate them as well as
possible. These forces are not mathematical but physical
quantities. There are of course the well-known indeterminacy
problems because the equilibrium equations for moment and force
often have too many unknowns. For some situations however, such an
analysis is the right tool for the job. In that case, the reasoning
of my previous posting applies: the ICR is the only point about which the
moment arms of muscle forces (dL/dA) and joint forces (zero) are easy to
obtain. (For simplicity I limit the discussion to 2D). Note that the
'net joint force' resulting from this type of analysis is not the same
as in the 'net kinetics' analysis, but is much larger (and more
realistic). This force may also be only a resultant of several physical
(contact & ligaments) forces, but the muscle forces that have been
obtained are real physical quantities.
So my revised opinion is: use the ICR as reference point when
estimating muscle and joint forces. For a 'net kinetic' analysis,
only standardization is required; there is no preferred reference
point. Clinical usefulness seems to be more important than mechanical
interpretation in that case.
Finally, this is probably a very academic discussion without
practical implications; the various definitions reviewed by Ian
Stokes produce very similar results. But sometimes it is
enlightening to think about why you do things one way, and not the
other way.
-- Ton van den Bogert
University of Utrecht, Netherlands | {"url":"https://biomch-l.isbweb.org/forum/biomch-l-forums/biomch-l-1988-2010/461-moments-about-icr-continued","timestamp":"2024-11-07T19:41:37Z","content_type":"application/xhtml+xml","content_length":"53387","record_id":"<urn:uuid:8062dec0-38d1-4041-a602-40fcb305027d>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00266.warc.gz"} |
Congratulations to Our 2022 PhD Graduates!
Congratulations to our 2021 PhD graduates, Larry Allen, Chian Chua, Henry Ickes, Alan Mullenix, Mads Reynolds, and Jonathan Stanfill. Continue reading to hear more about their time at Baylor and
their plans for the future. You may click on any of the names below to jump to the new graduate of your choice.
Larry Allen
One of the major contributing factors to my decision to attend Baylor was the campus visit for prospective graduate students. While on the visit, I was impressed by the supportive and nurturing
community I observed in the mathematics department. I saw how the faculty developed the students as researchers and teachers, and I also saw how the graduate students cared for each other as friends
and fellow mathematicians. This allayed much of my uncertainty about attending graduate school and ultimately cemented my choice to pursue a degree at Baylor.
During my time at Baylor, I conducted research with Dr. Kirby on problems in approximation theory and numerical analysis related to Bernstein polynomials. In particular, our work focused on
approximating and interpolating smooth functions with Bernstein polynomials. By making use of the structure of the corresponding matrices, we were able to provide fast, stable algorithms for solving
these problems.
While at Baylor, I was also given the privilege of teaching many first-year undergraduate courses such as Precalculus and Business Calculus. Teaching and interacting with students enhanced my
experience as a mathematician and a researcher and allowed me to consider different perspectives on familiar topics. Although I have decided not to stay in academia, I am incredibly grateful for the
opportunity I had to teach and share the beauty of mathematics.
After graduation, I will be using the knowledge and experience I gained at Baylor in my job as an applied research mathematician. I am fortunate to have been given the opportunity to study at Baylor,
and I look forward to what the future holds!
- Larry Allen
Chian Chua
Chian's advisor is Tao Mei. Chian has accepted a postdoctoral position at Ohio State University.
Henry Ickes
Henry's adviser is Johnny Henderson
Alan Mullenix
With prospects looking grim on the job market, I returned to Shelton State Community College in around 2012, declared academic bankruptcy, and began as a 23 year old freshman. The plan was simply to
get Associate Degree in Science to have something on that resume. On the course checklist was trigonometry and much to my surprise, it went well and sparked an interest. I began speaking with the
instructor after lectures, and he introduced me to the rest of the mathematics faculty who quickly became dear friends. They nurtured my interest, generously invested time, loaded me up with
hand-me-down advanced textbooks, found me work at the tutoring center, introduced me to faculty at the local state university, and gave me the confidence to change my course to a four-year degree.
Transferring to the University of Alabama, I eventually declared my major as Mathematics, graduating, to my disbelief, with honors. I was urged to consider graduate school, but was concerned the
social and supportive elements I’d come to value would be hard to find at such a level.
I happened to get a pamphlet from Baylor and was drawn to both the generous material benefits offered, and the departmental culture described. It seemed important to them to foster a collaborative
and friendly place where students and faculty communicated freely. Since this was vitally important to what I had come to love about academia, I drove out to take the tour and it confirmed everything
they had claimed; there was no doubt it was the offer I wanted to accept. My six years at Baylor and in Waco have given me a PhD, yes, but also ten semesters of teaching, invaluable friends,
relationships, and colleagues. I wasn’t sure what research direction I wanted to go in, but my interest in optimization led me to nonlinear partial differential equations, specifically the study of
Mean Field Games, models of high population competitions between rational agents. I couldn’t have asked for a better mentor in Dr. Jameson Graber. His insights, breadth of expertise, time spent, and
constant aid were instrumental in both my success and exemplify what Baylor Mathematics can offer students. I’ve felt unwaveringly supported by this department, and though I’m excited to enter the
next chapter, I’ll always look back on my time at Baylor with nothing but fondness.
- Alan Mullenix
Mads Reynolds
Needless to say, I was accepted and offered a tour of the campus. When my wife and I flew out, we were immediately impressed with the people in Waco. Everyone we met was friendly. As part of my visit
to campus, I was able to sit in on a few lectures, and I was again impressed with the faculty here a Baylor. Not only were they experts in their fields, but they also knew how to teach as well. My
opinion of the professors continued to grow as I got to work with many of them through the classes they taught. I then took a class by Dr. Jonathan Meddaugh and enjoyed his approach to topology.
After the class I started collaborating with him as my advisor. For the last few years, we have been delving into a new topic, that of shifts of hereditarily finite type.
Before coming to Baylor, I taught math at the secondary level. I have always enjoyed teaching and Baylor has given me plenty of opportunities to continue developing and refining my teaching
abilities. One of my biggest rewards as an instructor of mathematics is for a student who, in their own words “is not good at math,” to come to an appreciation of the subject and realize that math is
something they can do.
After graduation, I will be staying at Baylor for a post doctorial teaching fellowship. Although I do not know exactly what the future will bring beyond this year, I will say that my wife and I liked
the area and the campus so much that she is also pursuing a doctorate here a Baylor.
- Mads Reynolds
Jonathan Stanfill
Jonathan's adviser is Fritz Gesztesy. Jonathan has accepted a named postdoctoral position at Ohio State University. | {"url":"https://math.artsandsciences.baylor.edu/news/story/2022/congratulations-our-2022-phd-graduates","timestamp":"2024-11-05T13:57:40Z","content_type":"text/html","content_length":"118919","record_id":"<urn:uuid:cabcdf7f-bac2-40b4-8f9a-90dc98bc7f11>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00448.warc.gz"} |
"Saving specific features from a Model for Text Mining"
The Altair Community is migrating to a new platform to provide a better experience for you. In preparation for the migration, the Altair Community is on read-only mode from October 28 - November 6,
2024. Technical support via cases will continue to work as is. For any urgent requests from Students/Faculty members, please submit the form linked
"Saving specific features from a Model for Text Mining"
I am using Rapidminer to build a text classification model using Naive Bayes. I have built the model fine and understand how to apply said model in RapidMiner, but I was wondering if there was anyway
to save the features the Bayesian model extracts into say a database table or excel spreadsheet? I want to do this because I am planning on using the Bayes model to help select key terms for a model
and then take these terms and help rank documents using a cosine similarity and weighting scheme, which I have already developed . I don't know if this is possible in RapidMiner, or if maybe
RapidMiner has the cosine similarity feature already and I can just maybe use that instead somehow.
Any help would be much appreciated.
Hi there George,
And welcome! It sound like you're in a position to indulge in Groovy scripting; in general you can pick apart most inputs using this Java scripting operator, there are some examples on the Wiki,
and even I've managed a demo at
tip: download the source of the relevant model, so you know what is available.
So you think with the groovy scripting I should be able to pull the terms and their specific bayes scores? I will look into it thanks!
Hi GeorgeDittmar,
if you do not want to enter the dark side of groovy scripting, you might consider a different learner that provides you the word weights. E.g the Support Vector Machine does that. Then you can
transform the weights with the Weights to Data Operator and process them further.
Ciao Sebastian
hmm I might have to suggest switching classifiers to the group, but we are trying to duplicate work I did last Winter because we switched everything over to the rapidminer framework while I was
gone. Furiously trying to figure this framework out and get papers written for it. I cant seem to get the demo that haddock posted to work, I download the file but I cant seem to open it, maybe I
am just missing something.
Hi George,
read my footer and then search for the process haddock posted ("Association rules as examples") in the myexperiment extension.
Ciao Sebastian
Here is a quick one I think. Is the Distribution table that is in the simple distribution tab results for Naive Bayes made on the fly by rapid miner or do you think its possible to pull that out
with Groovy?
Hi there,
Non-examplesets have their own renderers, so the answer could actually be yes
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.0">
<operator activated="true" class="process" expanded="true" name="Root">
<description>Using a simple Naive Bayes classifier.</description>
<process expanded="true" height="362" width="547">
<operator activated="true" class="retrieve" expanded="true" height="60" name="Retrieve" width="90" x="45" y="30">
<parameter key="repository_entry" value="//Samples/data/Iris"/>
<operator activated="true" class="naive_bayes" expanded="true" height="76" name="NaiveBayes" width="90" x="179" y="30"/>
<operator activated="true" class="multiply" expanded="true" height="94" name="Multiply" width="90" x="313" y="30"/>
<operator activated="true" class="execute_script" expanded="true" height="76" name="Execute Script" width="90" x="447" y="30">
<parameter key="script" value=" import com.rapidminer.tools.Ontology; Model m = input[0]; Attribute[] attributes= new Attribute[1]; attributes[0] = AttributeFactory.createAttribute("String description", Ontology.STRING); MemoryExampleTable table = new MemoryExampleTable(attributes); DataRowFactory ROW_FACTORY = new DataRowFactory(0); String[] strings= new String[1]; strings[0]=m.getDistribution(0,0).toString(); DataRow row = ROW_FACTORY.create(strings, attributes); table.addDataRow(row);	 ExampleSet exampleSet = table.createExampleSet(); return exampleSet; "/>
<connect from_op="Retrieve" from_port="output" to_op="NaiveBayes" to_port="training set"/>
<connect from_op="NaiveBayes" from_port="model" to_op="Multiply" to_port="input"/>
<connect from_op="Multiply" from_port="output 1" to_op="Execute Script" to_port="input 1"/>
<connect from_op="Multiply" from_port="output 2" to_port="result 2"/>
<connect from_op="Execute Script" from_port="output 1" to_port="result 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
<portSpacing port="sink_result 2" spacing="0"/>
<portSpacing port="sink_result 3" spacing="0"/>
I couldn't bear the thought of leaving you nothing to do, so I've left the loops and labels for you to thrill over ;D
thanks I will mess around with that. I finally got your demo to run and was able to do a little bit of scripting to pull some info with groovy so I have some where to start at least. | {"url":"https://community.rapidminer.com/discussion/9631/saving-specific-features-from-a-model-for-text-mining","timestamp":"2024-11-04T07:33:20Z","content_type":"text/html","content_length":"300614","record_id":"<urn:uuid:d27225a8-a8e3-4cd3-9e16-4c214c9a3727>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00572.warc.gz"} |
X11 Possible Nonce Vulnerability...
verters com slash blockcheck
tl;dr - it seems that 90% or more of blocks have a nonce value divisible by 256. Can this be confirmed? If so, it essentially means that only mining nonce values divisible by 256 will yield a far
greater chance of finding blocks.
verters com slash blockcheck
tl;dr - it seems that 90% or more of blocks have a nonce value divisible by 256. Can this be confirmed? If so, it essentially means that only mining nonce values divisible by 256 will yield a far
greater chance of finding blocks.
No idea if that's true, however I've a script running right now which checks this.
Will probably run the complete night to parse the complete blockchain, but I'll report back tomorrow.
Gotta catch some sleep now :smile:
No probs.
Remember, to account for blocks that have a very low difficulty - as they throw the equation off.
did a check for the last 50K blocks
~2% for Darkcoin
EDIT: for anyone interested, here is fixed code (using correct port/library)
// this code uses EasyBitcoin a PHP Wrapper API for the JSON RPC interface
$coin = new Bitcoin('darkcoind','password','localhost','9998');
$blockCount = $coin->getblockcount();
$blockHistory = array();
$i = 0;
for($blockID=$blockCount; $blockID>$blockCount-50000; $blockID--) {
// get block hash
$blockHash = $coin->getblockhash($blockID);
// get block
$block = $coin->getblock($blockHash);
$blockHistory[$blockID] = $block["nonce"];
file_put_contents(dirname(__FILE__) . "/nonce2.json",json_encode($blockHistory));
$nonceDB = json_decode(file_get_contents(dirname(__FILE__) . "/nonce2.json"),true);
$groups3 = array();
foreach($nonceDB as $height => $nonce) {
$val = $nonce % 256;
if (!isset($groups3[$val])) $groups3[$val] = 0;
you'll need this also
wget https://raw.githubusercontent.com/aceat64/EasyBitcoin-PHP/master/easybitcoin.php
and curl lib for php
sudo apt-get install php5-curl
Last edited by a moderator:
So the nonces are more divisible by 256... but not by much relative to the entire batch. However, there is still a slight pattern there.
So the nonces are more divisible by 256... but not by much relative to the entire batch. However, there is still a slight pattern there.
I did the same test for the last 10000
blocks, and here nonces divisible by 256 are also leading, but with 0.47% way closer to the theoretically expected value. Darkcoin has 2.748%.
How it can be a problem? We all will be using nonce divisible by 256 and chances for all will be equall again.
How it can be a problem? We all will be using nonce divisible by 256 and chances for all will be equall again.
Yeah, but only if someone notices the community about that pattern.... oh wait! *recompiles his miners*
Yeah, but only if someone notices the community about that pattern.... oh wait! *recompiles his miners*
Also thought about this...but you'd have to solo-mine.
The pools just count the number of your shares, not their quality, so most of the (already small) advantage will go away.
Yeah, but only if someone notices the community about that pattern.... oh wait! *recompiles his miners*
Also thought about this...but you'd have to solo-mine.
The pools just count the number of your shares, not their quality, so most of the (already small) advantage will go away.
I agree with this but it would be interesting to see mh/s comparison of two versions though - will be there any boost in share calculation?
//Still thinking of CPU/GPU internal hardware optimizations nonce like this could hit...//
I agree with this but it would be interesting to see mh/s comparison of two versions though - will be there any boost in share calculation?
//Still thinking of CPU/GPU internal hardware optimizations nonce like this could hit...//
For the shits and giggles I just build a miner which only uses
"nonce mod 256 = 0"
nonces, the pool seems to accept them, but I can't see a change in the hash-rate at the pool (and that's what counts).
I'll let it run over night and see what happens...
Bovine Bit-flipper
Foundation Member
ran the numbers. not seeing it.
Low diff or high diff, the numbers don't show an exploitable skew.
for all 186118 blocks, if nonce mod 'power of 2' == 0, increment value below:
2: 102581,
4: 60053,
8: 39145,
16: 27066,
32: 20272,
64: 13299,
128: 8389,
256: 5120,
512: 2616,
1024: 1382,
2048: 767,
4096: 450,
8192: 294,
16384: 237,
32768: 221,
65536: 214,
131072: 99,
262144: 57,
524288: 25,
1048576: 13,
2097152: 6,
4194304: 4,
8388608: 3,
16777216: 2,
33554432: 2
the code I used: (qnd)
(echo "[" ; for blocks in `seq 1 \`darkcoind getblockcount\`` ; do echo -n '{' ; darkcoind getblock $(darkcoind getblockhash $blocks) | egrep 'height|difficulty|nonce' | tr "\n" " " | sed -e 's/, $//g' ; echo '},' ; done ; echo "]" ) | tee nonce_values.json
then I edited nonce_values.json to remove the last trailing comma, then ran this.
import json
from pprint import pprint
modulii = dict()
mod_values = list(map((lambda x: 2**x), range(1,32)))
def parse_nonce(nonce):
results = []
for mod in mod_values:
if nonce < mod:
remainder = nonce % mod
if remainder == 0:
modulii[mod] = modulii.get(mod, 0) + 1
return results
with open('nonce_values.json') as f:
j= json.load(f)
count = 0
for nonce in j:
count += 1
(results array has the full mod/remainder breakdown if anybody's interested in further analysis.)
Happy hacking!
EDIT: updated the source extraction for easier adding of difficulty exclusions
Last edited by a moderator: | {"url":"https://www.dash.org/forum/index.php?threads/x11-possible-nonce-vulnerability.3226/","timestamp":"2024-11-11T14:45:07Z","content_type":"text/html","content_length":"102917","record_id":"<urn:uuid:9edf80be-2fb4-4de6-88ed-30ae0e9dee92>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00528.warc.gz"} |
Section 3 - Code Table 2 : Shape of the reference system
Code Meaning
0 Earth assumed spherical with radius = 6 367 470.0 m
1 Earth assumed spherical with radius specified (in m) by data producer
2 Earth assumed oblate spheroid with size as determined by IAU in 1965 (major axis = 6 378 160.0 m, minor axis = 6 356 775.0 m, f = 1/297.0)
3 Earth assumed oblate spheroid with major and minor axes specified (in km) by data producer
4 Earth assumed oblate spheroid as defined in IAG-GRS80 model (major axis = 6 378 137.0 m, minor axis = 6 356 752.314 m, f = 1/298.257 222 101)
5 Earth assumed represented by WGS-84 (as used by ICAO since 1998)
6 Earth assumed spherical with radius of 6 371 229.0 m
7 Earth assumed oblate spheroid with major or minor axes specified (in m) by data producer
8 Earth model assumed spherical with radius of 6 371 200 m, but the horizontal datum of the resulting latitude/longitude field is the WGS-84 reference frame
9 Earth represented by the Ordnance Survey Great Britain 1936 Datum, using the Airy 1830 Spheroid, the Greenwich meridian as 0 longitude, and the Newlyn datum as mean sea level, 0 height
10 Earth model assumed WGS84 with corrected geomagnetic coordinates (latitude and longitude) defined by Gustafsson et al., 1992
11 Sun assumed spherical with radius = 695,990,000 m (Allen, C.W., 1976 Astrophysical Quantities (3rd Ed.; London: Athlone) and Stonyhurst latitude and longitude system with origin at the
intersection of the solar central meridian (as seen from Earth) and the solar equator (Thompson, W, Coordinate systems for solar image data, A&A 449, 791–803 (2006))
255 Missing
( 1) WGS84 is a geodetic system that uses IAG-GRS80 as basis.
( 2) With respect to code figures 0, 1, 3, 6 and 7, coordinates can only be unambiguously interpreted, if the coordinate reference system in which they are embedded is known. Therefore, defining the
shape of the Earth alone without coordinate system axis origins is ambiguous. Generally, the prime meridian defined in the geodetic system WGS-84 can be safely assumed to be the longitudinal origin.
However, because these code figures do not specify the longitudinal origin explicitly, it is suggested to contact the originating centre if high precision coordinates are needed, in order to obtain
the precise details of the coordinate system used (effective as from 16 November 2016). | {"url":"https://codes.ecmwf.int/grib/format/grib2/ctables/3/2/","timestamp":"2024-11-14T23:45:06Z","content_type":"text/html","content_length":"19159","record_id":"<urn:uuid:cf1a46df-a515-4753-9bff-99f1c80955ef>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00142.warc.gz"} |
Higher order randomness
Higher order randomness is already developed. It adapts classical randomness notions, using the analogy between \(\Pi^1_1\) and computably enumerable:
A real \(x\) is \(\Pi^1_1\)-ML-random if it belongs to no set \(A\) of the form $$A=\bigcap_n{[W_n]^\prec}$$ where the sets \(W_n\) are uniformly \(\Pi^1_1\) and \(\mu([W_n])\leq{2^{-n}}\).
It also introduces new notions :
A real \(x\) is \(\Delta^1_1\)-random if it avoids all \(\Delta^1_1\) measure 0 properties.
A real \(x\) is \(\Pi^1_1\)-random if it avoids all \(\Pi^1_1\) measure 0 properties.
A real is \(\Pi^1_1\)-random if and only if it is \(\Delta^1_1\)-random and \(\omega_1^{\mathrm{CK}}=\omega_1^{\mathrm{CK(x)}}\).
We have the following strict inclusion: $$\Delta^1_1\text{-randomness}\subsetneq\Pi^1_1\text{-ML-randomness}\subsetneq\Pi^1_1\text{-randomness}$$
Randomness for \(\alpha\)-computability and Infinite Time Turing Machine
The extension of the definitions from higher order randomness is straightforward:
A real \(x\) is random over \(L_\alpha\) if it avoids every set \(A\subseteq\mathbb R\), with Borel code in \(L_\alpha\) and of measure 0.
A real \(x\) is ITTM-random if it avoids every set \(A\subseteq\mathbb R\) ITTM-semi-decidable and of measure 0.
A real \(x\) is \(\alpha\)-ML-random if it avoids every set \(A\) of the form $$A=\bigcap_n{[W_n]^\prec}$$ where the sets \(W_n\) are uniformly \(\alpha\)-computably enumerable and \(\mu([W_n])\leq{2
^{-n}}\). It is ITTM-ML-random if it is \(\Sigma\)-ML-random.
A real \(x\) is ITTM-random if and only if it is random over \(L_\Sigma\) and \(\Sigma^x=\Sigma\).
For which \(\alpha\) do we have : $$\text{randomness over }L_\alpha\subsetneq\alpha\text{-ML-randomness}\text{ ?}$$
Do we have $$\text{randomness over }L_\Sigma\subsetneq\text{ITTM-ML-randomness}\subsetneq\text{ITTM-randomness}\text{ ?}$$
Let \(\alpha\) be such that \(L_\alpha\models\text{"everything is countable"}\). Then the following statements are equivalent:
• \(\alpha\)-ML-randomness is strictly stronger than randomness over \(L_\alpha\).
• \(\alpha\) is projectible in \(\omega\).
• There exists a universal \(\alpha\)-ML-test.
$$\text{Randomness over }L_\lambda\subsetneq\lambda\text{-ML-randomness}$$ $$\text{Randomness over }L_\zeta=\zeta\text{-ML-randomness}$$ $$\text{Randomness over }L_\Sigma\subsetneq\Sigma\text
$$\text{Randomness over }L_\Sigma\subseteq\text{ITTM-randomness}\subsetneq\text{ITTM-ML-randomness}$$
$$\text{Randomness over }L_\Sigma\neq\text{ITTM-randomness ?}$$ Does \(x\) random over \(L_\Sigma\) implies $$L_\zeta[x]\not\prec_2L_\Sigma[x]\text{ ?}$$
If \(x\) is generic over \(L_\Sigma\) then $$L_\zeta[x]\prec_2L_\Sigma[x]$$ We therefore have: $$\text{Genericity over }L_\Sigma = \text{ITTM-genericity}$$ | {"url":"https://choum.net/panglesd/slides/slides-js/slides.html","timestamp":"2024-11-03T09:32:23Z","content_type":"text/html","content_length":"69627","record_id":"<urn:uuid:c7c0de94-b9ce-4f5a-a918-dd66e11c5bd7>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00728.warc.gz"} |
openquake.hazardlib.geo package
openquake.hazardlib.geo package¶
Geographic primitives and utilities¶
Module openquake.hazardlib.geo.geodetic contains functions for geodetic transformations, optimized for massive calculations.
openquake.hazardlib.geo.geodetic.EARTH_ELEVATION = -8.848¶
Maximum elevation on Earth in km.
openquake.hazardlib.geo.geodetic.EARTH_RADIUS = 6371.0¶
Earth radius in km.
openquake.hazardlib.geo.geodetic.azimuth(lons1, lats1, lons2, lats2)[source]¶
Calculate the azimuth between two points or two collections of points.
Parameters are the same as for geodetic_distance().
Implements an “alternative formula” from http://williams.best.vwh.net/avform.htm#Crs
Returns: Azimuth as an angle between direction to north from first point and direction to the second point measured clockwise in decimal degrees.
openquake.hazardlib.geo.geodetic.distance(lons1, lats1, depths1, lons2, lats2, depths2)[source]¶
Calculate a distance between two points (or collections of points) considering points’ depth.
Calls geodetic_distance(), finds the “vertical” distance between points by subtracting one depth from another and combine both using Pythagoras theorem.
Returns: Distance in km, a square root of sum of squares of geodetic distance and vertical distance, which is just a difference between depths.
openquake.hazardlib.geo.geodetic.distance_matrix(lons, lats, diameter=12742.0)[source]¶
• lons – array of m longitudes
Parameters: • lats – array of m latitudes
Returns: matrix of (m, m) distances
openquake.hazardlib.geo.geodetic.distance_to_arc(alon, alat, aazimuth, plons, plats)[source]¶
Calculate a closest distance between a great circle arc and a point (or a collection of points).
• alon, alat (float) – Arc reference point longitude and latitude, in decimal degrees.
Parameters: • azimuth – Arc azimuth (an angle between direction to a north and arc in clockwise direction), measured in a reference point, in decimal degrees.
• plons, plats (float) – Longitudes and latitudes of points to measure distance. Either scalar values or numpy arrays of decimal degrees.
Returns: Distance in km, a scalar value or numpy array depending on plons and plats. A distance is negative if the target point lies on the right hand side of the arc.
Solves a spherical triangle formed by reference point, target point and a projection of target point to a reference great circle arc.
openquake.hazardlib.geo.geodetic.distance_to_semi_arc(alon, alat, aazimuth, plons, plats)[source]¶
In this method we use a reference system centerd on (alon, alat) and with the y-axis corresponding to aazimuth direction to calculate the minimum distance from a semiarc with generates in (alon,
Parameters are the same as for distance_to_arc().
openquake.hazardlib.geo.geodetic.geodetic_distance(lons1, lats1, lons2, lats2, diameter=12742.0)[source]¶
Calculate the geodetic distance between two points or two collections of points.
Parameters are coordinates in decimal degrees. They could be scalar float numbers or numpy arrays, in which case they should “broadcast together”.
Implements http://williams.best.vwh.net/avform.htm#Dist
Returns: Distance in km, floating point scalar or numpy array of such.
openquake.hazardlib.geo.geodetic.intervals_between(lon1, lat1, depth1, lon2, lat2, depth2, length)[source]¶
Find a list of points between two given ones that lie on the same great circle arc and are equally spaced by length km.
• lon1, lat1, depth1 (float) – Coordinates of a point to start placing intervals from. The first point in the resulting list has these coordinates.
• lon2, lat2, depth2 (float) – Coordinates of the other end of the great circle arc segment to put intervals on. The last resulting point might be closer to the first reference
Parameters: point than the second one or further, since the number of segments is taken as rounded division of length between two reference points and length.
• length – Required distance between two subsequent resulting points, in km.
Returns: Tuple of three 1d numpy arrays: longitudes, latitudes and depths of resulting points respectively.
Rounds the distance between two reference points with respect to length and calls npoints_towards().
openquake.hazardlib.geo.geodetic.min_distance_to_segment(seglons, seglats, lons, lats)[source]¶
This function computes the shortest distance to a segment in a 2D reference system.
• seglons – A list or an array of floats specifying the longitude values of the two vertexes delimiting the segment.
• seglats – A list or an array of floats specifying the latitude values of the two vertexes delimiting the segment.
Parameters: • lons – A list or a 1D array of floats specifying the longitude values of the points for which the calculation of the shortest distance is requested.
• lats – A list or a 1D array of floats specifying the latitude values of the points for which the calculation of the shortest distance is requested.
Returns: An array of the same shape as lons which contains for each point defined by (lons, lats) the shortest distance to the segment. Distances are negative for those points that stay on the
‘left side’ of the segment direction and whose projection lies within the segment edges. For all other points, distance is positive.
openquake.hazardlib.geo.geodetic.min_geodetic_distance(a, b)[source]¶
Compute the minimum distance between first mesh and each point of the second mesh when both are defined on the earth surface.
Parameters: • a – a pair of (lons, lats) or an array of cartesian coordinates
• b – a pair of (lons, lats) or an array of cartesian coordinates
openquake.hazardlib.geo.geodetic.npoints_between(lon1, lat1, depth1, lon2, lat2, depth2, npoints)[source]¶
Find a list of specified number of points between two given ones that are equally spaced along the great circle arc connecting given points.
• lon1, lat1, depth1 (float) – Coordinates of a point to start from. The first point in a resulting list has these coordinates.
Parameters: • lon2, lat2, depth2 (float) – Coordinates of a point to finish at. The last point in a resulting list has these coordinates.
• npoints – Integer number of points to return. First and last points count, so if there have to be two intervals, npoints should be 3.
Returns: Tuple of three 1d numpy arrays: longitudes, latitudes and depths of resulting points respectively.
Finds distance between two reference points and calls npoints_towards().
openquake.hazardlib.geo.geodetic.npoints_towards(lon, lat, depth, azimuth, hdist, vdist, npoints)[source]¶
Find a list of specified number of points starting from a given one along a great circle arc with a given azimuth measured in a given point.
• lon, lat, depth (float) – Coordinates of a point to start from. The first point in a resulting list has these coordinates.
• azimuth – A direction representing a great circle arc together with a reference point.
Parameters: • hdist – Horizontal (geodetic) distance from reference point to the last point of the resulting list, in km.
• vdist – Vertical (depth) distance between reference and the last point, in km.
• npoints – Integer number of points to return. First and last points count, so if there have to be two intervals, npoints should be 3.
Returns: Tuple of three 1d numpy arrays: longitudes, latitudes and depths of resulting points respectively.
Implements “completely general but more complicated algorithm” from http://williams.best.vwh.net/avform.htm#LL
openquake.hazardlib.geo.geodetic.point_at(lon, lat, azimuth, distance)[source]¶
Perform a forward geodetic transformation: find a point lying at a given distance from a given one on a great circle arc defined by azimuth.
• lon, lat (float) – Coordinates of a reference point, in decimal degrees.
Parameters: • azimuth – An azimuth of a great circle arc of interest measured in a reference point in decimal degrees.
• distance – Distance to target point in km.
Returns: Tuple of two float numbers: longitude and latitude of a target point in decimal degrees respectively.
Implements the same approach as npoints_towards().
openquake.hazardlib.geo.geodetic.spherical_to_cartesian(lons, lats, depths=None)[source]¶
Return the position vectors (in Cartesian coordinates) of list of spherical coordinates.
For equations see: http://mathworld.wolfram.com/SphericalCoordinates.html.
Parameters are components of spherical coordinates in a form of scalars, lists or numpy arrays. depths can be None in which case it’s considered zero for all points.
Returns: numpy.array of 3d vectors representing points’ coordinates in Cartesian space in km. The array has shape lons.shape + (3,). In particular, if lons and lats are scalars the result is a 3D
vector and if they are vectors the result is a matrix of shape (N, 3).
See also cartesian_to_spherical().
Module openquake.hazardlib.geo.line defines Line.
class openquake.hazardlib.geo.line.Line(points)[source]¶
Bases: object
This class represents a geographical line, which is basically a sequence of geographical points.
A line is defined by at least one point.
Parameters: points (list of Point instances) – The sequence of points defining this line.
Module openquake.hazardlib.geo.mesh defines classes Mesh and its subclass RectangularMesh.
class openquake.hazardlib.geo.mesh.Mesh(lons, lats, depths=None)[source]¶
Bases: object
Mesh object represent a collection of points and provides the most efficient way of keeping those collections in memory.
• lons – A numpy array of longitude values of points. Array may be of arbitrary shape.
Parameters: • lats – Numpy array of latitude values. The array must be of the same shape as lons.
• depths – Either None, which means that all points the mesh consists of are lying on the earth surface (have zero depth) or numpy array of the same shape as previous two.
Mesh object can also be created from a collection of points, see from_points_list().
DIST_TOLERANCE = 0.005¶
Tolerance level to be used in various spatial operations when approximation is required – set to 5 meters.
classmethod from_coords(coords, sort=True)[source]¶
Create a mesh object from a list of 3D coordinates (by sorting them)
Params coords: list of coordinates
Parameters: sort – flag (default True)
Returns: a Mesh instance
classmethod from_points_list(points)[source]¶
Create a mesh object from a collection of points.
Parameters: point – List of Point objects.
Returns: An instance of Mesh with one-dimensional arrays of coordinates from points.
Find closest point of this mesh for each point in the other mesh
Returns: Mesh object of the same shape as mesh with closest points from this one at respective indices.
Get a convex polygon object that contains projections of all the points of the mesh.
Returns: Instance of openquake.hazardlib.geo.polygon.Polygon that is a convex hull around all the points in this mesh. If the original mesh had only one point, the resulting polygon has a
square shape with a side length of 10 meters. If there were only two points, resulting polygon is a stripe 10 meters wide.
Compute and return distances between each pairs of points in the mesh.
This method requires that all the points lie on Earth surface (have zero depth) and coordinate arrays are one-dimensional.
Because of its quadratic space and time complexity this method is safe to use for meshes of up to several thousand points. For mesh of 10k points it needs ~800 Mb for just the resulting
matrix and four times that much for intermediate storage.
Returns: Two-dimensional numpy array, square matrix of distances. The matrix has zeros on main diagonal and positive distances in kilometers on all other cells. That is, value in cell (3, 5)
is the distance between mesh’s points 3 and 5 in km, and it is equal to value in cell (5, 3).
Uses openquake.hazardlib.geo.geodetic.geodetic_distance().
Compute and return Joyner-Boore distance to each point of mesh. Point’s depth is ignored.
See openquake.hazardlib.geo.surface.base.BaseSurface.get_joyner_boore_distance() for definition of this distance.
Returns: numpy array of distances in km of the same shape as mesh. Distance value is considered to be zero if a point lies inside the polygon enveloping the projection of the mesh or on one
of its edges.
Compute and return the minimum distance from the mesh to each point in another mesh.
Returns: numpy array of distances in km of shape (self.size, mesh.size)
Method doesn’t make any assumptions on arrangement of the points in either mesh and instead calculates the distance from each point of this mesh to each point of the target mesh and returns
the lowest found for each.
Return the shape of this mesh.
Returns tuple: The shape of this mesh as (rows, columns)
Returns: an array of shape (N, 3) with the cartesian coordinates
class openquake.hazardlib.geo.mesh.RectangularMesh(lons, lats, depths=None)[source]¶
Bases: openquake.hazardlib.geo.mesh.Mesh
A specification of Mesh that requires coordinate numpy-arrays to be two-dimensional.
Rectangular mesh is meant to represent not just an unordered collection of points but rather a sort of table of points, where index of the point in a mesh is related to it’s position with respect
to neighbouring points.
Parameters: surface – a Surface object
Returns: a 3D array of shape (3, N, M)
Module openquake.hazardlib.geo.nodalplane implements NodalPlane.
class openquake.hazardlib.geo.nodalplane.NodalPlane(strike, dip, rake)[source]¶
Bases: object
Nodal plane represents earthquake rupture orientation and propagation direction.
• strike – Angle between line created by the intersection of rupture plane and the North direction (defined between 0 and 360 degrees).
Parameters: • dip – Angle between earth surface and fault plane (defined between 0 and 90 degrees).
• rake – Angle describing rupture propagation direction (defined between -180 and +180 degrees).
Raises: ValueError – If any of parameters exceeds the definition range.
assert_equal(other, ignore=())¶
classmethod check_dip(dip)[source]¶
Check if dip is in range (0, 90] and raise ValueError otherwise.
classmethod check_rake(rake)[source]¶
Check if rake is in range (-180, 180] and raise ValueError otherwise.
classmethod check_strike(strike)[source]¶
Check if strike is in range [0, 360) and raise ValueError otherwise.
Module openquake.hazardlib.geo.point defines Point.
class openquake.hazardlib.geo.point.Point(longitude, latitude, depth=0.0)[source]¶
Bases: object
This class represents a geographical point in terms of longitude, latitude, and depth (with respect to the Earth surface).
• longitude (float) – Point longitude, in decimal degrees.
Parameters: • latitude (float) – Point latitude, in decimal degrees.
• depth (float) – Point depth (default to 0.0), in km. Depth > 0 indicates a point below the earth surface, and depth < 0 above the earth surface.
EQUALITY_DISTANCE = 0.001¶
The distance between two points for them to be considered equal, in km.
Compute the azimuth (in decimal degrees) between this point and the given point.
Parameters: point (Instance of Point) – Destination point.
Returns: The azimuth, value in a range [0, 360).
Return type: float
closer_than(mesh, radius)[source]¶
Check for proximity of points in the mesh.
Returns: Numpy array of boolean values in the same shape as the mesh coordinate arrays with True on indexes of points that are not further than radius km from this point. Function distance
() is used to calculate distances to points of the mesh. Points of the mesh that lie exactly radius km away from this point also have True in their indices.
Compute the distance (in km) between this point and the given point.
Distance is calculated using pythagoras theorem, where the hypotenuse is the distance and the other two sides are the horizontal distance (great circle distance) and vertical distance (depth
difference between the two locations).
Parameters: point (Instance of Point) – Destination point.
Returns: The distance.
Return type: float
distance_to_mesh(mesh, with_depths=True)[source]¶
Compute distance (in km) between this point and each point of mesh.
• mesh – Mesh of points to calculate distance to.
Parameters: • with_depths – If True (by default), distance is calculated between actual point and the mesh, geodetic distance of projections is combined with vertical distance (difference
of depths). If this is set to False, only geodetic distance between projections is calculated.
Returns: Numpy array of floats of the same shape as mesh with distance values in km in respective indices.
equally_spaced_points(point, distance)[source]¶
Compute the set of points equally spaced between this point and the given point.
• point (Instance of Point) – Destination point.
Parameters: • distance (float) – Distance between points (in km).
Returns: The list of equally spaced points.
Return type: list of Point instances
classmethod from_vector(vector)[source]¶
Create a point object from a 3d vector in Cartesian space.
Parameters: vector – Tuple, list or numpy array of three float numbers representing point coordinates in Cartesian 3d space.
Returns: A Point object created from those coordinates.
Check if this point is defined on the surface (depth is 0.0).
Returns bool: True if this point is on the surface, false otherwise.
point_at(horizontal_distance, vertical_increment, azimuth)[source]¶
Compute the point with given horizontal, vertical distances and azimuth from this point.
• horizontal_distance (float) – Horizontal distance, in km.
Parameters: • vertical_increment (float) – Vertical increment, in km. When positive, the new point has a greater depth. When negative, the new point has a smaller depth.
Returns: The point at the given distances.
Return type: Instance of Point
Create a circular polygon with specified radius centered in the point.
Parameters: radius – Required radius of a new polygon, in km.
Returns: Instance of Polygon that approximates a circle around the point with specified radius.
Generate WKT (Well-Known Text) to represent this point in 2 dimensions (ignoring depth).
Alias for .longitude
Alias for .latitude
Alias for .depth
Module openquake.hazardlib.geo.polygon defines Polygon.
class openquake.hazardlib.geo.polygon.Polygon(points)[source]¶
Bases: object
Polygon objects represent an area on the Earth surface.
Parameters: points – The list of Point objects defining the polygon vertices. The points are connected by great circle arcs in order of appearance. Polygon segment should not cross another
polygon segment. At least three points must be defined.
Raises: ValueError – If points contains less than three unique points or if polygon perimeter intersects itself.
openquake.hazardlib.geo.polygon.UPSAMPLING_STEP_KM = 100¶
Polygon upsampling step for long edges, in kilometers. See get_resampled_coordinates().
openquake.hazardlib.geo.polygon.get_resampled_coordinates(lons, lats)[source]¶
Resample polygon line segments and return the coordinates of the new vertices. This limits distortions when projecting a polygon onto a spherical surface.
Parameters define longitudes and latitudes of a point collection in the form of lists or numpy arrays.
Returns: A tuple of two numpy arrays: longitudes and latitudes of resampled vertices.
Module openquake.hazardlib.geo.utils contains functions that are common to several geographical primitives and some other low-level spatial operations.
class openquake.hazardlib.geo.utils.OrthographicProjection(west, east, north, south)[source]¶
Bases: object
Callable OrthographicProjection object that can perform both forward and reverse projection (converting from longitudes and latitudes to x and y values on 2d-space and vice versa). The call takes
three arguments: first two are numpy arrays of longitudes and latitudes or abscissae and ordinates of points to project and the third one is a boolean that allows to choose what operation is
requested – is it forward or reverse one. True value given to third positional argument (or keyword argument “reverse”) indicates that the projection of points in 2d space back to earth surface
is needed. The default value for “reverse” argument is False, which means forward projection (degrees to kilometers).
Raises ValueError in forward projection mode if any of the target points is further than 90 degree (along the great circle arc) from the projection center.
Parameters are given as floats, representing decimal degrees (first two are longitudes and last two are latitudes). They define a bounding box in a spherical coordinates of the collection of
points that is about to be projected. The center point of the projection (coordinates (0, 0) in Cartesian space) is set to the middle point of that bounding box. The resulting projection is
defined for spherical coordinates that are not further from the bounding box center than 90 degree on the great circle arc.
The result projection is of type Orthographic. This projection is prone to distance, area and angle distortions everywhere outside of the center point, but still can be used for checking shapes:
verifying if line intersects itself (like in line_intersects_itself()) or if point is inside of a polygon (like in openquake.hazardlib.geo.polygon.Polygon.discretize()). It can be also used for
measuring distance to an extent of around 700 kilometers (error doesn’t exceed 1 km up until then).
assert_equal(other, ignore=())¶
classmethod from_lons_lats(lons, lats)[source]¶
exception openquake.hazardlib.geo.utils.SiteAssociationError[source]¶
Bases: Exception
Raised when there are no sites close enough
class openquake.hazardlib.geo.utils.SphericalBB(west, east, north, south)¶
Bases: tuple
openquake.hazardlib.geo.utils.angular_distance(km, lat, lat2=None)[source]¶
Return the angular distance of two points at the given latitude.
>>> '%.3f' % angular_distance(100, lat=40)
>>> '%.3f' % angular_distance(100, lat=80)
openquake.hazardlib.geo.utils.assoc(objects, sitecol, assoc_dist, mode)[source]¶
Associate geographic objects to a site collection.
• objects – something with .lons, .lats or [‘lon’] [‘lat’], or a list of lists of objects with a .location attribute (i.e. assets_by_site)
Parameters: • assoc_dist – the maximum distance for association
• mode – if ‘strict’ fail if at least one site is not associated if ‘error’ fail if all sites are not associated
Returns: (filtered site collection, filtered objects)
Return the spherical coordinates for coordinates in Cartesian space.
This function does an opposite to spherical_to_cartesian().
Parameters: vectors – Array of 3d vectors in Cartesian space of shape (…, 3)
Returns: Tuple of three arrays of the same shape as vectors representing longitude (decimal degrees), latitude (decimal degrees) and depth (km) in specified order.
Given a list of Point objects, return a new list with adjacent duplicate points removed.
openquake.hazardlib.geo.utils.cross_idl(lon1, lon2, *lons)[source]¶
Return True if two longitude values define line crossing international date line.
>>> cross_idl(-45, 45)
>>> cross_idl(-180, -179)
>>> cross_idl(180, 179)
>>> cross_idl(45, -45)
>>> cross_idl(0, 0)
>>> cross_idl(-170, 170)
>>> cross_idl(170, -170)
>>> cross_idl(-180, 180)
Returns: a valid longitude in the range -180 <= lon < 180
>>> fix_lon(11)
>>> fix_lon(181)
>>> fix_lon(-182)
openquake.hazardlib.geo.utils.get_bounding_box(obj, maxdist)[source]¶
Return the dilated bounding box of a geometric object
openquake.hazardlib.geo.utils.get_longitudinal_extent(lon1, lon2)[source]¶
Return the distance between two longitude values as an angular measure. Parameters represent two longitude values in degrees.
Returns: Float, the angle between lon1 and lon2 in degrees. Value is positive if lon2 is on the east from lon1 and negative otherwise. Absolute value of the result doesn’t exceed 180 for valid
parameters values.
openquake.hazardlib.geo.utils.get_middle_point(lon1, lat1, lon2, lat2)[source]¶
Given two points return the point exactly in the middle lying on the same great circle arc.
Parameters are point coordinates in degrees.
Returns: Tuple of longitude and latitude of the point in the middle.
openquake.hazardlib.geo.utils.get_spherical_bounding_box(lons, lats)[source]¶
Given a collection of points find and return the bounding box, as a pair of longitudes and a pair of latitudes.
Parameters define longitudes and latitudes of a point collection respectively in a form of lists or numpy arrays.
Returns: A tuple of four items. These items represent western, eastern, northern and southern borders of the bounding box respectively. Values are floats in decimal degrees.
Raises: ValueError – If points collection has the longitudinal extent of more than 180 degrees (it is impossible to define a single hemisphere bound to poles that would contain the whole
openquake.hazardlib.geo.utils.line_intersects_itself(lons, lats, closed_shape=False)[source]¶
Return True if line of points intersects itself. Line with the last point repeating the first one considered intersecting itself.
The line is defined by lists (or numpy arrays) of points’ longitudes and latitudes (depth is not taken into account).
Parameters: closed_shape – If True the line will be checked twice: first time with its original shape and second time with the points sequence being shifted by one point (the last point becomes
first, the first turns second and so on). This is useful for checking that the sequence of points defines a valid Polygon.
openquake.hazardlib.geo.utils.normalize_lons(l1, l2)[source]¶
An international date line safe way of returning a range of longitudes.
>>> normalize_lons(20, 30) # no IDL within the range
[(20, 30)]
>>> normalize_lons(-17, +17) # no IDL within the range
[(-17, 17)]
>>> normalize_lons(-178, +179)
[(-180, -178), (179, 180)]
>>> normalize_lons(178, -179)
[(-180, -179), (178, 180)]
>>> normalize_lons(179, -179)
[(-180, -179), (179, 180)]
>>> normalize_lons(177, -176)
[(-180, -176), (177, 180)]
Get unit vector for a given one.
Parameters: vector – Numpy vector as coordinates in Cartesian space, or an array of such.
Returns: Numpy array of the same shape and structure where all vectors are normalized. That is, each coordinate component is divided by its vector’s length.
This fits an n-dimensional plane to a set of points. See http://stackoverflow.com/questions/12299540/plane-fitting-to-4-or-more-xyz-points
Parameters: points – An instance of :class:~numpy.ndarray. The number of columns must be equal to three.
Returns: A point on the plane and the normal to the plane.
openquake.hazardlib.geo.utils.point_to_polygon_distance(polygon, pxx, pyy)[source]¶
Calculate the distance to polygon for each point of the collection on the 2d Cartesian plane.
• polygon – Shapely “Polygon” geometry object.
Parameters: • pxx – List or numpy array of abscissae values of points to calculate the distance from.
• pyy – Same structure as pxx, but with ordinate values.
Returns: Numpy array of distances in units of coordinate system. Points that lie inside the polygon have zero distance.
openquake.hazardlib.geo.utils.triangle_area(e1, e2, e3)[source]¶
Get the area of triangle formed by three vectors.
Parameters are three three-dimensional numpy arrays representing vectors of triangle’s edges in Cartesian space.
Returns: Float number, the area of the triangle in squared units of coordinates, or numpy array of shape of edges with one dimension less.
Uses Heron formula, see http://mathworld.wolfram.com/HeronsFormula.html.
openquake.hazardlib.geo.utils.within(bbox, lonlat_index)[source]¶
• bbox – a bounding box in lon, lat
Parameters: • lonlat_index – an rtree index in lon, lat
Returns: array of indices within the bounding box | {"url":"https://docs.openquake.org/oq-engine/3.2/openquake.hazardlib.geo.html","timestamp":"2024-11-03T02:58:33Z","content_type":"application/xhtml+xml","content_length":"139186","record_id":"<urn:uuid:10697589-5774-4f88-a5e4-5d166f0f9cee>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00293.warc.gz"} |
6.10.3. Quantified constraints
6.10.3. Quantified constraints¶
Allow constraints to quantify over types.
The extension QuantifiedConstraints introduces quantified constraints, which give a new level of expressiveness in constraints. For example, consider
data Rose f a = Branch a (f (Rose f a))
instance (Eq a, ???) => Eq (Rose f a)
(Branch x1 c1) == (Branch x2 c2)
= x1==x1 && c1==c2
From the x1==x2 we need Eq a, which is fine. From c1==c2 we need Eq (f (Rose f a)) which is not fine in Haskell today; we have no way to solve such a constraint.
QuantifiedConstraints lets us write this
instance (Eq a, forall b. (Eq b) => Eq (f b))
=> Eq (Rose f a)
(Branch x1 c1) == (Branch x2 c2)
= x1==x1 && c1==c2
Here, the quantified constraint forall b. (Eq b) => Eq (f b) behaves a bit like a local instance declaration, and makes the instance typeable.
The paper Quantified class constraints (by Bottu, Karachalias, Schrijvers, Oliveira, Wadler, Haskell Symposium 2017) describes this feature in technical detail, with examples, and so is a primary
reference source for this feature.
6.10.3.1. Motivation¶
Introducing quantified constraints offers two main benefits:
• Firstly, they enable terminating resolution where this was not possible before. Consider for instance the following instance declaration for the general rose datatype
data Rose f x = Rose x (f (Rose f x))
instance (Eq a, forall b. Eq b => Eq (f b)) => Eq (Rose f a) where
(Rose x1 rs1) == (Rose x2 rs2) = x1 == x2 && rs1 == rs2
This extension allows us to write constraints of the form forall b. Eq b => Eq (f b), which is needed to solve the Eq (f (Rose f x)) constraint arising from the second usage of the (==) method.
• Secondly, quantified constraints allow for more concise and precise specifications. As an example, consider the MTL type class for monad transformers:
class Trans t where
lift :: Monad m => m a -> (t m) a
The developer knows that a monad transformer takes a monad m into a new monad t m. But this property is not formally specified in the above declaration. This omission becomes an issue when
defining monad transformer composition:
newtype (t1 * t2) m a = C { runC :: t1 (t2 m) a }
instance (Trans t1, Trans t2) => Trans (t1 * t2) where
lift = C . lift . lift
The goal here is to lift from monad m to t2 m and then lift this again into t1 (t2 m). However, this second lift can only be accepted when (t2 m) is a monad and there is no way of establishing
that this fact universally holds.
Quantified constraints enable this property to be made explicit in the Trans class declaration:
class (forall m. Monad m => Monad (t m)) => Trans t where
lift :: Monad m => m a -> (t m) a
This idea is very old; see Section 7 of Derivable type classes.
6.10.3.2. Syntax changes¶
Haskell 2010 defines a context (the bit to the left of => in a type) like this
context ::= class
| ( class1, ..., classn )
class ::= qtycls tyvar
| qtycls (tyvar atype1 ... atypen)
We extend class (warning: this is a rather confusingly named non-terminal symbol) with two extra forms, namely precisely what can appear in an instance declaration
class ::= ...
| [context =>] qtycls inst
| [context =>] tyvar inst
The definition of inst is unchanged from the Haskell Report (roughly, just a type). The context => part is optional. That is the only syntactic change to the language.
• Where GHC allows extensions in instance declarations we allow exactly the same extensions to this new form of class. Specifically, with ExplicitForAll and MultiParamTypeClasses the syntax becomes
class ::= ...
| [forall tyvars .] [context =>] qtycls inst1 ... instn
| [forall tyvars .] [context =>] tyvar inst1 ... instn
Note that an explicit forall is often absolutely essential. Consider the rose-tree example
instance (Eq a, forall b. Eq b => Eq (f b)) => Eq (Rose f a) where ...
Without the forall b, the type variable b would be quantified over the whole instance declaration, which is not what is intended.
• One of these new quantified constraints can appear anywhere that any other constraint can, not just in instance declarations. Notably, it can appear in a type signature for a value binding, data
constructor, or expression. For example
f :: (Eq a, forall b. Eq b => Eq (f b)) => Rose f a -> Rose f a -> Bool
f t1 t2 = not (t1 == t2)
• The form with a type variable at the head allows this:
instance (forall xx. c (Free c xx)) => Monad (Free c) where
Free f >>= g = f g
See Iceland Jack’s summary. The key point is that the bit to the right of the => may be headed by a type variable (c in this case), rather than a class. It should not be one of the forall’d
variables, though.
(NB: this goes beyond what is described in the paper, but does not seem to introduce any new technical difficulties.)
6.10.3.4. Superclasses¶
Suppose we have:
f :: forall m. (forall a. Ord a => Ord (m a)) => m Int -> Bool
f x = x == x
From the x==x we need an Eq (m Int) constraint, but the context only gives us a way to figure out Ord (m a) constraints. But from the given constraint forall a. Ord a => Ord (m a) we derive a second
given constraint forall a. Ord a => Eq (m a), and from that we can readily solve Eq (m Int). This process is very similar to the way that superclasses already work: given an Ord a constraint we
derive a second given Eq a constraint.
NB: This treatment of superclasses goes beyond the paper, but is specifically desired by users.
6.10.3.5. Overlap¶
Quantified constraints can potentially lead to overlapping local axioms. Consider for instance the following example:
class A a where {}
class B a where {}
class C a where {}
class (A a => C a) => D a where {}
class (B a => C a) => E a where {}
class C a => F a where {}
instance (B a, D a, E a) => F a where {}
When type checking the instance declaration for F a, we need to check that the superclass C of F holds. We thus try to entail the constraint C a under the theory containing:
• The instance axioms : (B a, D a, E a) => F a
• The local axioms from the instance context : B a, D a and E a
• The closure of the superclass relation over these local axioms : A a => C a and B a => C a
However, the A a => C a and B a => C a axioms both match the wanted constraint C a. There are several possible approaches for handling these overlapping local axioms:
• Pick first. We can simply select the first matching axiom we encounter. In the above example, this would be A a => C a. We’d then need to entail A a, for which we have no matching axioms
available, causing the above program to be rejected.
But suppose we made a slight adjustment to the order of the instance context, putting E a before D a:
instance (B a, E a, D a) => F a where {}
The first matching axiom we encounter while entailing C a, is B a => C a. We have a local axiom B a available, so now the program is suddenly accepted. This behaviour, where the ordering of an
instance context determines whether or not the program is accepted, seems rather confusing for the developer.
• Reject if in doubt. An alternative approach would be to check for overlapping axioms, when solving a constraint. When multiple matching axioms are discovered, we reject the program. This approach
is a bit conservative, in that it may reject working programs. But it seem much more transparent towards the developer, who can be presented with a clear message, explaining why the program is
• Backtracking. Lastly, a simple form of backtracking could be introduced. We simply select the first matching axiom we encounter and when the entailment fails, we backtrack and look for other
axioms that might match the wanted constraint.
This seems the most intuitive and transparent approach towards the developer, who no longer needs to concern himself with the fact that his code might contain overlapping axioms or with the
ordering of his instance contexts. But backtracking would apply equally to ordinary instance selection (in the presence of overlapping instances), so it is a much more pervasive change, with
substantial consequences for the type inference engine.
GHC adopts Reject if in doubt for now. We can see how painful it is in practice, and try something more ambitious if necessary.
6.10.3.6. Instance lookup¶
In the light of the overlap decision, instance lookup works like this when trying to solve a class constraint C t
1. First see if there is a given un-quantified constraint C t. If so, use it to solve the constraint.
2. If not, look at all the available given quantified constraints; if exactly one matches C t, choose it; if more than one matches, report an error.
3. If no quantified constraints match, look up in the global instances, as described in Instance declarations and resolution and Overlapping instances.
6.10.3.7. Termination¶
GHC uses the Paterson Conditions to ensure that instance resolution terminates. How are those rules modified for quantified constraints? In two ways.
• Each quantified constraint, taken by itself, must satisfy the termination rules for an instance declaration.
• After “for each class constraint (C t1 ... tn)”, add “or each quantified constraint (forall as. context => C t1 .. tn)“
Note that the second item only at the head of the quantified constraint, not its context. Reason: the head is the new goal that has to be solved if we use the instance declaration.
Of course, UndecidableInstances lifts the Paterson Conditions, as now.
6.10.3.8. Coherence¶
Although quantified constraints are a little like local instance declarations, they differ in one big way: the local instances are written by the compiler, not the user, and hence cannot introduce
incoherence. Consider
f :: (forall a. Eq a => Eq (f a)) => f b -> f Bool
f x = ...rhs...
In ...rhs... there is, in effect a local instance for Eq (f a) for any a. But at a call site for f the compiler itself produces evidence to pass to f. For example, if we called f Nothing, then f is
Maybe and the compiler must prove (at the call site) that forall a. Eq a => Eq (Maybe a) holds. It can do this easily, by appealing to the existing instance declaration for Eq (Maybe a).
In short, quantified constraints do not introduce incoherence. | {"url":"https://downloads.haskell.org/ghc/9.0.2/docs/html/users_guide/exts/quantified_constraints.html","timestamp":"2024-11-07T07:07:00Z","content_type":"text/html","content_length":"45932","record_id":"<urn:uuid:a2983642-7e57-4a0d-b1c1-f0b469b13c85>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00308.warc.gz"} |
Multiplication Facts 1 12 Worksheets Pdf
Mathematics, especially multiplication, develops the foundation of countless scholastic disciplines and real-world applications. Yet, for numerous learners, understanding multiplication can pose an
obstacle. To resolve this obstacle, educators and moms and dads have actually accepted a powerful tool: Multiplication Facts 1 12 Worksheets Pdf.
Introduction to Multiplication Facts 1 12 Worksheets Pdf
Multiplication Facts 1 12 Worksheets Pdf
Multiplication Facts 1 12 Worksheets Pdf -
Welcome to The Multiplying 1 to 12 by 12 100 Questions A Math Worksheet from the Multiplication Worksheets Page at Math Drills This math worksheet was created or last revised on 2021 02 19 and has
been viewed 964 times this week and 1 205 times this month
These multiplication facts worksheets provide various exercise to help students gain fluency in the multiplication facts up to 12 x 12 Free Worksheets Math Drills Multiplication Facts Printable
Relevance of Multiplication Technique Comprehending multiplication is critical, laying a solid structure for advanced mathematical ideas. Multiplication Facts 1 12 Worksheets Pdf offer structured and
targeted technique, promoting a much deeper understanding of this essential arithmetic procedure.
Evolution of Multiplication Facts 1 12 Worksheets Pdf
1 12 multiplication Worksheet Page Learning Printable
1 12 multiplication Worksheet Page Learning Printable
Fact Families Multiplication Division In this area of our site you ll find fact family circles fact family houses fact family triangles and factor factor product boxes Multiplication Tables This page
has printable multiplication tables Includes tables that are completely filled in partly filled in and blank Properties of Multiplication
We have thousands of multiplication worksheets This page will link you to facts up to 12s and fact families We also have sets of worksheets for multiplying by 3s only 4s only 5s only etc Practice
more advanced multi digit problems Print basic multiplication and division fact families and number bonds
From traditional pen-and-paper exercises to digitized interactive styles, Multiplication Facts 1 12 Worksheets Pdf have developed, satisfying varied understanding designs and preferences.
Types of Multiplication Facts 1 12 Worksheets Pdf
Standard Multiplication Sheets Basic workouts concentrating on multiplication tables, aiding learners construct a solid arithmetic base.
Word Issue Worksheets
Real-life scenarios incorporated into troubles, enhancing important reasoning and application skills.
Timed Multiplication Drills Tests designed to improve rate and precision, aiding in fast psychological math.
Advantages of Using Multiplication Facts 1 12 Worksheets Pdf
10 Best Images Of Multiplication Worksheets 1 12 Multiplication Worksheets 1 10 100 Division
10 Best Images Of Multiplication Worksheets 1 12 Multiplication Worksheets 1 10 100 Division
These charts cover multiplication facts from 1 to 12 so they re perfect for all the facts kids need to master You get a variety of styles and colors including Completed multiplication charts in color
and black and white Blank multiplication chart in black and white Diagonal shaded multiplication charts blank and complete
To complete these worksheets kids draw pictures of equal groups to solve each multiplication expression Multiplication Facts Using an Area Model This set of color the product pages helps kids model
multiplication facts using area To complete them kids color the area on a grid to find the answer to the given multiplication fact
Enhanced Mathematical Skills
Regular practice hones multiplication efficiency, boosting total mathematics capabilities.
Improved Problem-Solving Abilities
Word problems in worksheets develop logical reasoning and strategy application.
Self-Paced Discovering Advantages
Worksheets accommodate private discovering speeds, promoting a comfy and versatile knowing atmosphere.
Just How to Develop Engaging Multiplication Facts 1 12 Worksheets Pdf
Incorporating Visuals and Shades Lively visuals and shades capture focus, making worksheets aesthetically appealing and involving.
Including Real-Life Circumstances
Relating multiplication to everyday situations includes relevance and functionality to exercises.
Customizing Worksheets to Different Ability Degrees Tailoring worksheets based upon varying effectiveness degrees makes sure inclusive discovering. Interactive and Online Multiplication Resources
Digital Multiplication Tools and Gamings Technology-based resources provide interactive understanding experiences, making multiplication interesting and enjoyable. Interactive Websites and
Applications On-line platforms give varied and obtainable multiplication method, supplementing conventional worksheets. Customizing Worksheets for Numerous Knowing Styles Aesthetic Learners Aesthetic
help and diagrams aid understanding for students inclined toward visual understanding. Auditory Learners Spoken multiplication troubles or mnemonics cater to learners who comprehend principles with
acoustic means. Kinesthetic Learners Hands-on tasks and manipulatives support kinesthetic learners in recognizing multiplication. Tips for Effective Application in Knowing Consistency in Practice
Regular practice enhances multiplication skills, promoting retention and fluency. Balancing Rep and Range A mix of recurring workouts and varied problem layouts preserves interest and understanding.
Giving Positive Responses Responses help in determining locations of improvement, motivating continued development. Obstacles in Multiplication Method and Solutions Inspiration and Interaction
Obstacles Dull drills can result in disinterest; cutting-edge strategies can reignite inspiration. Overcoming Fear of Math Adverse assumptions around mathematics can prevent progress; developing a
favorable learning atmosphere is important. Influence of Multiplication Facts 1 12 Worksheets Pdf on Academic Performance Studies and Research Study Searchings For Research study suggests a positive
relationship in between consistent worksheet usage and boosted mathematics performance.
Multiplication Facts 1 12 Worksheets Pdf emerge as versatile devices, cultivating mathematical proficiency in learners while accommodating diverse knowing designs. From fundamental drills to
interactive on-line resources, these worksheets not only improve multiplication abilities but likewise promote essential reasoning and analytical abilities.
Printable Multiplication Facts 0 12 PrintableMultiplication
Multiplication Problems Between 0 12 Worksheets Multiplication worksheets Math worksheets
Check more of Multiplication Facts 1 12 Worksheets Pdf below
Times Tables Printables Web Enhance Multiplication Skills With These Times Table Test Printables
Multiplying 1 To 12 By 12 A
Free Printable Multiplication Facts 1 12 And Multiplication Chart Multiplication
Multiplication Facts 1 12 Printable Free Printable
11 Best Images Of 1 Through 12 Multiplication Worksheets 2nd Grade Math Worksheets
Printable Multiplication Chart 1 12 PrintableMultiplication
Multiplication facts worksheets K5 Learning
These multiplication facts worksheets provide various exercise to help students gain fluency in the multiplication facts up to 12 x 12 Free Worksheets Math Drills Multiplication Facts Printable
Multiplication Worksheets Up to 12s Super Teacher Worksheets
Practice basic multiplication from 0 12 with this printable puzzle activity 3rd through 5th Grades View PDF Multiplication Memory Match Up to 12s Lay all of the cards on the table face down Players
take turns trying to find multiplication facts and their matches 3rd and 4th Grades View PDF Multiplication Board Game To the Moon 0 12
These multiplication facts worksheets provide various exercise to help students gain fluency in the multiplication facts up to 12 x 12 Free Worksheets Math Drills Multiplication Facts Printable
Practice basic multiplication from 0 12 with this printable puzzle activity 3rd through 5th Grades View PDF Multiplication Memory Match Up to 12s Lay all of the cards on the table face down Players
take turns trying to find multiplication facts and their matches 3rd and 4th Grades View PDF Multiplication Board Game To the Moon 0 12
Multiplication Facts 1 12 Printable Free Printable
Multiplying 1 To 12 By 12 A
11 Best Images Of 1 Through 12 Multiplication Worksheets 2nd Grade Math Worksheets
Printable Multiplication Chart 1 12 PrintableMultiplication
Multiplication Facts 1 12 Printable Free Printable
100 Vertical Questions Multiplication Facts 1 2 By 1 10 A Multiplication Worksheet
100 Vertical Questions Multiplication Facts 1 2 By 1 10 A Multiplication Worksheet
Printable Multiplication Facts Tables Activities For Kids
FAQs (Frequently Asked Questions).
Are Multiplication Facts 1 12 Worksheets Pdf ideal for every age groups?
Yes, worksheets can be tailored to various age and ability degrees, making them versatile for different students.
Just how commonly should trainees practice utilizing Multiplication Facts 1 12 Worksheets Pdf?
Regular technique is essential. Routine sessions, preferably a few times a week, can yield considerable improvement.
Can worksheets alone boost math abilities?
Worksheets are an important tool yet must be supplemented with varied discovering techniques for extensive skill advancement.
Exist online systems supplying free Multiplication Facts 1 12 Worksheets Pdf?
Yes, many instructional web sites provide open door to a vast array of Multiplication Facts 1 12 Worksheets Pdf.
Just how can parents support their kids's multiplication practice at home?
Urging consistent technique, providing assistance, and producing a favorable learning environment are valuable actions. | {"url":"https://crown-darts.com/en/multiplication-facts-1-12-worksheets-pdf.html","timestamp":"2024-11-12T06:07:38Z","content_type":"text/html","content_length":"28921","record_id":"<urn:uuid:2ad88b81-8f35-44ed-a5a0-6f7c7db51c61>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00388.warc.gz"} |
Printable Graph Paper With Axis And Numbers
Printable Graph Paper With Axis And Numbers - Web printable graph paper with axis. Web if you are using graph paper for mathematics then it is essential to have proper axis and numbers on it. Graph
paper with axis templates is useful for. So, in a likewise manner all the quadrants have their respective. Web we are sharing free online graph paper for blank and printable grid paper template with
a4, axis,1 inch, with numbers, polar, coordinate, engineering,. One can quickly draw data on graph paper. Web use our free graph paper generator to create and customize pdfs of printable graph paper.
Making an axis on graph paper by yourself can lead to. This generator creates sheets that have multiple graphs on them. Web you can use these graph papers to plot any numbers on the graph on its axis
and serve your purposes in an accurate manner.
free graph paper with axis template in pdf printable graph paper with
Show grid show x and y axis. Graph paper with axis templates is useful for. Web printable graph paper with axis and numbers pdf template is a popular format for graphic representation of relations.
So, in a likewise manner all the quadrants have their respective. This generator creates sheets that have multiple graphs on them.
Printable Graph Paper With Axis And Numbers Pdf Printable Word Searches
One can quickly draw data on graph paper. Web if you are using graph paper for mathematics then it is essential to have proper axis and numbers on it. Web printable graph paper with axis and numbers
pdf template is a popular format for graphic representation of relations. Web use our free graph paper generator to create and customize pdfs.
free 8 numbered graph paper templates in pdf graph paper stickers
Web printable graph paper with axis and numbers pdf template is a popular format for graphic representation of relations. Web free online graph paper for blank and printable grid paper templates with
a4, axis,1 inch, with numbers, polar, coordinate, engineering, etc. Making an axis on graph paper by yourself can lead to. So, in a likewise manner all the quadrants.
Printable Graph Paper With Axis And Numbers Pdf Printable Word Searches
Graph paper with axis templates is useful for. Web if you are using graph paper for mathematics then it is essential to have proper axis and numbers on it. Web we are sharing free online graph paper
for blank and printable grid paper template with a4, axis,1 inch, with numbers, polar, coordinate, engineering,. Web free online graph paper for blank.
5+ Free Printable Graph Paper with Axis (X & Y) & Numbers
Web printable graph paper with axis and numbers pdf template is a popular format for graphic representation of relations. This generator creates sheets that have multiple graphs on them. Web if you
are using graph paper for mathematics then it is essential to have proper axis and numbers on it. As we are well aware that graph paper comes with.
PrintableGraphPaperWithXAndYAxisE1510761194205 On The Way
This generator creates sheets that have multiple graphs on them. Show grid show x and y axis. As we are well aware that graph paper comes with a number of usages at a time and in accordance with the
same the type of graph. Web printable graph paper with axis and numbers pdf template is a popular format for graphic.
Graph Paper with Numbers Printable PDF Online Get Graph Paper
Graph paper with axis templates is useful for. Web free online graph paper for blank and printable grid paper templates with a4, axis,1 inch, with numbers, polar, coordinate, engineering, etc. This
generator creates sheets that have multiple graphs on them. Web we are sharing free online graph paper for blank and printable grid paper template with a4, axis,1 inch, with.
Printable 4 Quadrant Graph Paper With Numbered X And Y Axis Free
This generator creates sheets that have multiple graphs on them. Web use our free graph paper generator to create and customize pdfs of printable graph paper. Web if you are using graph paper for
mathematics then it is essential to have proper axis and numbers on it. Web printable graph paper with axis and numbers pdf template is a popular.
FREE 22+ Sample Graph Paper Templates in MS Word PDF PSD
Web printable graph paper with axis. Web we are sharing free online graph paper for blank and printable grid paper template with a4, axis,1 inch, with numbers, polar, coordinate, engineering,. One
can quickly draw data on graph paper. So, in a likewise manner all the quadrants have their respective. Printable graph paper with axis is helpful for people when they.
Printable Graph Paper With Axis And Numbers Pdf Printable Word Searches
As we are well aware that graph paper comes with a number of usages at a time and in accordance with the same the type of graph. Web printable graph paper with axis and numbers pdf template is a
popular format for graphic representation of relations. Making an axis on graph paper by yourself can lead to. Customize features like.
Web we are sharing free online graph paper for blank and printable grid paper template with a4, axis,1 inch, with numbers, polar, coordinate, engineering,. Web printable graph paper with axis and
numbers pdf template is a popular format for graphic representation of relations. So, in a likewise manner all the quadrants have their respective. Web printable graph paper with axis. Web use our
free graph paper generator to create and customize pdfs of printable graph paper. Customize features like grid size, units, x and y axes, and more. One can quickly draw data on graph paper. As we are
well aware that graph paper comes with a number of usages at a time and in accordance with the same the type of graph. Web you can use these graph papers to plot any numbers on the graph on its axis
and serve your purposes in an accurate manner. Printable graph paper with axis is helpful for people when they are working with math or physics. Graph paper with axis templates is useful for. Show
grid show x and y axis. Web free online graph paper for blank and printable grid paper templates with a4, axis,1 inch, with numbers, polar, coordinate, engineering, etc. Web if you are using graph
paper for mathematics then it is essential to have proper axis and numbers on it. This generator creates sheets that have multiple graphs on them. Making an axis on graph paper by yourself can lead
So, In A Likewise Manner All The Quadrants Have Their Respective.
Printable graph paper with axis is helpful for people when they are working with math or physics. One can quickly draw data on graph paper. This generator creates sheets that have multiple graphs on
them. Web printable graph paper with axis.
Web We Are Sharing Free Online Graph Paper For Blank And Printable Grid Paper Template With A4, Axis,1 Inch, With Numbers, Polar, Coordinate, Engineering,.
Making an axis on graph paper by yourself can lead to. Web you can use these graph papers to plot any numbers on the graph on its axis and serve your purposes in an accurate manner. Web free online
graph paper for blank and printable grid paper templates with a4, axis,1 inch, with numbers, polar, coordinate, engineering, etc. Web if you are using graph paper for mathematics then it is essential
to have proper axis and numbers on it.
Graph Paper With Axis Templates Is Useful For.
Web printable graph paper with axis and numbers pdf template is a popular format for graphic representation of relations. Customize features like grid size, units, x and y axes, and more. Show grid
show x and y axis. As we are well aware that graph paper comes with a number of usages at a time and in accordance with the same the type of graph.
Web Use Our Free Graph Paper Generator To Create And Customize Pdfs Of Printable Graph Paper.
Related Post: | {"url":"https://dl-uk.apowersoft.com/en/printable-graph-paper-with-axis-and-numbers.html","timestamp":"2024-11-11T04:19:06Z","content_type":"text/html","content_length":"30771","record_id":"<urn:uuid:efd901df-33f4-43de-8bb9-aa7059d5a1e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00515.warc.gz"} |
Subtracting Large Whole Numbers Worksheets
Subtracting Large Whole Numbers Worksheets serve as foundational devices in the realm of mathematics, providing an organized yet versatile platform for students to discover and grasp numerical
principles. These worksheets supply a structured technique to understanding numbers, supporting a solid foundation whereupon mathematical efficiency flourishes. From the most basic checking workouts
to the ins and outs of sophisticated computations, Subtracting Large Whole Numbers Worksheets satisfy students of diverse ages and skill levels.
Introducing the Essence of Subtracting Large Whole Numbers Worksheets
Subtracting Large Whole Numbers Worksheets
Subtracting Large Whole Numbers Worksheets -
Subtracting large numbers in columns Grade 5 Subtraction Worksheet Find the difference 3 5 7 9 57 644 196 49 732 152 7 833 285 850 909 6 134 175 624 228 334 084 81 100 81 835 443 4 916 527 2 4 6 8
858 074 674 720 22 647 17 906 94 101 219 39 009 017 95 561 655 64 142 10 26 031 139 81 399 Online reading math for K 5
These subtraction worksheets range from simple subtraction of 2 digit numbers without regrouping to subtraction of large numbers in columns with multiple regroupings Free Worksheets Math Drills
Subtraction Printable
At their core, Subtracting Large Whole Numbers Worksheets are cars for conceptual understanding. They encapsulate a myriad of mathematical principles, leading students through the maze of numbers
with a collection of engaging and purposeful workouts. These worksheets go beyond the borders of standard rote learning, encouraging active engagement and cultivating an intuitive grasp of numerical
Nurturing Number Sense and Reasoning
Large Print Subtracting 4 Digit Numbers With All Regrouping G
Large Print Subtracting 4 Digit Numbers With All Regrouping G
Grade 5 math worksheets on subtracting large numbers Free pdf worksheets from K5 Learning s online reading and math program
Excel in subtracting large numbers with our multi digit subtraction worksheets Chock a block with practice in finding the difference between two large numbers ranging between 4 digits and 7 digits
this bundle of subtracting large numbers pdfs also gives insight into subtraction with borrowing or regrouping
The heart of Subtracting Large Whole Numbers Worksheets hinges on growing number sense-- a deep understanding of numbers' definitions and interconnections. They motivate expedition, inviting learners
to study math procedures, figure out patterns, and unlock the enigmas of series. With provocative challenges and logical puzzles, these worksheets become entrances to sharpening reasoning abilities,
supporting the logical minds of budding mathematicians.
From Theory to Real-World Application
Grade 5 Math Worksheets Round Large Numbers To The Underlined Digit K5 Learning Large Numbers
Grade 5 Math Worksheets Round Large Numbers To The Underlined Digit K5 Learning Large Numbers
Subtracting unlike fractions worksheets Subtracting unlike fractions hard worksheets Find the missing number in subtracting decimals Find the missing numbers in multiplication of decimal numbers Long
Division of Decimal Numbers by whole numbers Rounding Numbers Factoring Numbers Worksheets
In and Out Boxes Explore the Subtraction Worksheets in Detail Subtraction Tables and Charts Get the first step right in subtracting numbers with our subtraction tables and charts available in color
and printer friendly versions Use the blank charts and tables to boost a young learner s skill at subtraction Picture Subtraction Facts
Subtracting Large Whole Numbers Worksheets work as avenues bridging academic abstractions with the palpable realities of everyday life. By infusing useful circumstances into mathematical exercises,
learners witness the relevance of numbers in their environments. From budgeting and measurement conversions to comprehending analytical information, these worksheets empower trainees to wield their
mathematical prowess past the confines of the classroom.
Diverse Tools and Techniques
Versatility is inherent in Subtracting Large Whole Numbers Worksheets, utilizing an arsenal of pedagogical tools to deal with varied learning designs. Aesthetic help such as number lines,
manipulatives, and digital resources function as buddies in imagining abstract principles. This diverse method ensures inclusivity, accommodating learners with different choices, toughness, and
cognitive styles.
Inclusivity and Cultural Relevance
In a significantly varied world, Subtracting Large Whole Numbers Worksheets embrace inclusivity. They transcend social borders, integrating instances and problems that resonate with learners from
diverse histories. By incorporating culturally relevant contexts, these worksheets foster a setting where every student really feels stood for and valued, enhancing their connection with mathematical
Crafting a Path to Mathematical Mastery
Subtracting Large Whole Numbers Worksheets chart a program towards mathematical fluency. They impart willpower, essential reasoning, and analytical skills, essential attributes not only in maths but
in numerous elements of life. These worksheets encourage students to navigate the complex terrain of numbers, nurturing a profound admiration for the beauty and logic inherent in maths.
Embracing the Future of Education
In an era marked by technological advancement, Subtracting Large Whole Numbers Worksheets effortlessly adjust to digital systems. Interactive user interfaces and electronic sources increase typical
understanding, supplying immersive experiences that go beyond spatial and temporal limits. This amalgamation of typical methods with technological developments heralds a promising age in education
and learning, fostering a much more vibrant and interesting discovering environment.
Conclusion: Embracing the Magic of Numbers
Subtracting Large Whole Numbers Worksheets represent the magic inherent in maths-- a charming journey of expedition, exploration, and proficiency. They go beyond standard rearing, serving as drivers
for firing up the fires of inquisitiveness and query. Via Subtracting Large Whole Numbers Worksheets, students embark on an odyssey, opening the enigmatic world of numbers-- one trouble, one remedy,
at a time.
Subtracting Whole Numbers Worksheets
Add And Subtract Whole Numbers Worksheets
Check more of Subtracting Large Whole Numbers Worksheets below
Grade 5 Subtraction Worksheet Subtracting Large Numbers K5 Learning Subtracting Fractions From
Grade 5 Subtraction Worksheet Subtracting Large Numbers K5 Learning Subtracting Fractions From
Subtracting Decimals From Whole Numbers Worksheets
3rd Grade Addition And Subtraction Printable Worksheets Subtraction Math Worksheets Addition
Math Worksheets Printable Column Addition Big Numbers 6gif 10001294 16 Worksheets Subtracting
Subtracting Large Numbers Worksheet
Multi digit Subtraction Worksheets K5 Learning
These subtraction worksheets range from simple subtraction of 2 digit numbers without regrouping to subtraction of large numbers in columns with multiple regroupings Free Worksheets Math Drills
Subtraction Printable
Subtraction Worksheets Math Drills
This page includes Subtraction worksheets on topics such as five minute frenzies one two three and multi digit subtraction and subtracting across zeros Subtraction has been around for several years
now well maybe more than a few so it s probably a good thing for students to learn
These subtraction worksheets range from simple subtraction of 2 digit numbers without regrouping to subtraction of large numbers in columns with multiple regroupings Free Worksheets Math Drills
Subtraction Printable
This page includes Subtraction worksheets on topics such as five minute frenzies one two three and multi digit subtraction and subtracting across zeros Subtraction has been around for several years
now well maybe more than a few so it s probably a good thing for students to learn
3rd Grade Addition And Subtraction Printable Worksheets Subtraction Math Worksheets Addition
Grade 5 Subtraction Worksheet Subtracting Large Numbers K5 Learning Subtracting Fractions From
Math Worksheets Printable Column Addition Big Numbers 6gif 10001294 16 Worksheets Subtracting
Subtracting Large Numbers Worksheet
Two Digit Subtraction With Regrouping Worksheets
Grade 5 Subtraction Worksheet Subtracting Large Numbers K5 Learning Subtracting Fractions From
Grade 5 Subtraction Worksheet Subtracting Large Numbers K5 Learning Subtracting Fractions From
Subtracting Fractions From Whole Numbers Worksheet | {"url":"https://szukarka.net/subtracting-large-whole-numbers-worksheets","timestamp":"2024-11-09T00:35:11Z","content_type":"text/html","content_length":"27057","record_id":"<urn:uuid:4d7572c8-7aa0-4098-812c-514ae6cf87a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00755.warc.gz"} |
End of the first part of the proof: the relationship between α and β' | Let's prove Goldbach!
End of the first part of the proof: the relationship between α and β’
With this post we’ll conclude the main part of the Prime Number Theorem proof, which is based on the relationship between the constants $\alpha$ and $\beta$ which have been introduced in the post The
integral mean value and the absolute error function $R$. They are equal to, respectively, the limit superior of the absolute value of the function V and the limit superior of the integral mean of the
same absolute value. By Lemma N.7, in order to prove the Prime Number Theorem it’s sufficient to prove that $\alpha = 0$: in this post we’ll approach this result.
The two parts of the Prime Number Theorem proof
First of all, let’s recall the definitions of the constants $\alpha$ and $\beta$ (Definition N.14):
\alpha := \limsup_{x \to +\infty} |V(\log x)|
\beta := \limsup_{x \to +\infty} \frac{1}{\log x} \int_0^{\log x} |V(u)|\ du
where $x$, as it’s always implied in our posts, is a variable which assumes only integer values (in this case only positive integer values).
As usually happens in number theory, the techniques we’ll use will require the passage from integer numbers to real numbers. So we’ll define two new constants, which we’ll call $\alpha^{\prime}$ and
$\beta^{\prime}$, corresponding to $\alpha$ and $\beta$, where the variable of the limit superior assumes values inside a real interval:
Constants $\alpha^{\prime}$ and $\beta^{\prime}$
Given the variable $\xi$ which can assume values inside the real interval $[1, +\infty)$, we define the following constants:
\alpha^{\prime} := \limsup_{\xi \to +\infty} |V(\log \xi)|
\beta^{\prime} := \limsup_{\xi \to +\infty} \frac{1}{\log \xi} \int_0^{\log \xi} |V(u)|\ du
where for the real variable we used the symbol $\xi$, the Greek “x”, for remembering that it corresponds to the integer variable $x$ of the definition of $\alpha$ and $\beta$.
So, differently from $\alpha$, in order to compute $\alpha^{\prime}$, all values of the function $f(\xi) := |V(\log \xi)|$ for $\xi \in [1, +\infty)$ need to be considered, not just the integer
values of $\xi$ (that are the values of $x$), i.e. not only $|V(\log 1)|, |V(\log 2)|, |V(\log 3)|, \ldots$. Similarly, in order to compute $\beta^{\prime}$ all the values of the function $g(\xi) :=
\frac{1}{\log \xi} \int_0^{\log \xi} |V(u)|\ du$ for $\xi \in [1, +\infty)$ need to be considered, not only $\frac{1}{\log 1} \int_0^{\log 1} |V(u)|\ du, \frac{1}{\log 2} \int_0^{\log 2} |V(u)|\ du,
\frac{1}{\log 3} \int_0^{\log 3} |V(u)|\ du, \ldots$. So we are calculating superior limits not of sequences, but of functions. In one of our previous posts we saw what is the idea behind the
definition of the limit superior of a sequence, and we stated a formal definition of it. Now we could do the same for the limit superior of a function, showing the differences with respect to the
case of a sequence; however we’ll state just one essential property, which will let us go on and achieve our goals. This property is that: if we have a real function $h$ defined on a right unbounded
real interval $I$, and if we consider a sequence obtained by choosing a countable subset of the function values, that is a sequence of the kind $h(a_1), h(a_2), \ldots, h(a_n), \ldots$, with $a_1,
a_2, \ldots, a_n, \ldots \in I$, then the limit superior of the function is greater or equal to the one of the sequence. By applying this property to the case of $\alpha^{\prime}$, we’ll have that $I
= [1, +\infty)$, $h(\xi) = f(\xi) = |V(\log \xi)|$ and we’ll choose all function values obtained for integer values of $\xi$ (i.e. $h(a_1), h(a_2), \ldots, h(a_n), \ldots = h(1), h(2), \ldots, h(n),
\ldots = |V(\log 1)|, |V(\log 2)|, \ldots, |V(\log n)|, \ldots$). According to this property, the limit superior of the function $f(\xi) = |V(\log \xi)|$ is greater or equal to the one of the
sequence $|V(\log 1)|, |V(\log 2)|, \ldots, |V(\log n)|, \ldots$. But we called $\alpha^{\prime}$ the former, while we called $\alpha$ the latter, hence:
\alpha^{\prime} \geq \alpha
You can note that we haven’t said what the limit superior of a function is; we just said that it’s greater or equal to the limit superior of any sequence obtained by calculating the function on a
countable subset of its domain (by the way, such sequences are called subsequences). Indeed, from here to the definition it’s a short step, because the limit superior of a function can be defined as
the smallest real number which is greater or equal to the superior limits of all the subsequences of the function. But for the moment this definition is of little interest for us, because we need
only to know that $\alpha^{\prime} \geq \alpha$.
Of course we can repeat the same argument for $\beta$ and $\beta^{\prime}$, obtaining that:
\beta^{\prime} \geq \beta
Looking at these properties by the perspective of $\alpha$ and $\beta$, we can state the following Property:
Like for $\alpha$ and $\beta$, thanks to the boundedness of the function $|V|$ we can exclude that $\alpha^{\prime} = +\infty$ and that $\beta^{\prime} = +\infty$:
$\alpha^{\prime}$ and $\beta^{\prime}$ are real
The constants $\alpha^{\prime}$ and $\beta^{\prime}$ defined by Definition N.23 are real numbers.
The proof is practically the same as Corollary I of Proposition N.7A.
In this post we’ll prove the following Proposition:
Relationship between $\alpha$ and $\beta^{\prime}$
With reference to Definitions N.23 and N.14:
\alpha \leq \beta^{\prime}
We have to remember that our goal, by Lemma N.7, is to prove that $\alpha = 0$. In the next posts we’ll use Proposition N.23 for proving it by contraddiction: we’ll suppose that $\alpha \gt 0$ and
from that we’ll deduce that $\beta^{\prime} \lt \alpha$; but this is in contraddiction with Proposition N.23, so the hypothesis that $\alpha \gt 0$ must be false, i.e. $\alpha$ must be zero (it
cannot be negative, since it’s the limit superior of a non-negative function).
But why complicating things with $\beta^{\prime}$, instead of proving that $\alpha = 0$ directly? Indeed, as we’ll see in the next posts, it’s relatively simple to prove the implication $\alpha \gt 0
\Rightarrow \beta^{\prime} \lt \alpha$, but so far nobody has been able to prove that $\alpha = 0$ in a more direct way, without resorting to $\beta^{\prime}$. The introduction of $\beta^{\prime}$ is
one of the key ideas of the whole proof of the Prime Number Theorem, not only because in actual fact it’s the only known way for proving that $\alpha = 0$, but also because it lets break the proof
into two main parts:
• The proof that $\alpha \leq \beta^{\prime}$: it’s the long and complicated part which ends with this post;
• The proof of the implication $\alpha \gt 0 \Rightarrow \beta^{\prime} \lt \alpha$: it’s the relatively simple part which we’ll treat in the next posts.
In the next section we’ll see the proof of Proposition N.13.
Taking into account also $\alpha^{\prime}$ and remembering that $\alpha \leq \alpha^{\prime}$, starting from Proposition N.23 we could suppose that:
\alpha \leq \alpha^{\prime} \leq \beta^{\prime}
Or alternatively, remembering that $\beta \leq \beta^{\prime}$, we could suppose that:
\alpha \leq \beta \leq \beta^{\prime}
So, while keeping the goal of second part of the proof unchanged, that is $\alpha \gt 0 \Rightarrow \beta^{\prime} \lt \alpha$, two alternative goals for the first part may be to prove that $\alpha^
{\prime} \leq \beta^{\prime}$, or respectively that $\alpha \leq \beta$. The first relationship is true, and we’ll discuss it in the next post. We don’t know if the second relationship is true, but,
since it’s not necessary for our proof of the Prime Number Theorem, this is not a big problem.
You could think that the definition of $\beta$ is substantially useless, because the constants of interest are after all $\alpha$ and $\beta^{\prime}$: it would have been sufficient to define then
directly from the beginning.
This remark is correct, however we introduced initially $\beta$ because we prefer to work with integer numbers as far as possible: this way we can arrive to some points such that, in order to
overcome them, we need to pass to real numbers. We think that these situations are very interesting, because they give the possibility to appreciate the value of real analysis, answering to the
question that often some people ask: “why for proving some simple things sometimes is it necessary to use sophisticated mathematical techniques?”.
The proof that $\alpha \leq \beta^{\prime}$
Let’s start from Proposition N.12, that we proved in the previous post:
|R(x)| \log^2 x \leq 2 \int_1^x \log t \left| R\left(\frac{x}{t}\right) \right| dt + O(x \log x) \tag{1}
Let’s compare this inequality with our goal, $\alpha \leq \beta^{\prime}$, which we can write more explicitly using Definitions N.23 and N.14:
\limsup_{x \to +\infty} |V(\log x)| \leq \limsup_{\xi \to +\infty} \frac{1}{\log \xi} \int_0^{\log \xi} |V(u)|\ du \tag{2}
Clearly, in order to pass from (1) to (2), we have to replace somehow the function $R$ with the function $V$. As we noted in the post The integral mean value and the absolute error function R near
formula (11), the two functions are related each other by the following relationship which is true for all $u \in [0, +\infty)$:
V(u) = \frac{R(e^u)}{e^u} \tag{3}
We can let this function appear inside the integral at the right member of (1), by the substitution $t := \frac{x}{e^u}$:
\begin{aligned} \int_1^x \log t \left| R\left(\frac{x}{t}\right) \right| dt & = \ \left[t := \frac{x}{e^u}\right] \text{ (*)} \\ x \int_0^{\log x} \left| \frac{R(e^u)}{e^u} \right| (\log x - u)\ du &
= \text{[by (3)]} \\ x \int_0^{\log x} |V(u)| (\log x - u)\ du & = \text{(**)} \\ x \int_0^{\log x} \left( \int_0^v |V(u)|\ du \right) dv &= \\ x \int_0^{\log x} v \left( \frac{1}{v} \int_0^v |V(u)|\
du \right) dv \tag{4} \end{aligned}
First of all, we can note that the substitution $t := \frac{x}{e^u}$ is suitable because, applying it to the expression $R\left(\frac{x}{t}\right)$ which appears inside the integral, we’ll obtain $R\
left(x / \frac{x}{e^u}\right) = R\left(\frac{x e^u}{x}\right) = R(e^u)$, which is just the numerator of (3). So this substitution let us pass from the function $R$ to the function $V$.
However, when doing so, we have to consider that the new integration variable is $u$, hence we have to express $dt$ as a function of $du$ and we have to recalculate the integration ends:
• From $t := \frac{x}{e^u}$ we can obtain $dt = d\left(\frac{x}{e^u}\right) = x d\left(\frac{1}{e^u}\right) = x \left(- \frac{1}{e^{2u}} d(e^u) \right) = x \left(- \frac{1}{e^{2u}} e^u du \right) =
- x \frac{1}{e^{u}} du$
• If $t = 1$, we’ll have that $\frac{x}{e^u} = t = 1$. Since $x eq 0$, we can divide by $x$, obtaining $\frac{1}{e^u} = \frac{1}{x}$. Since both members are different from zero, we’ll have $e^u =
x$, that is $u = \log x$. So the new lower end of integration will be $\log x$.
• If $t = x$, we’ll have that $\frac{x}{e^u} = t = x$, hence $\frac{1}{e^u} = 1 \Rightarrow e^u = 1 \Rightarrow u = 0$. So the new upper end of integration will be $0$.
Thus we’ll have:
\begin{aligned} \int_1^x \log t \left| R\left(\frac{x}{t}\right) \right| dt & = \ \left[t := \frac{x}{e^u}\right] \\ \int_{\log x}^0 \log \frac{x}{e^u} \left| R(e^u) \right| \left(- x \frac{1}{e^{u}}
\right) du\end{aligned}
The constant $x$ can be brought outside of the integral, while the minus sign can be cancelled by exchanging the integration ends (thus making the lower end, as usual, lower than the upper end):
\begin{aligned} \int_{\log x}^0 \log \frac{x}{e^u} \left| R(e^u) \right| \left(- x \frac{1}{e^u}\right) du &= \\ x \int_0^{\log x} \log \frac{x}{e^u} \left| R(e^u) \right| \frac{1}{e^{u}} du &= \\ x
\int_0^{\log x} (\log x - \log e^u) \left| \frac{R(e^u)}{e^u} \right| du &= \\ x \int_0^{\log x} \left| \frac{R(e^u)}{e^u} \right| (\log x - u) du \end{aligned}
Joining the two previous formulas we’ll obtain the passage (*).
First of all we can note that $\log x - u = \int_u^{\log x} dv$. This way we can convert the initial integral into a double integral:
x \int_0^{\log x} |V(u)| (\log x - u) du = x \int_0^{\log x} |V(u)| \left( \int_u^{\log x} dv \right) du
Since the term $|V(u)|$ does not depend on $v$, we can bring it inside the inner integral:
x \int_0^{\log x} |V(u)| \left( \int_u^{\log x} dv \right) du = x \int_0^{\log x} \left( \int_u^{\log x} |V(u)|\ dv \right) du
Now let’s change the order of integration. We’ll adopt an intuitive approach, based on the idea of treating the integral as a summation (this idea is not so far from reality, because the integral is
defined as the limit of a summation, but it would require to be formalized).
The integral on the right can be read as follows: “Let $u$ vary between $0$ and $\log x$; for each value of $u$ let $v$ vary between $u$ and $\log x$; for each resulting couple $(u, v)$ compute $|V
(u)|$; finally sum up everything”. Now let’s focus on the possible couples $(u, v)$. We said how to generate these couples by fixing first a value for $u$ and then a value for $v$, but what if we did
the converse?
We said that $v$ varies between $u$ and $\log x$, meaning that it assumes, for each value of $u$, all the values between $u$ and $\log x$ (from now on, we’ll always mean that when we’ll use the
expression “varies between”). But $u$ varies between $0$ and $\log x$, so in particular, if $u = 0$, $v$ varies between $0$ and $\log x$ (for values of $u$ greater than zero, $v$ will still assume
values in this range, but it won’t assume all of them, because the minimum value assumed will be $u \gt 0$). This remark is important because it tells us that, for any $c \in [0, \log x]$, there
exists at least one couple $(u, v)$ with that value for $v$: it’s sufficient to set $u := 0$ and $v := c$. For checking it, it’s sufficient to note that the couple $(u, v) := (0, c)$ satisfies the
constraints of the integration variables, i.e. $0 \leq u \leq \log x$ and $u \leq v \leq \log x$.
Now let’s make a further step. After fixing a value for the variable $v \in [0, \log u]$, let’s see what are all and only the values of $u$ such that the couple $(u, v)$ exists (given that, as we
have just seen, $u = 0$ is one of such values). Generally, since $v$ varies between $u$ and $\log x$, we must have that $u \leq v$. This is the highest possible value for $u$. The minimum one is
certainly 0 because, as we have seen previously, $u := 0$ is a possible value for any value of $v$, and $u$ cannot be negative. So, after fixing a value for $v$, the range of possible values for $u$
is $[0, v]$: there are no couples of the kind $(u, v)$ such that $u$ is outside this range. So, if we consider the integral as a sum extended to all the couples $(u, v)$, we can rephrase it as
follows: “Let $v$ vary between $0$ and $\log x$; for each value of $v$ let $u$ vary between $0$ and $v$; for each resulting couple $(u, v)$ compute $|V(u)|$; finally sum up everything”. This
corresponds to the following formula:
\int_0^{\log x} \left( \int_0^v |V(u)|\ du \right) dv
But the set of the couples $(u, v)$ which have been considered this way is the same as before, because for each possible values of $v$ we have found all and only the corresponding values of $u$, so
we have simply looked at the same couples in a different order. By this principle (which is true for summations, but for integrals it’s only an intuitive explanation), we’ll have that:
\int_0^{\log x} \left( \int_u^{\log x} |V(u)|\ dv \right) du = \int_0^{\log x} \left( \int_0^v |V(u)|\ du \right) dv
This formula, along with the previous ones, explains the passage (**).
The last expression inside parentheses is formally very similar to the argument of the second $\limsup$ in (2); so, in order to reduce that expression to (2), we have to introduce the $\limsup$.
Before doing that, we have to know the following general Property:
Overestimation of a function by its limit superior
Let $f$ be a real function defined on an upper unbounded set. Then:
f \leq \left( \limsup_{x \to +\infty} f(x) \right) + o(1)
Before understanding why this Property is true, we have to remember that, by Definition A.9, it means that there exists a function $h$ such that $f \leq \left( \limsup_{x \to +\infty} f(x) \right) +
h$, and $h \to 0$ (where $\limsup_{x \to \infty} f(x)$ is a real number, which inside this relationship represents the constant function equal to that number).
We’ll not formally prove this Property, but we can understand it at least in the case of sequences: using the terminology of the post The limit inferior and the limit superior of a sequence, the
function $h$ represents how much the values of the function exceed its limit superior (the difference between them and the limit superior must tend to zero because, if it wasn’t so, those values
would be enough to generate a limit superior greater than the actual one, which would be absurd).
Applying this Property to the function $f(v) := \frac{1}{v} \int_0^v |V(u)|\ du$, we can continue the development of (4) as follows:
\begin{aligned} x \int_0^{\log x} v \left( \frac{1}{v} \int_0^v |V(u)|\ du \right)\ dv & \leq \\ x \int_0^{\log x} v \left( \left( \limsup_{v \to +\infty} \frac{1}{v} \int_0^v |V(u)|\ du \right) + o
(1) \right)\ dv \end{aligned} \tag{5}
In the last limit superior, the variable $v$ assumes all the real values in the range $[0, +\infty)$. Hence if we set $\xi := e^v$ (from which $v = \log \xi$), we’ll have that $\xi$ assumes all the
real values in the range $[1, +\infty)$, exactly like the $\xi$ of Definition N.23; in addition if $v = \log \xi \to \infty$, then also $\xi \to \infty$. So:
\begin{aligned} \limsup_{v \to +\infty} \frac{1}{v} \int_0^v |V(u)|\ du & = \\ \limsup_{\log \xi \to +\infty} \frac{1}{\log \xi} \int_0^{\log \xi} |V(u)|\ du & = \\ \limsup_{\xi \to +\infty} \frac{1}
{\log \xi} \int_0^{\log \xi} |V(u)|\ du & = \\ \beta^{\prime} \end{aligned}
Thus, substituting into (5), we’ll have:
\begin{aligned} x \int_0^{\log x} v \left( \left( \limsup_{v \to +\infty} \frac{1}{v} \int_0^v |V(u)|\ du \right) + o(1) \right)\ dv &= \\ x \int_0^{\log x} v \left(\beta^{\prime} + o(1)\right)\ dv &
= \\ x \int_0^{\log x} v \beta^{\prime} + o(v)\ dv \end{aligned} \tag{6}
where in the last passage we have applied Corollary 1 of Property A.8, which can be proved for small ohs as well as for big Ohs.
Doing some calculations, the last expression of (6) can be simplified as follows:
x \int_0^{\log x} v \beta^{\prime} + o(v)\ dv = \frac{1}{2} \left( x \beta^{\prime} \log^2 x + o(x \log^2 x) \right) \tag{7}
By the properties of integrals:
\int_0^{\log x} v \beta^{\prime} + o(v)\ dv = \int_0^{\log x} v \beta^{\prime} dv + \int_0^{\log x} o(v)\ dv \tag{a}
Let’s develop the first integral:
\int_0^{\log x} v \beta^{\prime}\ dv = \beta^{\prime} \int_0^{\log x} v\ dv = \beta^{\prime} \left[ \frac{v^2}{2} \right]_0^{\log x} = \beta^{\prime} \left( \frac{\log^2 x}{2} - \frac{0^2}{2} \right)
= \frac{1}{2} \beta^{\prime} \log^2 x \tag{b}
The second integral can be developed in a similar fashion, but first we have to apply Property A.15 in order to bring the small oh symbol outside of the integral (the cited Property was formulated
for big Ohs, but it’s valid also for small ohs):
\int_0^{\log x} o(v)\ dv = o\left( \int_0^{\log x} v\ dv \right) = \text{[like before]} = o\left( \frac{1}{2} \log^2 x \right) = \frac{1}{2} o(\log^2 x) \tag{c}
where in the last passage we applied Property A.8A, which was formulated for big Ohs but it’s true also for small ohs.
Substituting (b) and (c) into (a), we’ll have that:
\int_0^{\log x} v \beta^{\prime} + o(v)\ dv = \frac{1}{2} \beta^{\prime} \log^2 x + \frac{1}{2} o(\log^2 x)
x \int_0^{\log x} v \beta^{\prime} + o(v)\ dv = \frac{1}{2} \left( x \beta^{\prime} \log^2 x + x o(\log^2 x) \right)
Formula (7) follows from the equality $x o(\log^2 x) = o(x \log^2 x)$, consequence of Corollary 1 of Property A.8 in the case of small ohs.
Ultimately, joining the formulas (4), (5), (6) and (7), we’ll obtain:
\int_1^x \log t \left| R\left(\frac{x}{t}\right) \right| dt \leq \frac{1}{2} \left( x \beta^{\prime} \log^2 x + o(x \log^2 x) \right)
Substituting into (1), we’ll obtain:
|R(x)| \log^2 x \leq x \beta^{\prime} \log^2 x + o(x \log^2 x) + O(x \log x)
and finally, by simplifying asymptotic orders:
|R(x)| \log^2 x \leq x \beta^{\prime} \log^2 x + o(x \log^2 x)
Since $x \log x = o(x \log^2 x)$, we’ll have that $O(x \log x) = O(o(x \log^2 x)) = o(x \log^2 x)$, where in the last equality we applied Property A.16. So $o(x \log^2 x) + O(x \log x) = o(x \log^2
x) + o(x \log^2 x) = o(x \log^2 x)$, where in the last passage we applied Corollary of Property A.9, which can be proved similarly for small ohs.
The last equality can be developed as follows:
\begin{aligned} |R(x)| \log^2 x \leq x \beta^{\prime} \log^2 x + o(x \log^2 x) & \Rightarrow \\ \left|\frac{R(x)}{x}\right| \log^2 x \leq \beta^{\prime} \log^2 x + o(\log^2 x) & \Rightarrow \text{[by
(3)]} \\ |V(\log x)| \log^2 x \leq \beta^{\prime} \log^2 x + o(\log^2 x) & \Rightarrow \text{(*)}\\ |V(\log x)| \leq \beta^{\prime} + o(1) & \Rightarrow \text{(**)} \\ \limsup_{x \to +\infty} |V(\log
x)| \leq \limsup_{x \to +\infty} \left(\beta^{\prime} + o(1)\right) & \Rightarrow \text{(***)} \\ \limsup_{x \to +\infty} |V(\log x)| \leq \beta^{\prime} & \Rightarrow \\ \alpha \leq \beta^{\prime} \
• In the passage (*) we divided all by $\log^2 x$, assuming that $x \gt 1$ and applying Property A.8A concerning the small oh;
• In the passage (**) we assumed that, if a sequence is lower than another one, also its limit superior is less than or equal to the limit superior of the other sequence. We’ll not prove this
property, but we think it’s rather intuitive.
• In the passage (***) we considered that the limit of a constant is the constant itself and that by definition the limit of a $o(1)$ is zero, i.e. $\lim_{x \to +\infty} \beta^{\prime} = \beta^{\
prime}$ and $\lim_{x \to +\infty} o(1) = 0$, hence $\lim_{x \to +\infty} \left( \beta^{\prime} + o(1) \right) = \beta^{\prime}$; so, by Property A.3, also $\limsup_{x \to +\infty} \left( \beta^{\
prime} + o(1) \right) = \beta^{\prime}$.
Final remarks
We’ll conclude this post with a graphical representation of the relationship $\alpha \leq \beta^{\prime}$ that we have just proved. We know that it’s a relationship between two limit superiors,
because $\alpha$ is defined as the limit superior of the function $|V(\log x)|$ and $\beta^{\prime}$ is defined as the limit superior of the function $\frac{1}{\log \xi} \int_0^{\log \xi} |V(u)|\ du$
(the first function defined on positive integers, the second one on real numbers greater or equal than 1). The two functions are shown in the following picture:
Figure 1: Comparison between the functions which the constants α, α’ e β’ used in this post are based on
From the picture it’s clear that the function $|V(\log x)|$ tends to zero, hence $\limsup_{x \to +\infty} |V(\log x)| = \alpha = 0$. This relationship, remembering Lemma N.7, substantially coincides
with the statement of the prime number Theorem. Considering the integer values $x = \xi = 1, 2, 3, \ldots$, the relationship $|V(\log x)| \leq \frac{1}{\log \xi} \int_0^{\log \xi} |V(u)|\ du$ is
satisfied for all punctual values of $x$ and $\xi$ shown in the graph, but we have to consider that the descent of the function $\frac{1}{\log \xi} \int_0^{\log \xi} |V(u)|\ du$ is very slow, because
$\log \xi$ increases very slowly as $\xi$ increases, hence we cannot exclude that there are some $x$es such that $|V(\log x)| \gt \frac{1}{\log x} \int_0^{\log x} |V(u)|\ du$. However, even if such
values of $x$ existed, that would not be in contradiction with the inequality $\alpha \leq \beta^{\prime}$; for example, the sequence shown in Figure 3 of the post The limit inferior and the limit
superior of a sequence has limit superior 1 even if infinitely many values of it are greater than 1 (but they are closer and closer to it as we move towards the right in the graph). | {"url":"http://www.dimostriamogoldbach.it/en/relationship-alpha-beta-prime/?doing_wp_cron=1730820637.6654829978942871093750","timestamp":"2024-11-05T15:30:38Z","content_type":"text/html","content_length":"307036","record_id":"<urn:uuid:22af04ba-6bef-4a5e-9560-916bdb549d87>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00121.warc.gz"} |
GRE Prep, According to ETS
GRE Prep, According to ETS
I am planning to start studying for the GRE and According to the test - they linked the following to use for prep. I want to effectively plan out my studies and was curious about how long each course
/subject was - please see the link below.
A huge thank you ahead of time for answering this post :)
1.1 Integers
1.2 Fractions
1.3 Exponents and Roots
1.4 Decimals
1.5 Real Numbers
1.6 Ratio
1.7 Percent
Sections 2.1 through 2.9
2.1 Operations with Algebraic Expressions
2.2 Rules of Exponents
2.3 Solving Linear Equations
2.4 Solving Quadratic Equations
2.5 Solving Linear Inequalities
2.6 Functions
2.7 Applications
2.8 Coordinate Geometry
2.9 Graphs of Functions
3.1 Lines and Angles
3.2 Polygons
3.3 Triangles
3.4 Quadrilaterals
3.5 Circles
3.6 Three-Dimensional Figures
4.1 Graphical Methods for Describing Data
4.2 Numerical Methods for Describing Data
4.3 Counting Methods
4.4 Probability
4.5 Distributions of Data, Random Variables, and Probability Distributions
4.6 Data Interpretation Examples
4 comments
Dear team,
Yes, please! I would also love to see a section on GRE prep, and Khan Academy is my go-to resource. I am planning for a physics GRE; please consider this suggestion!
I agree, this would be great!
Thank you so much for linking the math topics! I've been able to find GRE vocab flashcards fairly easily, but math is what I need the most practice on! Hope Khan Academy makes a GRE test section
Hi! Yes, ETS has a Khan Academy-mapped list of GRE Quantitative Reasoning topics. Request to KA Test Prep and Math teams: please compile those instruction and practice content into a mastery-enabled
Official GRE Math course, like Digital SAT Math.
It'll be infinitely helpful, even if a full GRE Prep will take some time. ETS has already partnered with KA for Praxis, so I hope this Quantitative Reasoning course is at least possible. All the
content is already available across KA's huge Math library, it's only a matter of curating the lessons and making the Official Partnership happen.
I've requested this before in multiple places, but I'm happy to see an active thread for it!
Please sign in to leave a comment. | {"url":"https://support.khanacademy.org/hc/en-us/community/posts/5226512153741-GRE-Prep-According-to-ETS","timestamp":"2024-11-02T23:10:52Z","content_type":"text/html","content_length":"55207","record_id":"<urn:uuid:67115db9-5350-4407-aed2-af76b8a2a0b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00694.warc.gz"} |
Microscopical Measurements and Drawing to scale
Microscopical measurements or micrometry and drawing to scale are of significant value in the examination of crude drugs, particularly in differentiating an authentic drug from an adulterant, which
may have the same tissues, but constituent cells of varying sizes and shapes. Sizes of cells, cellular elements, cell inclusions and other minute structures are measured under the microscope by the
use of a micrometer scale inserted into the eyepiece. Tissues, cells and other minute cellular structures are drawn to scale to represent them in their exact natural shape and arrangement by the use
of Camera lucida and other similar instruments attached to the microscope.
Measurement of microscopic structures:
Measurements under the microscope are done, as mentioned before, by the use of a micrometer scale called the eyepiece micrometer, fitted inside the eyepiece of the microscope. This micrometer is an
arbitrary scale engraved on a glass disc and is divided into 10 equal large divisions, each of which is again divided into 10 equal small divisions. Thus the whole scale has (10 x 10) 100 small equal
divisions (see Fig. 73 A). The actual size of each small division on this scale is dependent on the objective magnification · and tube length at which it is being used. In order to determine the size
of one small division of the eyepiece micrometer scale. it ts calibrated by the use of a stage micrometer scale. The stage micrometer is a scale of definite length. usually 1 mm long, engraved in a
central circle of a glass slide. This scale is also divided into 10 equal large divisions, each measuring 0.1 mm or 100 µm. Each of this large division is again divided into 10 equal small divisions.
each measuring 0.01 mm or 10 µm. Thus the stage micrometer scale has also 100 small divisions of 10 µm size (see Fig. 73 B). The marks of the ten large divisions on both the scales are numbered 0 to
Fig. 73: Micrometer Scales: A, Eyepiece micrometer: B. Stage micrometer
To calibrate the eyepiece micrometer. the required objective (under L.P. or H.P.) is put in position. The Stage micrometer is then focussed in the usual way and the Eyepiece micrometer is
superimposed on it in such a way that the O mark on this micrometer coincides exactly with the 0 marks of the Stage micrometer. If calibrating for measurements under low power (L.P.) magnification, a
reading is taken at that position on the. Eyepiece micrometer scale to find out the mark which most nearly coincides with the 100 marks of the Stage micrometer scale. For example (see Figure 77), the
75th small eyepiece micrometer division mark coincides with the 100th small stage micrometer division mark. Since one small stage micrometer division is equal to 10 µm, 75 small eyepiece micrometer
divisions measure 100 x 10 µm = I 000 µm. Therefore, under this magnification, one small eyepiece micrometer division is equal to 1000 µm + 75 = 13.3 µm.
If the Eyepiece micrometer scale is calibrated for measurements under high power (H.P.) magnification, then after focussing the Stage micrometer scale under the H.P. objective a reading is taken at
that position on the Stage micrometer scale (now highly magnified) to find out the mark which most nearly coincides with the 100 mark of the Eyepiece micrometer scale (which has not been magnified
simultaneously). For example (see Figure 78), the 35th small stage micrometer division mark coincides with the 100th small eyepiece micrometer division mark. Since one small stage micrometer division
is equal to 10 µm, 100 small eyepiece micrometer division measure 35 x 10 µm = 350 µm. Therefore, under this magnification, ·one small eyepiece micrometer division is dual to 350 µm + 100 = 3.5 µm.
Thus it is apparent that the size of a small eyepiece m.icrometer division is dependent on the magnification of the objective used, Hence, every time the magnification is changed the Eyepiece
micrometer scale has to be freshly calibrated. Once the Eyepiece micrometer is calibrated the Stage micrometer is removed and microscopic measurements are carried out with the Eyepiece micrometer
scale, which is still inside the eyepiece, by superimposing it on the object to be measured and counting the number of small divisions covered by the object. The total value of these divisions is the
size of the object.
Drawing to scale:
Tissues and other microscopic structures can be drawn to scale and in their exact natural shape and arrangement while being examined under the microscope. For this purpose various types of apparatus
are now available commercially. The most commonly used ones of these include the. The Swift-lves camera lucida and the abbe drawing apparatus, The Swift-lves camera lucida (Fig. 79a) is made up of
two prisms placed suitable in a dark-coloured metallic casing, and it fits well on top of the eyepiece of the microscope. When in use, light from the object under examination passes direct to the
eyes of the
observer through an opening in the silvered surface of the left-hand prism sitting directly on the eyepiece. At the same time light from the suitably positioned drawing paper on the Abbe drawing
board (Fig. 79b) and the pencil are reflected by the right hand or side prism and the silvered surface of the left hand prism to reach the observer’s eye simultaneously. Thus the pencil appears
superimposed on the object, which may then be traced on the drawing paper.
Leave a Comment | {"url":"https://thepharmacognosy.com/microscopical-measurements-and-drawing-to-scale/","timestamp":"2024-11-10T15:01:42Z","content_type":"text/html","content_length":"97402","record_id":"<urn:uuid:1bb8a295-6bfb-4ba5-ac92-9ba3d5c6e869>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00127.warc.gz"} |
Polynomial Division with a Box. Polynomial Multiplication: Area Method x + 5 x 2 x 3 5x25x2 -4x 2 -4x-4x -20x x +1 5 Multiply (x + 5)(x 2 – 4x + 1) x. - ppt download
Presentation is loading. Please wait.
To make this website work, we log user data and share it with processors. To use this website, you must agree to our
Privacy Policy
, including cookie policy.
Ads by Google | {"url":"https://slideplayer.com/slide/4498998/","timestamp":"2024-11-11T11:23:34Z","content_type":"text/html","content_length":"138059","record_id":"<urn:uuid:47ec374d-270e-4070-a226-26434fdfaeb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00778.warc.gz"} |
Program Schedule (tentative)
Mon Jul 5
Content Math Training Camp: open space (i.e., most tutors will be around, and those who are interested in learning how to use their software, can gather in small groups)
Tue Jul 6
Content Math Training Camp: open space
Jan Willem Knopper: MathDox software for communicating with Computer Algebra Systems using OpenMath
Jan Willem Knopper: Creating MathDox documents enriched with content math: a tool using LaTeX
Wed Jul 7 (morning)
Content Math Training Camp: short presentations and open space
Michael Kohlhase: sTeX, a semantically enhanced (La)TeX input language for OMDoc
Constantin Jucovschi: sTeXIDE, an editor and development environment for sTeX document collections
Jónathan Heras Vicente: an OpenMath Content Dictionary Editor (slides)
Urs Holzer: Gemse, a visual editor for Content and Presentation MathML 3
Christoph Lange: Processing and publishing OMDoc and other XML Content Math formats: overview of the JOMDoc Java API and the JOBAD JavaScript API (slides)
Philipp Schalldach: AIOS, an application for clustering and visualizing large amounts of 2D data
Ewaryst Schulz: OMDoc import/export of Hets (Heterogeneous Tool Set) (slides)
Jan Willem Knopper: MathDox formula editor: an easy to integrate editor that outputs OpenMath
Mickaël Gastineau: the SCSCP C Library (for C and C++)
Christoph Lange: TNTBase, a versioned database for XML documents (with some special OMDoc support) (slides)
Wed Jul 7 (afternoon)
Doctoral Programme: tutorials for Ph.D. students, and talks by Ph.D. students
Tutorial James Davenport: "So the thesis is going well: what else should I do with the work I've done" (slides)
Xiaoyu Chen: Geometric Knowledge Management
Jónathan Heras: Symbolic Computation in Algebraic Topology
Daniel Kuehlwein: The Naproche project
Mélanie Jacquel: B Proof Automation
Thu Jul 8 (afternoon)
Doctoral Programme continues: tutorials for Ph.D. students, and talks by Ph.D. students
Tutorial Serge Autexier: "How to write a research paper" (slides)
Fulya Horozal: A Comprehensive Logical Framework
Constantin Jucovschi: Tools for Efficient Semantic Enhancement of Mathematical Documents
Pierre-Nicolas Tollitte: TBD
Osama Taleb: A Flexible Framework for Experimental Mathematics
Su Wei: Web-based Mathematics Education and Mathematical Knowledge Management
Fri Jul 9
Content Math Training Camp: short presentations and open space
Paul Libbrecht: jEditOQMath, an editor for OMDoc documents targetting ActiveMath
Makarius Wenzel: jEdit editing interface for the Isabelle proof assistant | {"url":"http://cicm2010.cnam.fr/cmtc/program.html","timestamp":"2024-11-04T18:32:36Z","content_type":"application/xhtml+xml","content_length":"11313","record_id":"<urn:uuid:c8219002-05fd-4df6-90a1-cab7f44682a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00675.warc.gz"} |
Reciprocity and the difference between usury and interest
At the IASH meeting on the
Human-Business interface
I attended last week, Michael Northcott discussed the relationship between capitalism and sustainability. Michael is an authority on both subjects, having written on
environmental ethics
. At the end of Michael's presentation I made the point that theologians distinguished interest ad usury in the past, a distinction that was missing in Michael's discussion.
Usury derives from the Latin
, meaning 'use', and referred to the charging of a fee for the use of money. Interest comes from the Latin
, meaning 'compensation for loss', and originated, in the Roman legal codes as the fee someone was paid if they suffered a loss as a result of a contract being broken. So a lender could charge
interest to compensate for a loss, but they could not make a gain by lending.
It is easier to understand this distinction with a simple example. A farmer lends a cow to their cousin for a year. In the normal course of events, the cow would give birth to a calf and the cousin
would gain the benefit of the cow's milk. At the end of the loan, the farmer could expect the cow
the calf to be returned. The interest rate is 100%, but it is an interest since the farmer, if they had not lent the cow to their cousin, would have expected to end the year with a cow and a calf.
Similarly, if the farmer lent out grain, they could expect to get the loan plus a premium on the basis that their cousin planted the grain, he would reap a harvest far greater than the sum lent.
Because money is 'barren', unlike land or labour it could not 'produce' anything. As a result, money can have no intrinsic value, other than its use to facilitate exchange, and so charging for the
lending of money is essentially selling something that has no value. Thomas Aquinas argued that
To take usury for money lent is unjust in itself, because this is to sell what does not exist, and this evidently leads to inequality which is contrary to justice.
So, usury contradicts 'natural law'. Even if you could convince the canon lawyers that you were, in fact, selling something that did exist, the theologians might argue that usury was an affront to
God because, since money was barren, the usurer was charging for time, and "time was God's exclusive property''.
In theory, this is all very clear, in practice there was still the question of where the dividing line between usury and interest was and almost everyone who was handling money was looking to charge
as much interest as was permissible.
Around 1236, an English professor of canon (church) law, Alanus Anglicus, argued that usury did not exist if the future price of the good was
in the mind of the merchant. These theories became established in the medieval legal system between 1246 and 1253 by Pope Innocent IV, a former professor of law at Bologna. Not only could a merchant
adjust the 'just price' to cover their labour and expenses, but also they could also adjust the price to take into account the risk they bore, called an aleatory contract, from the Latin word
for chance. In establishing this principle, a Catholic jurist initiated the scientific study of financial risk.
Today, financial economics models interest through a force of interest,
, and so the value of a loan of
at time
= 0 would be repaid by an amount
given by
This implies that the repayment amount X[t] is the solution to the most basic differential equation,
that is,
grows at a constant rate
. This links to Piketty’s thesis that capitalism induces inequality because
, the return on money, is greater than
the growth rate of the economy. This is conventional economic theory.
I argue that at the heart of financial economics is not growth but reciprocity
, so how do I account for interest?
In 1837 Poisson wrote
Recherches sur la probabilité des jugements en matiére criminelle et en matiére civile
(‘Research on the Probability of Judgments in Criminal and Civil Matters’). Despite its title, most of Possion’s book was a development of ‘probability calculus’, and according to the historian of
probability, Ivo Schneider, after its publication “there was hardly anything left that could justify a young mathematician from taking up probability theory”. The heart of
was a single chapter on determining the probability of someone being convicted in a court, by a majority of twelve jurors, each of whom is “is subject to a given probability of not being wrong” and
taking into account the police’s assessment of the accused’s guilt. In order to answer this problem, Poisson needed to understand what has become known as the ‘Law of Rare Events’, in contrast to the
Law of Large Numbers. Poisson’s starting point was the Binomial Model, based on two possible outcomes such as the toss of a coin, or the establishment of innocence or guilt. De Moivre had considered
what would happen as the number of steps in the ‘random walk’ of the Binomial Model became very large, with the probability of a success being about half. Poisson considered what would happen if, as
the number of steps increased, the chance of a success decreased simultaneously, so that it became very small.
On this basis, Poisson worked out that if the rate of a rare events occurring, the number of wins per round, was
, then the chance of there being
wins in
rounds was given by
the Poisson distribution. Apart from being one of the key models in probability, along with the Binomial and the Normal, the Poisson distribution has an important financial interpretation.
Consider a banker lending a sum of money, X[0]. The banker is concerned that the borrower does not default, which is hopefully a rare event, and will eventually pay back the loan. Say the banker
assesses that the borrower will default at a rate of r defaults a day, and the loan will last T days. The banker might also assume that they will get all their money back, providing the borrower
makes no defaults in the T days, and nothing if the borrower makes one or more defaults. On this basis the bankers mathematical expectation of the value of the loan is
E[loan] = (Probability of no defaults × X[0]) + (Probability at least one default × 0)
we can ignore the second expression, since it is zero, and for the first, using the Law of Rare Events, the probability of no defaults is given when
= 0 we have that (
= 1 and 0! = 1, so
E[loan] = X[0]e^-^rT
So the banker is handing over X[0] with the expectation of only getting X[0]e^-^rT^ < X[ 0] back. To make the initial loan amount equal the expected repayment, the banker needs to inflate the
expected repayment by e^rT^ ,
That is, the repayment amount needs to be
We can interpret interest in two ways, as a means of "growing" ones wealth, which would be usurious in the Scholastic sense, or as a compensation. If it is a compensation the wealth is not expected
to grow, that is, Piketty's whole argument becomes somewhat meaningless.
Michael Northcott argued that the Christian prohibition on usury/interest was related to an intent to inhibit human bondage and he noted that contemporary Islamic finance still prohibited the
charging of interest. There in lies a counter argument to the interest equates to slavery claim, it was capitalist Britain that led the way in the emancipation of slavery, Islamic jurisdictions
retained slavery into the second half of the twentieth century
I agree with Michael that usury is a form of bondage, but I do not agree that the charging of interest is usurious, particularly if the interest is determined on the principle of balanced
reciprocity. The problem contemporary finance faces is that it emphasising economic growth over social cohesion, the distinction between usury and interest is obscured.
9 comments:
1. I think the essence of a ban on usury was the “insight” that in a pre-modern economy with fixed resources and no technical change, the optimal interest rate was r=g=zero. Any departure from zero
would imply unjust monopoly power in some market.
2. "To sell what does not exist."
In college philosophy class I came across this argument: Theft is not possible because ownership is not possible, therefore, one should not steal.
But if theft wasn't possible we would hardly need to make arguments against it! Similarly, to say that money has no intrinsic value, therefore we shouldn't charge interest for its use is to deny
experience. If money has no intrinsic value then WHY do you want to borrow it?!
I think we need to accept that there is a difference between interest and usury, but not a clearly defined difference.
3. Given that contracts are mostly made on nominal terms and that taxes are levied on commercial transactions, is it not possible that the borrower can default on the loan simply because the money
interest has not been created into existence (an accounting fact that loans create the principal but not the interest)? Wouldn't the default event be certain?
1. That my friend has been my point for eons. That's where Booms and Busts come in. The Boom widens or expands the AMOUNT of currency in circulation, digitally or physically, the Bust returns
the cash to the system for redistribution as interest on Savings, Investments and of course the Bankers get their share for running this scam. But a lump of gold weighing a pound is the same
lump of gold weighing a pound no matter how much interest was promised to accrue.
2. Or more simply a 20% guaranteed fail rate seems built into our current system so as to perpetuate its existence. If 20% of the people are not in or at failure at all times then the system
could not work.
4. Isn't usury the same thing as rent seeking? And don't economists agree with the Christians (though on different grounds) that this would be wrong? And isn't there a lot to say for the fact that
rent seeking in the present day increases (e.g. CEO salaries)? And isn't this what Piketty is talking about? So I disagree with your opinion "that the charging of interest is usurious,
particularly if the interest is determined on the principle of balanced reciprocity", at least in our present real world.
5. The Jewish law of usury -- well-developed over the course of 1800 years of jurisprudence -- explicates these concepts extensively. For example, interest was permitted is the future market price
was unknown, but not if it was.
6. If I use borrowed money to buy a cow, to use your example, in a year I have two cows and a lot of cheese. I sell one, repay the loan, and still have a cow and much cheese. Money has no value?
What am I missing? Don't I at least owe the lender a bite of brie?
1. Its a model to get the idea across, it probably doesn't merit too much inspection | {"url":"https://magic-maths-money.blogspot.com/2014/06/reciprocity-and-difference-between.html","timestamp":"2024-11-07T16:19:05Z","content_type":"text/html","content_length":"102805","record_id":"<urn:uuid:1177c460-ce5b-4d51-94c7-9cd12919bbb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00782.warc.gz"} |
Verification of the Random Wave Conjecture for the Fourth Moment of Truncated Eisenstein Series
Verification of the Random Wave Conjecture for the Fourth Moment of Truncated Eisenstein Series
Core Concepts
This paper proves that the fourth moment of truncated Eisenstein series with large Laplacian eigenvalue follows Gaussian random behavior, confirming the Random Wave Conjecture for this case. The key
innovation is introducing an averaging over the truncation parameter, which simplifies the analysis and allows for the application of existing techniques for evaluating L-functions.
Translate Source
To Another Language
Generate MindMap
from source content
The fourth moment of truncated Eisenstein series
Djanković, G., & Khan, R. (2024). The Fourth Moment of Truncated Eisenstein Series. arXiv preprint arXiv:2408.14815v4.
This research paper aims to verify the Random Wave Conjecture (RWC) for the fourth moment of truncated Eisenstein series, a problem that has remained unsolved for over three decades. The conjecture
posits that highly excited eigenfunctions of a classically ergodic system should exhibit Gaussian random behavior.
Deeper Inquiries
Can this averaging technique be generalized to prove the Random Wave Conjecture for higher moments of Eisenstein series or other automorphic forms?
It's certainly a tantalizing prospect! While this paper ingeniously employs averaging over the truncation parameter A to conquer the fourth moment, generalizing to higher moments presents formidable
challenges. Let's delve into the intricacies: Increased Complexity: Higher moments inherently demand grappling with more intricate expressions involving products of Eisenstein series. The elegant
interplay between the smooth function h(A), integration by parts, and the properties of H(s, tj), which was crucial for the fourth moment, might not extend seamlessly to higher orders. Subconvexity
Bounds: The proof for the fourth moment surprisingly necessitates a sub-Weyl strength subconvexity bound for the Riemann Zeta function. For higher moments, even stronger bounds on L-functions might
be required, and obtaining such bounds is a notoriously difficult problem in analytic number theory. New Ideas Needed: The success with the fourth moment stemmed from a delicate balance of
techniques, including spectral decomposition, the introduction of the smooth averaging, and the use of Kuznetsov's formula. A successful generalization to higher moments would likely require
significant new ideas and a deep understanding of the analytic properties of the relevant objects. In a nutshell: While not impossible, generalizing this averaging technique to higher moments is far
from a straightforward task. It would necessitate overcoming substantial technical hurdles and potentially developing entirely new methods in analytic number theory.
Does the reliance on the sub-Weyl strength subconvexity bound for the Riemann Zeta function hint at a deeper connection between the distribution of primes and the behavior of Eisenstein series?
The unexpected appearance of the sub-Weyl bound indeed suggests a profound interplay between the seemingly disparate realms of prime distribution (encoded in the Riemann Zeta function) and the
behavior of Eisenstein series. Here's a perspective on this intriguing connection: Subconvexity and Cancellation: Subconvexity bounds for L-functions are intimately tied to the existence of
cancellation among terms in their Dirichlet series representations. The Riemann Zeta function, in particular, governs the distribution of primes. The need for a sub-Weyl bound in this context hints
that the fine-scale distribution of primes influences the intricate oscillations and cancellations within the fourth moment of truncated Eisenstein series. Modular Surface Dynamics: Eisenstein series
are deeply connected to the geometry and dynamics of the modular surface. The distribution of values taken by Eisenstein series reflects the behavior of geodesics on this surface. The reliance on a
sub-Weyl bound suggests that the chaotic nature of geodesic flow on the modular surface is subtly intertwined with the distribution of prime numbers. Arithmetic Quantum Chaos: This connection aligns
with the broader theme of "arithmetic quantum chaos," which explores the interplay between quantum phenomena (such as the behavior of eigenfunctions) in arithmetic settings and classical chaotic
dynamics. The fourth moment of Eisenstein series, in this light, provides a fascinating example where the distribution of primes seems to leave its imprint on the quantum behavior of these special
functions. In essence: The use of the sub-Weyl bound strongly suggests a deep and subtle connection between the distribution of primes and the intricate behavior of Eisenstein series. Further
exploration of this link could potentially unveil profound insights into both number theory and the dynamics of arithmetic surfaces.
How does the proven Gaussian random behavior of the fourth moment of truncated Eisenstein series relate to the chaotic dynamics of geodesic flow on the modular surface?
The established Gaussian random behavior of the fourth moment provides compelling evidence for the Random Wave Conjecture and offers a fascinating glimpse into the chaotic nature of geodesic flow on
the modular surface. Here's how these concepts intertwine: Eisenstein Series as Waves: Imagine Eisenstein series as "waves" propagating on the modular surface. The Random Wave Conjecture posits that
these waves, when highly "excited" (corresponding to large Laplacian eigenvalues), should exhibit behavior resembling random fluctuations. Fourth Moment as a Statistical Measure: The fourth moment
serves as a statistical measure of the distribution of values taken by the Eisenstein series. The proven Gaussian behavior implies that these values, in a sense, fluctuate randomly around their
average, much like the outcomes of independent coin tosses. Chaotic Geodesic Flow: Geodesic flow on the modular surface is known to be highly chaotic. This means that initially nearby geodesics
diverge exponentially quickly, leading to unpredictable long-term behavior. The Gaussian randomness of the Eisenstein series reflects this underlying chaos. As geodesics explore the surface
chaotically, the values of the Eisenstein series along these geodesics fluctuate in a seemingly random manner. Quantum-Classical Correspondence: This connection exemplifies a remarkable
correspondence between the quantum world (represented by the eigenfunctions of the Laplacian, including Eisenstein series) and the classical world of chaotic dynamics. The random wave-like behavior
of Eisenstein series emerges as a manifestation of the underlying classical chaos of geodesic flow. In summary: The proven Gaussian random behavior of the fourth moment of truncated Eisenstein series
provides strong support for the idea that these functions, driven by the chaotic dynamics of geodesic flow on the modular surface, behave like random waves. This result beautifully illustrates the
profound connections between number theory, quantum mechanics, and chaotic systems. | {"url":"https://linnk.ai/insight/scientific-computing/verification-of-the-random-wave-conjecture-for-the-fourth-moment-of-truncated-eisenstein-series-rtZ0bb48/","timestamp":"2024-11-04T11:10:24Z","content_type":"text/html","content_length":"292291","record_id":"<urn:uuid:6650a9a2-300b-488d-ab3d-1228555a5958>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00626.warc.gz"} |
The Most Effective SAT Math Crash Course
Also included in: SAT Math Test Prep Bundle
Original price was: $69.99.Current price is: $35.99.
A Comprehensive Study Guide for the SAT Math Test!
This book features:
✓ Content that is 100% aligned with the 2021 SAT test
✓ A beginner-friendly guide for all SAT Math topics
✓ The foundations of the SAT Math Test
✓ Complete coverage of all SAT Math concepts and topics that you will be tested on
✓ Updated questions that have appeared on the most recent SAT Math tests
✓ 2 full-length practice tests (featuring new question types) with detailed answers
✓ Over 1,000 additional SAT Math practice questions grouped by topic, allowing you to focus on your weaker areasThis book will go over a handful of SAT Math topics such as Fractions, Mixed numbers,
Integers, Percent, Equations, Polynomials, Exponents, Radicals, and more. All topics are simply and concisely explained, allowing you to develop your mathematics skills.
With this book, a student can focus on rapidly improving their SAT Math test scores. It doesn’t matter if you don’t have a tutor, as this comprehensive SAT Math study guide was designed for
self-study in mind. However, this book can be used with a tutor or for classroom usage.Effortlessly and confidently follow the step-by-step instructions in this study guide to ace the SAT Math in a
short period of time. | {"url":"https://www.effortlessmath.com/product/sat-math-in-30-days-the-most-effective-sat-math-crash-course/","timestamp":"2024-11-13T08:29:32Z","content_type":"text/html","content_length":"49424","record_id":"<urn:uuid:98e808df-0e6e-4e15-9fec-4b45dbb54795>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00368.warc.gz"} |
Math Guide Archives | Math Tutor
Who would say that they’re not a math person? Maybe they don’t have a math brain. Dispelling the Math Myths will help you to look at math in a new way. So let’s bust some math myths. A while ago, I
was walking down the hallway at the Middle School, where I work as a […]
Dispelling the Math Myths Read More »
How to Improve Math Skills
Most of the times students struggle to understand the core of the mathematics concepts. This will lead them to understand math in higher education. The poor knowledge of the basics will discourage
them in later studies. This doesn’t have to be that way. In this article, we are going to discuss how to improve your
How to Improve Math Skills Read More » | {"url":"https://mathtutory.com/tag/math-guide/","timestamp":"2024-11-08T21:24:01Z","content_type":"text/html","content_length":"125901","record_id":"<urn:uuid:7664e65e-28f8-4817-821c-8cb5ee8641d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00214.warc.gz"} |
Annual Income Calculator
Knowing your annual income can help you plan out your taxes, financial goals, and more. Here's a calculator that can compute it for you.
Your annual income can be a great help when it comes to financial planning.
Knowing how much you earn in a year lets you set your financial goals and plan out your taxes. It also gives you an idea of where you're at in terms of financial security.
Unfortunately, calculating it manually isn't only complicated, it's also time-consuming. To help lighten the load, you can use this annual income calculator. It's quick and easy to use.
How do you calculate your annual income?
How to Use This Annual Income Calculator
To use this annual income calculator, all you need to do is follow these steps:
1. Enter your salary and choose a time frame from the dropdown menu.
2. Input the number of hours and days you work per week.
3. Specify how many weeks you work per year.
4. Add any additional income you earned for the year (bonuses, profits from side hustles, etc.)
From there, click "Calculate" and let it do its thing. You should now see your annual income. It should also display how much was earned from your salary and how much income came from other sources.
Note that this calculator doesn't take your taxes and other deductions into account. For that, you'll need to calculate your net pay or net income.
What matters most to you in an income calculator?
Convert Hourly Rate Into Annual Income
Your hourly rate is the amount that you earn for every hour of work you do.
To convert it into your annual income, multiply the number of hours you work in a week by the number of weeks you work in a year. As a reference, the average American usually works 40 hours a week
and 52 weeks a year.^[1]
Annual Income = Hourly Rate x (Hours Worked per Week x Weeks Worked per Year)
Let's say that your hourly rate is $28. You work 40 hours a week and 52 weeks a year. Your annual income would be:
$28 x (40 x 52) = $58,240
If you received any other income for the year, add it to the formula above. Other income can include money from side jobs, freelancing, investments, bonuses, etc. The formula should now look like
Annual Income = Hourly Rate x (Hours Worked per Week x Weeks Worked per Year) + Other Income
Convert Monthly Income Into Annual Income
Converting your monthly income into your annual income is simple. All you need to do is multiply your monthly income by 12 (the number of months in a year). If you received any other income for the
year, just add it to the formula:
Annual Income = (Monthly Income x 12) + Other Income
Let's say that you earn a monthly income of $5,500. On top of that, you received $12,000 in dividends for the stocks you invested in. Your annual income would be:
($5,500 x 12) + $12,000 = $78,000
To convert your weekly income into your annual income, multiply how much you make in a week by the number of weeks you work per year.
Calculating Annual Income as a Freelancer
If you work regular hours (40 hours a week, 52 weeks a year) as a freelancer, using the calculator is simple and easy. However, that isn't the case for everyone. Some are independent contractors,
freelancers, or multiple job holders.
Freelancing jobs are often paid on a per-project or task basis. To keep track of your annual income as a freelancer, be sure to keep a record of the pay you receive per job.
It could be helpful to use an app or a spreadsheet like Excel or Google Sheets. Then, at the end of the year, add it up and calculate your annual income.
However, if you're the type of employee who has side hustles outside of your normal work hours, you could treat income from that as other income.
Let's say that your annual salary is $60,000 and the total income you earn from side gigs is $7,500. Your annual income will be $67,500.
Besides holding multiple jobs, you can also diversify your earnings through assets. Check out our list of
income generating assets
you can add to your finance strategy.
Calculating Annual Income as a Multiple Job Holder
If you hold multiple jobs, calculating your annual income can be simple or complex depending on your circumstances.
Let's say you work two part-time jobs that have different rates and hours. For part-time job #1, you earn $30 per hour and work 10 hours a week. For part-time job #2, you earn $22 per hour and work
30 hours a week. You work 52 weeks a year for both jobs.
To calculate your annual income, you'll first need to calculate your annual income for each job. You can do that for each by using this formula:
Annual Income = Hourly Rate x (Hours Worked per Week x Weeks Worked per Year)
For your first part-time job, your annual income would be:
$30 x (10 x 52) = $15,600
For your other part-time job, your annual income would be:
$22 x (30 x 52) = $34,320
Once you calculate your annual income for each job, add the two to get your total annual income. In this scenario, your total annual income would be $49,920.
Gross Pay Vs Net Pay
Your gross annual income (gross pay) represents the amount you earned for the year before taxes or other deductions. This is the amount you'd typically see on your contract if you're a salaried
More often than not, the amount that you actually receive from your employer isn't the same as your gross pay. This is because what you actually receive is your net pay. This is the amount that's
left of your gross pay after your employer withheld a certain amount for taxes and other deductions.
Factors That Affect Your Net Pay
Your employer is required to withhold employment taxes from your gross pay. These taxes include:
• FICA (Social Security and Medicare Taxes)
These taxes fund the Social Security and Medicare programs of the US government.
• Federal Income Tax
Anyone who earns an income in the US is legally obligated to pay federal income taxes. This is calculated using a bracket system that increases progressively based on your income.
• State Income Tax (if applicable)
Depending on which state you earned your income, you may have to pay state income taxes. Some states use progressive tax brackets (similar to the federal income tax) while some use fixed rates.
• Local Income Tax (if applicable)
In addition to the state income tax, some cities may impose their own income tax.
Aside from the above taxes, your employer might also withhold for legal and voluntary deductions such as:
• Court-Ordered Wage Garnishments
A court can order your employer to withhold a percentage of your salary or wages to pay for your debt. Examples of garnishments include alimony, child support, student loan debt, etc.
• Health Insurance Premiums
This is a voluntary deduction. Your employer may withhold some of your gross pay to cover your health insurance premiums. In some cases, your employer might fully cover for them.
• Retirement Savings
You may let your employer withhold a portion of your gross pay to invest in some retirement plans such as 401(k).
Bottom Line
Calculating your annual income manually can be complicated and time-consuming. Luckily, there's an annual income calculator that can help. All you need to do is fill in the fields and press
"Calculate"—it does all the heavy lifting for you.
Once you get your annual income, you can use it to plan out your taxes, finances, and financial goals.
What's your primary reason for calculating your annual income?
Write to Patrick Santos at feedback@creditdonkey.com. Follow us on Twitter and Facebook for our latest posts.
Subscribe to CreditDonkey: Get updates on the latest deals and keep up with the best money moves. | {"url":"https://www.creditdonkey.com/annual-income-calculator.html","timestamp":"2024-11-10T15:31:52Z","content_type":"application/xhtml+xml","content_length":"81205","record_id":"<urn:uuid:edb6fc09-8b6f-44d3-9442-fff0fb999782>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00305.warc.gz"} |
Harmonic Function / Laplace’s equation
harmonic 这个词的意思太多了,比如在 periodic signals 里翻译成 “谐波”。而 Harmonic Function 的翻译是 “调和函数”
本篇 quote from V7. Laplace’s Equation and Harmonic Functions, M.I.T. 18.02 Notes, Exercises and Solutions by Jeremy Orloff
Laplace operator
The two-dimensional Laplace operator, or laplacian as it is often called, is denoted by $\nabla^2$ or $lap$, and defined by
\[\nabla^2 = \frac {\partial^{2}}{\partial x^{2}} + \frac {\partial^{2}}{\partial y^{2}}\]
\[\nabla^2 = \frac {\partial^{2}}{\partial x_1^{2}} + \frac {\partial^{2}}{\partial x_2^{2}} + \cdots + \frac {\partial^{2}}{\partial x_n^{2}}\]
注意 $\nabla^2$ 其实是一个参数为 $f$ 的函数,只是我们不写成 $\nabla^2(f)$ 而是直接用 $\nabla^2 f$ 表示:
\[\nabla^2 f = \frac {\partial^{2} f}{\partial x^{2}} + \frac {\partial^{2} f}{\partial y^{2}}\]
where $f(x, y)$ is a twice differentiable functions.
The notation $\nabla^2$ comes from thinking of the operator as a sort of symbolic scalar product:
\[\nabla^2 = \nabla \cdot \nabla = \left ( \frac{\partial}{\partial x} \mathbf{i} + \frac{\partial}{\partial x} \mathbf{j} \right ) \cdot \left ( \frac{\partial}{\partial x} \mathbf{i} + \frac{\
partial}{\partial x} \mathbf{j} \right ) = \frac {\partial^{2}}{\partial x^{2}} + \frac {\partial^{2}}{\partial y^{2}}\]
Notice that the laplacian is a linear operator, that is it satisfies the two rules
• $\nabla^2 (u + v) = \nabla^2 u + \nabla^2 v$
• $\nabla^2 cu = c(\nabla^2 u)$
for any two twice differentiable functions $u(x, y)$ and $v(x, y)$ and any constant $c$.
Laplace equation
\[\nabla^2 f = 0\]
Harmonic Function
A function $\phi(x, y)$ which has continuous second partial derivatives and satisfies Laplace’s equation is called a harmonic function. I.e.
\[\phi \text{ is harmonic} \iff \nabla^2 \phi = 0\]
Considering laplacian is a linear operator, we have:
\[\phi \text{ and } \psi \text{ harmonic} \Rightarrow (\phi + \psi) \text{ and } c\phi \text{ are harmonic}\]
Examples of harmonic functions
仅列举 Harmonic homogeneous polynomials in two variables 的例子。更多请参考教程。
• Degree $0$: all constants $c$ are harmonic.
• Degree $1$: all linear polynomials $ax + by$ are harmonic.
• Degree $2$: the quadratic polynomials $x^2 − y^2$ and $xy$ are harmonic; all other harmonic homogeneous quadratic polynomials are linear combinations of these, e.g.:
\[\phi(x, y) = a(x^2 − y^2) + bxy\]
where $a b$ are constants.
• Degree $n$: the real and imaginary parts of the complex polynomial $(x + \mathrm{i} y)^n$ are harmonic. | {"url":"https://listcomp.com/math/2018/06/05/harmonic-function-laplaces-equation","timestamp":"2024-11-08T05:58:09Z","content_type":"text/html","content_length":"19840","record_id":"<urn:uuid:2976b1ad-bd4d-405c-bd82-9c92c48d7e4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00508.warc.gz"} |
Compute arbitrary terms - low-level generic assembly procedures (deprecated)
Compute arbitrary terms - low-level generic assembly procedures (deprecated)¶
This section present the first version of generic assembly procedure which has been implemented in GetFEM and is now considered as deprecated. It allows to make the assembly of arbitrary matrices in
the linear case. In the nonlinear case, some special “non_linear_term” object have to be implemented, which could be a bit tricky and obliges to use very low-level internal tools of GetFEM. The
generic weak form language (GWFL) has been developed to circumvent these difficulties (see Compute arbitrary terms - high-level generic assembly procedures - Generic Weak-Form Language (GWFL)).
As it can be seen in the file getfem/getfem_assembling.h, all the previous assembly procedures use a getfem::generic_assembly object and provide it an adequate description of what must be done. For
example, the assembly of a volumic source term for a scalar FEM is done with the following excerpt of code:
getfem::generic_assembly assem;
The first instructions declare the object, and set the data that it will use: a mesh_im object which holds the integration methods, two mesh_fem objects, the input data F, and the destination vector
The input data is the vector \(F\), defined on mfdata. One wants to evaluate \(\sum_{j} f_j (\int_\Omega \phi^i \psi^j)\). The instruction must be seen as something that will be executed for each
convex cv of the mesh. The terms #1 and #2 refer to the first mesh_fem and the second one (i.e. mf and mfdata). The instruction Z=data(#2); means that for each convex, the “tensor” Z will receive the
values of the first data argument provided with push_data, at indexes corresponding to the degrees of freedom attached to the convex of the second (#2) mesh_fem (here, Z = F[mfdata.ind_dof_of_element
The part V(#1)+=... means that the result of the next expression will be accumulated into the output vector (provided with push_vec). Here again, #1 means that we will write the result at indexes
corresponding to the degrees of freedom of the current convex with respect to the first (#1) mesh_fem.
The right hand side comp(Base(#1).Base(#2))(:,j).Z(j) contains two operations. The first one is a computation of a tensor on the convex: comp(Base(#1).Base(#2)) is evaluated as a 2-dimensions tensor,
\(\int\phi^i \psi^j\), for all degrees of freedom \(i\) of mf and \(j\) of mfdata attached to the current convex. The next part is a reduction operation, C(:,j).Z(j): each named index (here \(j\)) is
summed, i.e. the result is \(\sum_j c_{i,j} z_j\).
The integration method used inside comp(Base(#1).Base(#2)) is taken from mim. If you need to use integration methods from another mesh_im object, you can specify it as the first argument of comp, for
example comp(\%2, Base(#1).Grad(#2)) will use the second mesh_im object (New in getfem++-2.0).
An other example is the assembly of the stiffness matrix for a vector Laplacian:
getfem::generic_assembly assem;
Now the output is written in a sparse matrix, inserted with assem.push_mat(SM). The $1 in M$1(#1,#1) just indicates that we refer to the first matrix “pushed” (it is optional, but if the assembly
builds two matrices, the second one must be referred this way). The sym function ensure that the result is symmetric (if this is not done, some round-off errors may cancel the symmetricity, and the
assembly will be a little bit slower). Next, the comp part evaluates a 7D tensor,
where \(\varphi^i_j\) is a \(jth\) component of the \(ith\) base function of mf and \(\psi^p\) is a (scalar) base function of the second mesh_fem. Since we want to assemble
\[\int a(x).\nabla\phi^i.\nabla\phi^j, \quad\text{with}\quad a(x)=\sum_p a^p \psi^p(x),\]
the reduction is:
\[\sum_{j,k,p}\left( \int \partial_k\varphi^{i}_{j} \partial_k\varphi^m_j \psi^p \right)a^p\]
In the comp function, vGrad was used instead of Grad since we said that we were assembling a vector Laplacian: that is why each vGrad part has three dimensions (dof number, component number, and
derivative number). For a scalar Laplacian, we could have used comp(Grad(#1).Grad(#1).Base(#2))(:,k,:,k,p).a(p). But the vector form has the advantage to work in both vector and scalar case.
The last instruction, assem.assembly(), does evaluate the expression on each convex. For an assembly over a boundary just call assem.assembly(rg), where rg is a getfem::mesh_region object. rg might
also be a number, in that case the mesh region taken into account is mim.linked_mesh().region(rg).
The third example shows how to compute the \(L^2\) norm of a scalar or vector field on a mesh boundary:
std::vector<scalar_type> v(1);
This one is easy to read. When assembly returns, v[0] will contain
\[\sum_{i,j,k}\left(\int_{boundary} u_i \varphi^{i}_{k} u_j \varphi^j_k \right)\]
The fourth and last example shows an (sub-optimal) assembly of the linear elasticity problem with a complete Hooke tensor:
"M(#1,#1)+= sym(e(:,j,k,:,m,n,p).h(j,k,m,n,p))");
The original equations are:
\[\int\varepsilon(\varphi^i):\sigma(\phi^j), \quad\text{with}\quad \sigma(u)_{ij}=\sum_{kl} h_{ijkl}(x) \varepsilon_{kl}(u)\]
where \(h\) is the Hooke tensor, and \(:\) means the scalar product between matrices. Since we assume it is not constant, \(h\) is given on the second mesh_fem: \(h_{ijkl}(x)=\sum_p h_{ijkl}^p \psi^p
\). Hence the first line declares that the first data “pushed” is indeed a five-dimensions tensor, the first fourth ones being all equal to the target dimension of the first mesh_fem, and the last
one being equal to the number of degrees of freedom of the second mesh_fem. The comp part still computes the same 7D tensor than for the vector Laplacian case. From this tensor, one evaluates \(\
varepsilon(\varphi^i)_{jk}\varepsilon(\phi^l)_{mn}\psi^p\) via permutations, and finally the expression is reduced against the hook tensor.
available operations inside the comp command¶
• Base(#i): evaluate the value of the base functions of the ith mesh_fem
• Grad(#i): evaluate the value of the gradient of the base functions of the ith mesh_fem
• Hess(#i): evaluate the value of the Hessian of the base functions of the ith mesh_fem
• Normal(): evaluate the unit normal (should not be used for volumic integrations !)
• NonLin$x(#mf1,... #mfn): evaluate the xth non-linear term (inserted with push_nonlinear_term(pnonlinear_elem_term)) using the listed mesh_fem objects.
• GradGT(), GradGTInv(): evaluate the gradient (and its inverse) of the geometric transformation of the current convex.
you may reference any data object inside the comp command, and perform reductions inside the comp(). This feature is mostly interesting for speeding up assembly of nonlinear terms (see the file
getfem/getfem_nonlinear_elasticity.h for an example of use).
others operations¶
Slices may be mixed with reduction operations t(:,4,i,i) takes a slice at index 4 of the second dimension, and reduces the diagonal of dimension 3 and 4. Please note that index numbers for slices
start at 1 and not 0 !!
mdim(#2) is evaluated as the mesh dimension associated to the second mesh_fem, while qdim(#2) is the target dimension of the mesh_fem.
The diagonal of a tensor can be obtained with t{:,:,3,3} (which is strictly equivalent to t{1,2,3,3}: the colon is just here to improve the readability). This is the same operator than for
permutation operations. Note that t{:,:,1,1} or t{:,:,4,4} are not valid operations.
The print command can be used to see the tensor: "print comp(Base(#1));" will print the integrals of the base functions for each convex.
If there is more than one data array, output array or output sparse matrix, one can use data$2, data$3, V$2, M$2,… | {"url":"https://getfem.readthedocs.io/en/latest/userdoc/gasm_low.html","timestamp":"2024-11-09T19:25:26Z","content_type":"text/html","content_length":"32872","record_id":"<urn:uuid:38da6b5f-ba53-4539-b5fb-ca38e8bde3e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00245.warc.gz"} |
Length-Based Reference Points for Data-Limited Situations: Applications and Restrictions
How to translate text using browser tools
21 December 2009 Length-Based Reference Points for Data-Limited Situations: Applications and Restrictions
Jason M. Cope, André E. Punt
Current fisheries management policies generally require an assessment of stock status, which is a difficult task when population and fisheries data are limited. Three simple metrics based on catch
length compositions (i.e., that reflect exclusive take of mature individuals, P[mat]; that consist primarily of fish of optimal size, the size at which the highest yield from a cohort occurs, P[opt];
and that demonstrate the conservation of large, mature individuals, P[mega]) can be used to monitor population status relative to exploitation. The metrics (collectively referred to as P[x]) were
intended to avoid growth and recruitment overfishing, but there was no quantitative linkage to stock status and calculation of future sustainable catches. We attempt to make this connection by
exploring the relationship of P[x] measures to fishing mortality and spawning biomass (SB). The relationships are compared specifically to the current target reference point (0.4 times the virgin, or
unfished, SB [SB[0]]) and limit reference point (0.25SB[0]) used for the U.S. West Coast groundfish fishery by using simulations based on a deterministic age-structured population dynamics model.
Sensitivity to fishery selectivity, life history traits, and recruitment compensation (steepness) is explored. Each P[x] measure showed a wide range of possible values depending on fishery
selectivity, steepness, and the ratio of the length at maturity (L[mat]) to the optimal fishing length (L[opt]). Although the values of P[x] may be compatible with sustainable fishing, these values
are not always sufficient to ensure stock protection from overfishing. Moreover, values for P[x] cannot be interpreted adequately without knowledge of the selectivity pattern. A new measure, P[obj]
(the sum of P[mat], P[opt], and P[mega]), is introduced to distinguish selectivity patterns and construct a decision tree for development of stock status indicators. Heuristic indicator values are
presented to demonstrate the utility of this approach. Although several caveats remain, this approach builds on the recommendations of previous literature by giving further guidance related to
interpreting catch length composition data under variable fishery conditions without collecting additional information. It also provides a link to developing harvest control rules that inform
proactive fisheries management under data-limited conditions.
The Magnuson–Stevens Fishery Conservation and Management Act (reauthorized in 2006) mandates sustainable fishery actions in the United States, with a particular goal of continuing resource use while
avoiding overfishing, maintaining healthy stocks, and rebuilding overfished stocks (Restrepo and Powers 1999). Achieving these goals generally requires the ability to identify when overfishing is
occurring and when a stock has reached an overfished state. Life histories, environmental complexity, and resource removal context combine to complicate the “true” definition of overfishing and being
overfished, necessitating the use of reference points (RPs; Caddy and Mahon 1995). Reference points often attempt to buffer uncertainty in maximum sustainable removals, thus promoting a risk-based
trade-off between resource use (e.g., yield) and conservation (Mace 1994). The precautionary behavior of this trade-off, however, is usually poorly understood (Hilborn 2002).
Reference points, by definition, rely on some measure of the stock in question that relates (or refers) to status. Reference points can either be targets (levels that management attempts to maintain
the stock at or around) or limits (levels that are to be avoided; Caddy and Mahon 1995; Caddy 2004). Target RPs (TRPs) therefore reflect desired biological or ecological states, whereas limit RPs
(LRPs) relate to resource protection and persistence (Caddy and Mahon 1995; Botsford et al. 2004).
Current quantitative stock assessment techniques, ranging from surplus production models to age-structured statistical catch-at-age models, provide an array of outputs that can be used when defining
RPs, with estimates of fishing mortality (F) and biomass (relative to unfished or optimal conditions) the most widely applied (Quinn and Deriso 1999; Walters and Martell 2002). Although theoretically
informative, conventional stock assessment techniques based on fitting population dynamics models to data cannot be applied to a large fraction of the world's exploited fishery resources because of
data limitations. The challenge thus becomes devising alternative assessment methods that require limited data but are capable of revealing stock status so as to fulfill the requirements of fishery
management mandates.
Contemporary approaches to inform stock vulnerability or status for data-poor situations take the form of state indicators (Garcia and Staples 2000; Jennings 2005), qualitative risk assessments (
Stobutzki et al. 2001), and inferences about vulnerability based on life history characteristics (Musick et al. 2000; King and McFarlane 2003). Such approaches strive to apply transparent protocols
to simple or limited data. However, the ability to validate these methods and use them to inform future catches remains a formidable task (Rochet and Trenkel 2003; Jennings and Dulvy 2005; Smith et
al. 2007).
Froese (2004) introduced a method that relies on well-established relationships between fisheries management and life history theory (Reynolds et al. 2001) as applied to catch length composition
data. Its straightforward approach has gained attention (Jennings 2005; Lewin et al. 2006; Francis et al. 2007) and is based on three simple ideas: (1) catch length compositions should reflect almost
exclusive take of mature individuals (P[mat]; Leaman 1991; Myers and Mertz 1998); (2) catch length compositions should consist primarily of fish of the size at which the highest yield from a cohort
occurs (P[opt]; Beverton 1992); and (3) catch length compositions should demonstrate the conservation of large, mature individuals (P[mega]; Berkeley et al. 2004). These proposed metrics are meant to
capture catch characteristics indicative of sustainable catches, such as avoidance of growth (Beverton and Holt 1957) and recruitment (Ricker 1954) overfishing, while using easily collected fisheries
data (e.g., length frequencies of catch). We will hereafter refer to these three ideas as “Froese's (2004) sustainability recommendations.” While the approach is intuitively appealing, it has not
been formally explored to see how these proportional measures relate to currently used RPs.
Our aim for this article is to develop the concepts of Froese (2004) further and to explore how they relate to F, spawning biomass (SB), and current RPs based on SB. Specifically, we ask, “Can
informative RPs be developed by using P[mat], P[opt], and P[mega] that correspond to the current SB-based RPs commonly used in U.S. fisheries management?” Froese (2004) provided general guidelines
regarding target values for P[mat], P[opt], and P[mega], and we explore how these values perform as the basis for fishery RPs. As a point of comparison, the analyses are based, to the extent
possible, on management and biological scenarios typical of those related to the U.S. West Coast groundfish fishery.
Defining catch composition proportions
The three length-based catch proportions of interest (hereafter referred to collectively as P[x]) are calculated as follows:
and where
is the proportion of the catch that is in length-class
is the length at 50% maturity,
is the maximum length, and
is the length at which the biomass of a cohort is maximized (defined here as the length corresponding to the age [
] at which the product of weight at age and numbers at age under zero
is maximized;
Beverton 1992
). Calculation of
, and
requires information on the catch length composition for the fishery, an estimate of
, and an estimate of
. This catch length composition is not dependent on the species in question being targeted by a fishery; it is equally relevant to species in the bycatch as well. The major assumption is that the
catch length composition is representative of the fishery catch at length.
Population dynamics model
A deterministic age-structured population dynamics model is used to explore the implications of fishery selectivity, recruitment compensation (quantified by steepness h; Mace and Doonan 1988), and
life history traits on P[mat], P[opt], and P[mega]. The stable age structure for this model is given by:
is the number of animals of age
and gender
is the instantaneous rate of natural mortality (assumed to be independent of age and gender),
is the fully selected fishing mortality,
is the selectivity of animals of age
to the fishery, ω is the longevity and plus-group age (i.e., age at which older-aged individuals are accumulated in the population dynamics model), and
is the number of recruits based on a modified Beverton–Holt stock–recruit relationship (
Mace and Doonan 1988
) when fully selected fishing mortality equals
: where
is recruitment in the absence of fishing (arbitrarily set to 1) assuming a 50:50 sex ratio,
is the steepness of the stock–recruitment relationship (the fraction of
when SB is reduced to 20% of the virgin, or unfished, SB [SB
is the SB per recruit when fully selected fishing mortality equals
, and
is the SB per recruit in the absence of fishing. where δ
is the fraction of animals of age
that are mature,
is the weight of an animal of age
based on the allometric growth model (
) where length at age is assumed to be governed by the von Bertalanffy growth function (VBGF), rearranged to express age as a function of length: where
is the asymptotic length,
is the growth coefficient, and
is the theoretical age at a length of zero.
The maturity function is defined as
is the age at 50% maturity and β is the difference between
and the age at 95% maturity.
Age is considered at intervals of 0.1 year for greater resolution of lengths, but mortality rates are reported on an annual basis. Although this model is gender structured, the life history parameter
values are set equal between genders to reduce complexity.
Defining life histories and parameter values
The simulated lengths on which the catch length proportions are based depend on the values assigned to the parameters of the age-structured population dynamics model. These values (Table 1) were
chosen to capture correlations commonly found among life history characteristics (Adams 1980; Winemiller and Rose 1992), were scaled based on data for species composing the U.S. West Coast groundfish
fishery, and were selected to allow key areas of parameter uncertainty to be explored.
Three additional life history relationships relevant to defining P[x] values were calculated from 21 assessed West Coast groundfish species as a way to represent the biological scenarios of West
Coast groundfishes and maintain consistent relationships among life history parameters when simulating populations (Table 2): (1) P[mat] at unfished levels (P[mat,F=0]); (2) L[mat]/L[opt]; and (3) A
[mat]/A[opt]. The P[mat,F=0] measure represents P[mat] prior to fishing (equation 1) and calculated from the virgin age structure:
The estimates of
for gopher rockfish were outliers, so summary statistics with gopher rockfish excluded are also provided in
Table 2
The base case choice for the relationship between age and length (i.e., VBGF parameters L[∞], k, and A[0]; life history LH2, Table 1) was taken from a meta-analysis of growth curves for the genus
Sebastes (Helser et al. 2007), and sensitivity was explored to three other choices for k (life histories LH1, LH3, and LH4) providing a range of possible life history conditioning among groundfishes,
though not representing any one species in particular. Simulation results were insensitive to the choice of L[∞], so this parameter was kept constant among each life history variation. Hoenig's
equation (Hoenig 1983) for estimating M from ω was used as a way to establish ω and M for LH2 (Table 1) consistent with values of P[mat,F=0] near the median of 0.5 (Table 2). This creates populations
that under exponential decay via M and zero F produce mature population proportions consistent with those estimated for virgin populations of assessed West Coast groundfish stocks. Values for M and
hence ω for the other life history scenarios (LH1, LH3, and LH4; Table 1) were set so that M/k was a constant, and the parameter values for these scenarios were checked to ensure that the resulting P
[mat,F=0] values were between the first and third quartiles for P[mat,F=0] in Table 2. The M/k value that forms the base case for the analyses in this article differs from those values found in other
empirical studies (Jensen 1996; Simpfendorfer 1999). Using those literature values for M/k leads to P[mat,F=0] values near 0.3 (i.e., much lower than most of the values in Table 2). Sensitivity to
alternative values for M/k (1.0 and 1.5) was, however, explored (Jensen 1996; Froese et al. 2008). The values for the parameters of the length–weight relationship (intercept a and slope b) were set
to be characteristic of those for groundfish species off the U.S. West Coast (0.001 and 3.0, respectively; Burton et al. 2000).
The values for L[mat] were computed from L[opt] by using the ratio of L[mat] to L[opt] (Froese and Binohlan 2000). The values assumed for this ratio (L[mat]/L[opt] = 0.65, 0.75, and 0.9; Figure 1)
are taken from Table 2 and from additional estimates given by Froese and Binohlan (2000) and reported by Froese et al. (2008). The low and high values for L[mat]/L[opt] capture the first and third
quartiles for this ratio in Table 2, while the central value is close to the harmonic mean of L[mat]/L[opt] based on all species. Conversion of L[mat] to A[mat] was accomplished by using the VBGF,
and β was set to A[mat]/4 (a general relationship consistent with values seen for West Coast groundfish).
Nine values of h (Table 3) were considered. The lowest h-value was set to 0.25 following He et al. (2006), who argued that values less than this are highly unlikely.
Fishing, sampling, and calculating P[x] proportions of populations
Fishing the simulated populations requires defining a fishing rate F and a selectivity-at-age (S[A]) curve. Values for F from 0 (unfished) to 0.3, 0.6, 1.0, and 1.5 were considered for each of the
four life histories, respectively (Table 3), and the limits to F were defined by the nature of the life history (e.g., F = 0.3 was the maximum F for LH1 because it had the lowest M; conversely, LH4
had the highest M and thus the highest potential F of 1.5). In addition, the F-values corresponding to those that reduced the population to 40% (F[40]) and 25% (F[25]) of SB[0] were also calculated.
Six selectivity patterns (converting age selectivity to length selectivity via the VBGF) were chosen to explore a wide range of possibilities that might affect resultant P[x] values (Figure 2; Table
3). Three selectivity patterns (logistic = maturity ogive [hereafter, “logistic”]; full selectivity of fish larger than 0.9L[opt] [hereafter, “>0.9L[opt]”]; and full selectivity of optimally sized
fish [hereafter, “L[opt]”]) comply with Froese's (2004) sustainability recommendations, while the other three patterns (full selectivity of fish smaller than 0.9L[opt] [hereafter, “<0.9L[opt]”]; full
selectivity of fish smaller than 1.1L[opt] [hereafter, “1.1L[opt]”]; and the inverse of the above age-based logistic selectivity [hereafter, “reverse logistic”]) violate Froese's (2004)
sustainability recommendations. Given that three values of L[mat] are considered, three equivalent logistic selectivity curves (and the resultant reverse logistic selectivity) were also considered (
Figure 2).
The catch-based length proportions were computed by (1) “sampling” the catch age compositions (without error) by using the selectivity pattern used to drive the population dynamics and then (2)
converting the age compositions to length compositions. The proportion of the catch in age-class A, P[A], was given by:
The values for
, and
were then compared to identify patterns with selectivity,
, and
that would indicate whether the population was below either the TRP (0.4SB
) or the LRP (0.25SB
) for consistency with how West Coast groundfish are managed (
Punt 2003
Sensitivity of Length Proportions to Life Histories and Ratio of Length at Maturity to Optimal Fishing Length
Differences in the values for the catch-based length proportions (P[mat], P[opt], and P[mega]) among the four life histories and two M/k ratios were insufficient (i.e., demonstrating a strong linear
trend along the 1:1 comparison line) to permit further consideration of the additional life histories (Figure 3); all subsequent results are therefore only reported for life history LH2. However, the
catch proportions (particularly P[mat]) were sensitive to the assumed value for L[mat]/L[opt] (Figure 4). This sensitivity was caused when L[mat]/L[opt] was equal to 0.9, which is the same value as
the lower length (0.9L[mat]) used to calculate P[opt]. When these two values are equivalent, all measures of P[x] are affected. Consequently, sensitivity to L[mat]/L[opt] was retained throughout the
rest of the analyses.
Sensitivity of Length Proportions to Selectivity
The catch-based length proportions do not depend greatly on h given F within a selectivity pattern, but they do depend on the value assumed for L[mat]/L[opt] and vary among selectivity patterns (
Figure 5). Values for P[mat] and P[opt] are greater for the selectivity patterns that satisfy Froese's (2004) sustainability recommendations (Figure 5) than for those selectivity patterns that do not
(Figure 6), as expected. However, the length proportions and their trends with F differ among the selectivity patterns that satisfy Froese's (2004) sustainability recommendations (Figure 5). Such
differences also occur for the selectivity patterns in Figure 6, but the effect is much smaller.
Using Length Proportions as Reference Points
The values for P[x] at which the TRP and LRP for SB are obtained depend on h (Figures 5, 6; with the lowest F corresponding to the lowest h). Higher values of F are obtained at TRPs and LRPs when the
selectivity pattern satisfies Froese's (2004) sustainability recommendations (Figure 5, intersection of black and red lines). These selectivities also demonstrate the largest sensitivities to h and
associated F at the RPs.
The values for P[x] are often not very sensitive to stock status (Figures 7, 8). For example, the ranges of values for the proportions corresponding to an overfished stock (SB = 0.25SB[0]), a stock
at the target level (SB = 0.4SB[0]), and an unfished stock (SB[0]) often overlap to a substantial extent (Figures 7, 8). Only when the productivity of a stock is very high (e.g., h approaches 1.0) is
there meaningful contrast between the length-based catch proportions at 0.25SB[0], 0.4SB[0], and SB[0]. In other cases, there is no contrast in the values of P[mat], P[opt], or P[mega], such as when
fishing occurs at L[opt] (by definition the optimal fishing pattern).
Also troubling is the wide range of possible values for the catch-based length proportions. For example, values of P[mat] that indicate that the stock is at 0.25SB[0] range from 1.0 to approximately
0.2 when L[mat]/L[opt] is 0.65 (Figure 7) and from 0 to 1 when L[mat]/L[opt] is 0.9 (Figure 8), with the range primarily reflecting the impact of the selectivity pattern. The same holds true for P
[opt], while P[mega] could range from 0.5 to 0.0. These results challenge the notion that values for P[mega] between 0.3 and 0.4 or at 0 will indicate healthy stocks (Froese 2004). Moreover, these
results indicate that distinguishing selectivity is critical to establishing RPs based on catch length compositions.
The sum of the catch-based length proportions (P[mat], P[opt], and P[mega]), herein referred to as P[obj], provides a means to distinguish selectivity patterns. There is a more consistent
relationship between P[obj] and SB across h-values and between values for L[mat]/L[opt] (Figure 9) than using P[mat], P[opt], or P[mega] alone. Two strong relationships are apparent from Figure 9:
(1) a P[obj] value less than 1 is indicative of selectivity patterns that do not follow Froese's (2004) sustainability recommendations; and (2) a P[obj] value greater than 1 is indicative of
selectivity patterns that follow Froese's (2004) recommendations. Within the latter distinction, a P[obj] value between 1 and 2 clearly distinguishes selectivity patterns containing some immature and
suboptimally sized fish (e.g., the logistic selectivity pattern) from those for which P[obj] is equal to 2 (e.g., the >0.9L[opt] and L[opt] selectivity patterns). Within the former, the <1.1L[opt]
(small and optimally sized fish) and reverse logistic (all but the largest fish) selectivity patterns can be distinguished by using P[mega] values. Finally, if P[obj] is less than 1 and if P[opt] + P
[mega] = 0, the fishery is one that fishes only immature individuals and is considered highly undesirable under Froese's (2004) sustainability recommendations (see also Myers and Mertz 1998).
The results in Figures 7–9 allow the construction of a decision tree (Figure 10) for indicators based on P[x] values. Given values for P[obj], P[mat], and P[opt] and the relationship between L[mat]
and L[opt], Figure 10 provides a set of rules for defining when a stock is below the TRP or LRP that does not require knowledge of F, SB, and h.
Table 4 illustrates the trade-offs of the indicators suggested in Figure 10. It is clear from Table 4 that the catch-based length proportions are not sensitive enough across all h-values to reliably
determine when the stock is at the TRP or below the LRP, so any assigned indicator values should be considered trigger points and should be determined under case-specified risk analysis. The proposed
indicator values offered here are used only as examples to explain the basic approach.
In general, the example decision tree allows stocks that are below 0.25SB[0] to be identified as such unless h is less than 0.3. For example, following one branch of the tree, let us consider the
case when P[obj] is less than 1, P[opt] + P[mega] is greater than 0, and L[mat] equals 0.65L[opt] (Table 4). As long as the P[mat] value is above the suggested trigger point of 0.4, the stock is
correctly assessed to be (1) above 0.25SB[0] unless h is 0.25 or less and (2) above 0.4SB[0] unless h is 0.3 or less when selectivity is governed by the <1.1L[opt] pattern. However, under the reverse
logistic selectivity pattern, this trigger point fails to detect a stock that is below 0.4SB[0] (but still above 0.25SB[0]) for any value of h or a stock that is below 0.25SB[0] if h is less than
0.5. Alternatively, one could separate the <1.1L[opt] and reverse logistic selectivity patterns by using P[mega] values (Figures 7, 8), so an additional trigger point based on P[mega] could be
included to reduce error in detection between these selectivities.
The probability that the stock is below the SB-based TRP, LRP, or both for all possible values of a P[x] trigger point is listed in Table 5. Table 5 integrates the uncertainty across h-values and the
selectivity patterns for each P[obj] branch of the decision tree. For instance, given complete uncertainty in h, there is a 26% chance that the true SB is below 0.4SB[0] (the TRP) and a 17% chance
that it is below 0.25SB[0] (the LRP) if P[obj] is less than 1, L[mat] is less than 0.75L[opt], and P[mat] equals 0.4 (Tables 4, 5).
Length-Based Reference Points and Harvest Control Rules
Size-related measures (e.g., mean length or weight; length compositions) have long been used as indicators of response to population decline (Beverton and Holt 1957; Smith 1994b). Recent extensions
of this idea demonstrate that size composition changes with increased F (Gislason and Rice 1998; Bianchi et al. 2000), although such studies, as applied mostly to fish communities and ecosystems,
illustrate general trends rather than linking size-related measures to RPs (Rochet and Trenkel 2003). Given that catch length frequencies are among the easiest data to collect, it is valuable to know
how to interpret such information in the context of providing directed fishery management advice.
Froese (2004) advanced this approach by providing simple but fundamentally sound guidance for how to interpret fishery length composition data to avoid overly depleting stocks. However, translating
the broad suggestions of Froese (2004; i.e., to target the mature and optimally sized individuals) into practical management advice can be problematic (Rochet and Trenkel 2003; Link 2005), especially
given the strong interaction between selectivity and stock status. Punt et al. (2001) formally evaluated size-based indicators and their potential use as RPs, offering cautionary advice on the
potentially imprecise but informative nature of using mean size and size compositions as proxies of population depletion.
This article continues such thoughts, related specifically to the measures offered by Froese (2004), and shows that fish stocks with P[mat] and P[opt] values much less than 1.0 can theoretically be
sustainably fished, while also demonstrating that major decreases in population biomass can occur even when P[opt] equals 1.0, the theoretically ideal situation. We also illustrate how Froese's
(2004) original suggestion of targeting a P[mega] value either between 0.30 and 0.40 or at 0 may encourage overfishing under certain circumstances.
The approach outlined in this article offers a way to interpret length composition data even if direct information on mortality, fishery selectivity, and recruitment compensation (i.e., h) is
unknown. The decision tree (Figure 10) can provide context-specific guidance for interpreting stock status when a fishery is not operating at an optimal fishing selectivity (L[opt]), thus providing
more flexible management advice. Noting that the sensitivity to different life histories is low (Figure 3), these results are germane to a wide range of stocks and fisheries.
Identifying a link between a trigger point and stock status (by using Figure 10) is the first step to using catch length composition data to inform future catch recommendations via harvest control
rules (HCRs). Harvest control rules define the functional response of removals to the current state of the resource as reflected by indicators or RPs (Restrepo and Powers 1999; Smith et al. 2008).
The decision tree (Figure 10) and risk assessment (Tables 4, 5) suggest ways in which catch-based length proportions can be used to provide advice regarding harvest regulation within the context of
existing management RPs. Specifically, an HCR could act on both P[obj] and the related P[x] indicator(s) (Figure 10) to adjust catches up or down. However, a full examination of P[x]-based HCRs
requires a management strategy evaluation (MSE; Smith 1994a; Sainsbury et al. 2000; Smith et al. 2007) and is hence beyond the scope of this study.
Using Froese's (2004) P[x] values to interpret catch-based length compositions in the context of status determination remains less than ideal in several ways. For example, it is not possible to
evaluate stock status when selectivity contains only “optimal” individuals (i.e., when P[mat] = 1; represented here by the >0.9L[opt] and L[opt] selectivity patterns). There are two main ways of
transitioning from a selectivity pattern that includes all optimally sized and larger individuals (e.g., >0.9L[opt]) to the “most optimal” selectivity pattern that encompasses all optimally sized
individuals (L[opt]): (1) stop fishing individuals that are counted in P[mega]; or (2) increase F until there are no more fish of P[mega] size. The former would be considered conservative and the
latter reckless with respect to maintaining stock status above RPs, yet both strategies result in the same P[x] values during monitoring. This suggests that if a fishery is taking only mature
individuals, but not just at P[opt], it may be most precautionary to assign an HCR that forces the L[opt] strategy. This would be carried out by ensuring that a P[obj] equal to 2 and a P[mat] equal
to 1 are both heavily enforced in an HCR. In a similar approach, it may be wise to have a large penalty on catch for a fishery that takes only immature fish (far left branch of Figure 10) unless
there is additional evidence that the M/F relationship is much smaller than 1 (Figure 6).
One major drawback with the simulation approach we took is the assumption that there is no variation in length at age. This assumption helps to simplify the analyses and helps one discern general
patterns, but it loses realism and therefore the power of the decision rule (Table 5) is probably overestimated. Furthermore, deviations from our presented results would be seen if maturity is age
based instead of size based, although the exploration of a range of L[mat]/L[opt] ratios does address this uncertainty to some extent. Also not considered here is how the L[mat]/L[opt] ratio may
change with fishing-induced alteration of life history traits (Conover and Munch 2002).
Another challenge associated with using P[x] values is the very small difference between expectations of P[x] for fished populations at the trigger points and for unfished populations, especially
those with low h. The contrast between P[x] at SB[0] and 0.4SB[0] or 0.25SB[0] increases with h, but h is unknown for most fishes. This contrast, or the lack thereof, will affect the functionality of
subsequent HCRs, particularly when trying to adjust catch upwards. Although this ability is almost negligible when h is low, this is not surprising given low recruitment compensation and yield
potential in such instances (Punt et al. 2008). Additionally, the responsiveness of P[x] values to population status change could not be determined in this study. Other studies using different forms
of size-based indicators imply weak (Piet and Jennings 2005) to moderate (Punt et al. 2001) linkages. Such responsiveness could affect the reliability of P[x] measures as indicators of stock status
on time scales that are germane to management and thus needs examination via MSE.
Finally, the evaluation of power in Table 5 does not account for sampling error, which can be substantial for many data-poor fisheries. Again, an MSE approach will ultimately show the utility of the
basic approach when fisheries monitoring data are limited.
The approach of this article enhances the recommendations of Froese (2004) by giving further guidance related to interpreting catch-based length composition data under a range of fishery conditions
without the collection of information other than basic biological parameters related to growth and maturity. It also lays the groundwork from which HCRs that use length composition data could be
developed and tested to inform fishery management decisions. Finally, the progress made with these simple measures hints at the value of identifying additional alternative measures based on length-
or age-composition information (e.g., indicators measuring age truncation; Longhurst 1998; Rochet and Trenkel 2003) that may increase the ability of management to interact proactively with fisheries
under variable fishing and data-limited conditions.
We acknowledge funding through a National Marine Fisheries Service grant (NA04NMF4550330).
P. B. Adams 1980. Life-history patterns in marine fishes and their consequences for fisheries management. U.S. National Marine Fisheries Service Fishery Bulletin 78:1–12.
Google Scholar
S. A. Berkeley, M. A. Hixon, R. J. Larson, and M. S. Love . 2004. Fisheries sustainability via protection of age structure and spatial distribution of fish populations. Fisheries 29 (8):23–32.
Google Scholar
R. J. H. Beverton 1992. Patterns of reproductive strategy parameters in some marine teleost fishes. Journal of Fish Biology 41 (Supplement B):137–160.
Google Scholar
R. J. H. Beverton and S. J. Holt . 1957. On the dynamics of exploited fish populations. Chapman and Hall. London.
Google Scholar
G. Bianchi, H. Gislason, K. Graham, L. Hill, X. Jin, K. Koranteng, S. Manickchand-Heileman, I. Payá, K. Sainsbury, F. Sanchez, and K. Zwanenburg . 2000. Impact of fishing on size composition and
diversity of demersal fish communities. ICES Journal of Marine Science 57:558–571.
Google Scholar
L. W. Botsford, A. Campbell, and R. Miller . 2004. Biological reference points in the management of North American sea urchin fisheries. Canadian Journal of Fisheries and Aquatic Sciences
Google Scholar
E. J. Burton, J. M. Cope, L. A. Kerr, and G. M. Cailliet . 2000. Biological characteristics of nearshore fishes of California: a review of existing knowledge and proposed additional studies for the
Pacific Ocean interjurisdictional fisheries management plan coordination and development. Pacific State Marine Fisheries Commission. Portland, Oregon. Available:
. (October 2008).
Google Scholar
J. F. Caddy 2004. Current usage of fisheries indicators and reference points, and their potential application to management of fisheries for marine invertebrates. Canadian Journal of Fisheries and
Aquatic Sciences 61:1307–1324.
Google Scholar
J. F. Caddy and R. Mahon . 1995. Reference points for fisheries management. FAO Fisheries Technical Paper 347.
Google Scholar
D. O. Conover and S. B. Munch . 2002. Sustaining fisheries yield over evolutionary time scales. Science 297:94–96.
Google Scholar
R. C. Francis, M. A. Hixon, M. E. Clarke, S. A. Murawski, and S. Ralston . 2007. Ten commandments for ecosystem-based fisheries science. Fisheries 32 (5):217–233.
Google Scholar
R. Froese 2004. Keep it simple: three indicators to deal with overfishing. Fish and Fisheries 5:86–91.
Google Scholar
R. Froese and C. Binohlan . 2000. Empirical relationships to estimate asymptotic length, length at first maturity, and length at maximum yield per recruit in fishes, with a simple method to evaluate
length frequency data. Journal of Fish Biology 56:758–773.
Google Scholar
R. Froese, A. Stern-Pirlot, H. Winker, and D. Gascuel . 2008. Size matters: how single-species management can contribute to ecosystem-based fisheries management. Fisheries Research 92:231–241.
Google Scholar
S. M. Garcia and D. J. Staples . 2000. Sustainability reference systems and indicators for responsible marine capture fisheries: a review of concepts and elements for a set of guidelines. Marine and
Freshwater Research 51:385–426.
Google Scholar
H. Gislason and J. Rice . 1998. Modelling the response of size and diversity spectra of fish assemblages to changes in exploitation. ICES Journal of Marine Science 55:362–370.
Google Scholar
X. He, M. Mangel, and A. MacCall . 2006. A prior for steepness in stock-recruitment relationships based on an evolutionary persistence principle. U.S. National Marine Fisheries Service Fishery
Bulletin 104:428–433.
Google Scholar
T. E. Helser, I. J. Stewart, and H. L. Lai . 2007. A Bayesian hierarchical meta-analysis of growth for the genus
in the eastern Pacific Ocean. Canadian Journal of Fisheries and Aquatic Sciences 64:470–485.
Google Scholar
R. Hilborn 2002. The darker side of reference points. Bulletin of Marine Science 70:403–408.
Google Scholar
J. M. Hoenig 1983. Empirical use of longevity data to estimate mortality rates. U.S. National Marine Fisheries Service Fishery Bulletin 82:898–902.
Google Scholar
S. Jennings 2005. Indicators that support an ecosystem approach to fisheries. Fish and Fisheries 6:212–232.
Google Scholar
S. Jennings and N. K. Dulvy . 2005. Reference points and reference directions for size-based indicators of community structure. ICES Journal of Marine Science 62:397–404.
Google Scholar
A. L. Jensen 1996. Beverton and Holt life history invariants result from optimal trade-off of reproduction and survival. Canadian Journal of Fisheries and Aquatic Sciences 53:820–822.
Google Scholar
J. R. King and G. A. McFarlane . 2003. Marine fish life history strategies: applications to fishery management. Fisheries Management and Ecology 10:249–264.
Google Scholar
B. M. Leaman 1991. Reproductive styles and life history variables relative to exploitation and management of
stocks. Environmental Biology of Fishes 30:253–271.
Google Scholar
W. C. Lewin, R. Arlinghaus, and T. Mehner . 2006. Documented and potential biological impacts of recreational fishing: insights for management and conservation. Reviews in Fisheries Science
Google Scholar
J. S. Link 2005. Translating ecosystem indicators into decision criteria. ICES Journal of Marine Science 62:569–576.
Google Scholar
A. Longhurst 1998. Cod: perhaps if we all stood back a bit? Fisheries Research 39:101–108.
Google Scholar
P. M. Mace 1994. Relationships between common biological reference points used as thresholds and targets of fisheries management strategies. Canadian Journal of Fisheries and Aquatic Sciences
Google Scholar
P. M. Mace and I. J. Doonan . 1988. A generalised bioeconomic simulation model for fish population dynamics. New Zealand Fisheries Assessment Research Document 88/4. MAFFish Fisheries Research
Centre. Wellington.
Google Scholar
J. A. Musick, S. A. Berkeley, G. M. Cailliet, M. Camhi, G. Huntsman, M. Nammack, and M. L. Warren Jr. . 2000. Protection of marine fish stocks at risk of extinction. Fisheries 25 (3):6–8.
Google Scholar
R. A. Myers and G. Mertz . 1998. The limits of exploitation: a precautionary approach. Ecological Applications 8:S165–S169.
Google Scholar
G. J. Piet and S. Jennings . 2005. Response of potential fish community indicators to fishing. ICES Journal of Marine Science 62:214–225.
Google Scholar
A. E. Punt 2003. Evaluating the efficacy of managing West Coast groundfish resources through simulations. U.S. National Marine Fisheries Service Fishery Bulletin 101:860–873.
Google Scholar
A. E. Punt, R. A. Campbell, and A. D. M. Smith . 2001. Evaluating empirical indicators and reference points for fisheries management: application to the broadbill swordfish fishery off eastern
Australia. Marine and Freshwater Research 52:819–832.
Google Scholar
A. E. Punt, M. W. Dorn, and M. A. Haltuch . 2008. Simulation evaluation of threshold management strategies for groundfish off the U.S. West Coast. Fisheries Research 94 (3):251–266.
Google Scholar
T. J. Quinn and R. B. Deriso . 1999. Quantitative fish dynamics. Oxford University Press. New York.
Google Scholar
V. R. Restrepo and J. E. Powers . 1999. Precautionary control rules in U.S. fisheries management: specification and performance. ICES Journal Marine Science 56:846–852.
Google Scholar
J. D. Reynolds, S. Jennings, and N. K. Dulvy . 2001. Life histories of fishes and population responses. Pages 147–168.
J. D. Reynolds, G. M. Mace, K. H. Redford, and J. G. Robinson . Conservation of exploited species. Cambridge University Press. Cambridge, UK.
Google Scholar
W. E. Ricker 1954. Stock and recruitment. Journal of the Fisheries Research Board of Canada 11:559–623.
Google Scholar
M. J. Rochet and V. M. Trenkel . 2003. Which community indicators can measure the impact of fishing? A review and proposals. Canadian Journal of Fisheries and Aquatic Sciences 60:86–99.
Google Scholar
K. J. Sainsbury, A. E. Punt, and A. D. M. Smith . 2000. Design of operational management strategies for achieving fishery ecosystem objectives. ICES Journal of Marine Science 57:731–741.
Google Scholar
C. A. Simpfendorfer 1999. Mortality estimates and demographic analysis for the Australian sharpnose shark,
Rhizoprionodon taylori
, from northern Australia. U.S. National Marine Fisheries Service Fishery Bulletin 97:978–986.
Google Scholar
A. D. M. Smith 1994a. Management strategy evaluation: the light on the hill. Population Dynamics for Fisheries Management. Perth, Western Australia.
Google Scholar
A. D. M. Smith, E. J. Fulton, A. J. Hobday, D. C. Smith, and P. Shoulder . 2007. Scientific tools to support practical implementation of ecosystem-based fisheries management. ICES Journal of Marine
Science 64:633–639.
Google Scholar
A. D. M. Smith, D. C. Smith, G. N. Tuck, N. Klaer, A. E. Punt, I. Knuckey, J. Prince, A. Morison, R. Kloser, M. Haddon, S. Wayte, J. Day, G. Fay, F. Pribac, M. Fuller, B. Taylor, and L. Little .
2008. Experience in implementing harvest strategies in Australia's south-east fisheries. Fisheries Research 94 (3):373–379.
Google Scholar
T. D. Smith 1994b. Scaling fisheries: the science of measuring the effects of fishing, 1855–1955. Cambridge University Press. Cambridge, UK.
Google Scholar
I. Stobutzki, M. Miller, and D. Brewer . 2001. Sustainability of fishery bycatch: a process for assessing highly diverse and numerous bycatch. Environmental Conservation 28:167–181.
Google Scholar
C. Walters and S. J. D. Martell . 2002. Stock assessment needs for sustainable fisheries management. Bulletin of Marine Science 70:629–638.
Google Scholar
K. O. Winemiller and K. A. Rose . 1992. Patterns of life-history diversification in North America fishes: implications for population regulation. Canadian Journal of Fisheries and Aquatic Sciences
Google Scholar
Table 1.
Life history (LH) parameter values used for simulation testing, base case analyses, and sensitivity tests. Asterisks denote changes to the base case (LH2).
Table 2.
Estimates of the ratio of the proportion of mature fish in an unfished population (P[mat,F=0]), the ratio of the length at maturity (L[mat]) to length at optimal yield (L[opt]), and the ratio of the
age at maturity (A[mat]) to age at optimal yield (A[opt]) for assessed U.S. West Coast groundfish species. Summary statistics across species are provided at the bottom of the table.
Table 3.
Summary of the categories of uncertainty considered in this study. The additional sensitivities are based on the specifications for the base case (LH2) in Table 1. A qualitative description of the
fish sampled by each selectivity curve is given in parentheses.
Table 4.
Values of catch-based length proportions P[x] (P[mat] or P[opt]; defined in Methods) given depletion (spawning biomass [SB]) and steepness (h) for different values of length at maturity (L[mat] =
0.65 or 0.9 times optimal fishing length [L[opt]]) and P[obj] (sum of P[mat], P[opt], and P[mega]), and selectivity patterns (defined in Methods; RevLog = reverse logistic pattern). When P[obj] is
less than 1, only selectivities for which P[opt] + P[mega] is greater than 0 are given. Asterisks indicate P[x] values outside the trigger point (see also Figure 10).
Table 4.
Table 5.
Probability (p) of being below the target (0.4 times unfished spawning biomass [SB[0]]) and limit (0.25SB[0]) spawning biomass reference points given trigger values for catch-based length proportions
P[x] (either P[mat] or P[opt]; defined in Methods) for different values of P[obj] (sum of P[mat], P[opt], and P[mega]) and length at maturity (L[mat] ≤ 0.75 × optimal fishing length [L[opt]] or L
[mat] = 0.9L[opt]) integrated across steepness values. Probabilities assigned to P[obj] values less than 1 are integrated over the <1.1L[opt] and reverse logistic selectivity patterns (defined in
Methods); probabilities assigned to P[obj] values between 1 and 2 are based on the logisitic selectivity pattern; and probabilities assigned to P[obj] values equal to 2 are based on the >0.9L[opt]
selectivity pattern.
Jason M. Cope and André E. Punt "Length-Based Reference Points for Data-Limited Situations: Applications and Restrictions," Marine and Coastal Fisheries: Dynamics, Management, and Ecosystem Science
2009(2009), 169-186, (21 December 2009). https://doi.org/10.1577/C08-025.1
Received: 15 October 2008; Accepted: 29 April 2009; Published: 21 December 2009 | {"url":"https://complete.bioone.org/journals/marine-and-coastal-fisheries/volume-2009/issue-2009/C08-025.1/Length-Based-Reference-Points-for-Data-Limited-Situations--Applications/10.1577/C08-025.1.full","timestamp":"2024-11-09T12:36:11Z","content_type":"text/html","content_length":"273675","record_id":"<urn:uuid:267d8a6a-f754-49e5-a1c3-1244131afcbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00558.warc.gz"} |
Imagine the Universe!
Observing the Spectrum of M31
Recap: Your astronomy professor has tasked the class with determining the velocity of Andromeda with respect to the Milky Way. You thought of three possible ways to do this, one of which will give
you the right answer. You've decided to try using the Doppler shift of emission lines from M31's spectrum to find the velocity of M31.
You learned that the spectrum of a source can be shifted when that source is moving toward or away from you. The amount of the shift in the spectrum depends on the velocity of the source a higher
velocity results in a larger shift. The effect is called Doppler shift and is described mathematically by the following equation.
In this equation, λ' is the shifted wavelength, λ[0] is the wavelength of light emitted in lab (or at rest with respect to the observer), v is the velocity of the source, and c[0] is the speed of the
wave in a stationary medium.
Learn more about Doppler shift
Use this technique to solve for M31's velocity | {"url":"https://imagine.gsfc.nasa.gov/features/yba/M31_velocity/spectrum/index.html","timestamp":"2024-11-12T02:25:37Z","content_type":"text/html","content_length":"11728","record_id":"<urn:uuid:514816e4-4456-4c3a-af3f-3aab43fcea3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00779.warc.gz"} |
cut.data.frame: Change numeric variables into factors in dad: Three-Way / Multigroup Data Analysis Through Densities
This function changes numerical columns of a data frame x into factors. For each of these columns, its range is divided into intervals and the values of this column is recoded according to which
interval they fall.
For that, cut is applied to each column of x.
## S3 method for class 'data.frame' cut(x, breaks, labels = NULL, include.lowest = FALSE, right = TRUE, dig.lab = 3L, ordered_result = FALSE, cutcol = NULL, ...)
x data frame (can also be a tibble).
list or numeric.
• If breaks is a list, its length is equal to the number of columns in the data frame. It can be:
□ a list of numeric vectors. The j^{th} element corresponds to the column x[, j], and is a vector of two or more unique cut points
□ or a list of single numbers (each greater or equal to 2). breaks[[j]] element gives the number of intervals into which th j^{th} variable of the folder is to be cut. The
elements breaks[[j]] corresponding to non-numeric columns must be NULL; if not, there is a warning.
• If breaks is a numeric vector, it gives the number of intervals into which every column x[, j] is to be cut (see cut).
list of character vectors. If given, its length is equal to the number of columns of x. labels[[j]] gives the labels for the intervals of the j^{th} columns of the data frame. By
default, the labels are constructed using "(a,b]" interval notation. If labels = FALSE, simple integer codes are returned instead of a factor.
See cut.
include.lowest logical, indicating if, for each column x[, j], an x[i, j] equal to the lowest (or highest, for right = FALSE) 'breaks' value should be included (see cut).
right logical, indicating if the intervals should be closed on the right (and open on the left) or vice versa (see cut).
integer or integer vector, which is used when labels are not given. It determines the number of digits used in formatting the break numbers.
dig.lab • If it is a single value, it gives the number of digits for all variables of the folder (see cut).
• If it is a list of integers, its length is equal to the number of variables, and the j^{th} element gives the number of digits for the j^{th} variable of the folder.
ordered_result logical: should the results be ordered factors? (see cut)
cutcol numeric vector: indices of the columns to be converted into factors. These columns must all be numeric. Otherwise, there is a warning.
... further arguments passed to or from other methods.
If breaks is a list, its length is equal to the number of columns in the data frame. It can be:
a list of numeric vectors. The j^{th} element corresponds to the column x[, j], and is a vector of two or more unique cut points
or a list of single numbers (each greater or equal to 2). breaks[[j]] element gives the number of intervals into which th j^{th} variable of the folder is to be cut. The elements breaks[[j]]
corresponding to non-numeric columns must be NULL; if not, there is a warning.
If breaks is a numeric vector, it gives the number of intervals into which every column x[, j] is to be cut (see cut).
list of character vectors. If given, its length is equal to the number of columns of x. labels[[j]] gives the labels for the intervals of the j^{th} columns of the data frame. By default, the labels
are constructed using "(a,b]" interval notation. If labels = FALSE, simple integer codes are returned instead of a factor.
logical, indicating if, for each column x[, j], an x[i, j] equal to the lowest (or highest, for right = FALSE) 'breaks' value should be included (see cut).
logical, indicating if the intervals should be closed on the right (and open on the left) or vice versa (see cut).
integer or integer vector, which is used when labels are not given. It determines the number of digits used in formatting the break numbers.
If it is a single value, it gives the number of digits for all variables of the folder (see cut).
If it is a list of integers, its length is equal to the number of variables, and the j^{th} element gives the number of digits for the j^{th} variable of the folder.
numeric vector: indices of the columns to be converted into factors. These columns must all be numeric. Otherwise, there is a warning.
A data frame with the same column and row names as x.
If cutcol is given, each numeric column x[, j] whose number is contained in cutcol is replaced by a factor. The other columns are unmodified.
If any column x[, j] whose number is in cutcol is not numeric, it is unmodified.
If cutcol is omitted, every numerical columns are replaced by factors.
data("roses") x <- roses[roses$rose %in% c("A", "B"), c("Sha", "Sym", "Den", "rose")] cut(x, breaks = 3) cut(x, breaks = 5) cut(x, breaks = c(0, 4, 6, 10)) cut(x, breaks = list(c(0, 6, 8, 10), c(0,
5, 7, 10), c(0, 6, 7, 10))) cut(x, breaks = list(c(0, 6, 8, 10), c(0, 5, 7, 10)), cutcol = 1:2)
For more information on customizing the embed code, read Embedding Snippets. | {"url":"https://rdrr.io/cran/dad/man/cut.data.frame.html","timestamp":"2024-11-13T09:36:08Z","content_type":"text/html","content_length":"35920","record_id":"<urn:uuid:f0754369-d279-4d8c-b105-06963157723f>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00777.warc.gz"} |
Font, Paragraph, And Page Formatting - G1
Questions and Answers
Font, Paragraph, and Page Formatting
• 1.
Used to indicate emphasis during pronunciation.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Accent Symbol
The accent symbol is used to indicate emphasis during pronunciation. It is a diacritical mark that is placed above a letter to show that it should be stressed or pronounced with more force. This
helps in distinguishing between words that have the same spelling but different meanings, such as record (noun) and record (verb). The accent symbol is commonly used in languages like French,
Spanish, and Italian to guide correct pronunciation.
• 2.
Used to indicate web links.
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Underline
Underline is used to indicate web links. It is a formatting option that allows users to emphasize or highlight text by drawing a horizontal line beneath it. This can be particularly useful for
indicating clickable links within a body of text, making them stand out and easily identifiable to the reader.
• 3.
Used to apply global font formats to text.
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Style
The "Style" option is used to apply global font formats to text. This means that by selecting "Style," the user can apply a specific font style, such as italic or underline, to the entire text.
This option allows for consistent formatting throughout the document, ensuring that all text with the selected style will have the same appearance.
• 4.
Used to indicate book titles and other published works.
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Italics
Italics are used to indicate book titles and other published works. When writing, it is common to use italics to differentiate the title of a book or any other published work from the rest of the
text. This helps to make the title stand out and easily identifiable to the reader. By using italics, the title is given emphasis and is visually distinct from the surrounding text, making it
easier for readers to recognize and understand that it is a title.
• 5.
Text is in all caps but first letter is larger - used for formatting titles and headings.
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Small Caps
The given text formatting options are listed in all capital letters with the first letter larger. This is a common convention used to format titles and headings in written text. However, the
"Small Caps" option is different from the others as it refers to a specific text formatting style where all the letters are in uppercase, but they are slightly smaller in size compared to regular
uppercase letters. This formatting style is often used to add emphasis to certain words or phrases in a text.
• 6.
Used to indicate emphasis on a specific word or group of words.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Bold
Bold is used to indicate emphasis on a specific word or group of words. It makes the text stand out and draws attention to the emphasized words. By using bold, the reader can easily identify the
important or significant parts of the text. It is commonly used in headings, titles, important points, or to highlight key information.
• 7.
Source reference where the first line is at the left margin and the second and remaining lines are indented.
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Hanging indent
A hanging indent is a type of indentation where the first line of a paragraph is aligned with the left margin, while the remaining lines are indented. It is commonly used in academic papers,
bibliographies, and reference lists to make the information more organized and visually appealing. The hanging indent helps to distinguish between the first line, which typically includes the
author's name or title, and the subsequent lines, which provide additional details or explanations. This formatting style is widely recognized and accepted in various writing styles, such as APA
and MLA.
• 8.
Automatic continuation of text from line to line.
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Word Wrap
Word Wrap refers to the automatic continuation of text from one line to the next when the end of a line is reached. It ensures that the text fits within the width of a given space, such as a
document or a text box, without requiring manual line breaks. This feature is commonly used in word processors, text editors, and other applications to improve readability and formatting.
• 9.
Used to list non-sequential items.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Bullets
The correct answer is "Bullets" because bullets are commonly used to list non-sequential items. Bullets are small dots or symbols that help to visually separate each item in a list, making it
easier to read and understand. They are often used when the order of the items is not important or when the items are not related to each other in a specific sequence.
• 10.
Used to list items for procedural lists.
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Numbers
The correct answer is "Numbers" because when creating a procedural list, using numbers is a common way to list items in a sequential order. This helps to provide clarity and organization to the
list, allowing readers to easily follow the steps or items being presented. Bullets, stars, and symbols are also commonly used in lists, but they may not necessarily indicate a specific order or
sequence like numbers do.
• 11.
Gives the reader a quick idea about the content of the paragraph.
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. ParagrapH Heading
The correct answer is "Paragraph Heading" because it accurately describes the purpose of the given text. A paragraph heading is used to provide a brief summary or overview of the content that
follows, giving the reader a quick idea about what to expect in the paragraph. This helps the reader navigate the text more efficiently and understand the main points of the paragraph before
diving into the details.
• 12.
Automatically used to manage the content on a page.
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Soft Break
A soft break is automatically used to manage the content on a page. It is a type of line break that is inserted automatically by word processors or text editors to wrap text to the next line
without creating a new paragraph. This allows for better readability and formatting of the content on the page.
• 13.
Manually used to manage the content on a page.
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Hard Break
A hard break is manually used to manage the content on a page. It is a formatting tool that forces a line or paragraph to start on a new line, creating a clear separation between the content
before and after the break. This is useful when you want to control the layout and appearance of the page and ensure that certain elements are visually distinct from each other.
• 14.
The amount of white space around the sides of a document.
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Margin
Margin refers to the amount of white space around the sides of a document. It is the blank space between the content and the edges of the page. Margins are used to create a visually pleasing
layout, provide space for binding or hole-punching, and make the document easier to read. Adjusting the margins can affect the overall appearance and readability of the document.
• 15.
Used to format text for documents such as newspapers and newsletters.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Column
Column is the correct answer because it is a formatting feature commonly used in documents such as newspapers and newsletters. Columns allow for the text to be organized into multiple vertical
sections, making it easier to read and navigate. This formatting option is particularly useful when there is a large amount of text to be displayed, as it helps to maximize the use of space and
improve the overall layout and appearance of the document.
• 16.
Used to align and organize info into groups.
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Tabs
Tabs are used to align and organize information into groups. Unlike spaces, which create consistent spacing, tabs allow for flexible alignment and can be adjusted to suit specific formatting
needs. They are commonly used in word processing and spreadsheet programs to create columns and indentations, making it easier to read and understand the information presented.
• 17.
Used to indicate a new paragraph and offset long quotes.
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. Indents
Indents are used to indicate a new paragraph and offset long quotes. They create a visual separation between paragraphs or sections of text, making it easier for readers to distinguish between
different ideas or pieces of information. Indents are commonly used in academic writing, formal documents, and publishing, where proper formatting and organization of the text are important. They
help improve readability and make the text more visually appealing by providing a clear structure and hierarchy to the content.
• 18.
Used in page formatting to add lines around text or graphic images.
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Borders
Borders are used in page formatting to add lines around text or graphic images. They help to define the boundaries of a particular element and provide visual separation between different sections
or components on a page. Borders can be customized in terms of color, thickness, and style to enhance the overall appearance and organization of the content.
• 19.
Used to enhance the appearance and improve readability of a document.
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Page Orientation
Page orientation refers to the direction in which content is displayed on a page, either in portrait (vertical) or landscape (horizontal) mode. It is used to enhance the appearance and improve
readability of a document by allowing the content to fit better on the page. Depending on the type of content and layout preferences, page orientation can be adjusted to optimize the visual
presentation of the document.
• 20.
Single line of a paragraph left at the bottom of a page.
□ A.
□ B.
□ C.
□ D.
Correct Answer
B. OrpHan
An orphan is a child who has lost both parents. In the given context, the single line of a paragraph left at the bottom of a page could be referring to the child being left alone without any
family or support. This aligns with the concept of an orphan, making it the correct answer.
• 21.
Single line of a paragraph left at the top of a page.
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Widow
The correct answer is "Widow" because the term "widow" refers to a woman whose spouse has died. In the given context, where the options are "Child," "Orphan," "Husband," and "Widow," the term
"widow" is the most appropriate choice as it completes the pattern of relationships. The other options do not fit the pattern or the description of a woman whose spouse has passed away.
• 22.
Which is an example of Paragraph Formatting?
□ A.
□ B.
□ C.
□ D.
Correct Answer
D. Line Spacing
Paragraph formatting refers to the arrangement and appearance of text within a paragraph. Underlining, italicizing, and highlighting/selecting are examples of text formatting options that affect
individual words or phrases within a paragraph. On the other hand, line spacing refers to the vertical distance between lines of text within a paragraph, which is a type of paragraph formatting.
• 23.
Which menu do you use to change your margins?
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. File
To change margins, you would typically use the "Format" menu. However, in this particular question, the correct answer is "File." This may be because the question is referring to a specific
software or program where the option to change margins is located within the "File" menu.
• 24.
Arrangement of text on a page.
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. Page Formatting
Page formatting refers to the arrangement of text on a page. It includes settings such as margins, page size, orientation, and page numbering. This formatting ensures that the content is visually
appealing and well-organized on the page. Font formatting, on the other hand, deals with the appearance of the text itself, such as the font style, size, and color. Paragraph formatting focuses
on the layout and spacing of paragraphs. Style formatting involves applying predefined styles to text for consistent formatting throughout the document. Therefore, page formatting is the most
appropriate answer as it directly relates to the arrangement of text on a page.
• 25.
Arrangement of text within paragraphs.
□ A.
□ B.
□ C.
□ D.
Correct Answer
C. ParagrapH Formatting
Paragraph formatting refers to the arrangement and organization of text within paragraphs. It includes various elements such as indentation, line spacing, alignment, and paragraph spacing. This
formatting helps in improving the readability and visual appeal of the text. Font formatting, on the other hand, deals with the appearance of individual characters, while page formatting focuses
on the layout and design of the entire page. Style formatting involves applying predefined styles to text. Therefore, the correct answer is paragraph formatting.
• 26.
Appearance, size, and attributes of text.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Font Formatting
Font formatting refers to the appearance, size, and attributes of text. It includes options such as choosing a specific font style, adjusting the font size, applying bold or italic formatting,
changing the font color, and adding special effects like underline or strikethrough. Font formatting allows users to customize the visual presentation of text in a document or on a webpage,
making it more visually appealing and easier to read. | {"url":"https://www.proprofs.com/quiz-school/story.php?title=odqzodgwv977","timestamp":"2024-11-13T18:49:23Z","content_type":"text/html","content_length":"516893","record_id":"<urn:uuid:fd3c90c9-9614-40de-bca7-a68a6050df45>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00898.warc.gz"} |
Molecular Simulation/Molecular Dynamics of the Canonical and Isothermal-Isobaric Ensembles - Wikibooks, open books for an open world
Distribution of the potential energy of a molecular simulation in the canonical (NVT) ensemble.
Isothermal-Isobaric Molecular Dynamics Simulation of Water
An ensemble is a representation of various stated of the system in the thermodynamic equilibrium. The constraints operating in the system determined the type of ensemble.
A canonical ensemble represents the possible states of a system characterized by constant values of N, V, and T (constant volume and temperature). The energy of the microstates can fluctuate, giving
a distribution of energies. In this ensemble, the system is in contact with a heat bath at a fixed temperature.
The isothermal–isobaric ensemble is a collection of systems characterized by the same values of N, P and T (constant temperature and constant pressure ensemble). This ensemble allows the volume and
the energy to fluctuate, giving a distribution of energies and volume.This ensemble has Boltzmann-weighted configurations pressure of p, surrounded by a heat bath at temperature T.
A thermostat is a modification of the equation of motion to generate a statistical ensemble at a constant temperature. The most used thermostats in molecular dynamics are the Langevin, Anderson, and
Nosé–Hoover Thermostat.
The Langevin thermostat is a Stochastic thermostat that applies friction and random force to momenta.
The Andersen thermostat assigns velocity of random particle to new velocity from Maxwellian distribution. In this thermostat the system is couple to a heat bath to impose the desired temperature. The
equations of motion are Hamiltonian with stochastic collision term.
In this stochastic thermostat, dynamics are not physical, for this is not more time-reversible or deterministic.
The Nose-Hoover thermostat allows to simulate a system which is in the NVT ensemble. The idea is to introduce a fictitious degree of freedom. This approach couples the dynamics to the heat bath
through the system Hamiltonian. The Nose equation is reversible and deterministic, and able to represent the distribution sample equivalent to a canonical ensemble.
In barostats, similar to the temperature coupling, an additional term is added to the equations of motion that effect a pressure change. Two of the must used barostat are the Anderson thermostat and
the Parrinello-Rahman barostat.
In the Nose-Hoover thermostat the Hamiltonian have a fictitious degree of freedom for heat bath:
${\displaystyle {\mathcal {H_{N}}}=\sum _{i=1}^{N}{\frac {\mathbf {p} _{i}^{2}}{2ms^{2}}}+u (r)+{\frac {p_{s}^{2}}{2Q}}+gk_{B}T\ln \left(s\right),}$
${\displaystyle P_{s}}$: is the momentum of the degree of freedom.
Q: is the effective mass
s: extended variable.
${\displaystyle gk_{B}T\ln \left(s\right)}$ :is chosen to be the potential energy of the degree of freedom.
Equations of motion follow from Hamilton's equations.
${\displaystyle {\operatorname {d} \!r_{i} \over \operatorname {d} \!t}={\partial H_{N} \over \partial p_{i}}={P_{i} \over ms^{2}}}$ Velocities of the particles
${\displaystyle {\operatorname {d} \!s \over \operatorname {d} \!t}={\partial H_{N} \over \partial p_{s}}={P_{s} \over Q}}$ velocity of the "agent"
${\displaystyle {\operatorname {d} \!p_{i} \over \operatorname {d} \!t}={\partial H_{N} \over \partial r_{i}}=-{\partial U(r) \over \partial r_{i}}=F_{i}}$ Acceleration of the particle
${\displaystyle {\displaystyle {\operatorname {d} \!p_{s} \over \operatorname {d} \!t}={\partial H_{N} \over \partial s{}}={1 \over s}\left(\sum _{i=1}^{N}{\frac {\mathbf {p} _{i}^{2}}{ms^{2}}}-gK_
{B}T\right)}}$ Acceleration on the agent | {"url":"https://en.wikibooks.org/wiki/Molecular_Simulation/Molecular_Dynamics_of_the_Canonical_and_Isothermal-Isobaric_Ensembles","timestamp":"2024-11-02T12:02:41Z","content_type":"text/html","content_length":"75495","record_id":"<urn:uuid:f5868a79-e1f6-4e9d-8bdb-60e147bdc303>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00875.warc.gz"} |
Robust recovery for stochastic block models
Nov 16, 2021
We develop an efficient algorithm for weak recovery in a robust version of the stochastic block model. The algorithm matches the statistical guarantees of the best known algorithms for the vanilla
version of the stochastic block model. In this sense, our results show that there is no price of robustness in the stochastic block model. Our work is heavily inspired by recent work of Banks,
Mohanty, and Raghavendra (SODA 2021) that provided an efficient algorithm for the corresponding distinguishing problem. Our algorithm and its analysis significantly depart from previous ones for
robust recovery. A key challenge is the peculiar optimization landscape underlying our algorithm: The planted partition may be far from optimal in the sense that completely unrelated solutions could
achieve the same objective value. This phenomenon is related to the push-out effect at the BBP phase transition for PCA. To the best of our knowledge, our algorithm is the first to achieve robust
recovery in the presence of such a push-out effect in a non-asymptotic setting. Our algorithm is an instantiation of a framework based on convex optimization (related to but distinct from
sum-of-squares), which may be useful for other robust matrix estimation problems. A by-product of our analysis is a general technique that boosts the probability of success (over the randomness of
the input) of an arbitrary robust weak-recovery algorithm from constant (or slowly vanishing) probability to exponentially high probability.
* 203 pages, to appear in FOCS 2021 | {"url":"https://www.catalyzex.com/paper/robust-recovery-for-stochastic-block-models","timestamp":"2024-11-06T14:38:02Z","content_type":"text/html","content_length":"54432","record_id":"<urn:uuid:1689c92e-5ab9-4f2f-a9ec-d6b1b3040df5>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00893.warc.gz"} |
Fixed Point Theory
Science topic
Fixed Point Theory - Science topic
In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general
terms. Results of this kind are amongst the most generally useful in mathematics.
Questions related to Fixed Point Theory
Schauder Fixed Point conjecture deals with the existence of fixed points for certain types of operators on Banach spaces. It suggests that every non-expansive mapping of a non-empty convex, weakly
compact subset of a Banach space into itself has a fixed point. The status of this conjecture may depend on the specific assumptions and settings.
Relevant answer
A search with keywords "weak fixed point property" (which is the official name of the property you are interested in) and with "weak normal structure" (which is a widely used sufficient condition for
this property) may give you a lot of information on the subject. | {"url":"https://www.researchgate.net/topic/Fixed-Point-Theory","timestamp":"2024-11-10T09:05:24Z","content_type":"text/html","content_length":"1041112","record_id":"<urn:uuid:f4bbacc2-bff5-4909-ad48-eadefcd26c39>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00258.warc.gz"} |
An in-depth study of U-net for seismic data conditioning: Multiple removal by moveout discrimination
Seismic processing often involves suppressing multiples that are an inherent component of collected seismic data. Elaborate multiple prediction and subtraction schemes such as surface-related
multiple removal have become standard in industry workflows. In cases of limited spatial sampling, low signal-to-noise ratio, or conservative subtraction of the predicted multiples, the processed
data frequently suffer from residual multiples. To tackle these artifacts in the postmigration domain, practitioners often rely on Radon transform-based algorithms. However, such traditional
approaches are both time-consuming and parameter dependent, making them relatively complex. In this work, we present a deep learning-based alternative that provides competitive results, while
reducing the complexity of its usage, and, hence simplifying its applicability. Our proposed model demonstrates excellent performance when applied to complex field data, despite it being exclusively
trained on synthetic data. Furthermore, extensive experiments show that our method can preserve the inherent characteristics of the data, avoiding undesired oversmoothed results, while removing the
multiples from seismic offset or angle gathers. Finally, we conduct an in-depth analysis of the model, where we pinpoint the effects of the main hyperparameters on real data inference, and we
probabilistically assess its performance from a Bayesian perspective. In this study, we put particular emphasis on helping the user reveal the inner workings of the neural network and attempt to
unbox the model.
In seismic exploration, geophysicists interpret reflections of acoustic waves to extract information from the subsurface. These reflections can be classified as primaries or multiples. Primary
reflections are those seismic events whose energy has been reflected once, and they are used to describe the subsurface interfaces. In contrast, multiples are events whose energy has been reflected
more than once and appear when the signal has not taken a direct path from the source to the receiver. The presence of multiples in the recorded data set can trigger erroneous interpretations because
they interfere not only with the analysis in the poststack domain, (e.g., stratigraphic interpretation) but also with the prestack analysis, (e.g., amplitude-variation-with-offset (AVO) inversion).
For this reason, the demultiple process plays a crucial role in any seismic processing workflow. Multiple-attenuation methods can be classified as predictability- and separation-based.
Predictability-based approaches exploit the repetitive nature of multiples and their inherent connection to primaries. In general, they consist of two steps: a multiple prediction step, in which a
model of multiples is created, followed by adaptive subtraction (Verschuur et al., 1992; Abma et al., 2005) where the predicted multiples are adaptively matched and removed from the recorded
wavefield. Some of the most widely used methods are wavefield extrapolation (Berryhill and Kim, 1986; Wiggins, 1988; Wang et al., 2011), surface-related multiple elimination (SRME) (Berkhout, 1985;
Verschuur, 1991; Ma et al., 2019) and the inverse scattering series free-surface multiple elimination (Carvalho et al., 1991; Weglein et al., 1997, 2003; Ma et al., 2019). All of these approaches are
recognized for their effectiveness in mitigating free-surface multiples. Nevertheless, they involve numerous steps, and their efficacy is highly influenced by factors such as acquisition setting and
geometry, as well as signal-to-noise ratio (S/N) (Gisolf and Verschuur, 2010; Kostov et al., 2015; Ma et al., 2019). In addition, to not risk damaging weak primaries, the adaptive subtraction step is
often applied conservatively, resulting in residual multiple energy in the final image (Wang et al., 2011; Zhang et al., 2021). Recently, closed-loop SRME (CL-SRME) (Lopez and Verschuur, 2015; Zhang
and Verschuur, 2021) has been proposed to tackle the shortcomings of SRME in shallow-water settings, nonetheless, the high computational demand still poses a challenge.
However, separation-based methods translate seismic data into intermediate domains, where one can eliminate multiples based on different characteristics of multiples and primaries (Weglein et al.,
2011). The concept here is to exploit the fact that, on average, multiples have encountered a lower velocity than the primaries, and thus multiples are expected to exhibit an increasing residual
moveout (RMO) along the offset dimension. Although suffering from their own set of limitations, separation-based methods are of a computationally simpler nature and can be applied at various stages
of the processing workflow. One of the most widespread approaches to making use of this feature is the parabolic Radon transform (PRT) (Hampson, 1986). It translates prestack gathers from a
time-offset to a $tau$-p space, by mapping them by a set of parabolic events. By design, PRT works best in the case of multiples perfectly following parabolic paths and for unlimited offset axis,
both of these aspects are, nevertheless, not realizable in practice (Hampson, 1986). As a consequence, PRT can potentially degrade parts of the primary signal. Another limitation appears when dealing
with gathers that are coarsely sampled. In such cases, data sparsity can lead to false energy mapping to the $tau$-p space, which in turn leads to insufficient separation of primaries and multiples.
This either creates residual multiple energy or removes primary energy. To address some of the aforementioned weaknesses, the high-resolution Radon multiple removal method has been introduced (Sacchi
and Ulrych, 1995; Sacchi and Porsani, 1999; Trad et al., 2003). It is, however, an approach of higher complexity, requiring the interpreter to manually fine-tune numerous parameters. Moreover,
another disadvantage arises from the necessary time-consuming step of picking an appropriate mute function in the $tau$-p space to separate primaries from potential multiples. Oftentimes, the nature
of the data set requires a laterally varying mute function design, adding yet another level of complexity. When it comes to industry workflows, the usage of predictability-based methods in the
premigration domain, e.g., SRME, and separation-based methods in the postmigration gather conditioning, e.g., PRT demultiple, are typically combined. In this fashion, interpreters can leverage the
best from both methodologies and achieve more reliable outcomes.
With the introduction of deep learning, a new vein of methods has emerged (Breuer et al., 2020; Bugge et al., 2021; Qu et al., 2021; Wang et al., 2022). These approaches are based on artificial
neural network architectures, which are universal approximators, i.e., they can, in theory, model any continuous function. Breuer et al. (2020) present a deep learning-based method to trim statics
and remove multiples on postmigration common-depth point (CDP) gathers using a moveout discriminator approach trained on synthetic data. Subsequently, Bugge et al. (2021) propose a similar approach
that simultaneously tackled both demultiple and denoising on prestack gathers. Qu et al. (2021) present a hybrid workflow combining a deep neural network trained on synthetic data for shallow
reflection reconstruction and PRT for deeper event reconstruction followed by CL-SRME. Finally, Wang et al. (2022) introduce a solution that exploits noise and data augmentation applied to training
data generated using SRME or PRT for the free-surface multiple removal. Unfortunately, although the aforementioned methods have contributed to improving state-of-the-art results on multiple removal,
they still suffer from generalization problems. To deal with this issue, Qu et al. (2021) require the generation of synthetic training data for each field of interest. Similarly, the approach by Wang
et al. (2022) necessitates the synthetization of labeled data per survey using conventional multiple elimination methods for real data applications. Note, however, that these are proxy solutions, as
they do not attempt to solve the survey-data set dependency of the model, but rather bypass it.
In this paper, we introduce and perform a detailed analysis of a separation-based automated end-to-end deep-learning approach, which can be applied on moveout-corrected post-migration CDP gathers to
remove events that follow parabolic-like patterns while preserving the primary energy at cross-points. As already pointed out by Qu et al. (2021), training the model on data sets preprocessed using
traditional methods introduces the limitations of such methods into the model as a side effect. To decouple the model from such limitations, we follow the workflow introduced in Breuer et al. (2020)
and train a convolutional neural network (CNN) with synthetic pairs of multiple-contaminated and multiple-free gathers. The network is trained on feature-rich synthetic CDP gathers designed to enable
the trained network to identify multiples in the prestack domain based on the reflection moveout paths rather than periodicity, thus making the model highly generalizable and independent of
acquisition design. Furthermore, our approach works in a parameter-free manner, relieving the user from any manual task. In addition, we conduct an in-depth hyperparameter search, where we study the
role played by the different components and their impact on the outcome. To that end, we visualize the inner workings of our neural network, to pinpoint the effect of the main hyperparameters on
physical events. Finally, extensive in-field evaluations show that our model is able to preserve the inherent characteristics of the data in different scenarios, and thus, to generalize well. As a
result, our approach can be seen as an alternative to traditional moveout separation-based approaches in the postmigration stage, such as PRT, in existing processing workflows.
U-net (Ronneberger et al., 2015) is a CNN topology, which was initially designed for semantic segmentation tasks in the medical domain. However, due to its generalization capacity, it has been widely
adapted to various other domains. The architecture of U-net is divided into two paths: the contraction path, known as the encoder, designed to capture the image’s context, and the expanding path,
referred to as the decoder, responsible for facilitating accurate localization. Both paths are symmetric and made of blocks of convolutional layers followed either by a down-sampling operation
(encoder) or by an up-sampling operation (decoder). In addition to the encoder-decoder scheme, U-net has long skip connections that bypass some layers and connect different blocks from the encoder to
their counterparts from the decoder. These shortcuts provide alternative paths for the gradient during back-propagation that help the model to incorporate fine-grained details in the predictions.
Figure 1 shows the architecture of U-net for the demultiple scenario.
CNN architectures are successfully used in a large variety of applications, ranging from computer vision to natural language processing. They are made up of neurons that have learnable parameters
arranged in filter-shape structures. Each of these neurons receives some inputs, performs a dot product, and finally, applies a nonlinear activation function (e.g., sigmoid or rectified linear unit
[ReLU]) (Nair and Hinton, 2010). The output of the activation for a given filter is called a feature map or an activation map. Although the learning mechanism (back-propagation) is well understood,
the intrinsic details, such as the reason why a specific decision or prediction is made, are not. As a result, neural networks are typically treated as black box models. To better understand the
internal workings, we visualize different components of the network. In particular, we investigate the filters and the feature maps to try to conceptually unravel the learning of the model when
dealing with demultiple problems.
On the left side of Figure 2, we can see some filters that the network has learned. Seemingly, they do not display any human-recognizable pattern from which one can draw conclusions. The statistics
are, however, more informative. The filters’ weights appear to always follow a Gaussian distribution, independent of the layer. Similar observations by Gavrikov and Keuper (2022) suggest that
convolution filters do not have distribution shifts along different axes of meta-parameters, such as data type, task, architecture, or layer depth. Nonetheless, we notice that the first block might
break these empirical deductions, meaning that depth could indeed play a certain role in shallow layers. On the right side of Figure 2, we can observe some feature maps from different blocks. These
intermediate representations display how the network modifies the input image and help us understand how multiples are identified and suppressed. On the one hand, as expected, we can visually assess
a gradual loss of resolution (high-frequency components) in the first blocks, due to their down-sampling operations from the contraction path. The opposite effect is seen in the last blocks, caused
by the up-sampling operations from the expanding path. However, contrary to what might be intuitive, the network is not learning to suppress multiples directly from the beginning. In fact, they are
present in all the blocks, and almost in all feature maps. What the network seems to learn, instead, is to identify the multiples in each block to have a full understanding of the event. In this
manner, in the very last layer, the model combines the feature maps in such a way that the undesirable events (multiples) are canceled out.
When interpreting real seismic data, we do not have the ground truth (GT) (annotated data). Unfortunately, these labeled data are some of the cornerstones of any supervised deep-learning model.
Manual interpretation is an effective way to acquire GT, but it is an expensive and time-consuming process. Furthermore, its outcomes rarely contain all the events that would define the
characteristics of the subsurface. To address this issue, in the demultiple scenario, one could create real labeled data, by using a traditional approach, for example, the PRT (Wang et al., 2022).
Nevertheless, the network would be biased and limited by the performance of the traditional approach (Qu et al., 2021).
In this work, we introduce a network that is able to suppress multiples regardless of the domain and nature of the seismic gathers, i.e., offset or angle domain and time or depth domain. To achieve
this, we systematically generate a substantial data set comprising 40,000 synthetic pairs of multiple-contaminated and multiple-free gathers. Exercising precise control over the features of the
synthetically generated data set via an extensive parameter space empowers us to create a training data set that significantly enhances the model’s capacity to perform well on a wide range of
real-world scenarios. Crucially, it is worth noting that our network’s proficiency does not stem from its ability to identify multiples based on their periodic relationships to specific primary
signals. Instead, our goal is to exploit geometric differences in the RMO and cross-points between multiples and even barely visible primaries in the prestack gather. This parameter space consists of
(1) variations of the density of multiples and primaries, and their position along the vertical axis; (2) variations of the strength of the RMO effect controlling minimum multiple moveout; (3)
variations of the spectral components of the source wavelet together with a central frequency decay along the vertical axis; and (4) variations of amplitude change with offset/angle.
The synthetic gathers for training are created by generating a prestack reflectivity series $rp(t0)=rp(t,h=0)$ at zero offset $h=0$, first, for the primary reflections $p$. For the expansion to
nonzero-offset $rp(t,h>0)$, linear interval velocity functions are defined, converted to RMS $vp$ velocity, and applied in the hyperbolic normal moveout (NMO) formula to calculate the event time $tp
(t0,h,vp(t0))$. For the amplitude part, the Shuey approximation (Shuey, 1985) is used with $rp(t0)$ as the intercept of the amplitude variation with angle equation, to which we add a gradient term.
A preliminary version of the primary-only gather is generated by convolving $rp(t,h)$ with a synthetic source wavelet of Ricker-type whose degrees of freedom are central frequency, bandwidth, and
phase shift. Furthermore, we generate custom wavelets through the superposition of two individual wavelets, which are weighted and mutually shifted. Analogously, we also generate a nonzero-offset
reflectivity series $rm(t,h)$ for the multiples, followed by convolution with the same wavelets as used for the primaries. The main difference from the primary reflectivity is the lower velocity $vm$
used to calculate $tm(t0,h,vm(t0))$. This gather of the multiples is added to the primary-only gather to generate a gather that contains both, the primaries and the multiples. Subsequent NMO
correction of the gathers with perturbed RMS velocity, obtained by time-dependent perturbations of the interval primary velocity model, approximates gathers after prestack migration. The primaries
appear almost flat (not necessarily perfectly flat) and multiples show stronger positive moveout than the primaries, thus they are seen in the gathers as events that intersect primaries and have a
larger vertical extent. The NMO-corrected primary-only gathers are the GT in the process of training the network; the NMO-corrected gathers of primaries and multiples are the input gathers. The
values of all parameters are obtained by Monte Carlo sampling of the parameter space within the bounds defined by the user of the modeling routine. Setting such bounds follows the guidelines of the
variability of the corresponding parameters in field data acquisition and data preprocessing. It seems reasonable to let, e.g., the central frequency of the wavelet vary between 10 and 150 Hz to
account (for deep seismic data) for the range of frequency content of various typical sources and the decay of frequency toward large depth. For the case of synthetic training data for shallow
applications with typically much higher frequency content, one could stay within the same frequency limits and the same vertical resolution, because the network does not acknowledge physical units
and, thus, makes no difference between realizations of N times higher frequency data on an N times higher resolved vertical grid. Some parameter bounds, however, have a steering effect on the
functionality of the trained network. For example, defining a minimum-allowed moveout for the removed multiples teaches the network not to suppress potentially nonflattened primaries.
Hyperparameters are values that control the learning process of neural networks. They define different aspects of the model, such as the learning rate, optimizer, depth, activation function, and loss
function, just to mention a few. In general, neural networks are notorious for being very sensitive to the choice of hyperparameters, resulting in relatively different outcomes when the parameters
are slightly modified.
In this section, we identify and describe the empirical effects that some hyperparameters have on our multiple-attenuation network. In particular, we focus on the impact of the optimizer, sampling
technique, kernel size, loss function, and depth. To that end, we average validation results of 25 independent runs to guarantee reproducibility. We evaluate these results on four different metrics:
mean-square error (MSE), S/N, structural similarity, and peak correlation. Furthermore, we validate the outcome on synthetic and real data sets. In this manner, we ensure certain generalizability and
neutrality in our observations.
Optimization functions
Within a neural network, the optimizer is an algorithm that modifies the weights of the network to minimize the loss function. They are built upon the idea of gradient descent, i.e., the greedy
approach of iteratively decreasing the loss function by following the gradient. There are two main groups of optimizers: adaptive and nonadaptive methods. Hardt et al. (2016) argue that nonadaptive
methods, such as stochastic gradient descent (SGD), are conceptually more stable for convex and continuous optimization, having smaller generalization errors. They also prove that, under certain
conditions, the results can be carried over to nonconvex loss functions. Follow-up work by Wilson et al. (2017) finds empirical evidence of the poor generalization performance of adaptive
optimization methods, such as adaptive moment estimation (Adam) (Kingma and Ba, 2014). Even when adaptive methods achieve a better training loss than nonadaptive methods, the test performance is
worse. Finally, Choi et al. (2019) claim that the hyperparameter of the optimizer could be the reason that adaptive optimization algorithms failed to generalize.
In our experiments, we evaluate the impact of SGD with momentum and Adam optimizers for the demultiple task. Figure 3a shows the validation metrics in synthetic data for the two selected optimizers.
In these plots, we can observe how the Adam optimization converges faster than the nonadaptive one (SGD) and also ends up in lower local minima, i.e., all the metrics reach better values.
Nonetheless, although the gap between both optimizers seems to be significant when inspecting synthetic results, the differences are negligible (see Figure 3b). Furthermore, surprisingly, the
demultiple outcomes on the real data set suggest that the model trained with the Adam optimizer tends to fail to generalize more often, and its results are not always consistent, varying among
different runs. In Figure 3c, we display some results on real data, where we see how the Adam approach occasionally suppresses the primary energy, as it does for the reflection marked by the red
rectangle from the second gather, and leaves some residual multiples in the far stack, as it does for the reflection marked by the red rectangle from the sixth and seventh gathers. Despite the fact
that our model is trained using synthetic data, the system is meant to be applied to real data. Therefore, we prefer to use the SGD optimizer.
Sampling technique and kernel size
The CNN-based models gradually down-sample their inputs so that the receptive fields of the deeper filters can reach most of the image at a certain depth. By doing that, the pixel dependencies, which
lie far away from each other in an image, can be captured. This is an important aspect for any neural network that needs to interact with content that is spread on the input image, such as in fault
detection or multiple removal. In our study, we conduct a twofold analysis related to the sampling, we evaluate the effect of different sampling techniques, and we analyze the impact of the kernel
Sampling techniques refer to those methods that decrease or increase the size of an input. In the contraction path of U-net, there are two down-sampling approaches: the pooling operation and the
convolution operation. Although typically the pooling operation does not have learnable parameters (less computationally demanding), the convolutional operation does have such parameters. As a
consequence, the latter can capture additional information, whereas the pooling will always imply a loss of information. In the expanding path, the decoder recombines the features sequentially until
it recovers the original input size. To that end, this path requires up-sampling operations. Similarly to the contraction path, there are two main approaches: interpolation operation and transposed
convolution operation. The first type of operation is parameter-free and lossy, and the second is the opposite. To evaluate the impact of the sampling methods, both down- and up-sampling, we check
the different combinations. For the sake of simplicity, we restrict our analysis to the default configurations, which are max-pooling as a nonlearnable down-sampling technique and bilinear as a
nonlearnable up-sampling technique.
Based on Figure 4a, experiments with transposed convolutions have less stable runs, nonetheless, all the sampling techniques have similar performance. Therefore, the extra computational cost of the
learnable operations is not justified. Furthermore, the combination of max-pooling and bilinear, which are both nonlearnable sampling methods, provides the most stable results. Testing with synthetic
and real data shows no difference among the configurations.
In addition to the sampling techniques, the kernel size might also contribute to the final outcomes. This hyperparameter determines to what degree the sampling operation down- and up-samples the
corresponding input. Given that we work with elongated events, we empirically analyze the impact of kernels with square and nonsquare shapes and assess the impact of more aggressive sampling,
i.e., the down- and up-sampling factors. Table 1 and Figure 5 describe the scenarios of our examples. Although the validation metrics seem to report the same behavior for all of the kernels, we
observe a consistent improvement after quality control when using a 1 × 1, 2 × 2, 2 × 2, 2 × 2 kernel sequence (see Figure 4b). Models trained with the larger max-pooling kernels appear to remove
multiples more aggressively, i.e., oversmoothing results and suppressing far stack energy of events that exhibit small moveout, marked by the rectangles in Figure 4c. According to Araujo et al.
(2019), the effective maximum receptive field of the model trained with the chosen kernel sequence is 112 samples, meaning that a single pixel in the output is influenced by a square of 112 × 112
pixels from the input, as shown in Figure 6. This appears to be sufficient to observe multiples and their localized interactions with primaries, and hence we conclude that such a localized view is
more important than the global view of the gather for this task. Moreover, the models trained with larger kernels seem to be more sensitive to the initial weights than their counterparts trained with
smaller max-pooling kernels, as confirmed in the probabilistic study (see the following section).
Loss function
The selection of a loss function is a challenging task that has a direct impact on the model’s behavior. For this reason, it is important to choose a function that captures the relevant information
that needs to be propagated through the network. In this work, we advocate for the use of MSE for its simplicity and capacity to deal with outliers. This loss calculates the difference between the
model’s predictions
and the ground truth
, squares and averages it, across the entire data set (
samples). Mathematically, it can be formulated as
Besides formulating the loss function, it is crucial to define the primary objective. This entails clearly outlining the specific task that the network is designed to achieve. To elaborate further,
we propose two distinct objectives: direct and inverse. Given an input image x, the direct proposal tackles the demultiple problem by optimizing the prediction $y^$, which is a multiple-free image.
The inverse approach, however, formulates the solution from another perspective. It defines the objective task as an optimization problem, where the prediction $y^$ should contain only the multiples
of the input image, i.e., $x−y$ (see Figure 7a). In this way, the network should focus exclusively on identifying the multiples, omitting the rest. Once the model is able to do that, we can subtract
the prediction from the input image to obtain a multiple-free image. In Figure 7b, we plot the metrics using different objective functions. Interestingly, the results from both scenarios are similar.
We hypothesize that the network learns to cancel out the same features, in the direct and inverse formulation, and consequently, the outcomes seem equivalent. Nonetheless, more advanced loss
functions could potentially improve the results.
Depth of the network
The goal of our neural network is to model a function F that maps the raw input data x to a multiple-free output. To that end, we create F by concatenating n nonlinear functions f, i.e., $F(x)=f1(f2
(..fn(x))))$. Notice that adding more layers provides higher capacity to the network, which leads to deeper networks. In our experiments, we investigate the effect of three levels of depth. We take
as a baseline the standard model shown in Figure 1, which consists of nine blocks. Then, we remove two down-sampling and two up-sampling layers to create a smaller version, called “small U-net.”
Finally, we repeat the procedure, but this time adding two down-sampling and two up-sampling layers into the baseline. We call this last model big U-net. Table 2 shows the details of each topology
and their inference times.
Figure 8a and 8b shows the depth analysis from which we derive the following statements. (1) The small U-net is too shallow and does not have sufficient capacity to suppress the multiples and
occasionally oversmoothes the gathers. As a result, metrics and real data underperform when compared with the standard model. (2) The big U-net model is overparametrized, and therefore, the extra
layers do not offer any further improvement. Note, however, that this analysis involves a training data set of a constant size and thus, training the big U-net model on a larger data set could yield
different results. In summary, our standard model has the optimal trade-off between quality and size.
Alternative topologies
The attention U-net architecture, proposed by Oktay et al. (2018), enhances the standard U-net model by incorporating self-attention mechanisms (Jetley et al., 2018). These mechanisms, such as
channel and spatial attention, allow the model to adaptively emphasize relevant features during both the encoding and decoding stages. By selectively highlighting informative regions and suppressing
noise or irrelevant details, the attention U-net improves its overall performance. In contrast, the MultiResUNet architecture introduced by Ibtehaz and Rahman (2020) introduces the concept of
multiresolution residual blocks within the U-net structure. The main idea is that the incorporation of multiple resolution paths will help the architecture to effectively capture local and global
contextual information. The fusion of information from different resolution levels enables MultiResUNet to learn intricate details and capture a broader context, enhancing its segmentation
capabilities. In terms of architecture details, attention U-net and MultiResUNet consist of nine layers with max-pooling operations at resolutions of 2 × 2, 2 × 2, 2 × 2, 2 × 2. Attention U-net has a
parameter count of 34.9 million and uses a combination of max-pooling and bilinear interpolation for down-sampling and up-sampling. MultiResUNet has 7.2 million parameters and uses max-pooling for
down-sampling and transposed convolution for up-sampling. In Figure 9a, we show the evaluation scores from topology analysis. Twenty-five models have been trained with each topology and tested on
synthetic testing data. Based on the depicted curves, the performance of the attention U-net is comparable with that of the proposed U-net architecture, whereas the MultiResUNet demonstrates
noticeably inferior results. Figure 9c shows the results of U-net, MultiResUNet, and attention U-net and amplitudes extracted along two selected reflectors, which are plotted above the gathers. Based
on these plots, it becomes evident that the MultiResUNet affects the absolute amplitudes of primaries, whereas the U-net and attention U-net output primaries with an overall similar amplitude
intercept and gradient. Moreover, the MultiResUNet has not successfully suppressed the multiple crossing of the red reflector. Figure 9b shows another comparison of these three topologies, this time,
however, on numerous gathers from the Norwegian Sea. A comparable observation can be made based on this figure.
In summary, both the attention U-net and the MultiResUNet introduce modifications to the standard U-net architecture to address specific limitations and potentially enhance performance. For our use
case, the MultiResUNet demonstrates a tendency to diminish the absolute amplitude across the gathers rather than solely addressing the presence of multiples. In contrast, the U-net and attention
U-net exhibit similar outcomes, although the attention U-net occasionally exhibits too severe of an effect. It is important to note that the attention U-net, as opposed to standard U-net, is not
fully convolutional and thus is input-shape dependent. Despite its competitive performance, the constraint of this topology to a specific input shape is too limiting for our use case.
Once we have analyzed the role of the different parameters, it is important to quantify the uncertainty of the model, i.e., epistemic uncertainty. In this manner, one can determine how reliable the
actual predictions are, avoiding miscalibrated models. To that end, we need to move from a deterministic approach, where we solely rely on a point estimator, to a probabilistic approach, where we
leverage Bayesian probabilities via Bayesian neural networks (BNNs). Although traditionally BNNs have been computationally expensive and difficult to train, recent approximations, such as deep
ensembles (Lakshminarayanan et al., 2017), concrete dropout (Gal et al., 2017), and stochastic weight averaging Gaussian (Maddox et al., 2019), have eased these constraints.
In this work, we have implemented deep-ensemble learning, which can be considered a special case of BNNs (Wilson et al., 2022). The idea behind ensemble learning comes from the observation that
aggregating the predictions of a large set of average-performing but independent predictors can lead to better predictions than a single well-performing expert predictor (Breiman, 1996). In our case,
however, we prefer to use such a method to obtain the uncertainty associated with the underlying processes. This is achieved by normalizing and then computing the standard deviation of the
predictions of numerous sampled model parameterizations. Notice that the resulting range of values indicates the percentage with respect to the output signal amplitude. As a result, if the different
models agree on the multiple-free solutions and their absolute amplitudes, then the uncertainty is low. Otherwise, the uncertainty is high.
Figure 10a–10d shows four prestack gathers from a real data set and their associated uncertainties for a set of experiments (see Table 3). These uncertainty figures show the areas of the prestack
gather where the models have a lack of knowledge, resulting in a certain ambiguity within the multiple removal process. In practice, this manifests itself as variations in the amplitude or shape of
the removed events across parameterized models. Low uncertainty is displayed in black or dark purple, high uncertainty is displayed in pink and yellow. Given that the demultiple model is not perfect
and hence its epistemic uncertainties are not zero, one has to target a model that does not exhibit high amplitude uncertainties that align with primaries. Otherwise, this would suggest that some
realizations of the model remove or suppress primary energy, which is highly undesirable. However, uncertainties following a parabolic or a linear moveout are tolerated, as they potentially belong to
a multiple. Such uncertainty suggests that the model is not certain about whether the event is a multiple or there is a mismatch in the amplitude. We observe that Models B, C, E, and H exhibit a
clear increased uncertainty throughout the entire gather, hinting that some of these model realizations do affect the amplitudes of primaries. On the contrary, Models A, D, F, G, and I only produce
uncertainties with significant values that follow parabolic events which we presume to be multiples. Finally, although these five models provide similar uncertainty maps, Models A, F, and I achieve
the best qualitative performance (see the previous section). Therefore, as already mentioned, we prefer Model A because it offers a better trade-off between quality, size, and flexibility.
Figure 11a shows the outcomes of our method and compares them to the results obtained from the Radon-based demultiple technique. The assessment is carried out on gathers obtained from a synthetic
data set. This data set is created using a 3D finite-difference method that incorporates a free-surface boundary condition. The gathers are represented in the depth-offset domain, and our
deep-learning approach was directly applied in this domain. Both our method and the Radon-based demultiple technique successfully eliminate the clearly defined parabolic events within a depth range
of 3–5 km. However, in the far-offset shallow section, our deep-learning approach exhibits superior performance in removing steeply dipping linear noise when compared with the Radon-based demultiple
method. For deep-learning approaches, which take seismic data as input and produce seismic data as output, amplitude preservation of the primaries is of utmost importance. Figure 11b shows the
amplitude preservation capabilities of the U-net (our deep-learning model) and the Radon-based demultiple results. Displayed amplitudes are extracted along the red and blue lines from the raw gather
and plotted above their respective gathers. The red line follows a potential phase-reversal event with a positive intercept, whereas the blue line traces an event with a negative intercept and a
positive gradient. Both the deep-learning approach and the Radon-based method preserve the overall amplitude trend. In addition to multiple removal, an AVO-preserving denoising effect of the
deep-learning approach can be observed. In the difference plots, marked by arrows, we observe how high amplitude events in the removed energy along the lines align closely for both approaches.
In addition to tests on synthetic data, the trained model has been tested on numerous real postmigration data sets without any additional fine-tuning. Figure 12 shows the results of our method as
compared with a traditional Radon-based demultiple approach on a real data set from the Norwegian Sea, subsequently referred to as Field Data A. In addition, Figure 13 shows a similar comparison
using prestack gathers of the Volve field data set (Equinor, 2018) from the Norwegian North Sea, subsequently referred to as Field Data B. The deep-learning approach was applied directly in the
depth-angle domain, whereas the PRT application involved depth-to-time and angle-to-offset conversions. For both real data examples, we plot the removed multiples for the proposed and the traditional
methods to help visualize the main discrepancies between the two systems. From this visualization, we observe that PRT predominantly removes events along idealized parabolas, which are unlikely to
closely represent multiples in real data CDP gathers. Complex overburden causes deviations from the parabolic shape, thus, the mapping of such multiples to clusters of points in $tau$-p space is in
disagreement with the attempt of modern high-resolution PRTs to achieve a sparse $tau$-p representation. Our deep-learning approach, in contrast, does not make use of any specific path of the
multiples and is able — based on what was shown to the network during training — to remove along its full path any given event that intersects other events with smaller RMO. In this way, it also
removes converted wave energy and steeply down-dipping linear noise, and it is better suited to remove residuals of a demultiple process of the premigration steps, i.e., which appear only in the far
stack. Such events can be seen in the far stack of the first three gathers in Figure 12. We also provide the results of the same data sets as full-stack sections in Figures 14 and 15. Herein, we can
see how the lateral coherency of the removed events is consistent in both approaches. For Field Data A, the removed multiples by the U-net model appear to align better with the overlaying
stratigraphy, resulting in sharper results.
Given postmigration prestack gathers, our deep-learning approach identifies the multiples and cancels them out from the output result based on their moveout and geometric interference with primaries
in a parameter-free manner. The main success of our implementation is not only the ability to remove multiples, but to do it while preserving the high-frequency components that characterize the data,
and to generalize to different data scenarios without the need to retrain. Although denoising is a common postprocessing step targeting these frequency components, a noncontrolled application of it
can lead to smoothing of the data, resulting in a loss of relevant features. We believe that seismic interpretation is a challenging task, therefore, any processing method needs to guarantee the
preservation of these characteristics.
Despite the fact that in the past years CNNs have been extensively used in seismic applications, there is still a lack of rigorous explanation of hyperparameters choice. Thus, we think that the
geophysics community would benefit from our approach to unbox neural networks to establish the relationship between the neural network parameters and their effects on the demultiple task from a
deterministic and probabilistic perspective. In particular, our extensive set of experiments has determined that for multiple removal (1) the SGD optimizer is a better candidate than Adam, as it
leads to more stable, consistent results; (2) the choice of the sampling operations seems to play a minor role and thus, we prefer to keep the model simpler with less demanding up- and down-sampling
operators; (3) as for the kernel size, we empirically have found that square- and small-sized kernels do consistently outperform other kernel shapes when applied on real data; (4) both direct and
inverse loss functions provide similar results; and finally (5) the depth of the network has a dramatic effect on the performance and consequently, one needs to determine the correct trade-off
between network capacity and inference time. Although the results are encouraging, the empirical assessment only represents a subset of the total number of possible hyperparameter configurations.
Nonetheless, they are sufficient to decide which hyperparameters play an important role in improving the transferability of features learned from synthetic to real data applications. As demonstrated
by Breuer et al. (2020), similar neural network topologies can be used for different gather-to-gather processing steps, such as trim-statics. Hence, our hyperparameter analysis should also be of
value for other seismic gather-to-gather approaches based on the U-net architecture.
In general, it is relatively trivial to train a neural network that can yield accurate results on a synthetic data set. However, it is highly challenging to obtain similar performance on unseen real
data with potentially very different acquisition, geology, and processing settings. For this reason, producing synthetic data that realistically mimic subsurface events is crucial ongoing research (
Durall et al., 2021). During our experimental evaluation, we have iteratively modeled different synthetic training data to investigate the effects on real data. This data-driven methodology has
allowed us to generate a concise multiple-oriented data set, with high generalization properties. Instead of focusing on the large-scale periodic relationships between primary and multiple events, in
our approach, we use their geometric shapes and localized interactions. Counterintuitively, the proposed approach does not require a global view of the gather to complete the task. As a result,
training the model with larger or elongated max-pooling kernels to increase the receptive field size does not enhance performance; instead, it introduces unwanted compression-decompression artifacts
on the primary features (Figure 10). Nonetheless, for tasks where a global view of the gather is of critical importance (e.g. approaches using periodic relationships between events), elongated
max-pooling kernels might prove beneficial. Moreover, the main objective of our hyperparameter study was the ability of the model to generalize. To that end, we test the intermediate models on
numerous data sets and evaluate their performance qualitatively, as opposed to solely benchmarking using quantitative metrics on synthetic testing data. This fact, together with a feature-rich
training data set containing primaries and multiples of various frequencies, moveouts, densities, and noise levels, allows us to reliably process data sets of various characteristics.
The model is applicable to both offset and angle gathers in the time and depth domains, using a parameter-free approach. In this way, our approach can expedite interpretation tasks, providing human
experts with assistance in managing extensive volumes of real data.
In this work, we propose a demultiple model that can be interpreted as an image-to-image transformation system in the category of separation-based multiple removal approaches. Thanks to elaborate
hyperparameter analysis using ensemble methods and iterative synthetic training data generation, our approach has proven to generalize well when applied to various synthetic and real field data
without the necessity to retrain the model. The events removed by our method and PRT are mostly similar, with occasional advantages for the proposed methodology. This advantage is pronounced in cases
where the remnant multiple energy is concentrated in the far stack. Due to its parameter-free nature and independence of the CDP gather domain (i.e., offset, angle, depth, and time), this approach
has the potential to drastically reduce the turn-over time for postmigration gather conditioning.
The first and second authors contributed equally to this paper. The authors wish to express their gratitude to the members of the Fraunhofer ITWM DLSeis consortium (http://dlseis.org) for their
generous financial support. Additionally, we extend our appreciation to Equinor ASA, Vår Energy ASA, Petoro AS, and ConocoPhillips Skandinavia AS for granting us permission to utilize their Field
Data A, and to ExxonMobil for providing the synthetic data set featured in this paper. Furthermore, we acknowledge Equinor and the Volve License partners for making the Volve seismic field data
(Field Data B) available under an Equinor Open Data Licence.
Data associated with this research are confidential and cannot be released.
Biographies and photographs of the authors are not available. | {"url":"https://pubs.geoscienceworld.org/seg/geophysics/article/89/1/WA233/630531/An-in-depth-study-of-U-net-for-seismic-data","timestamp":"2024-11-08T12:33:00Z","content_type":"text/html","content_length":"287621","record_id":"<urn:uuid:d37ff592-46ec-47b1-94bc-c0db1ac612cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00154.warc.gz"} |
CVL functions allow you to reuse parts of a specification, such as common assumptions, assertions, or basic calculations. Additionally they can be used for basic calculations and for function
The syntax for CVL functions is given by the following EBNF grammar:
function ::= [ "override" ]
"function" id
[ "(" params ")" ]
[ "returns" type ]
See Basic Syntax for the id production, Types for the type production, and Statements for the block production.
• Function with a return:
function abs_value_difference(uint256 x, uint256 y) returns uint256 {
if (x < y) {
return y - x;
} else {
return x - y;
Using CVL functions
CVL Function may be called from within a rule, or from within another CVL function. | {"url":"https://docs.certora.com/en/latest/docs/cvl/functions.html","timestamp":"2024-11-08T02:30:36Z","content_type":"text/html","content_length":"13824","record_id":"<urn:uuid:0473e4fd-3ab7-4fb1-90fe-b71b8716d011>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00469.warc.gz"} |
Ordinal Data - (Data Visualization) - Vocab, Definition, Explanations | Fiveable
Ordinal Data
from class:
Data Visualization
Ordinal data refers to a type of categorical data where the values have a defined order or ranking, but the intervals between the values are not necessarily consistent. This kind of data allows for
comparison in terms of greater than, less than, or equal to, but does not provide precise information about the differences between the ranks. In the context of correlation analysis and
visualization, understanding ordinal data is crucial for accurately interpreting relationships and trends among variables.
congrats on reading the definition of Ordinal Data. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. Ordinal data is often collected through surveys where respondents rank items, like satisfaction levels from 'very satisfied' to 'very dissatisfied'.
2. When analyzing ordinal data, non-parametric statistical methods are typically used, as these methods do not assume normal distribution.
3. In visualizations, ordinal data can be represented using bar charts or line graphs, where the order of categories is important.
4. Correlation coefficients that apply to ordinal data include Spearman's rank correlation, which assesses how well the relationship between two variables can be described using a monotonic
5. The key challenge with ordinal data in correlation analysis is that while you know the order, you donโ t know the exact distance between ranks.
Review Questions
• How does ordinal data differ from nominal and interval data in terms of measurement and analysis?
□ Ordinal data differs from nominal data because it has a defined order or ranking among categories, allowing for comparisons such as greater than or less than. In contrast to interval data,
which has consistent intervals between values allowing for mathematical operations, ordinal data does not provide equal distances between ranks. This means that while you can say one rank is
higher than another, you cannot quantify how much higher it is.
• What statistical methods are appropriate for analyzing relationships involving ordinal data, and why are they chosen over others?
□ Non-parametric statistical methods like Spearman's rank correlation are appropriate for analyzing relationships involving ordinal data. These methods are chosen because they do not assume
that the data follows a normal distribution and they focus on the ranks rather than the actual values. This allows researchers to make valid inferences about relationships without being
misled by the non-equal spacing of ranks.
• Evaluate how visualizing ordinal data can affect the interpretation of correlation results in research studies.
□ Visualizing ordinal data effectively can significantly enhance the interpretation of correlation results by clearly displaying the ranked relationships between variables. When using
appropriate charts like bar graphs or ordered scatter plots, viewers can quickly grasp trends and patterns within the ranked categories. However, if visualizations misrepresent ordinal
relationships or fail to emphasize their inherent ranking, it could lead to incorrect conclusions about the strength or nature of correlations. Thus, careful design choices in visual
representation are essential for accurate insights.
ยฉ 2024 Fiveable Inc. All rights reserved.
APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/data-visualization/ordinal-data","timestamp":"2024-11-10T01:48:55Z","content_type":"text/html","content_length":"173800","record_id":"<urn:uuid:846bb39d-72b2-4137-90d1-9d8a75688d47>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00066.warc.gz"} |
CO2 Electrolysis to CO (Carbon Monoxide) and then to Graphite
CO2 Electrolyzers
Carbon Dioxide Electrolyzers and Components
Boudouard Reaction
Disproportionation of carbon monoxide into carbon dioxide and graphite or its reverse:
2CO ⇌ CO2 + C
Pumping CO2 underground is probably not a good idea. Graphite is a common mineral form of carbon in the earths crust. Reducing CO2 to graphite and then burying the graphite is essentially reversing
the business of digging up coal or pumping oil out of the ground. Graphite could be landfilled in old or existing coal mines.
'Reducing' CO2 to CO, and then releasing the CO into a chamber where it spontaneously reacts to form CO2 and graphite, is all that's necessary to accumulate graphite for sequestration. The graphite
takes the form of soot at that point, it needs to be compressed into something that won't blow around or get carried into groundwater.
The links at the top describe the efficiency and reliability of the technology for doing this.
This is one answer to large scale carbon sequestration. All that's needed is sufficient 'alternative' energy sources.
2CO ⇌ CO2 + C
'Reducing' CO2 to CO, and then releasing the CO into a chamber where it spontaneously reacts to form CO2 and graphite, is all that's necessary to accumulate graphite for sequestration??????????????
First to reduce CO2 to CO you need Carbon (graphite or coked coal) and the reaction only shifts to the left at high temps
so you consume carbon and then you want to inject Hot CO into a chamber ?(has to be hot because if you prematurely cool it the reaction shifts to the right and create CO2 and C before you get it
injected). Upon Injection into a chamber you end up with what you started with CO2 and Carbon. In the end you get what you started with and also you are heating up the products and the heating up
consume lots of ??? coal or nat gas???? Let nature do its job . That is, plants convert CO2 plus sunlight and water into complex Carbon based molecules and Oxygen at room temperature......best way
to sequester CO2????? Let nature do its job....Plant more trees.
Edited by notsonice
2 hours ago, notsonice said:
2CO ⇌ CO2 + C
'Reducing' CO2 to CO, and then releasing the CO into a chamber where it spontaneously reacts to form CO2 and graphite, is all that's necessary to accumulate graphite for
First to reduce CO2 to CO you need Carbon (graphite or coked coal) and the reaction only shifts to the left at high temps
so you consume carbon and then you want to inject Hot CO into a chamber ?(has to be hot because if you prematurely cool it the reaction shifts to the right and create CO2 and C before you get it
injected). Upon Injection into a chamber you end up with what you started with CO2 and Carbon. In the end you get what you started with and also you are heating up the products and the heating
up consume lots of ??? coal or nat gas???? Let nature do its job . That is, plants convert CO2 plus sunlight and water into complex Carbon based molecules and Oxygen at room
temperature......best way to sequester CO2????? Let nature do its job....Plant more trees.
Did you read through the first two links above? They describe a CO2 electrolyzer that makes CO. The starting input is CO2 and water vapor.
10 hours ago, Meredith Poor said:
Did you read through the first two links above? They describe a CO2 electrolyzer that makes CO. The starting input is CO2 and water vapor.
The starting input is CO2 and water vapor??????. and lots lots lots of energy . Again trying to turn CO2 into carbon should be left to plants. Better off not burning Carbon based fuels to begin with
and let nature solve the spiked CO2 levels
1 hour ago, notsonice said:
The starting input is CO2 and water vapor??????. and lots lots lots of energy . Again trying to turn CO2 into carbon should be left to plants. Better off not burning Carbon based fuels to begin
with and let nature solve the spiked CO2 levels
The device claims to work at 98% Faradaic efficiency. It does require energy, which presumably would come from renewable sources. Obviously if we're going to return the last 200 years worth of
'excess' combustion back to minerals, it's going to take a lot of energy.
Piotr Berman + 82
At the first reading in seems absurd, O2 + C -> CO2 + energy, so we can perform reverse reaction: energy + CO2 -> C + O2, but I suspect lot of loss of energy.
However, the largest problem of "clean energy" is storage on the scale of seasons and years. And C is easy to store. When there is a surplus of "clean energy" you run CO2 -> C + O2, and when there
is a deficit, you run a thermal power generator with C as the fuel. The remaining problem of storing millions of tons of CO2 is perhaps identical to the storage of CH4, and this is done already, no
new technology needed, and the current infrastructure can be converted to that use. Moreover, storing CO2 (waiting for conversion into useful fuel later) seems easier than storing H2 which some
countries contemplate.
Nevertheless, I would like to compare capital investments needed for this scheme and for nuclear power, and the distribution of potential storage sites.
19 minutes ago, Piotr Berman said:
At the first reading in seems absurd, O2 + C -> CO2 + energy, so we can perform reverse reaction: energy + CO2 -> C + O2, but I suspect lot of loss of energy.
However, the largest problem of "clean energy" is storage on the scale of seasons and years. And C is easy to store. When there is a surplus of "clean energy" you run CO2 -> C + O2, and when
there is a deficit, you run a thermal power generator with C as the fuel. The remaining problem of storing millions of tons of CO2 is perhaps identical to the storage of CH4, and this is done
already, no new technology needed, and the current infrastructure can be converted to that use. Moreover, storing CO2 (waiting for conversion into useful fuel later) seems easier than storing
H2 which some countries contemplate.
Nevertheless, I would like to compare capital investments needed for this scheme and for nuclear power, and the distribution of potential storage sites.
The point of storing graphite in old coal mines is to sequester it permanently. This will not be 'dug back up and burned' later.
footeab@yahoo.com + 2,190
Except no one with more than 2 brain cells will do this unless someone wants nearly pure graphite. Why? The Tropical troposphere shows, in get this REAL data, not 'computer models", Global warming
as a problem is completely overblown.
The real reason AGW is being pushed? Europeans have run out of coal/ng/oil for their civilization to prosper. Only the French appear to have joined reality by still pushing nuclear energy.
2 hours ago, Meredith Poor said:
The point of storing graphite in old coal mines is to sequester it permanently. This will not be 'dug back up and burned' later.
you should invest in their BS, you will soon be parted with your money. They want you to buy their membranes and catalysts . They are not selling a working efficient process at all. Their statement
, The idea is to capture CO[2] from the air and recycle it back to the fuels and chemicals that we use every day........ is a giant pipe dream. Why because the concentration of CO2 in the air is less
than 450 ppm. Think of the volumes of air you have to handle to get any mass of CO2. You need large masses of air IE to get 1 kg of CO2 from the air you need to process 2000 kg of air. Pumping C02
into solution is not easy on a mass scale. Their process is in a electrochemical cell with a cathode and an anode and a separation membrane (I bet you the membrane alone costs $1000 per square
meter). Pump 2000 kg of air (yep you have to compress it to pump it into solution) to get 1 kg in solution. The energy alone to move/pump the air will drain your bank account fast. Lots of snake oil
being sold with their statement The idea is to capture CO[2] from the air and recycle it back to the fuels and chemicals that we use every day..
You obviously have no idea of what you are talking about .....storing graphite in old coal mines.....Whew you really are dreaming up something big.
2 hours ago, footeab@yahoo.com said:
Except no one with more than 2 brain cells will do this unless someone wants nearly pure graphite. Why? The Tropical troposphere shows, in get this REAL data, not 'computer models", Global
warming as a problem is completely overblown.
The real reason AGW is being pushed? Europeans have run out of coal/ng/oil for their civilization to prosper. Only the French appear to have joined reality by still pushing nuclear energy.
People are such idiots. How is it all these numbskulls can fly around from one country to another and edit their selfies on their smart phones and post them to Facebook and invest money in the stock
market and SCUBA dive in the Mediterranean? Oops, those are the rich people with their high powered university educations. Everyone else can't drive, can't type, can't handle a credit card, can't
compose a valid English sentence, much less understand something like chemistry.
14 minutes ago, notsonice said:
you should invest in their BS, you will soon be parted with your money.
Who is 'they'? The people selling these electrolyzers say they can make CO from CO2. They don't say anything about precipitating graphite out of CO, or burying graphite in landfills. Those are
conclusions I reached by looking up something else.
Ecocharger + 1,452
8 hours ago, notsonice said:
The starting input is CO2 and water vapor??????. and lots lots lots of energy . Again trying to turn CO2 into carbon should be left to plants. Better off not burning Carbon based fuels to begin
with and let nature solve the spiked CO2 levels
"Not burning carbon based fuels"? That means ignoring 84% of the world's energy supply....madness. And all for nothing, the CO2 theory of climate change is a pile of nonsense to begin with. The
models are flawed beyond belief.
1 hour ago, Meredith Poor said:
Who is 'they'? The people selling these electrolyzers say they can make CO from CO2. They don't say anything about precipitating graphite out of CO, or burying graphite in landfills. Those are
conclusions I reached by looking up something else.
yeah so you made up the whole ......'Reducing' CO2 to CO, and then releasing the CO into a chamber where it spontaneously reacts to form CO2 and graphite, is all that's necessary to accumulate
graphite for sequestration....... You basically took the Boudouard Reaction, spun it to something it cannot do ....... adding in electrolyzers........all a pie in the sky combo..........If you want
to remove CO2 form the atmosphere, again plant trees.
3 minutes ago, Ecocharger said:
"Not burning carbon based fuels"? That means ignoring 84% of the world's energy supply....madness. And all for nothing, the CO2 theory of climate change is a pile of nonsense to begin with. The
models are flawed beyond belief.
2014 86.3 %....2019 84.3%.....2020 83.1 percent . Keep babbling bs about models being flawed and climate change is nonsense....only way to reduce CO2 is switching from carbon based fuels to
renewables and nuclear energy. Rome was not built in a day and the switch will take time.
11 hours ago, notsonice said:
If you want to remove CO2 form the atmosphere, again plant trees.
Instead of emitting CO2, 'we' (or our trees) emit terpenes (C5 molecules):
The actual better way to do this is via algae. They are way more efficient than trees in removing CO2.
12 hours ago, Ecocharger said:
The models are flawed beyond belief.
This is sort of like saying cars are flawed beyond belief. Certainly looking back over automobile production since the 1920's, most of them are ghastly pieces of work. That doesn't stop billions of
people from using them anyway.
turbguy + 1,538
3 hours ago, Meredith Poor said:
The actual better way to do this is via algae. They are way more efficient than trees in removing CO2.
Wait long enough, and then drill for petroleum, too!
Andrei Moutchkine + 828
On 11/30/2021 at 1:14 AM, Piotr Berman said:
At the first reading in seems absurd, O2 + C -> CO2 + energy, so we can perform reverse reaction: energy + CO2 -> C + O2, but I suspect lot of loss of energy.
However, the largest problem of "clean energy" is storage on the scale of seasons and years. And C is easy to store. When there is a surplus of "clean energy" you run CO2 -> C + O2, and when
there is a deficit, you run a thermal power generator with C as the fuel. The remaining problem of storing millions of tons of CO2 is perhaps identical to the storage of CH4, and this is done
already, no new technology needed, and the current infrastructure can be converted to that use. Moreover, storing CO2 (waiting for conversion into useful fuel later) seems easier than storing
H2 which some countries contemplate.
Nevertheless, I would like to compare capital investments needed for this scheme and for nuclear power, and the distribution of potential storage sites.
The principle underlying reaction for all such schemes is the
CO + H[2]O ⇌ CO[2] + H[2]
since it is an equilibrium reaction, there is not really a reverse, with specific conditions determining which side ends up winning. Ditto for
2CO ⇌ CO2 + C
footeab@yahoo.com + 2,190
8 hours ago, Meredith Poor said:
This is sort of like saying cars are flawed beyond belief. Certainly looking back over automobile production since the 1920's, most of them are ghastly pieces of work. That doesn't stop billions
of people from using them anyway.
No one is deciding to ban cars because they are not perfect now are they?
When not ONE single model comes even close to matching the temperature/rainfall record, time to stop spouting Bull Shit about the Climate and its causes. When the models supposedly are based on
planet equilibrium with space, yet we THROW OUT THE TTO even though we have both balloon and satellite data for it, yet keep KNOWN FLAWED city data as the main driver of our "models"... you know the
political whores have taken over science.
Edited by footeab@yahoo.com
Ecocharger + 1,452
23 hours ago, notsonice said:
2014 86.3 %....2019 84.3%.....2020 83.1 percent . Keep babbling bs about models being flawed and climate change is nonsense....only way to reduce CO2 is switching from carbon based fuels to
renewables and nuclear energy. Rome was not built in a day and the switch will take time.
1980...84%. 2020...84%.
Some things never change, except the foolish behavior of governments.
The predictions from flawed climate change models will soon lead to the rejection of those models.
Edited by Ecocharger
4 hours ago, footeab@yahoo.com said:
When not ONE single model comes even close to matching the temperature/rainfall record, time to stop spouting Bull Shit about the Climate and its causes. When the models supposedly are based on
planet equilibrium with space, yet we THROW OUT THE TTO even though we have both balloon and satellite data for it, yet keep KNOWN FLAWED city data as the main driver of our "models"... you know
the political whores have taken over science.
Someone at one point asked me why we shouldn't be punishing economists when they make bad projections. This was one of Margaret Thatcher's ideas, in particular. Even after a century of miserably
'wrong' projections, economists still have jobs and still make meaningful contributions. They can't necessarily tell you what works, but they can definitely tell you what won't work.
Climate projections might be wrong 99% of the time. Stand on at the edge of a cliff 100 times, and being 'right' 99% of the time won't help much. If you fool around with something long enough, it
will break.
KeyboardWarrior + 527
On 11/29/2021 at 8:04 PM, notsonice said:
Why because the concentration of CO2 in the air is less than 450 ppm
I've always thought that CO2 capture from seawater would be more practical, but certainly not profitable.
KeyboardWarrior + 527
On 11/30/2021 at 5:30 PM, Andrei Moutchkine said:
The principle underlying reaction for all such schemes is the
CO + H[2]O ⇌ CO[2] + H[2]
since it is an equilibrium reaction, there is not really a reverse, with specific conditions determining which side ends up winning. Ditto for
2CO ⇌ CO2 + C
Which is exactly why we force it to one side, just like every other industrial reaction that has an equilibrium state. | {"url":"https://community.oilprice.com/topic/24956-co2-electrolysis-to-co-carbon-monoxide-and-then-to-graphite/","timestamp":"2024-11-11T17:57:01Z","content_type":"text/html","content_length":"514527","record_id":"<urn:uuid:799152c2-db69-40a6-a5aa-cbc3fb7a0427>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00599.warc.gz"} |
Why is Graph Theory so amazing? - part 1
Graph Theory has a special influence in our daily lives. Unbeknown to most people, many aspects of our day-to-day life are modelled by graphs: the GPS we use every day, our Facebook friend
suggestions, and even the web and the operating systems we use.
How can a concept that looks so simple - models describing relations between objects - become so powerful?
We are going to try and present some of the reasons behind this and explain some very interesting properties and facts about Graph Theory during a new entry in The Magic of Computing series - "Why is
Graph Theory so amazing?"
A little bit of history
The idea of a graph was firstly introduced by a very influential Swiss mathematician by the name of Leonhard Euler. During the first half of the 18th century, Euler made attempts to solve the famous
Königsberg bridge problem (the city is in Russia and is now called Kalinigrad), eventually suceeding in 1735.
(image courtesy of Encyclopedia Britannica, Inc)
In the photo above we can see a visual representation of how the bridges the problem speaks about were set. The question was whether a citizen could cross the bridges in such a way that each bridges
was crossed exactly once. Nowadays, this kind of traversal is called an Eulerian path.
Euler demonstrated that such a path cannot be achieved in this setting. He firstly supposed that a path existed. When attempting a traversal, each time a citizen encounters a landmass, apart from the
start and finish encounters, two bridges should be accounted for - the one on which the citizen just passed, and another one to take him to another landmass, so the number of bridges on each landmass
should be an even number. However, the picture shows that there are no landmasses with an even number of bridges. Therefore, such a traversal is impossible in the current setting.
Although Euler was one of the first to unknowingly experience with what was to become graph theory, graphs were only formally defined after 1870. In the formal definition, a graph is described by two
sets. One set, called V, containing vertices (or nodes), and E, a set edges - pairs of vertices which can be unordered (resulting in an undirected graph), or ordered (resulting in a directed graph).
Graphs in the modern context
Graph theory became exponentially more useful with the release of consumer computers. The concept is simple enough to easily represent in code and memory, and people even came up with various methods
that aim to optimise how graphs are handled. Although the other entries in the series contain C++ snippets, handling graphs in C++ needs more code and I think it would make the article harder to
Let G be a graph with N nodes and E edges. Below, we will showcase two different methods to create a Python class to handle this graph, using two different forms of graph representation. Both
representations will be of undirected graphs.
Adjacency matrix
A handy mode to represent G is by using a square boolean matrix of size N * N. Let this matrix be called A. Then, A[i][j] will be true if there is an edge between nodes i and j and false otherwise.
In an undirected graph, matrix A is symmetric, as the edge from i to j is bidirectional, so A[i][j] = A[i][j] for any i and j.
One advantage of this is that determining whether an edge (i, j) is in the graph can be done in constant time. However, if we aim for example to store a graph with a lot of nodes and a relatively
small number of edge, this method might prove inefficient from a memory standpoint (this graphs are also called sparse graphs). That leads us to another popular method of representing graphs.
Adjacency lists
This method aims to represent a graph by using N lists, enumerating the neighbours of each node.
While when using this method, checking whether a given edge exists is not constant (having an O(E) worst case), looping through all the neighbours of a node is much time-efficient in sparse graphs,
and there are also other benefits which we will see in future articles.
Other methods
Alternatively, graphs can also be represented by using a list of edges or a so-called incidence matrix.
The incidence matrix is a boolean E * N matrix, with one line for each edge. The columns representing the two extremities of the edge are marked with 1 on its corresponding line.
All methods have their own advantages and disadvantages as we shall see in other articles.
In the next article, we will delve deeper into graph theory, showing some interesting graph-related algorithms and their applications. Until then, be sure to check out other articles in The Magic of
Top comments (0)
For further actions, you may consider blocking this person and/or reporting abuse | {"url":"https://practicaldev-herokuapp-com.global.ssl.fastly.net/kruzzy/why-is-graph-theory-so-amazing-part-1-5ii","timestamp":"2024-11-02T19:06:34Z","content_type":"text/html","content_length":"92145","record_id":"<urn:uuid:0f10477f-8117-4da0-8af5-f32d7dc4b8e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00021.warc.gz"} |
Reducing the disclosure risk
Statistical disclosure limitation methods can be classified in two categories:
• Methods based on data reduction. Such methods increase the number of individuals in the sample/population who share the same or similar identifying characteristics presented by the investigated
statistical unit. These procedures tend to avoid the presence of unique or rare recognizable individuals.
• Methods based on data perturbation. Such methods achieve data protection in two ways. First, if the data are modified, re‑identification by means of record linkage or matching algorithms is more
difficult and uncertain. Second, even when an intruder can re-identify a unit, he/she cannot be confident that the disclosed data are consistent with the original data.
An alternative solution consists of generating synthetic microdata.
Data reduction
Removing variables
The first obvious application of this method is the removal of direct identifiers from the data file. A variable should be removed when it is highly identifying and no other protection methods can be
applied. A variable can also be removed when it is too sensitive for public use or irrelevant for analytical purpose. For example, information on race, religion, HIV, etc. may not be released in a
public-use file, but they may be released in a licensed file.
Removing records
Removing records can be used as an extreme measure of data protection when the unit is identifiable in spite of the application of other protection techniques. For example, in an enterprise survey
dataset, a given enterprise may be the only one belonging to a specific industry. In this case, it may be preferable to remove this particular record rather than removing the variable "industry" from
all records. Since it largely impacts the statistical properties of the released data, removing records must be avoided when possible.
When the records to be removed are selected according to a sampling design, the method is called sub-sampling; it’s called sampling when the original matrix represents census data.
Global recoding
Global recoding consists of aggregating the values observed in a variable into pre-defined classes (for example, recoding the age into five-year age groups, or the number of employees into three
class sizes: small, medium, and large). This method applies to numerical variables, continuous or discrete. It affects all records in the data file.
When dealing with categorical variables (or numerical categorized), the global recoding method collapses similar or adjacent categories.
Consider, for example, the variable "marital status" that is often observed in the following categories: Single, Married, Separated, Divorced, Widowed. The sample frequency of the Separated category
may be low, especially when cross-tabulated with other variables. The two adjacent categories, Separated and Divorced, can be joined into a single category called "Separated or Divorced". The
frequency of the combinations in this new category would be higher than those relative to Separated and Divorced separately. The categories that can be combined depend on data utility as well as
statistical control of the frequencies.
This method can also be applied to key variables, such as geographic codes, to reduce their identifying effect.
Top and bottom coding
Top and bottom coding is a special case of global recoding that can be applied to numerical or ordinal categorical variables. The variables "Salary" and "Age" are two examples. The highest values of
these variables are usually very rare and therefore identifiable. Top coding at certain thresholds introduces new categories such as "monthly salary higher than 6,000 dollars" or "age above 75",
leaving unchanged the other observed values. The same reasoning applied to the smaller observed values defines bottom coding. When dealing with ordinal categorical variables, a top or bottom category
is defined by aggregating the "highest" or "smallest" categories, respectively.
Local suppression
Local suppression consists of replacing the observed value of one or more variables in a certain record with a missing value. Local suppression is particularly suitable for setting categorical key
variables and when combinations of scores on such variables are at stake. In this case, local suppression consists of replacing an observed value in a combination with a missing value. The method
reduces the information content of rare combinations, resulting in an increase in the frequency count of records containing the same (modified) combination. For example, suppose the combination
"Marital status=Widow; Age=17" is a population unique. If the information on Age is suppressed, the combination "Marital status=Widow; Age=missing" will no longer be identifying. Alternatively, one
can suppress the information on Marital status as well. A criterion is therefore necessary to decide which variable in a risky combination must be locally suppressed. The primary criterion is
obviously to minimize the number of local suppressions. For example, consider the values of key variables, "Sex=Female; Marital status=Widow; Age=17; Occupation=Student," observed in a unit. Both the
combinations "Marital status=Widow; Age=17" and "Sex=Female; Marital status=Widow; Occupation=Student" characterize the unit and may be population unique, i.e., combinations at risk. To minimize the
number of local suppressions, one can choose to replace the variable “Marital status” with missing values, so that both combinations are simultaneously protected using a single local suppression. If
the variables were considered independently, two local suppressions would be required. Another criterion can be defined according to a measure of information loss (for example, the value minimizing
an entropy indicator might be selected for local suppression). Moreover, suppression weights can be assigned to the key variables to drive the local suppression to less important variables. Local
suppression also requires a selection criterion for the records. The previous paragraph indicated several rules defining a record at risk; local suppression could be applied only to risky records,
i.e., records that contain combinations at risk.
Data perturbation
Micro-aggregation is a perturbation technique first proposed by Eurostat as a statistical disclosure method for numerical variables. The idea is to replace an observed value with the average computed
on a small group of units (small aggregate or micro-aggregate), including the investigated one. The units belonging to the same group will be represented in the released file by the same value. The
groups contain a minimum predefined number k of units. The k minimum accepted value is 3. For a given k, the issue consists of determining the partition of the whole set of units in groups of at
least k units (k-partition), minimizing the information loss usually expressed as a loss of variability. Therefore, the groups are constructed according to a criterion of maximum similarity between
units. The micro-aggregation mechanism achieves data protection by ensuring that there are at least k units with the same value in the data file.
When micro-aggregation is independently applied to a set of variables, the method is called individual ranking. When all variables are averaged at the same time for each group, the method is called
multivariate micro‑aggregation.
The easiest way to group records before aggregating them is to sort the units according to their similarity and the values resulting from this criterion, and to aggregate consecutive units into fixed
size groups. Size adjustment is eventually required for the first or last group. For univariate micro-aggregation, the sorting criterion may be the variable itself.
For multivariate micro-aggregation, similarity can be used as a criterion for the observed variables or, to increase the effectiveness of the method, it can be defined as a combination of variables.
For example, the first principal component or the sum of Z-score values along the set of variables can be criteria for fixed-size micro-aggregation.
Multivariate micro-aggregation is considered much more protective than individual ranking because the method guarantees that at least k units in the file are identical (all variables are averaged at
the same time), but the information loss is higher.
Data swapping
Data swapping was initially proposed as a perturbation technique for categorical microdata, and intended to protect tabulation stemming from the perturbed microdata file. Data swapping consists of
altering a proportion of the records in a file by swapping values of a subset of variables between selected pairs, or swap pairs, of records.
The level of data protection depends on the perturbation level induced in the data. A criterion must be applied to determine which variables and records (the swapping rate) to be swapped. For
categorical data, swapping is frequently applied to records that are sample unique or sample rare, as these records usually present higher risks of re-identification.
Finding data swaps that provide adequate protection while preserving the exact statistics of the original database is impractical. Even when the univariate moments are maintained, data swapping
generally modifies the data too much.
Post-randomization (PRAM)
As a statistical disclosure control technique, PRAM induces uncertainty in the values of some variables by exchanging them according to a probabilistic mechanism. PRAM can therefore be considered as
a randomized version of data swapping. As with data swapping, data protection is achieved because an intruder cannot be confident whether a certain released value is true, and therefore matching the
record with external identifiers could lead to mismatch or attribute misclassification. The method has been introduced for categorical variables, but it can be generalized to numerical variables as
Adding noise
Adding noise consists of adding a random value ε, with zero mean and predefined variance σ2, to all values in the variable to be protected. Generally, methods based on adding noise are not considered
very effective in terms of data protection.
Resampling is a protection method for numerical microdata that consists of drawing with replacement t samples of n values from the original data, sorting the sample, and averaging the sampled values.
Data protection level guaranteed by this procedure is generally considered quite low.
Synthetic microdata
Synthetic microdata are an alternative approach to data protection, and are produced by using data simulation algorithms. The rationale for this approach is that synthetic data do not pose problems
with regard to statistical disclosure control because they do not contain real data but preserve certain statistical properties. Initially, Rubin proposed synthetic data generation through multiple
imputations, while Feinberg proposed using bootstrap methods. Additional approaches have been suggested, such as multiple imputation, Latin hypercube sampling, modeling, and data distribution by
Generally, users are not comfortable with synthetic data as they cannot be confident of the results of their statistical analysis. This approach, however, can help produce “test microdata sets,”
where synthetic data files would be released to allow users to test their statistical procedures to successively access “true” microdata in a data enclave. | {"url":"http://ihsn.org/anonymization-risk-reduction","timestamp":"2024-11-14T13:39:33Z","content_type":"application/xhtml+xml","content_length":"79347","record_id":"<urn:uuid:dcaa885d-d1ff-495c-97ce-49da881de95f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00844.warc.gz"} |
Interdimensional Cubes
Submitted by cubex on Fri, 09/10/2004 - 08:04.
As a thought experiment consider the case of the familar 4x4x4 cube with a 2x2x2 cube embedded inside it, instead of the usual mechanism. I'll call this the "Interdimensional 4x4x4 cube" for lack of
a better name. Now clearly if we turn the slices of the 4x4x4 cube it would have an effect on the internal 2x2x2 cube. Now moving the slice adjacent to the U face and moving the slice adjacent to the
R face this would be the equivalent of turning the internal 2x2x2's U face and R face.
My question is: Is it possible to reach all the positions of the internal 2x2x2 without having any constraints on the 4x4x4 cube? How many positions are there?
Clearly it is possible to manipulate the internal 2x2x2 cube without touching the 4x4x4 cubes' corners, but what about the centre pieces and edge pieces of the 4x4x4?
Read the bit about 'eviscerat
Submitted by
on Mon, 09/13/2004 - 02:33.
Read the bit about 'evisceration' in Singmaster's Cubic Circular:
There are many well known move sequences on the 3x3x3 that affect the corners of a cube without affecting the edges or even the orientation of the face centres. This sequence applied to a 4x4x4 will
have the same effect on its corners without disturbing the centres or edges (or invisible centre). Any such sequence applied to the inner slices of the 4x4x4 cube however, will affect only the
invisible inner cube in that way.
There is a parity constraint. Every slice move on the 4x4x4 cube is an odd permutation on the invisible inner cube, and an odd permutation on the edges. Therefore any odd permutation on the inner
cube will be an odd permutation on the edges, and so will have to disturb some edges.
For each particular position of the outside layers, the inner cube therefore has 8!/2 * 3^7 = 44089920 possible positions. This puzzle has 44089920 times as many positions as the normal 4x4x4 cube.
Jaap's Puzzle Page:
Evisceration, interesting concept
Submitted by
on Mon, 09/27/2004 - 14:11.
I remember seeing this idea in the late 80's but never used it.
I've tried to eviscerate some processes and was able to move a ring of 4 edges adjacent to the F face. An eviscerated process that would rotate the U centre and F centre on a standard 3x3x3 cube
would move the corresponding ring of edges adjacent to U and F on the 4x4x4. Thus a clockwise rotation of the U centre would end up causing a clockwise rotation of the 4 edge ring.
I'll like to hack up some software to help to see the invisible interior 2x2x2 while manipulating the 4x4x4. Maybe it could have partly translucent colours or just small squares of the 2x2x2
appearing over the big squares of the 4x4x4.
More thoughts
Submitted by
on Sun, 01/16/2005 - 02:04.
I see. Since we are talking about half of the positions of the interior 2x2x2 (8!/2 * 3^7) inside the 4x4x4 there is no division by 24.
So 88,179,840 / 2 = 44,089,920
So the "interdimensional" 4x4x4 would have:
44,089,920 * 7,401,196,841,564,901,869,874,093,974,498,574,336,000,000,000
326318176648849198250599213408124182588293120000000000 or
3.263 x 10^53 | {"url":"http://forum.cubeman.org/?q=node/view/16","timestamp":"2024-11-04T23:06:17Z","content_type":"application/xhtml+xml","content_length":"17094","record_id":"<urn:uuid:26c5f06d-38ee-4553-a841-313130170db6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00375.warc.gz"} |
Essentials of Mixed and Longitudinal Modelling
Admission requirements
It is recommended that students are familiar with linear and generalized linear models, such as the logistic regression for binary data. Students should also be familiar with matrix algebra and
programming in R. Within this master this prerequisite knowledge can be acquired from the courses 'Linear and generalized linear models', 'Mathematics for statisticians' and 'Statistical Computing
with R'. Thus, it is strongly recommended to have followed these courses first.
Linear regression models and generalized linear models, such as the logistic regression model for binary data or the log-linear model for count data, are widely used to analyze data in a variety of
applications. However, these models are only appropriate for independent data. In many fields of application dependent data may occur. For instance, when individuals belong to the same family or when
data are collected repeatedly in time for the same subjects.
Introduction of random effects in the linear or generalized linear model is a simple and constructive expedient to generate feasible dependence structures. The extended classes of models are referred
to as linear mixed models (LMMs) and generalized linear mixed models (GLMMs). The use of such models is the subject of this course. Competing models, where dependence is not modeled by introduction
of extra random effects, will be discussed as well. Part of this course will focus upon analysis of repeated measurements or longitudinal data.
Inferential techniques comprise restricted (or residual) maximum likelihood (REML), a modified version of maximum likelihood, but also generalized estimation equations (GEE) that require less
strenuous model assumptions.
In particular, the course consists of five main sections:
• Marginal models
• Linear mixed models
• Generalized Estimating equations
• Missing Data
• Generalized Linear Mixed Models.
In this course, emphasis will be on gaining an understanding of the models and the kind of data that can be analyzed with these models. Different inferential techniques will be discussed, but without
undue emphasis on mathematical rigor.
Course Objectives
In general, when students are confronted with practical data they should be able: (1) to decide whether there is a need to model dependence between the data, (2) to decide upon a model with an
appropriate dependence structure and (3) to perform a proper analysis.
At the end of the course, the MSc student can:
• Distinguish which of the methods presented can be used for the analysis of normal longitudinal data and which for non-normally distributed measurements.
• Identify and explain the limitations of simplistic analysis methods that ignore the correlations in repeatedly measured data.
• Recommend which of the methods are appropriate in studies with unbalanced measurements and long follow-up or missing data.
• Identify the different mechanisms that generate missing data and which of the discussed methods in this course give valid inference under the different mechanisms.
• Explain the pros and cons of a population averaged approach versus a subject specific approach in modelling dependent discrete data.
• List the strengths and limitations of various estimation procedures for generalized linear mixed models.
• Identify which are the hypotheses of interest, which model parameters are involved in these hypotheses, and which tests are appropriate.
• Be able to identify, for a practical problem, which factors and variables should be in the model and whether they should be represented by fixed or random effects.
• Determine a proper strategy for model building.
• Apply tools to evaluate the validity of the model assumptions.
• Use statistical software, e.g., R, to perform an analysis with multivariate models and generalized linear mixed models, using the (Restricted) Maximum Likelihood or Generalized Estimating
Equations method.
• Interpret the output from the software in terms of the practical problem.
• Be able to interpret fixed and random effects in terms of population means and dependence structures.
• Be able to decide what kind of test is required for testing fixed effects (Kenward & Roger Approximate F-test) or dispersion parameters (likelihood ratio test) for unbalanced data.
• Be aware of possible boundary problems and remedies in testing dispersion parameters with the likelihood ratio test.
You will find the timetables for all courses and degree programmes of Leiden University in the tool MyTimetable (login). Any teaching activities that you have sucessfully registered for in MyStudyMap
will automatically be displayed in MyTimeTable. Any timetables that you add manually, will be saved and automatically displayed the next time you sign in.
MyTimetable allows you to integrate your timetable with your calendar apps such as Outlook, Google Calendar, Apple Calendar and other calendar apps on your smartphone. Any timetable changes will be
automatically synced with your calendar. If you wish, you can also receive an email notification of the change. You can turn notifications on in ‘Settings’ (after login).
For more information, watch the video or go the the 'help-page' in MyTimetable. Please note: Joint Degree students Leiden/Delft have to merge their two different timetables into one. This video
explains how to do this.
Mode of instruction
The material will be covered using lectures, quizzes and practical sessions. The course is given in a blended learning style integrating online media as well as traditional face-to-face on campus
• During the lectures the theory will be covered and worked-out examples will be discussed. The lectures will be given mainly with online media combined with quizzes and face-to-face teaching
sessions where a short review of the material will be provided followed by questions and discussions on the covered topics.
• During the practical sessions, the theory covered will be applied by analysing real datasets. Questions on the online components and the practicals may be posted online on the forum before each
face-to-face teaching session.
Lecture notes are leading and worked-out case studies in R are given with solutions for self-study. Some books are suggested (optional) for further details. Study material, including data sets for
the case studies mentioned, is available on Brightspace.
About halfway down the course students will start working in groups on case studies that are handed out, under supervision of the teacher. Each group of students will hand in a written report about
their case study. This report will be graded and together with the grade of the written exam determines the final grade of an individual student.
Assessment method
A written exam (2/3) with open questions and case study report (1/3). The case study report and the written exam should each be assessed with a minimum grade of 5 to obtain the course credits. The
final grade should be at least 5.5 (which will be rounded to 6) to get a pass. Students may take a written re-exam following the university rules. Unless the student decides to follow the course
again in a next year, the final grade for the case study is binding. The date for handing in the case study report will be agreed upon during the course.
Reading list
The following books are occasionally referred to for further reading, but they are not compulsory reading for the exam.
• Fitzmaurice, G., Laird, N., and Ware, J. (2011). Applied Longitudinal Analysis, 2nd Ed. Hoboken: John Wiley & Sons.
• Verbeke, G. and Molenberghs, G. (2000). Linear Mixed Models for Longitudinal Data. New York: Springer-Verlag.
• Diggle, P., Heagerty, P., Liang, K.-Y., and Zeger, S. (2002). Analysis of Longitudinal Data, 2nd edition. New York: Oxford University Press.
• Faraway (2006). Extending the linear model with R. generalized linear, mixed effects and nonparametric regression models. Chapman & Hall/CRC.
• McCulloch, Searle & Neuhaus (2008) Generalized, linear and mixed models. Wiley Blackwell.
The first two books are indicative for the applied level of this course. The third and fifth books are more technical and intended as reference. The Faraway book is relevant for the course about
linear and generalized linear models, as well. These books are occasionally referred to for further reading, but they are not compulsory reading for the exam.
It is the responsibility of every student to register for courses with the new enrollment tool MyStudyMap. There are two registration periods per year: registration for the fall semester opens in
July and registration for the spring semester opens in December. Please see this page for more information.
Please note that it is compulsory to both preregister and confirm your participation for every exam and retake. Not being registered for a course means that you are not allowed to participate in the
final exam of the course. Confirming your exam participation is possible until ten days before the exam.
Extensive FAQ's on MyStudymap can be found here. | {"url":"https://studiegids.universiteitleiden.nl/courses/121709/essentials-of-mixed-and-longitudinal-modelling","timestamp":"2024-11-04T10:39:48Z","content_type":"text/html","content_length":"20772","record_id":"<urn:uuid:93802bc1-c47f-4e44-9037-712934e99dd8>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00256.warc.gz"} |
Why Guinness has been good, even for those who never drank it
This blog is about the scientific method and the connection between medical science, brewing, and statistics. This is an edited version of a contribution to The Scientists' Scribe, the student led
magazine of the Department of Biological Sciences, Royal Holloway University of London.
It is something molecular biologists and biomedical scientists often ask: do we really have to do statistics? The question implies that if only they could get on with their important, life-saving
discoveries the world would be a better place.
There is some logic behind this. One of my once fellow students, now a well-respected microbiologist, once explained it to me as follows: in the lab something either works, or it doesn't. There isn't
much need for statistics in this.
But when it comes to translating a lab result into a successful, life-saving treatment, it isn't just like that. To do that one needs to show that the treatment is safe, is without significant side
effects, and that works better than existing treatments. This involves large and costly clinical trials and the careful evaluation of the results. And to do that, you guessed it, you need statistics.
But where does the Guinness come in, even for me as I don't drink it—I can hear you think—and how much good has it actually done? Well, all sorts of nutritional and psychological benefits have been
attributed to the beverage, but that is not what this is about. This is about the science that comes with brewing beer.
Just before the start of the twentieth century, the Guinness brewing company of Dublin was one of the largest breweries in the world. It had started to recruit the best chemistry graduates they could
find to appoint them as brewers, taking leading roles in the organisation. In 1899 they recruited William Sealy Gosset. It was a good appointment: Gosset was a capable administrator and he rose to
become the head brewer of the new Guinness Brewery in London in 1935.
But Gosset wasn't just a chemist: he had graduated with a combined degree in chemistry and mathematics. And although by day he mainly used his chemistry skills, he often used his math skills to work
on statistical problems in the evenings, at home.
He was fascinated by statistical problems that applied to his work. For beer, having good barley is crucial. So barley was grown in plots at different farms to find the best barley for the brewery.
But the variation in these experiments was high and the results were difficult to interpret, what could they say about the mean yield of these samples really? Statistical theory of that time required
large numbers of samples, and that allowed estimating the mean and the variance. But for Gosset's problems this didn't work: some of the data came from the plots on four farms, and with such a small
number of plots he had to do something different, but what?
Gosset worked out an alternative method that didn't need the estimation of the variance. See him sitting at his kitchen table, quietly working so you can hear the steady hiss of the gaslight. On the
table he has stacks of pieces of cardboard with numbers written on them, he painstakingly calculates the means of small samples he draws from the stacks, and then the distributions of those means.
Gosset's thinking was that if you have a small sample, the mean will differ for every sample. But if sample many times, the spread of all the means allows you to work out confidence intervals for the
mean. Gosset did something that we now know as bootstrapping: a statistical method in which data are resampled using a computer. But there weren't any computers then, all he had was stacks of cards.
To work out his alternative method he therefore had to use math. "Now let's assume that this data comes from a normal distribution ... ", he must have thought. And out came a simple result.
The results of his statistical endeavours were useful for the brewery. Gosset thought they had importance beyond that and wanted to publish them. The brewery was not keen: they had had bad
experiences with publication when a Master Brewer had inadvertently revealed some of the secrets of the brewing process. So they decided that if Gosset published, he had to use a pseudonym, either
'Pupil' or 'Student'. Gosset choose Student. His paper on “The probable error of a mean” was published in 1908.
Ronald Fisher was still at school when the paper came out. In 1912 he came across Student's paper as an undergraduate. Fisher realised the wider implications and used Student's distribution to morph
it into the test that we now know. And so, Student's t test became a workhorse of statistical analysis.
Fisher didn't stop there and kept himself busy. In 1917 he married Ruth Guinness; she was from the preaching rather than the brewing branch of the family. He developed another beverage related
statistic: in the 'lady tasting tea' experiment he challenged a female colleague if she was capable of tasting whether the tea or the milk had first been put in the cup. The test he designed for this
is Fisher's exact test; it is related to the chi square test. As it turned out, the lady proved Fisher wrong and could indeed taste the difference (p<0.05).
Under the exciting title “Analysis of Crop Variation II: The manurial response of different potato varieties”, Fisher first published on the method of analysis of variance (ANOVA). It has since has
found many applications beyond manure. Fisher's work wasn't limited to statistics either, he worked in evolution and genetics. To this day, every textbook on statistics, evolution and genetics is
likely to have whole chapters devoted to Fisher's work.
But why should medical scientists know about brewing beer, tasting tea, and the response of potato varieties to manure? Medicine has been around for thousands of years. Potions, ointments and pills
have been made forever, but we just didn't know if the individual recipes of herbologists and apothecaries actually worked. What turned medicine into medical science was the scientific method: the
systematic collection of evidence to support or contradict a theory. This methodology has revolutionised science since renaissance times.
In medicine, this took a little time. It was not until the statistical methods that allowed efficient discovery of small differences were developed that modern medicine really took off. These tests
are the mainstay of clinical science: in a review of nearly two thousand papers from medical journals Student's t test, Pearson's chi-square test, Fisher's exact test and ANOVA were used in over 90%.
Life expectancy at birth in the UK his risen from about 57 in 1922 to over 79 in 2017, to a large degree due to the development of novel drugs, vaccines and medical and public health technologies.
Gosset and Fisher were instrumental in this, and their work saved innumerable lives. And that is why Guinness has been so good for us and why —my dear fellow student of years ago— biomedical
scientists need statistics.
Vincent Jansen, March 2019
Further reading:
David Salsburg wrote a popular science book about the development of statistics, its main players and impact on science:
• Salsburg, D. (2002) The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century, W.H. Freeman/Owl Book. ISBN 0-8050-7134-2
The review paper about statistical tests in the field of medicine mentioned in the blog is: :
• Choosing statistical tests. Du Prel, J-B, Röhrig, B, Hommel, G & Blettner. Dtsch Arztebl Int 2010; 107(19): 343-348 DOI: 10.3238/arztebl.2010.0343 | {"url":"http://personal.rhul.ac.uk/ujba/115/blogs/guinness/guinness.html","timestamp":"2024-11-12T04:11:43Z","content_type":"text/html","content_length":"11772","record_id":"<urn:uuid:d7e0bc10-0810-4bd9-9af4-444a87d64cb4>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00492.warc.gz"} |
PROC SURVEYREG: MODEL Statement :: SAS/STAT(R) 9.22 User's Guide
MODEL dependent = <effects> </ options> ;
The MODEL statement specifies the dependent (response) variable and the independent (regressor) variables or effects. The dependent variable must be numeric. Each term in a MODEL statement, called an
effect, is a variable or a combination of variables. You can specify an effect with a variable name or with special notation by using variable names and operators. For more information about how to
specify an effect, see the section Specification of Effects in Chapter 39, The GLM Procedure.
Only one MODEL statement is allowed for each PROC SURVEYREG statement. If you specify more than one MODEL statement, the procedure uses the first model and ignores the rest.
You can specify the following options in the MODEL statement after a slash (/):
I | INVERSE
VADJUST=DF | NONE
X | XPX | {"url":"http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/statug_surveyreg_sect017.htm","timestamp":"2024-11-14T14:04:59Z","content_type":"application/xhtml+xml","content_length":"21330","record_id":"<urn:uuid:e69b4460-c36e-43df-814c-81dc5c75bac9>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00213.warc.gz"} |
Historical Data
Historical Data #
This calculator uses historical data to run simulations. The data used is compiled by Nobel Prize-winning economist Robert Shiller, and it includes data all of the way back to January 1871.
You can download the Shiller data set online at Robert Shiller's website.
Asset Types #
This section describes where the numbers in Shiller's spreadsheets come from. The source for this information is his website.
Stocks #
From 1926 onward, the dividend and earnings data are computed from the S&P 500 four-quarter totals, with linear interpolation between the months.
Prior to 1926, annual data is used from Cowles and associates (Common Stock Indexes, 2nd ed. [Bloomington, Ind.: Principia Press, 1939]).
Bonds #
The bond numbers in Shiller's spreadsheet are the 10 year yields on U.S. Treasury securities.
Note that bond growth is not straightforward to calculate from this value. For more, refer to the Computed Historical Data guide.
Cash #
FI Calc does not use historical data for cash. Instead, you must supply a fixed annual growth for your cash investments (which defaults to 1.5%). Learn more here.
CPI #
Shiller's Consumer Price Index (which is used to calculate inflation in FI Calc) is the "Consumer Price Index - All Urban Consumers," which is published by the U.S. Bureau of Statistics. This data
begins in 1913. Prior to this year, Shiller computes the CPI using a different method. In his words:
"for years before 1913 1 spliced to the CPI Warren and Pearson's price index, by multiplying it by the ratio of the indexes in January 1913"
Considerations #
There are a few things to keep in mind when using Robert Shiller's data.
Applicability #
Index funds didn't exist until 1972, so some question whether the returns in Shiller's data before this time would have been attainable by a contemporary investor.
Comparisons to Popular Studies #
Both Bengen's study (which first described the 4% Rule) and the Trinity Study (which popularized the 4% Rule) used a different data set, the SSBI. This data set begins in 1926.
Additionally, Bengen's analysis only included data up to 1992, while the Trinity Study included data up to 1995.
Keep this in mind if you are attempting to recreate or compare results from FI Calc to the results in these studies.
Computed Data #
The numbers that Shiller provides aren't directly consumable by this calculator. Instead, they must be transformed before they can be used to simulate retirements. To learn more about the
transformations that must be made, refer to this guide. | {"url":"https://guide.ficalc.app/how-it-works/historical-data-source/","timestamp":"2024-11-11T10:55:39Z","content_type":"text/html","content_length":"19736","record_id":"<urn:uuid:746b3127-3d26-41b8-b706-e9b7b50b6df1>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00368.warc.gz"} |
Direct Proportion - Definition, Formula, Graphical Representation and Examples
Direct Proportion
Direct proportion describes a connection between two variables where their ratio remains constant. This constant is called the proportionality constant, often represented as "k". When variables x and
y are directly proportional, their relationship can be expressed by the equation y = kx, where k is a constant that isn't zero.
What is Direct Proportion?
Direct proportion refers to a relationship between two variables where their ratio remains constant. This means as one variable increases or decreases, the other variable changes in a predictable
way, keeping their ratio consistent. Mathematically, this relationship is expressed as y = kx, where y and x are the variables, and k is a constant called the constant of proportionality. This
constant reflects how the variables are related, and it stays the same regardless of the values of x and y. Direct proportion is essential in various mathematical applications, such as calculating
growth rates, time, distance, speed, and other real-world scenarios where relationships between quantities are linear and predictable.
Also Check: Differential Equations
Direct Proportion Graphical fig:
Direct Proportion Examples in Real Life
Direct proportionality is a fundamental concept that describes how one quantity changes in relation to another. Here are some common examples where direct proportion can be observed:
1. Speed and Time: The distance traveled by a vehicle is directly proportional to its speed and the time it takes. If a vehicle's speed doubles, the time required to cover a certain distance is
2. Height and Shadow Length: The height of an object and the length of its shadow follow a direct proportion. If you double the height of an object, its shadow length also doubles.
3. Volume and Gas Pressure: The volume of a gas is directly proportional to its pressure, provided the temperature remains constant. For instance, doubling the pressure of a gas results in its
volume also doubling.
4. Current and Voltage in Circuits: In electrical circuits, the current flowing through a conductor is directly proportional to the voltage applied across it, assuming the resistance remains
constant. If the voltage doubles, the current in the circuit also doubles.
5. Area and Side Length of a Square: The area of a square increases directly with the square of its side length. If you double the side length of a square, its area quadruples.
Also Check: Difference Between Variance and Standard Deviation
Direct Proportion Formula
The relationship in direct proportionality can be mathematically represented by the equation y = kx, where:
• y and x are the related quantities,
• k is the constant of proportionality.
To find the constant k, divide one known value of y by the corresponding value of x. This constant k remains consistent for any pair of y and x values that exhibit direct proportionality.
Practical Applications
Direct proportionality is useful in modeling various real-world scenarios involving growth, decay, time, distance, speed, and more. It provides a precise mathematical framework to understand and
predict how quantities change relative to each other under specific conditions.
For instance, in physics and engineering, direct proportionality helps in designing systems where changes in one parameter affect another in a predictable manner. This includes designing circuits,
understanding gas behavior under different pressures, and predicting the behavior of physical objects in varying conditions.
Also Check: Difference Between Variance and Standard Deviation
Graphical Representation of Direct Proportion
A direct proportionality graph is a straight line passing through the origin (0,0) with a slope equal to the constant of proportionality k. The x-axis represents one quantity (usually x), and the
y-axis represents the other quantity (usually y). As x increases, y also increases in such a way that their ratio y/x remains constant, equal to k.
The equation of the straight line on the graph is y = kx, where the slope k represents how much y changes per unit change in x.
Direct Proportion Graph When you Double one you double the other:
Frequently Asked Questions on Direct Proportion
Direct proportion means that as one quantity increases, the other quantity also increases by the same rate. For example, the more hours you work, the more money you earn.
The formula for direct proportion is y = kx, where y and x are the two quantities, and k is a constant that represents their proportional relationship.
A direct proportion is a relationship between two quantities where one quantity increases or decreases at the same rate as the other. This means the ratio between the two quantities remains constant.
If A and B are directly proportional, it means that as A increases, B also increases by the same rate. The ratio between A and B remains constant. This can be expressed as A ∝ B or A = kB, where k is
the constant of proportionality.
Direct proportion: The more hours you work, the more money you earn. Indirect proportion: The more people working on a task, the less time it takes to complete. | {"url":"https://www.home-tution.com/maths-topics/direct-proportion","timestamp":"2024-11-03T11:01:20Z","content_type":"text/html","content_length":"117494","record_id":"<urn:uuid:0d3ad303-827c-49bd-9780-126d174db44f>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00057.warc.gz"} |
detail evenement – Laboratoire Jean Kuntzmann
Analysis of interfaces in protein complexes using Voronoi tessellations and graph neural networks
Séminaire Données et Aléatoire Théorie & Applications
16/11/2023 - 14:00 Kliment Olechnovic Salle 106
Given a molecular structure, it can be represented as a set of atomic balls, each ball having a van der Waals radius corresponding to the atom type. A ball can be assigned a region of space that contains all the points that are closer (or equally close) to that ball than to any other. Such a region is called a Voronoi cell and the partitioning of space into Voronoi cells is called Voronoi tessellation or Voronoi diagram. Two adjacent Voronoi cells share a set of points that form a surface called a Voronoi face. A Voronoi face can be viewed as a geometric representation of a contact between two atoms. The Voronoi cells of atomic balls may be constrained inside the boundaries defined by the solvent accessible surface of the same balls. The constrained Voronoi cells and their faces are remarkably versatile structural descriptors of atoms and their interactions. This talk will be focused on some of the protein structural analysis and assessment algorithms that are built upon the aforementioned Voronoi tessellation-derived descriptors. In particular, I will present VoroIF-GNN, a novel method for assessing inter-subunit interfaces in protein-protein complexes. Given a multimeric protein 3D structural model, the method derives interface contacts from the Voronoi tessellation of atomic balls, constructs a graph of those contacts, and predicts accuracy of every contact using an attention-based graph neural network. The contact-level predictions are then summarized to produce whole interface-level scores. VoroIF-GNN was blindly tested for its ability to estimate accuracy of protein complexes during CASP15 (15th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction) and showed strong performance in selecting the best multimeric model out of many. | {"url":"https://www-ljk.imag.fr/spip.php?article35&id=6516c31376f3ed101615a829&type=SEMINAIRE","timestamp":"2024-11-13T22:08:29Z","content_type":"text/html","content_length":"20981","record_id":"<urn:uuid:43291bd9-93c9-4efa-8d9f-fde30beb06e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00742.warc.gz"} |
Error Function
The methods available for computing the main functions in this chapter are analogous to those described in §§6.18(i)–6.18(iv) for the exponential integral and sine and cosine integrals, and similar
comments apply. Additional references are Matta and Reichel (1971) for the application of the trapezoidal rule, for example, to the first of (7.7.2), and Gautschi (1970) and Cuyt et al. (2008) for
continued fractions. | {"url":"https://dlmf.nist.gov/7.22","timestamp":"2024-11-11T07:03:43Z","content_type":"text/html","content_length":"26768","record_id":"<urn:uuid:0c1eea11-d963-456d-a8a2-86060fd1b66b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00669.warc.gz"} |
is nominal data qualitative or quantitative
QualitativeData Qualitative (two levels of qualitative data) " Nominal level (by name) No natural ranking or ordering of the data exists. Jindal Global University, Product Management Certification
Program DUKE CE, PG Programme in Human Resource Management LIBA, HR Management and Analytics IIM Kozhikode, PG Programme in Healthcare Management LIBA, Finance for Non Finance Executives IIT Delhi,
PG Programme in Management IMT Ghaziabad, Leadership and Management in New-Age Business, Executive PG Programme in Human Resource Management LIBA, Professional Certificate Programme in HR Management
and Analytics IIM Kozhikode, IMT Management Certification + Liverpool MBA, IMT Management Certification + Deakin MBA, IMT Management Certification with 100% Job Guaranteed, Master of Science in ML &
AI LJMU & IIT Madras, HR Management & Analytics IIM Kozhikode, Certificate Programme in Blockchain IIIT Bangalore, Executive PGP in Cloud Backend Development IIIT Bangalore, Certificate Programme in
DevOps IIIT Bangalore, Certification in Cloud Backend Development IIIT Bangalore, Executive PG Programme in ML & AI IIIT Bangalore, Certificate Programme in ML & NLP IIIT Bangalore, Certificate
Programme in ML & Deep Learning IIIT B, Executive Post-Graduate Programme in Human Resource Management, Executive Post-Graduate Programme in Healthcare Management, Executive Post-Graduate Programme
in Business Analytics, LL.M. Elem Stats 1.1/1.2 Vocab. Like Nick mentioned, we count nominals, so it can be confused with a numeric type, but its not. b. Qualitative (Nominal (N), Ordinal (O), Binary
(B)). These are usually extracted from audio, images, or text medium. Nominal data is also called the nominal scale. In this article, we discussed how the data we produce can turn the tables upside
down, how the various categories of data are arranged according to their need. On the other hand, there is non-traditional, or web data, collected from numerous external sources. Lets understand this
with some examples. Some researchers call the first two scales of measurement (Ratio Scale and Interval Scale) quantitative because they measure things numerically, and call the last scale of
measurement (Nominal Scale) qualitative because you count the number of things that have that quality. Data that are either qualitative or quantitative and can be arranged in order. Ordinal scales
are sort of in-between these two types, but are more similar in statistical analyses to qualitative variables. However, they can be also successfully used individually. Regards, Leaning. Continuous:
Continuous data have an infinite no of states. hb```g,aBAfk3: hh! By numerising the categories, it appears to "quantitativise" them even though strictly they a. Required fields are marked *. Our
learners also read: Excel online course free! One can easily visually represent quantitative data with various charts and graphs, including scatter plots, lines, bar graphs, and others. Qualitative
methods are often known as investigative as they can be used to answer the question why using open-ended questions. The political party of each of the first 30 American presidents is revealed in the
statistics below. Nominal or Ordinal document.getElementById( "ak_js_1" ).setAttribute( "value", ( new Date() ).getTime() ); document.getElementById( "ak_js_2" ).setAttribute( "value", ( new Date()
).getTime() ); UPGRAD AND IIIT-BANGALORE'S EXECUTIVE PG PROGRAM IN DATA SCIENCE. In other words, the qualitative approach refers to information that describes certain properties, labels, and
attributes. This type of data in statistics helps run market analysis through genuine figures and create value out of service by implementing useful information. Try to identify additional data sets
in this example. If it holds number of votes, the variable is quantitative, to be precise is in ratio scale. Determine the percentage and relative frequency distributions. Ordinal has both a
qualitative and quantitative nature. political affiliation (dem, rep, ind) " Ordinal level (by order) Provides an order, but can't get a precise mathematical difference between levels. Use MathJax to
format equations. Statistics and Probability. A Day in the Life of Data Scientist: What do they do? Is the weight of the backpacks a quantitative variable? You can use this type of . 2. Applications
of Quantitative and Qualitative Data. 2 types of qualitative Data Nominal Data Used to label variables w/h any quantitative value Nominal data doesn't have any meaningful order the values are
distributed into distinct categories Ex of nominal Data: Hair Colour Marital Status Nationality Ordinal Data Data has a natural order where a number is present in some kind of order by their position
on the scale ( qualitative data here the . Alternatively, you may find the same amount or fewer customers, which may mean that they charge a premium for their products and services.. Overview of
Scaling: Vertical And Horizontal Scaling, SDE SHEET - A Complete Guide for SDE Preparation, Linear Regression (Python Implementation), Software Engineering | Coupling and Cohesion. Answer (1 of 7):
An Ordinal variable assigns number "ranks" to an otherwise categorical data. How is nominal data different from ordinal data? Nominal types of statistical data are valuable while conducting
qualitative research as it extends freedom of opinion to subjects. Quantitative data. In the track meet, I competed in the high jump and the pole vault. Put another way, you can classify raw or
original data as first reported and as appearing in say the cell of a spreadsheet or database. The grading system while marking candidates in a test can also be considered as an ordinal data type
where A+ is definitely better than B grade. Ordinal logistic regression with continuous and categorical independent variable (both ordinal and nominal). Nominal, ordinal, interval, and ratio scales
explained. Nominal Data. Discrete quantitative variables (like counts) also can be measured using interval or ratio scale! Nominal data can be both qualitative and quantitative. Are these data
nominal or ordinal? For instance, the price of a smartphone can vary from x amount to any value and it can be further broken down based on fractional values. On the one hand, there is traditional
data, or internal data, produced by a particular company. Data science can be found just about anywhere these days. 2003-2023 Chegg Inc. All rights reserved. Which one is correct? For a customer,
object attributes can be customer Id, address, etc. A data object represents the entity. 3. Discrete data is often identified through charts, including bar charts, pie charts, and tally charts. 1. 0
l upGrads Exclusive Data Science Webinar for you , Transformation & Opportunities in Analytics & Insights. Types of soups, nuts, vegetables and desserts are qualitative data because they are
categorical. If, voter-names are known, and, it holds voter-names, then variable is nominal. Putting the scales of measurement on the same diagram with the data types was confusing me, so I tried to
show that there is a distinction there. Attribute is not really basic type but is usually discussed in that way when choosing an appropriate control chart, where one is choosing the best pdf with
which to model the system. Unlike ordinal data, nominal data cannot be ordered and cannot be measured. In statistics, nominal data (also known as nominal scale) is a typeof data that is used to label
variables without providing any quantitative value. Qualitative Quantitative or Qualitative The numbers of touchdowns in a football game Quantitative Quantitative or Qualitative The number of files
on a computer Quantitative Quantitative or Qualitative The ingredients in a recipe Qualitative Quantitative or Qualitative The makers of cars sold by particular car dealer Qualitative Nominal or
Ordinal The shirt sizes of Small, Medium, Large, and X-Large. The number of permitted values is uncountable. https://cdn.upgrad.com/blog/jai-kapoor.mp4, Executive Post Graduate Programme in Data
Science from IIITB, Professional Certificate Program in Data Science for Business Decision Making, Master of Science in Data Science from University of Arizona, Advanced Certificate Programme in Data
Science from IIITB, Professional Certificate Program in Data Science and Business Analytics from University of Maryland, Data Science Career Path: A Comprehensive Career Guide, Data Science Career
Growth: The Future of Work is here, Why is Data Science Important? The Casual Vacancy by J.K. Rowling Is this data quantitative or qualitative and then chose if its continuous, discrete, ordinal or
nominal, Counting the number of patients with breast cancer in a clinic( study recorded at random intervals throughout the year), Given example is ;Counting the number of patients with breast cancer
in a clinic .We know that ;A quantitative charact. acknowledge that you have read and understood our, Data Structure & Algorithm Classes (Live), Data Structure & Algorithm-Self Paced(C++/JAVA),
Android App Development with Kotlin(Live), Full Stack Development with React & Node JS(Live), GATE CS Original Papers and Official Keys, ISRO CS Original Papers and Official Keys, ISRO CS Syllabus
for Scientist/Engineer Exam, Understanding Data Attribute Types | Qualitative and Quantitative, Movie recommendation based on emotion in Python, Python | Implementation of Movie Recommender System,
Item-to-Item Based Collaborative Filtering, Frequent Item set in Data set (Association Rule Mining). The amount of caffeine in a cup of starbucks coffee, Discrete or Continuous It depends what you
mean by "quantitative data" and "qualitative data". Quantitative (Numeric, Discrete, Continuous). For example, a company cannot have 15.5 employees it's either 15 or 16 employees. NW by Zadie Smith
The thing is that people understand words and concepts not fully identically but they prefer, for some long or short time, to stack to their own comfortable understanding. b. Data is the fuel that
can drive a business to the right path or at least provide actionable insights that can help strategize current campaigns, easily organize the launch of new products, or try out different
experiments. Assuming this to be the case, if a sample of 25 modified bars resulted in a sample average yield point of 8439lb8439 \mathrm{lb}8439lb, compute a 90%90 \%90% CI for the true average
yield point of the modified bar. 3. On the other hand, ordinal scales provide a higher amount of detail. Which type you choose depends on, among other things, whether . So here is the description of
attribute types. In the data, D stands for Democrat, DR for Democratic Republican, F for Federalist, R for Republican, and W for Whig. Suppose, for example, you ask people: What sort of data is this?
Now it makes sense to plot a histogram or frequency plot for quantitive data and a pie chart and bar plot for qualitative data. Ratio Level Nominal Data at the nominal level of measurement are
qualitative only. Nominal data is qualitative or categorical data, while Ordinal data is considered "in-between" qualitative and quantitative data. There are four levels of measurement (or scales) to
be aware of: nominal, ordinal, interval, and ratio. Regression analysis, where the relationship between one dependent and two or more independent variables is analyzed is possible only for
quantitative data. They may include words, letters, and symbols. In this article, I will focus on web data and provide a deeper understanding of the nuances of web data types. You might want to print
out the Decision Tree, then write notes on it when you learn about each type of analysis. These typologies can easily confuse as much as they explain. ratio: attributes of a variable are
differentiated by the degree of difference between them, there is absolute zero, and we could find the ratio between the attributes. As we've discussed, nominal data is a categorical data type, so it
describes qualitative characteristics or groups, with no order or rank between categories. Data-driven decision-making is perhaps one of the most talked-about financial and business solutions today.
Some examples include the number of web visitors, a company's total number of employees, and others., Some examples of quantitative data include credit card transactions, sales data or data from
financial reports, macroeconomic indicators, the number of employees or the number of job postings, and many more., Discrete data refers to certain types of information that cannot be divided into
parts. By providing your email address you agree to receive newsletters from Coresignal. Legal. ordinal: attributes of a variable are differentiated by order (rank, position), but we do not know the
relative degree of difference between them. Quantitative (Numeric, Discrete, Continuous) Qualitative Attributes: 1. And this is only one approach from Stanley Smith Stevens. That chart is better than
your last one. Qualitative data is typically words, but could also be images or other media, we will refer to this data in this course as categorical. Data objects are the essential part of a
database. Quantitative research aims to answer the question what. When we talk about data mining, we usually discuss knowledge discovery from data. The type of scale determines what specific
statistical analysis you should use. My only caution is that some videos use slightly different formulas than in this textbook, and some use software that will not be discussed here, so make sure
that the information in the video matches what your professor is showing you.] These depend on your objectives, the scope of the research project, and the purpose of your data collection.. It only
takes a minute to sign up. In bad news, statistical software will run what you ask, regardless of the measurement scale of the variable. Maybe its there because one counts nominal events discretely,
but even if that is why it is incorrect. For instance, a company like Flipkart produces more than 2TB of data on daily basis. It could be structured more easily and put into graphs and charts for
better readability. \text { R } & \text { D } & \text { R } & \text { D } & \text { R } & \text { R } & \text { R } & \text { D } & \text { R } & \text { R } Nominal : Ordinal : Meaning In this
scale, the data is grouped according to their names. Okay, that probably makes it seem like it's easy to know whether your variable is qualitative or quantitative. There is an aggregation to counts
(how many such deaths in a area and a time period), a reduction to rates (how many relative to the population at risk), and so on. Qualitative/nominal variables name or label different categories of
objects. Qualitative data may be classified as nominal or ordinal: Nominal data is used to label or categorize certain variables without giving them any type of quantitative value. endstream endobj
134 0 obj <>/Metadata 17 0 R/PageLabels 129 0 R/PageLayout/OneColumn/Pages 131 0 R/PieceInfo<>>>/StructTreeRoot 24 0 R/Type/Catalog>> endobj 135 0 obj <>/ExtGState<>/Font<>/ProcSet[/PDF/Text/ImageC/
ImageI]/XObject<>>>/Rotate 0/StructParents 0/Tabs/S/Type/Page>> endobj 136 0 obj <>stream If a decimal makes sense, then the variable is quantitative. The three cans of soup, two packages of nuts,
four kinds of vegetables and two desserts are quantitative discrete data because you count them. How long it takes you to blink after a puff of air hits your eye. With the Big Data industry
experiencing a surge in the digital market, job roles like data scientist and analyst are two of the most coveted roles. For example, you notice that your competitor's revenues are 50% higher than
yours. Qualitative variables are divided into two types: nominal and ordinal. It is the simplest form of a scale of measure. Qualitative variables are counted, and the counts are used in statistical
analyses.The name or label of a qualitative variable can be a number, but the number doesnt mean anything. Dissimilar to interval or ratio data, nominal data cannot be manipulated using available
mathematical operators. For example, the variable gender is nominal because there is no order in the levels female/male. Data science's effect has grown dramatically due to its advancements and
technical advancements, expanding its scope. Simple, right? There's one more distinction we should get straight before moving on to the actual data types, and it has to do with quantitative (numbers)
data: discrete vs. continuous data. This is because this information can be easily categorized based on properties or certain characteristics., The main feature is that qualitative data does not come
as numbers with mathematical meaning, but rather as words. When a data object is listed in a database they are called data tuples. The weights of the soups (19 ounces, 14.1 ounces, 19 ounces) are
quantitative continuous data because you measure weights as precisely as possible. An average gender of 1.75 (or whatever) doesn't tell us much since gender is a qualitative variable (nominal scale
of measurement), so you can only count it. Qualitative (Nominal (N), Ordinal (O), Binary (B)). For example, a company's financial reports contain quantitative data. Statistics and Probability
questions and answers. 1.4: Types of Data and How to Measure Them, { "1.04.01:_IV_and_DV-_Variables_as_Predictors_and_Outcomes" : "property get [Map
b__1]()", "1.04.02:_Qualitative_versus_Quantitative_Variables" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.b__1]()",
"1.04.03:_Scales_of_Measurement" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.b__1]()" }, { "1.01:_Why_are_you_taking_this_course" : "property get
[Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.b__1]()", "1.02:_What_is_a_statistic_What_is_a_statistical_analysis" : "property get [Map
MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.b__1]()", "1.03:_The_Scientific_Method" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>
c__DisplayClass228_0.b__1]()", "1.04:_Types_of_Data_and_How_to_Measure_Them" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.b__1]()",
"1.05:_Populations_and_Samples" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.b__1]()", "1.06:_Research_shows_that" : "property get [Map
MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass228_0.b__1]()", "1.07:_Learning_(Statistics)" : "property get [Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>
c__DisplayClass228_0.b__1]()" }, 1.4.2: Qualitative versus Quantitative Variables, [ "article:topic", "qualitative data", "quantitative data", "discrete data", "continuous data", "license:ccby",
"source-stats-705", "showtoc:yes", "source[1]-stats-5982", "source[2]-stats-705", "source[3]-stats-5982", "authorname:moja", "source[31]-stats-17291", "licenseversion:40" ], https://
%2FUnit_1%253A_Description%2F1%253A_Introduction_to_Behavioral_Statistics%2F1.04%253A_Types_of_Data_and_How_to_Measure_Them%2F1.04.02%253A_Qualitative_versus_Quantitative_Variables, \( \newcommand{\
vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{#1}}} \)\(\newcommand{\id}{\mathrm{id}}\) \( \newcommand
{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \
( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \(\newcommand{\id}{\mathrm
{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\
ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\)
\(\newcommand{\AA}{\unicode[.8,0]{x212B}}\), 1.4.1: IV and DV- Variables as Predictors and Outcomes, short segment on these two types of variables, status page at https://status.libretexts.org, Score
on a depression scale (between 0 and 10). For example, volatile values such as temperature and the weight of a human can be included in the continuous value. Business Intelligence vs Data Science:
What are the differences? Nominal data includes names or characteristics that contain two or more categories, and the categories have no inherent ordering. in Corporate & Financial Law Jindal Law
School, LL.M. Table of contents Levels of measurement Examples of nominal data Numerical attributes are of 2 types, interval, and ratio. +M"nf p;xO?<3M 4 Q[=kEw.T;"|FmWE5+Dm.r^ However, the
quantitative labels lack a numerical value or relationship (e.g., identification number). Qualitative means you can't, and it's not numerical (think quality - categorical data instead). The
quantitative data, such as revenue numbers, does not help you understand why the company performs much better.. The number of steps in a stairway, Discrete or Continuous What is another example of a
qualitative variable? The continuous data flow has helped millions of organizations to attain growth with fact-backed decisions. Quantitative variables. For companies, data science is a significant
resource for making data-driven decisions since it describes the collecting, saving, sorting, and evaluating data. We also acknowledge previous National Science Foundation support under grant numbers
1246120, 1525057, and 1413739. If we consider the size of a clothing brand then we can easily sort them according to their name tag in the order of small < medium < large. Global Doctor of Business
Administration SSBM, Master of Business Administration (MBA) LBS and IMT, MBA (Global) Deakin Business School and IMT, Master of Science in Machine Learning & AI LJMU and IIIT-B, Advanced
Certification in Machine Learning and Cloud IIT-M, Executive PG Program in Machine Learning & AI IIIT-B, Advanced Certificate Program in Machine Learning and Deep Learning IIIT-B, Advanced
Certificate Program in Machine Learning and NLP IIIT-B, Master of Science in Machine Learning & AI LJMU and IIT-M, Master of Science in Data Science LJMU and IIIT-B, Executive PG Program in Data
Science IIIT-B, Professional Certificate Program in Data Science and BA University of Maryland, Caltech CTME Data Analytics Certificate Program powered by Fullstack Academy and upGrad, Advanced
Certificate Program in Data Science IIIT-B, Advanced Program in Data Science IIIT-B, Professional Certificate Program in Data Science for Business Decision Making IIM-K, Marketing Analytics
Certificate Program Emory University, Advanced Certificate in Digital Marketing and Communication MICA and upGrad, Full Stack Development Certificate Program Purdue University, Master of Science in
Computer Science LJMU and IIIT-B, Caltech CTME Cybersecurity Certificate Program powered by Fullstack Academy and upGrad, Executive PG Program in Software Development IIIT-B, Advanced Certificate
Program in Cloud Backend Development IIIT-B, Advanced Certificate Program in DevOps IIIT-B, Advanced Certificate Program in Cyber Security IIIT-B, Advanced Certificate Program in Big Data IIIT-B,
Blockchain Certificate Program Purdue University, Cloud Backend Development Certificate Program Purdue University, Product Management Certification Program Duke CE, Project Management Professional
(PMP) Certification Course upGrad Knowledgehut, Certified ScrumMaster (CSM) Course upGrad Knowledgehut, M.Sc in Data Science LJMU & IIIT Bangalore, Importance of Qualitative and Quantitative Data. | {"url":"http://www.miaminewmediafestival.com/iDcMFBZ/is-nominal-data-qualitative-or-quantitative","timestamp":"2024-11-10T16:13:32Z","content_type":"text/html","content_length":"32308","record_id":"<urn:uuid:2259c756-d3d7-4d36-b56d-583f888bcd45>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00003.warc.gz"} |
Pythagorean theorem entry
Pythagorean Theorem
(Mod IV)
Prices are going up. Lots of pages to change and it is taking longer than i thought. In the meantime take advantage of this sale.
All passwords including algebra and crash course.
Algebra with base ten blocks online!
30 Hour Course.
Click for details.
FOUR BOOKS, one low price, $19.99.
Absolutely Amazing Addends is done!
Two Wildly Wondrous Work Books are done!
Passwords will change again "SOON".
Base Ten Block CRASH COURSE
Just $97.00
Hourly rate is now $60.00 x 5 = $300.00.
$75.00 single classes.
Use the contact link for payments or other communication.
Crewton Ramone's No Mystery Theatre Now Has A Second Page!!!
(More FREE vids!)
And pardon me: for those of you that can't do math, the November 2020 election was absolutely stolen. Very basic math. Trump won in a landslide, so did a lot of others...this is why YOU need math.
The next page shows you how to make Pythagorean theorem easy and understandable for little kids.
Start by building squares. Talk about square roots, & what the symbol means. Then build Triangles using your squares. The 3-4-5 Pythagorean triple, is small and easy to count for little kids. But
the concept is huge.
a^2 + b^2 = c^2
The next page contains video & lessons to help you make this concept easy for your kids. This is one of the basic building blocks for Trigonometry. Understanding this concept early on lowers the
cognitive load later.
This page will help you avoid the problems that this student and teacher had.
Mathematics really can be child's play. Just because this is part of module four doesn't mean you can't start fooling around with the lessons you find on the next page at the same time they're
learning other basics like addition or division. Square numbers should be part of your early lessons (use module three password), on the next page you'll see little second graders having fun with
this concept.
There are two videos that show a classroom full of second graders who were only seven and eight years old building pyramids out of squares like the one you see above and counting them as part of the
lesson. None of them thought it was hard, or difficult to understand. In fact they went home and explained it to their parents who later mobbed me at a birthday party when they discovered I was the
guy that had taught their children Pythagorean theorem.
The children came home excited and happy...And when the parents asked, "what did you learn today?" Their children could explain this fundamental theorem, much the amazement of one of the parents who
had a degree in physics, & who would never have thought of teaching his little daughter this until high school.
Here are some triples to get started with. You can also double or triple them. In other words (3, 4, 5) & (6, 8, 10) & (9, 12, 15) all work and are easy to build with blocks.
Click enter to introduce your kids to Pythagorean Theorem, the easy and fun way.
(3, 4, 5) (5, 12, 13) (8, 15, 17) (7, 24, 25)
(20, 21, 29) (12, 35, 37) (9, 40, 41) (28, 45, 53)
(11, 60, 61) (16, 63, 65) (33, 56, 65) (48, 55, 73)
(13, 84, 85) (36, 77, 85) (39, 80, 89) (65, 72, 97)
"Numbers rule the Universe." ~Pythagoras
“Geometry has two great treasures; one is the Theorem of Pythagoras; the other, the division of a line into extreme and mean ratio. The first we may compare to a measure of gold; the second we may
name a precious jewel.” ~ Johannes Kepler
"It is the supreme art of the teacher to awaken joy in creative expression and knowledge." -- Albert Einstein
“When one teaches, two learn.” ~Robert Heinlein
"Technical skill is mastery of complexity while creativity is mastery of simplicity." ~ E Christopher Zeeman, Catastrophe Theory, 1977
More education quotes.
Want to see more free pages & lessons & other free stuff on this site?
Consider a dollar a month.
For $1 per month (the lowest level subscription) you get access to
Super Duper Super Secret Facebook Page.
You'll find hours and hours of videos with base ten blocks and information you won't may not find anywhere else not even on this website. I often post video tutoring sessions there. Other people
post vids and links there. Lessons cost the people doing them minimum $50.00 and hour. You can watch 2 to 10 of them a month for a dollar...Do the math. Currently 127 people are there. About half of
them are active.
You basically get a support group for a buck a month.
Here's My Patreon:
Note: from time to time the passwords change. Simply e-mail me for a new one or a new passport as the case may be. Annual passes are good for one year, lifetime passes are good for as long as the
site remains up, (site has been up for eight years now). All single page passwords have lifetime renewal.
Note: Mortensen Product Ordering Buttons Have Been Removed Due To Shipping/Inventory Issues. i basically DO NOT sell product for them anymore. Use eBay or other sources for base ten blocks. | {"url":"https://www.crewtonramoneshouseofmath.com/Pythagorean-Theorem-Entry-Page.html","timestamp":"2024-11-05T07:33:58Z","content_type":"text/html","content_length":"36411","record_id":"<urn:uuid:b741397a-2cbf-447d-90b0-491dfc024ab3>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00084.warc.gz"} |
Comparing Fractions Game
□ 02/17/16 | Adjusted: 07/03/18 | 4 files
□ Grades 3
Comparing Fractions Game
What we like about this task
• Allows students to compare fractions by using common numerators, common denominators, or benchmarks.
• Encourages students to reason about the size of fractions (3.NF.A.3).
In the classroom:
• Provides resources to allow students to compare fractions with or without a visual representation of the fractions the mathematics explicit.
• Prompts students to share their developing thinking and understanding.
• Captures student attention by using an engaging context.
This task was designed to include specific features that support access for all students and align to best practice for English Language Learner (ELL) instruction. Go here to learn more about the
research behind these supports. This lesson aligns to ELL best practice in the following ways:
• Provides opportunities for students to practice and refine their use of mathematical language.
• Allows for whole class, small group, and paired discussion for the purpose of practicing with mathematical concepts and language.
• Includes a mathematical routine that reflects best practices to supporting ELLs in accessing mathematical concepts.
• Making the Shifts
How does this task exemplify the instructional Shifts required by CCSSM?
Focus Belongs to the major work of third grade
Coherence Sets students up for work in grade 4 to compare a broader range of fractions with different numerators and denominators
Conceptual Understanding: primary in this task
Rigor ^ Procedural Skill and Fluency: not targeted in this task
Application: not targeted in this task
• Task
This activity is designed for pairs of students. They will require a set of cards (which are supplied as an attached resource, after the commentary). The goal is to compare the two fractions
appearing on each card, determine if they are equivalent and, if not, which is larger. Instructions for the activity are as follows:
a. Students go through the following steps with the fraction cards:
1. The pair of students select a card.
2. Each student individually decides whether the fractions are equal and, if not, which is greater. Then they show each other their choice.
3. If the partners agree, they take turns explaining their reasoning. If they disagree, they discuss until reaching a consensus.
4. Repeat 1 through 3 with a new card.
b. After 10 rounds, each pair records observations about what methods they used to compare the fractions.
• Illustrative Mathematics Commentary and Solution
The goal of this task is to compare fractions with a focus on providing explanations that demonstrate deep conceptual understanding. The main comparison techniques to look and listen for (for non
equivalent fractions) are
□ Using a common numerator (e.g. thirds are bigger than fourths, so two thirds are bigger than two fourths).
□ Using a common denominator (e.g. $\frac{1}{5}$ is less than $\frac{2}{5}$ because there is an extra fifth of a whole in $\frac{2}{5}$).
□ Using a whole as a benchmark (e.g. $\frac{2}{3}$ is less than $\frac{5}{4}$ because $\frac{2}{3}$ is less than $\frac{3}{3}$ or one whole and $\frac{5}{4}$ is larger than $\frac{4}{4}$ or one
Concerning the third method, using a benchmark to compare two fractions is explicitly mentioned in 4.NF.2. Because the meaning of a whole is fundamental to understanding a fraction, it is
appropriate to use 1 as a benchmark in the third grade. The teacher may, however, choose to remove those cards containing pairs of fractions where one is larger than a whole and the other is less
than a whole.
Two different sets of cards are provided as attachments, one with a picture of the two fractions being compared and one without. The pictures allow students to make a visual comparison of the
fractions which is important. However, the teacher may wish for students to provide these pictures as one means of explaining their decision. Similarly, the teacher may also wish to remove cards
having equivalent fractions if the goal is to work exclusively on inequalities.
Question 2 is intended to motivate a classroom discussion after students have completed the activity. In order to better prepare them for this, the teacher may suggest that students think about
the strategies they are using to compare fractions as they play the game. Some methods, like drawing pictures or using fraction strips, can be used for all of the pairs of fractions. But other
methods such as looking for a common numerator and common denominator are conceptually important and the teacher will want to make sure that these methods are discussed.
a. There are four types of fractions which students will need to compare:
1. Fractions having the same numerator. The denominator tells us how many equal pieces are in the whole, determining the size of each piece, and the numerator tells us how many of those pieces
we have. For example, to compare and , there are more fifths in the whole than thirds so fifths are smaller. This means that $\frac{2}{5} < \frac{2}{3}$.
2. Fractions having the same denominator. For example, we see that $\frac{2}{3} > \frac{1}{3}$ because $\frac{2}{3}$ is $\frac{1}{3}$ and an additional third so it is bigger. This relates to the
reasoning described in the common numerator situation: the denominator tells us there are the same number of pieces in the whole, however one fraction has more of those pieces than the other.
3. One fraction is less than 1 and the other fraction is larger than 1. For example,$\frac{2}{3} < \frac{3}{2}$ because $\frac{2}{3}$ is one third short of a whole while $\frac{3}{2}$ is an
entire whole with an additional half added.
4. Simple equivalent fractions such as $\frac{1}{2}$ and $\frac{2}{4}$. One way to show that these fractions represent the same quantity is with a picture:
Here the two large squares are equally sized wholes which have been divided into two equal parts (on the left) and four equal parts (on the right). The same fraction of the whole is shaded in
each picture so $\frac{1}{2}$ is equivalent to $\frac{2}{4}$.
b. There are many important lessons to be learned from comparing these fractions including:
□ If I draw a picture of the two fractions, the larger fraction will have more shaded than the smaller fraction. If the two fractions are equal, the same amount will be shaded in both.
□ The denominator tells me how many pieces to cut my whole into. When the whole is cut into more pieces, the pieces are smaller (this is why $\frac{1}{3}$ is less than $\frac{1}{2}$).
□ The numerator tells me how many equal sized pieces I have. So $\frac{3}{5}$ is more than $\frac{2}{5}$ because I have one extra piece.
□ Fractions are built from the unit fractions so it is important to understand and be able to represent the unit fractions.
□ If using the fraction cards with pictures, equal sized wholes are important when comparing fractions.
□ Equivalent fractions have different sized pieces, but the same total amount shaded.
□ When the numerator is a bigger number than the denominator, the fraction is greater than one whole.
□ When doing mathematics, patterns emerge. These patterns support students in making conjectures, supporting their reasoning, and proving mathematical claims.
• Additional Thoughts
This task was created as part of the Adapting Materials Project. The goal of this project was to create a replicable process for teachers intending to adapt their materials, and to help create an
environment of trust, where teachers felt empowered with the knowledge, confidence, and authority to change their own instructional materials in a way that better reflects the standards. To learn
more about the work of these districts, read the “Collaborative Learning and Updating Materials” article from Aligned or access the complete case study.
For more information on the specific expectations for students working with fractions in grade 3, including the need for fractions to be referring to the same whole, read pages 3–5 in the
progression document, Number and Operations–Fractions, available at www.achievethecore.org/progressions. | {"url":"https://achievethecore.org/page/2774/comparing-fractions-game","timestamp":"2024-11-08T07:53:17Z","content_type":"text/html","content_length":"90596","record_id":"<urn:uuid:68625f96-ce2e-4461-8a72-71d6401dcba9>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00088.warc.gz"} |
1.4. Support Vector Machines
1.4. Support Vector Machines#
Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.
The advantages of support vector machines are:
• Effective in high dimensional spaces.
• Still effective in cases where number of dimensions is greater than the number of samples.
• Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
• Versatile: different Kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.
The disadvantages of support vector machines include:
• If the number of features is much greater than the number of samples, avoid over-fitting in choosing Kernel functions and regularization term is crucial.
• SVMs do not directly provide probability estimates, these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below).
The support vector machines in scikit-learn support both dense (numpy.ndarray and convertible to that by numpy.asarray) and sparse (any scipy.sparse) sample vectors as input. However, to use an SVM
to make predictions for sparse data, it must have been fit on such data. For optimal performance, use C-ordered numpy.ndarray (dense) or scipy.sparse.csr_matrix (sparse) with dtype=float64.
1.4.1. Classification#
SVC, NuSVC and LinearSVC are classes capable of performing binary and multi-class classification on a dataset.
SVC and NuSVC are similar methods, but accept slightly different sets of parameters and have different mathematical formulations (see section Mathematical formulation). On the other hand, LinearSVC
is another (faster) implementation of Support Vector Classification for the case of a linear kernel. It also lacks some of the attributes of SVC and NuSVC, like support_. LinearSVC uses squared_hinge
loss and due to its implementation in liblinear it also regularizes the intercept, if considered. This effect can however be reduced by carefully fine tuning its intercept_scaling parameter, which
allows the intercept term to have a different regularization behavior compared to the other features. The classification results and score can therefore differ from the other two classifiers.
As other classifiers, SVC, NuSVC and LinearSVC take as input two arrays: an array X of shape (n_samples, n_features) holding the training samples, and an array y of class labels (strings or
integers), of shape (n_samples):
>>> from sklearn import svm
>>> X = [[0, 0], [1, 1]]
>>> y = [0, 1]
>>> clf = svm.SVC()
>>> clf.fit(X, y)
After being fitted, the model can then be used to predict new values:
>>> clf.predict([[2., 2.]])
SVMs decision function (detailed in the Mathematical formulation) depends on some subset of the training data, called the support vectors. Some properties of these support vectors can be found in
attributes support_vectors_, support_ and n_support_:
>>> # get support vectors
>>> clf.support_vectors_
array([[0., 0.],
[1., 1.]])
>>> # get indices of support vectors
>>> clf.support_
array([0, 1]...)
>>> # get number of support vectors for each class
>>> clf.n_support_
array([1, 1]...)
1.4.1.1. Multi-class classification#
SVC and NuSVC implement the “one-versus-one” approach for multi-class classification. In total, n_classes * (n_classes - 1) / 2 classifiers are constructed and each one trains data from two classes.
To provide a consistent interface with other classifiers, the decision_function_shape option allows to monotonically transform the results of the “one-versus-one” classifiers to a “one-vs-rest”
decision function of shape (n_samples, n_classes), which is the default setting of the parameter (default=’ovr’).
>>> X = [[0], [1], [2], [3]]
>>> Y = [0, 1, 2, 3]
>>> clf = svm.SVC(decision_function_shape='ovo')
>>> clf.fit(X, Y)
>>> dec = clf.decision_function([[1]])
>>> dec.shape[1] # 6 classes: 4*3/2 = 6
>>> clf.decision_function_shape = "ovr"
>>> dec = clf.decision_function([[1]])
>>> dec.shape[1] # 4 classes
On the other hand, LinearSVC implements “one-vs-the-rest” multi-class strategy, thus training n_classes models.
>>> lin_clf = svm.LinearSVC()
>>> lin_clf.fit(X, Y)
>>> dec = lin_clf.decision_function([[1]])
>>> dec.shape[1]
See Mathematical formulation for a complete description of the decision function.
Details on multi-class strategies#
Note that the LinearSVC also implements an alternative multi-class strategy, the so-called multi-class SVM formulated by Crammer and Singer [16], by using the option multi_class='crammer_singer'. In
practice, one-vs-rest classification is usually preferred, since the results are mostly similar, but the runtime is significantly less.
For “one-vs-rest” LinearSVC the attributes coef_ and intercept_ have the shape (n_classes, n_features) and (n_classes,) respectively. Each row of the coefficients corresponds to one of the n_classes
“one-vs-rest” classifiers and similar for the intercepts, in the order of the “one” class.
In the case of “one-vs-one” SVC and NuSVC, the layout of the attributes is a little more involved. In the case of a linear kernel, the attributes coef_ and intercept_ have the shape (n_classes *
(n_classes - 1) / 2, n_features) and (n_classes * (n_classes - 1) / 2) respectively. This is similar to the layout for LinearSVC described above, with each row now corresponding to a binary
classifier. The order for classes 0 to n is “0 vs 1”, “0 vs 2” , … “0 vs n”, “1 vs 2”, “1 vs 3”, “1 vs n”, . . . “n-1 vs n”.
The shape of dual_coef_ is (n_classes-1, n_SV) with a somewhat hard to grasp layout. The columns correspond to the support vectors involved in any of the n_classes * (n_classes - 1) / 2 “one-vs-one”
classifiers. Each support vector v has a dual coefficient in each of the n_classes - 1 classifiers comparing the class of v against another class. Note that some, but not all, of these dual
coefficients, may be zero. The n_classes - 1 entries in each column are these dual coefficients, ordered by the opposing class.
This might be clearer with an example: consider a three class problem with class 0 having three support vectors \(v^{0}_0, v^{1}_0, v^{2}_0\) and class 1 and 2 having two support vectors \(v^{0}_1, v
^{1}_1\) and \(v^{0}_2, v^{1}_2\) respectively. For each support vector \(v^{j}_i\), there are two dual coefficients. Let’s call the coefficient of support vector \(v^{j}_i\) in the classifier
between classes \(i\) and \(k\) \(\alpha^{j}_{i,k}\). Then dual_coef_ looks like this:
\(\alpha^{0}_{0,1}\) \(\alpha^{1}_{0,1}\) \(\alpha^{2}_{0,1}\) \(\alpha^{0}_{1,0}\) \(\alpha^{1}_{1,0}\) \(\alpha^{0}_{2,0}\) \(\alpha^{1}_{2,0}\)
\(\alpha^{0}_{0,2}\) \(\alpha^{1}_{0,2}\) \(\alpha^{2}_{0,2}\) \(\alpha^{0}_{1,2}\) \(\alpha^{1}_{1,2}\) \(\alpha^{0}_{2,1}\) \(\alpha^{1}_{2,1}\)
Coefficients for SVs of class 0 Coefficients for SVs of class 1 Coefficients for SVs of class 2
1.4.1.2. Scores and probabilities#
The decision_function method of SVC and NuSVC gives per-class scores for each sample (or a single score per sample in the binary case). When the constructor option probability is set to True, class
membership probability estimates (from the methods predict_proba and predict_log_proba) are enabled. In the binary case, the probabilities are calibrated using Platt scaling [9]: logistic regression
on the SVM’s scores, fit by an additional cross-validation on the training data. In the multiclass case, this is extended as per [10].
The same probability calibration procedure is available for all estimators via the CalibratedClassifierCV (see Probability calibration). In the case of SVC and NuSVC, this procedure is builtin in
libsvm which is used under the hood, so it does not rely on scikit-learn’s CalibratedClassifierCV.
The cross-validation involved in Platt scaling is an expensive operation for large datasets. In addition, the probability estimates may be inconsistent with the scores:
• the “argmax” of the scores may not be the argmax of the probabilities
• in binary classification, a sample may be labeled by predict as belonging to the positive class even if the output of predict_proba is less than 0.5; and similarly, it could be labeled as
negative even if the output of predict_proba is more than 0.5.
Platt’s method is also known to have theoretical issues. If confidence scores are required, but these do not have to be probabilities, then it is advisable to set probability=False and use
decision_function instead of predict_proba.
Please note that when decision_function_shape='ovr' and n_classes > 2, unlike decision_function, the predict method does not try to break ties by default. You can set break_ties=True for the output
of predict to be the same as np.argmax(clf.decision_function(...), axis=1), otherwise the first class among the tied classes will always be returned; but have in mind that it comes with a
computational cost. See SVM Tie Breaking Example for an example on tie breaking.
1.4.1.3. Unbalanced problems#
In problems where it is desired to give more importance to certain classes or certain individual samples, the parameters class_weight and sample_weight can be used.
SVC (but not NuSVC) implements the parameter class_weight in the fit method. It’s a dictionary of the form {class_label : value}, where value is a floating point number > 0 that sets the parameter C
of class class_label to C * value. The figure below illustrates the decision boundary of an unbalanced problem, with and without weight correction.
SVC, NuSVC, SVR, NuSVR, LinearSVC, LinearSVR and OneClassSVM implement also weights for individual samples in the fit method through the sample_weight parameter. Similar to class_weight, this sets
the parameter C for the i-th example to C * sample_weight[i], which will encourage the classifier to get these samples right. The figure below illustrates the effect of sample weighting on the
decision boundary. The size of the circles is proportional to the sample weights:
1.4.2. Regression#
The method of Support Vector Classification can be extended to solve regression problems. This method is called Support Vector Regression.
The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training
points that lie beyond the margin. Analogously, the model produced by Support Vector Regression depends only on a subset of the training data, because the cost function ignores samples whose
prediction is close to their target.
There are three different implementations of Support Vector Regression: SVR, NuSVR and LinearSVR. LinearSVR provides a faster implementation than SVR but only considers the linear kernel, while NuSVR
implements a slightly different formulation than SVR and LinearSVR. Due to its implementation in liblinear LinearSVR also regularizes the intercept, if considered. This effect can however be reduced
by carefully fine tuning its intercept_scaling parameter, which allows the intercept term to have a different regularization behavior compared to the other features. The classification results and
score can therefore differ from the other two classifiers. See Implementation details for further details.
As with classification classes, the fit method will take as argument vectors X, y, only that in this case y is expected to have floating point values instead of integer values:
>>> from sklearn import svm
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> regr = svm.SVR()
>>> regr.fit(X, y)
>>> regr.predict([[1, 1]])
1.4.3. Density estimation, novelty detection#
The class OneClassSVM implements a One-Class SVM which is used in outlier detection.
See Novelty and Outlier Detection for the description and usage of OneClassSVM.
1.4.4. Complexity#
Support Vector Machines are powerful tools, but their compute and storage requirements increase rapidly with the number of training vectors. The core of an SVM is a quadratic programming problem
(QP), separating support vectors from the rest of the training data. The QP solver used by the libsvm-based implementation scales between \(O(n_{features} \times n_{samples}^2)\) and \(O(n_{features}
\times n_{samples}^3)\) depending on how efficiently the libsvm cache is used in practice (dataset dependent). If the data is very sparse \(n_{features}\) should be replaced by the average number of
non-zero features in a sample vector.
For the linear case, the algorithm used in LinearSVC by the liblinear implementation is much more efficient than its libsvm-based SVC counterpart and can scale almost linearly to millions of samples
and/or features.
1.4.5. Tips on Practical Use#
• Avoiding data copy: For SVC, SVR, NuSVC and NuSVR, if the data passed to certain methods is not C-ordered contiguous and double precision, it will be copied before calling the underlying C
implementation. You can check whether a given numpy array is C-contiguous by inspecting its flags attribute.
For LinearSVC (and LogisticRegression) any input passed as a numpy array will be copied and converted to the liblinear internal sparse data representation (double precision floats and int32
indices of non-zero components). If you want to fit a large-scale linear classifier without copying a dense numpy C-contiguous double precision array as input, we suggest to use the SGDClassifier
class instead. The objective function can be configured to be almost the same as the LinearSVC model.
• Kernel cache size: For SVC, SVR, NuSVC and NuSVR, the size of the kernel cache has a strong impact on run times for larger problems. If you have enough RAM available, it is recommended to set
cache_size to a higher value than the default of 200(MB), such as 500(MB) or 1000(MB).
• Setting C: C is 1 by default and it’s a reasonable default choice. If you have a lot of noisy observations you should decrease it: decreasing C corresponds to more regularization.
LinearSVC and LinearSVR are less sensitive to C when it becomes large, and prediction results stop improving after a certain threshold. Meanwhile, larger C values will take more time to train,
sometimes up to 10 times longer, as shown in [11].
• Support Vector Machine algorithms are not scale invariant, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or
standardize it to have mean 0 and variance 1. Note that the same scaling must be applied to the test vector to obtain meaningful results. This can be done easily by using a Pipeline:
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import SVC
>>> clf = make_pipeline(StandardScaler(), SVC())
See section Preprocessing data for more details on scaling and normalization.
• Regarding the shrinking parameter, quoting [12]: We found that if the number of iterations is large, then shrinking can shorten the training time. However, if we loosely solve the optimization
problem (e.g., by using a large stopping tolerance), the code without using shrinking may be much faster
• Parameter nu in NuSVC/OneClassSVM/NuSVR approximates the fraction of training errors and support vectors.
• In SVC, if the data is unbalanced (e.g. many positive and few negative), set class_weight='balanced' and/or try different penalty parameters C.
• Randomness of the underlying implementations: The underlying implementations of SVC and NuSVC use a random number generator only to shuffle the data for probability estimation (when probability
is set to True). This randomness can be controlled with the random_state parameter. If probability is set to False these estimators are not random and random_state has no effect on the results.
The underlying OneClassSVM implementation is similar to the ones of SVC and NuSVC. As no probability estimation is provided for OneClassSVM, it is not random.
The underlying LinearSVC implementation uses a random number generator to select features when fitting the model with a dual coordinate descent (i.e. when dual is set to True). It is thus not
uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. This randomness can also be controlled with the random_state parameter.
When dual is set to False the underlying implementation of LinearSVC is not random and random_state has no effect on the results.
• Using L1 penalization as provided by LinearSVC(penalty='l1', dual=False) yields a sparse solution, i.e. only a subset of feature weights is different from zero and contribute to the decision
function. Increasing C yields a more complex model (more features are selected). The C value that yields a “null” model (all weights equal to zero) can be calculated using l1_min_c.
1.4.6. Kernel functions#
The kernel function can be any of the following:
• linear: \(\langle x, x'\rangle\).
• polynomial: \((\gamma \langle x, x'\rangle + r)^d\), where \(d\) is specified by parameter degree, \(r\) by coef0.
• rbf: \(\exp(-\gamma \|x-x'\|^2)\), where \(\gamma\) is specified by parameter gamma, must be greater than 0.
• sigmoid \(\tanh(\gamma \langle x,x'\rangle + r)\), where \(r\) is specified by coef0.
Different kernels are specified by the kernel parameter:
>>> linear_svc = svm.SVC(kernel='linear')
>>> linear_svc.kernel
>>> rbf_svc = svm.SVC(kernel='rbf')
>>> rbf_svc.kernel
See also Kernel Approximation for a solution to use RBF kernels that is much faster and more scalable.
1.4.6.1. Parameters of the RBF Kernel#
When training an SVM with the Radial Basis Function (RBF) kernel, two parameters must be considered: C and gamma. The parameter C, common to all SVM kernels, trades off misclassification of training
examples against simplicity of the decision surface. A low C makes the decision surface smooth, while a high C aims at classifying all training examples correctly. gamma defines how much influence a
single training example has. The larger gamma is, the closer other examples must be to be affected.
Proper choice of C and gamma is critical to the SVM’s performance. One is advised to use GridSearchCV with C and gamma spaced exponentially far apart to choose good values.
1.4.6.2. Custom Kernels#
You can define your own kernels by either giving the kernel as a python function or by precomputing the Gram matrix.
Classifiers with custom kernels behave the same way as any other classifiers, except that:
• Field support_vectors_ is now empty, only indices of support vectors are stored in support_
• A reference (and not a copy) of the first argument in the fit() method is stored for future reference. If that array changes between the use of fit() and predict() you will have unexpected
Using Python functions as kernels#
You can use your own defined kernels by passing a function to the kernel parameter.
Your kernel must take as arguments two matrices of shape (n_samples_1, n_features), (n_samples_2, n_features) and return a kernel matrix of shape (n_samples_1, n_samples_2).
The following code defines a linear kernel and creates a classifier instance that will use that kernel:
>>> import numpy as np
>>> from sklearn import svm
>>> def my_kernel(X, Y):
... return np.dot(X, Y.T)
>>> clf = svm.SVC(kernel=my_kernel)
Using the Gram matrix#
You can pass pre-computed kernels by using the kernel='precomputed' option. You should then pass Gram matrix instead of X to the fit and predict methods. The kernel values between all training
vectors and the test vectors must be provided:
>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from sklearn import svm
>>> X, y = make_classification(n_samples=10, random_state=0)
>>> X_train , X_test , y_train, y_test = train_test_split(X, y, random_state=0)
>>> clf = svm.SVC(kernel='precomputed')
>>> # linear kernel computation
>>> gram_train = np.dot(X_train, X_train.T)
>>> clf.fit(gram_train, y_train)
>>> # predict on training examples
>>> gram_test = np.dot(X_test, X_train.T)
>>> clf.predict(gram_test)
array([0, 1, 0])
1.4.7. Mathematical formulation#
A support vector machine constructs a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good
separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the
lower the generalization error of the classifier. The figure below shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”:
In general, when the problem isn’t linearly separable, the support vectors are the samples within the margin boundaries.
We recommend [13] and [14] as good references for the theory and practicalities of SVMs.
1.4.7.1. SVC#
Given training vectors \(x_i \in \mathbb{R}^p\), i=1,…, n, in two classes, and a vector \(y \in \{1, -1\}^n\), our goal is to find \(w \in \mathbb{R}^p\) and \(b \in \mathbb{R}\) such that the
prediction given by \(\text{sign} (w^T\phi(x) + b)\) is correct for most samples.
SVC solves the following primal problem:
\[ \begin{aligned}\min_ {w, b, \zeta} \frac{1}{2} w^T w + C \sum_{i=1}^{n} \zeta_i\\\begin{split}\textrm {subject to } & y_i (w^T \phi (x_i) + b) \geq 1 - \zeta_i,\\ & \zeta_i \geq 0, i=1, ..., n\end
{split}\end{aligned} \]
Intuitively, we’re trying to maximize the margin (by minimizing \(||w||^2 = w^Tw\)), while incurring a penalty when a sample is misclassified or within the margin boundary. Ideally, the value \(y_i
(w^T \phi (x_i) + b)\) would be \(\geq 1\) for all samples, which indicates a perfect prediction. But problems are usually not always perfectly separable with a hyperplane, so we allow some samples
to be at a distance \(\zeta_i\) from their correct margin boundary. The penalty term C controls the strength of this penalty, and as a result, acts as an inverse regularization parameter (see note
The dual problem to the primal is
\[ \begin{aligned}\min_{\alpha} \frac{1}{2} \alpha^T Q \alpha - e^T \alpha\\\begin{split} \textrm {subject to } & y^T \alpha = 0\\ & 0 \leq \alpha_i \leq C, i=1, ..., n\end{split}\end{aligned} \]
where \(e\) is the vector of all ones, and \(Q\) is an \(n\) by \(n\) positive semidefinite matrix, \(Q_{ij} \equiv y_i y_j K(x_i, x_j)\), where \(K(x_i, x_j) = \phi (x_i)^T \phi (x_j)\) is the
kernel. The terms \(\alpha_i\) are called the dual coefficients, and they are upper-bounded by \(C\). This dual representation highlights the fact that training vectors are implicitly mapped into a
higher (maybe infinite) dimensional space by the function \(\phi\): see kernel trick.
Once the optimization problem is solved, the output of decision_function for a given sample \(x\) becomes:
\[\sum_{i\in SV} y_i \alpha_i K(x_i, x) + b,\]
and the predicted class correspond to its sign. We only need to sum over the support vectors (i.e. the samples that lie within the margin) because the dual coefficients \(\alpha_i\) are zero for the
other samples.
These parameters can be accessed through the attributes dual_coef_ which holds the product \(y_i \alpha_i\), support_vectors_ which holds the support vectors, and intercept_ which holds the
independent term \(b\)
While SVM models derived from libsvm and liblinear use C as regularization parameter, most other estimators use alpha. The exact equivalence between the amount of regularization of two models depends
on the exact objective function optimized by the model. For example, when the estimator used is Ridge regression, the relation between them is given as \(C = \frac{1}{alpha}\).
The primal problem can be equivalently formulated as
\[\min_ {w, b} \frac{1}{2} w^T w + C \sum_{i=1}^{n}\max(0, 1 - y_i (w^T \phi(x_i) + b)),\]
where we make use of the hinge loss. This is the form that is directly optimized by LinearSVC, but unlike the dual form, this one does not involve inner products between samples, so the famous kernel
trick cannot be applied. This is why only the linear kernel is supported by LinearSVC (\(\phi\) is the identity function).
The \(\nu\)-SVC formulation [15] is a reparameterization of the \(C\)-SVC and therefore mathematically equivalent.
We introduce a new parameter \(\nu\) (instead of \(C\)) which controls the number of support vectors and margin errors: \(\nu \in (0, 1]\) is an upper bound on the fraction of margin errors and a
lower bound of the fraction of support vectors. A margin error corresponds to a sample that lies on the wrong side of its margin boundary: it is either misclassified, or it is correctly classified
but does not lie beyond the margin.
1.4.7.2. SVR#
Given training vectors \(x_i \in \mathbb{R}^p\), i=1,…, n, and a vector \(y \in \mathbb{R}^n\) \(\varepsilon\)-SVR solves the following primal problem:
\[ \begin{aligned}\min_ {w, b, \zeta, \zeta^*} \frac{1}{2} w^T w + C \sum_{i=1}^{n} (\zeta_i + \zeta_i^*)\\\begin{split}\textrm {subject to } & y_i - w^T \phi (x_i) - b \leq \varepsilon + \zeta_i,\\
& w^T \phi (x_i) + b - y_i \leq \varepsilon + \zeta_i^*,\\ & \zeta_i, \zeta_i^* \geq 0, i=1, ..., n\end{split}\end{aligned} \]
Here, we are penalizing samples whose prediction is at least \(\varepsilon\) away from their true target. These samples penalize the objective by \(\zeta_i\) or \(\zeta_i^*\), depending on whether
their predictions lie above or below the \(\varepsilon\) tube.
The dual problem is
\[ \begin{aligned}\min_{\alpha, \alpha^*} \frac{1}{2} (\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + \varepsilon e^T (\alpha + \alpha^*) - y^T (\alpha - \alpha^*)\\\begin{split} \textrm {subject to }
& e^T (\alpha - \alpha^*) = 0\\ & 0 \leq \alpha_i, \alpha_i^* \leq C, i=1, ..., n\end{split}\end{aligned} \]
where \(e\) is the vector of all ones, \(Q\) is an \(n\) by \(n\) positive semidefinite matrix, \(Q_{ij} \equiv K(x_i, x_j) = \phi (x_i)^T \phi (x_j)\) is the kernel. Here training vectors are
implicitly mapped into a higher (maybe infinite) dimensional space by the function \(\phi\).
The prediction is:
\[\sum_{i \in SV}(\alpha_i - \alpha_i^*) K(x_i, x) + b\]
These parameters can be accessed through the attributes dual_coef_ which holds the difference \(\alpha_i - \alpha_i^*\), support_vectors_ which holds the support vectors, and intercept_ which holds
the independent term \(b\)
The primal problem can be equivalently formulated as
\[\min_ {w, b} \frac{1}{2} w^T w + C \sum_{i=1}^{n}\max(0, |y_i - (w^T \phi(x_i) + b)| - \varepsilon),\]
where we make use of the epsilon-insensitive loss, i.e. errors of less than \(\varepsilon\) are ignored. This is the form that is directly optimized by LinearSVR.
1.4.8. Implementation details#
Internally, we use libsvm [12] and liblinear [11] to handle all computations. These libraries are wrapped using C and Cython. For a description of the implementation and details of the algorithms
used, please refer to their respective papers. | {"url":"https://scikit-learn.qubitpi.org/modules/svm.html","timestamp":"2024-11-07T13:10:36Z","content_type":"text/html","content_length":"128244","record_id":"<urn:uuid:6dd5f339-a34e-402c-9a84-b36b51e8f209>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00236.warc.gz"} |
Unit 04: Sequences and Seeries
This is a forth unit of the book “Model Textbook of Mathematics for Class XI” published by National Book Foundation (NBF) as Federal Textbook Board, Islamabad, Pakistan. On this page we have provided
the solutions of the questions.
After reading this unit the students will be able to
• Define an arithmetic sequence and find its general term.
• Know arithmetic means between two numbers. Also insert $n$ arithmetic means between them.
• Define an arithmetic series and establish the formula to find the sum of $n$ terms of the series.
• Show that sum of $n$ arithmatic means between two numbers is equal to $n$ times their A.M.
• Solve real life problems involving arithmetic sequence, arithmetic means and arithmetic series.
• Define a geometric sequence and its general term.
• Know geometric means between two numbers, Also insert $n$ geometric means between them.
• Define geometric series and find the sum of $n$ terms of a geometric series.
• Find the sum of an infinite geometric series.
• Convert the recurring decimial into an equivalent common fracion.
• Define arithmetic geometric series.
• Find the sum of $n$ terms of the arithmetico-geometric series. | {"url":"https://www.mathcity.org/math-11-nbf/sol/unit04","timestamp":"2024-11-06T08:16:30Z","content_type":"application/xhtml+xml","content_length":"42665","record_id":"<urn:uuid:0983c4d2-f5a3-474e-b02b-ff38f402a765>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00351.warc.gz"} |
Applied Mathematics seminar | Duncan Mowbray, Using Neural Networks for Approximating Functionals: Applications in the Computational Materials Science | Applied Mathematics
Tuesday, August 8, 2017 3:00 pm - 3:00 pm EDT (GMT -04:00)
MC 6460
Duncan Mowbray
Professor, Department of Physics, School of Physical Sciences and Nanotechnology, Yachay Tech University, Ecuador
Using Neural Networks for Approximating Functionals: Applications in the Computational Materials Science
The mathematical problem of finding approximations to an unknown functional, that is, a function of a function, is still an open problem for material science almost twenty years after Walter Kohn won
the Nobel Prize for the development of density functional theory (DFT). This theory is based on the Hohenberg-Kohn Theorem, which states that there exists a unique unknown functional, called the
exchange and correlation functional, which relates the electron density of a system subject to a particular external potential to the system's ground state energy. In so doing, DFT effectively
reduces a problem of order O (p^3N) to one of order O (p^3), where N is the number of electrons in the system. However, the lack of a systematic method for approximating this unknown functional has
meant DFT calculations have an inherent and often unquantifiable error. Successive approximations to this unknown functional, based on the density at the same location (the local density
approximation or LDA), incorporating the gradient of the density at the same location (the generalized gradient approximation or GGA), and also including approximations to the screening of the
wavefunction (the so-called hybrid functionals), have met with varying degrees of success. However, the later are no longer functionals solely of the electron density, making their use much more
computationally demanding. Moreover, all such approximations to the exact functional often fail to describe even simple systems such as H[2]^+ and H[2] dissociation for various reasons. Altogether,
this has made the search for better approximations to the exact functional an area of intensive research for more than thirty years. In contrast to this, the Universal Approximation Theorem states
that a neural network with a single hidden layer containing a finite number of neurons can approximate continuous functions to an arbitrarily small accuracy ε for a given set of training data. In
this work, we employ simple neural networks to reproduce standard DFT functionals, and propose exact methods for producing training sets to train neural networks to reproduce the exact exchange and
correlation functional. | {"url":"https://uwaterloo.ca/applied-mathematics/events/applied-mathematics-seminar-duncan-mowbray-using-neural","timestamp":"2024-11-02T22:23:57Z","content_type":"text/html","content_length":"112520","record_id":"<urn:uuid:0feb3fe1-9865-447f-8dc5-d5daf93972b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00471.warc.gz"} |
Hierarchical Clustering Algorithm
One of the major considerations in using the K-means algorithm is deciding the value of K beforehand. The hierarchical clustering algorithm does not have this restriction.
The output of the hierarchical clustering algorithm is quite different from the K-mean algorithm as well. It results in an inverted tree-shaped structure, called the dendrogram. An example of a
dendrogram is shown below.
Let’s see how hierarchical clustering works.
In the K-Means algorithm, you divided the data in the first step itself. In the subsequent steps, you refined our clusters to get the most optimal grouping. In hierarchical clustering, the data is
not partitioned into a particular cluster in a single step. Instead, a series of partitions/merges take place, which may run from a single cluster containing all objects to n clusters that each
contain a single object or vice-versa.
This is very helpful since you don’t have to specify the number of clusters beforehand.
Given a set of N items to be clustered, the steps in hierarchical clustering are:
• Calculate the NxN distance (similarity) matrix, which calculates the distance of each data point from the other.
• Each item is first assigned to its own cluster, i.e. N clusters are formed.
• The clusters which are closest to each other are merged to form a single cluster.
• The same step of computing the distance and merging the closest clusters is repeated till all the points become part of a single cluster
Thus, what you have at the end is the dendrogram, which shows you which data points group together in which cluster at what distance. You will learn more about interpreting the dendrogram in the next
Look at the image given below and answer the question that follows. | {"url":"https://www.internetknowledgehub.com/hierarchical-clustering-algorithm/","timestamp":"2024-11-08T04:17:14Z","content_type":"text/html","content_length":"79176","record_id":"<urn:uuid:17409b42-3a3c-4c8b-a45a-562f33cbe332>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00358.warc.gz"} |
Cryptographers Discover a New Foundation for Quantum Secrecy | Quanta Magazine
Eliot Wyatt for Quanta Magazine
Say you want to send a private message, cast a secret vote or sign a document securely. If you do any of these tasks on a computer, you’re relying on encryption to keep your data safe. That
encryption needs to withstand attacks from codebreakers with their own computers, so modern encryption methods rely on assumptions about what mathematical problems are hard for computers to solve.
But as cryptographers laid the mathematical foundations for this approach to information security in the 1980s, a few researchers discovered that computational hardness wasn’t the only way to
safeguard secrets. Quantum theory, originally developed to understand the physics of atoms, turned out to have deep connections to information and cryptography. Researchers found ways to base the
security of a few specific cryptographic tasks directly on the laws of physics. But these tasks were strange outliers — for all others, there seemed to be no alternative to the classical
computational approach.
By the end of the millennium, quantum cryptography researchers thought that was the end of the story. But in just the past few years, the field has undergone another seismic shift.
“There’s been this rearrangement of what we believe is possible with quantum cryptography,” said Henry Yuen, a quantum information theorist at Columbia University.
In a string of recent papers, researchers have shown that most cryptographic tasks could still be accomplished securely even in hypothetical worlds where practically all computation is easy. All that
matters is the difficulty of a special computational problem about quantum theory itself.
“The assumptions you need can be way, way, way weaker,” said Fermi Ma, a quantum cryptographer at the Simons Institute for the Theory of Computing in Berkeley, California. “This is giving us new
insights into computational hardness itself.”
This Message Will Self-Destruct
The story begins in the late 1960s, when a physics graduate student named Stephen Wiesner started thinking about the destructive nature of measurement in quantum theory. Measure any system governed
by the rules of quantum physics, and you’ll alter the quantum state that mathematically describes its configuration. This quantum measurement disturbance was a hindrance for most physicists. Wiesner,
who took an unorthodox information-centric view of quantum theory, wondered whether it could be made useful. Perhaps it could serve as a form of built-in tamper protection for sensitive data.
But Wiesner’s ideas were too far ahead of their time, and he left academia after graduate school. Fortunately, he’d discussed his ideas with his friend and fellow physicist Charles Bennett, who
unsuccessfully tried to interest others in the subject for a decade. Finally, in 1979, Bennett met the computer scientist Gilles Brassard while swimming off the coast of Puerto Rico during a
conference. Together, they wrote a groundbreaking paper describing a new approach to an important cryptographic task. Their protocol was based on quantum measurement disturbance, and needed no
assumptions about the difficulty of any computational problems.
“The very nature of quantum information seems somewhat cryptographic,” Ma said.
BBVA Foundation; Lëa-Kim Châteauneuf
Bennett and Brassard’s breakthrough made researchers optimistic that similar quantum tricks could yield perfect security for other cryptographic tasks. Researchers focused mainly on a task called bit
commitment, which is useful on its own and is also a key component of most advanced cryptographic protocols.
To understand the basic idea behind bit commitment, imagine a two-player game in which you must make a secret decision that later gets revealed. One way to do this is to write the decision down on a
slip of paper and put it in a sealed envelope. That way, you can’t change your decision later on, and your opponent can’t prematurely peek at the result.
Now imagine you’re playing the same game online. To make cheating impossible, you need to seal the decision in a sort of digital envelope that neither player can open alone. That’s where cryptography
comes in. In 1981, the pioneering computer scientist Manuel Blum constructed the first bit commitment protocol — a way to build effectively unhackable envelopes out of hard computational problems.
But how hard is hard? Researchers in the field of computational complexity theory study many different kinds of hard problems, and not all of them are useful for cryptographers. Bit commitment and
all other cryptographic protocols rely on problems in a class that complexity theorists call “NP,” whose defining feature is that it’s easy to check whether a candidate solution is correct.
Unfortunately, researchers haven’t been able to prove that any NP problems are truly hard. There could still be some clever undiscovered procedure, or algorithm, for solving even the ones that seem
hardest. If there is, then all of classical cryptography would break.
Such considerations animated the search for quantum-based security guarantees. But in 1997, two papers proved that bit commitment schemes could never be completely secure if they were based solely on
the laws of quantum physics. The papers implied that some kind of computational hardness would be necessary for almost all cryptographic tasks.
That was the last word on the theoretical foundations of quantum bit commitments for nearly 25 years. Then, in 2021, a paper by a graduate student named William Kretschmer prompted researchers to
confront a question that nobody had thought to ask. Computational hardness was clearly necessary for bit commitments and most other forms of cryptography, but precisely what kind of hardness?
The answer would turn out to be weirder than anybody had anticipated.
Consulting Oracles
The 2021 paper came out of Kretschmer’s struggle to understand a specific version of a problem that sounds conceptually straightforward: How hard is it to distinguish, or discriminate between, two
quantum states that look superficially similar? Kretschmer, who’s now a postdoctoral researcher at the Simons Institute, was initially interested in the problem for reasons that had nothing to do
with bit commitment.
“Cryptography was not even on my radar,” he said.
The discrimination problem was interesting in part because it wasn’t even clear how to describe it using familiar mathematical language. Complexity theorists traditionally study problems with
different possible inputs represented by strings of bits, or 0s and 1s. For the problem of decomposing large numbers into their prime factors, for instance, this string represents the number to be
Even after researchers started studying how quantum physics might be harnessed for computation, they continued to focus on such “classical-input” problems. Typical quantum algorithms start with an
ordinary classical bit string and then process it using quantum trickery. But in “quantum-input” problems like Kretschmer’s, the inputs aren’t bit strings — they’re quantum states that are as easily
disrupted by computation as by measurement.
“The language with which we’ve described quantum computations in traditional complexity theory can’t directly talk about these problems,” Yuen said.
At first, Kretschmer thought he just needed to translate the problem into more standard language, but he couldn’t figure out how. So he did what complexity theorists often do when they’re desperate:
He turned to an oracle.
In complexity theory, the term “oracle” refers to a hypothetical device that can solve a specific problem instantly. A computer with access to an oracle might be able to solve other problems more
easily by consulting the oracle as an intermediate step in an algorithm. Of course, oracles don’t actually exist in the real world, but studying them helps complexity theorists understand the
relationships between the difficulty levels of different problems.
Kretschmer wondered what kind of oracle could make it easy to distinguish two quantum states — the so-called state-discrimination problem. He decided to start with a special oracle that would boost
the power of normal quantum algorithms, the ones that use quantum tricks to solve problems with classical bit string inputs. Such algorithms can solve some problems too hard for classical ones, like
factoring large numbers, but they’re not omnipotent — many other problems lie beyond their reach.
Access to Kretschmer’s oracle would enable such algorithms to solve certain classical-input problems too hard for real quantum computers. Kretschmer assumed that it would be overkill, but to his
surprise, he proved that the state-discrimination problem could still stump these souped-up quantum algorithms.
“I was really fascinated by William’s paper,” said Luowen Qian, a graduate student studying cryptography at Boston University. “I actually thought it had to be wrong, because it’s so
Xichen Li; Herve Attia/TerrificShot Photography; Heather Coit/Grainger Engineering
Qian, Yuen and others soon proved that if Kretschmer’s state discrimination problem really was hard, secure quantum bit commitment schemes would be possible. That would in turn imply security for a
slew of more advanced cryptographic protocols. The scope of quantum cryptography was far broader than researchers in the 1990s had realized, and it all came down to the hardness of one problem.
How Hard Could It Be?
Kretschmer’s result came with one big caveat — to make the proof work, he had to rely on an unusual oracle that only quantum algorithms could consult. Perhaps a more familiar oracle would make his
state discrimination problem easy, and therefore make secure quantum bit commitments impossible? In 2022, Kretschmer and Qian began working together to see what they could prove about an oracle
everybody could understand: one that could solve any NP problem instantaneously. In a world with such oracles, all classical cryptography would be impossible.
Kretschmer soon realized that the state discrimination problem was mathematically related to a superficially different problem in quantum complexity theory, and he enlisted the help of two experts in
the area, the complexity theorists Avishay Tal and Makrand Sinha. “William was really like a manager, and we were contractors,” Tal said.
Working together, the four researchers quickly proved that Kretschmer’s state discrimination problem could still be intractable even for computers that could call on this NP oracle. That means that
practically all of quantum cryptography could remain secure even if every problem underpinning classical cryptography turned out to be easy. Classical cryptography and quantum cryptography
increasingly seemed like two entirely separate worlds.
The result caught Ma’s attention, and he began to wonder just how far he could push the line of work that Kretschmer had initiated. Could quantum cryptography remain secure even with more outlandish
oracles — ones that could instantly solve computational problems far harder than those in NP? “Problems in NP are not the hardest classical problems one can think about,” said Dakshita Khurana, a
cryptographer at the University of Illinois, Urbana-Champaign. “There’s hardness beyond that.”
Xinyu Tan; Soya Park; Jessica Xu
Ma began brainstorming how best to approach that question, together with Alex Lombardi, a cryptographer at Princeton University, and John Wright, a quantum computing researcher at the University of
California, Berkeley. “It was just so fascinating and so mind-bending that I was immediately hooked,” Wright said.
After thinking about the question for a while and getting nowhere, Ma suggested that they consider the most extreme case possible: an oracle that could instantly solve any computational problem with
classical inputs. That would include all the problems complexity theorists have traditionally studied, even those known to be unsolvable in the real world.
“It sounded a little bit insane to me,” Lombardi said.
But the question turned out to be remarkably fruitful. After working on it for nearly a year, they finally published a striking result. No algorithm allowed to consult that all-powerful oracle
exactly once can distinguish the two quantum states, as is required to undermine a quantum bit commitment scheme.
Limiting algorithms to a single query is less of a constraint than it may sound, because quantum algorithms can effectively ask the oracle to solve multiple problems simultaneously by exploiting the
phenomenon called superposition. Algorithms that can make multiple queries sequentially could be more powerful, because they can use the oracle’s answers to previous queries to decide what to ask
next. Whether these algorithms are similarly limited remains an open question.
Ma, Lombardi and Wright’s paper was also significant for another reason. While the three researchers were wrestling with their problem, they realized it was closely linked to a major open problem
posed 16 years earlier by the complexity theorist Scott Aaronson and the mathematician Greg Kuperberg, about the difficulty of transforming one quantum state into another. The new paper was the first
significant step toward settling that question.
“It’s a very strong result and also a very surprising result,” said Tomoyuki Morimae, a quantum cryptography researcher at the Yukawa Institute for Theoretical Physics in Kyoto.
The string of recent results suggests that the innocuous-sounding problem of distinguishing two quantum states is not just hard, but almost inconceivably hard — far beyond the reach of normal quantum
algorithms and even more exotic ones. That’s good news for cryptography, but it also has broader implications for computational problems whose inputs are quantum states. Traditional complexity theory
seems unable to address these problems. Truly understanding them might require a radically new theoretical framework.
“It feels like there’s something fundamentally different about how quantum information behaves,” said Andrea Coladangelo, a quantum cryptographer at the University of Washington. “It’s bound to have
connections that are also beyond cryptography.”
Editor’s note: Scott Aaronson is a member of Quanta Magazine’s advisory board. His work was mentioned in this article but he played no part in the editorial process. Further, the Simons Institute for
the Theory of Computing was established with a grant from the Simons Foundation, which also funds this editorially independent publication. Simons Foundation funding decisions have no influence on
our coverage. | {"url":"https://www.quantamagazine.org/cryptographers-discover-a-new-foundation-for-quantum-secrecy-20240603/","timestamp":"2024-11-02T21:31:30Z","content_type":"text/html","content_length":"226602","record_id":"<urn:uuid:76cd07cb-0dc8-4a35-8816-a3862fade0e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00036.warc.gz"} |
NCERT Solutions for Class 12 Maths Chapter 6 Exercise 6.3
NCERT Solutions for Class 12 Maths Chapter 6 Exercise 6.3 Application of Derivatives in Hindi and English Medium updated for CBSE 2024-25. Class 12 Maths ex. 6.3 is now modified following the
rationalised syllabus for new session 2024-25.
12th Maths Exercise 6.3 Solutions in Hindi and English Medium
NCERT Solutions for Class 12 Maths Chapter 6 Exercise 6.3
Class: 12 Mathematics
Chapter 6: Exercise 6.3
Topic Name: Application of Derivatives
Sub Topic: Maxima and Minima
Content Type: Text and Videos Format
Medium: Hindi and English Medium
Grade XII Mathematics Exercise 6.3 solutions (Maxima and Minima) in Hindi Medium as well as English Medium for all students using latest NCERT Books. Download CBSE Solutions Apps updated as per the
latest CBSE Syllabus for CBSE and other Boards. The Video solutions related to 6.3 of 12th Maths in Hindi and English Medium also given below free to access or download.
12th Maths Exercise 6.3 Solutions
NCERT Solutions for Class 12 Maths Chapter 6 Exercise 6.3 AOD – Application of Derivatives in English Medium Maxima and Minima. NCERT Solutions are based on latest NCERT Books following the new CBSE
Syllabus. Join the Discussion forum to share your knowledge.
1. An inverted cone has a depth of 10 cm and a base of radius 5 cm. Water is poured into it at the rate of 3/2 c.c. per minute. Find the rate at which the level of water in the cone is rising
when the depth is 4 cm.
2. A swimming pool is to be drained for cleaning. If L represents the number of litres of water in the pool t seconds after the pool has been plugged off to drain and L = 200(10 − t)². How fast
is the water running out at the end of 5 sec. and what is the average rate at which the water flows out during the first 5 seconds?
3. A spherical ball of salt is dissolving in water in such a manner that the rate of decrease of the volume at any instant is proportional to the surface area. Prove that the radius is
decreasing at a constant rate.
4. The length of a rectangle is increasing at the rate of 3.5 cm/sec. and its breadth is decreasing at the rate of 3 cm/sec. Find the rate of change of the area of the rectangle when length is
12 cm and breadth is 8 cm.
5. Find the equation of the tangent to the curve y = x² – 2x + 7 which is (1) Parallel to the line 2x − y + 9 = 0 (2) Perpendicular to the line 5y – 15x = 13.
Class 12 Maths Chapter 6 Exercise 6.3 Solutions in Videos
Important Questions for Practice
1. Find the equation of the normal at a point on the curve x² = 4y, which passes through the point (1, 2). Also find the equation of the corresponding tangent.
2. Find the point on the curve 9y² = x³ where the normal to the curve makes equal intercepts with the axes.
3. If the sum of length of hypotenuse and a side of a right angled triangle is given, show that area of triangle is maximum, when the angle between them is π/3.
4. Show that the cone of the greatest volume which can be inscribed in a given sphere has an altitude equal to 2/3 of the diameter of the sphere.
5. A wire of length 36 m is to be cut into two pieces. One of the pieces is to be made into a square and the other into a circle. What should be the length of the two pieces, so that the
combined area of the square and the circle is minimum?
What are the main concepts that students will study in exercise 6.3 of NCERT Book 12th Maths?
In exercise 6.3 of class 12th Maths, students will use the concept of derivatives to calculate the maximum or minimum values of various functions. Students will find the ‘turning points’ of the graph
of a function and thus find points at which the graph reaches its highest (or lowest) locally. The knowledge of such points is very important in sketching the graph of a given function. Further, we
will find the absolute maximum and absolute minimum of a function that is necessary for the solution of many applied problems.
Are there any theorems that students can use to solve questions of exercise 6.3 of class 12th Maths?
Five theorems (Theorems 2, 3, 4, 5, 6) are there that students can use to solve questions of exercise 6.3 of class 12th Maths. These Theorems are easy and interesting. There is no need to do the
proofs of these theorems. Students only have to study the statement of the above mentioned theorems. Without these theorems, students cannot solve the questions of exercise 6.3 of grade 12th Maths.
How long it takes to prepare exercise 6.3 Class 12th Maths?
Exercise 6.3 of class 12th Maths has 45 problems (16 examples and 29 questions). Students need a maximum of five days to prepare exercise 6.3 (chapter 6) of class 12th Maths if they give 2-3 hours
per day to this exercise. This time depends on many factors like student’s working speed, efficiency, capability, etc.
Is exercise 6.3 of NCERT Class 12th Maths important for the term Exam?
Yes, exercise 6.3 of grade 12th Maths is most important from the exam point of view. Every year questions come from exercise 6.3 in the exams. All the questions of this exercise are significant and
can come in the exams. But the most important questions of this exercise are examples 28, 30, 32, 33, 35, 37, 38, 40, 41 and questions 1, 2, 5, 7, 9, 11, 12, 14, 15, 18, 19, 20, 22, 23, 24, 25, 26,
28. 4 to 6 marks question can come from this exercise in the board exam.
Is exercise 6.3 of 12th Maths difficult to understand?
Exercise 6.3 of class 12th Maths is neither very simple and not very hard to solve and understand. It lies in the mid of simple and hard because some examples and questions of this exercise are easy,
and some are complex. However, the difficulty level of any topic/problem varies from student to student. So, exercise 6.3 of class 12th Maths is easy, or tough depends on students also. Some students
find it complex, some find it simple, and some find it in the middle of easy and difficult.
Last Edited: November 13, 2023 | {"url":"https://www.tiwariacademy.com/ncert-solutions/class-12/maths/chapter-6/exercise-6-3/","timestamp":"2024-11-11T08:12:19Z","content_type":"text/html","content_length":"277498","record_id":"<urn:uuid:4f7ef53a-95e7-403f-96ba-6a9bf6e87782>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00323.warc.gz"} |
Why 4=10 has exactly 500 levels
from Guide to Hacking on Mar 16, 2024
Why 4=10 has exactly 500 levels
A extremely simple math game called "4=10" caught my attention recently. Here's the premise: You are given four single-digit numbers, and your goal is to make the number 10 — by using addition,
subtraction, division, multiplication, and a pair of parentheses at most once.
However, one aspect of the game stuck out to me: There are only 500 "levels" in the game. Why is that? There are theoretically 10,000 possible sets of four numbers to pull from. What's stopping the
creator from adding more?
One obvious explanation for the meager number of levels is that many sets of four numbers can't ever equal to 10, given these rules. Let's first see how many sets are possible. In other words, we're
looking for the possible number of solutions.
I'll programmatically search all possible options, but a pure brute force solution searching over 1.2 million^1 options would take prohibitively long to run. We can cut down the number of options
drastically just with a few common sense simplifications.
Simplification #1. Get rid of number ordering. There are four numbers, with 10 possible values for each number. In theory, there are 10,000 total permutations, but this counts duplicates — i.e., any
ordering of the numbers "9, 9, 9, 9" is indistinguishable from any other.
To address this, we can consider all possible combinations of four numbers in increasing order. This is simple for us to count with a few for loops.
def get_possible_four_number_combinations():
combinations = set()
for i in range(10):
for j in range(10):
for k in range(10):
for l in range(10):
combinations.add(tuple(sorted((i, j, k, l))))
return combinations
def get_possible_number_combinations(maximum=10, count=4):
if count == 0:
return {tuple()}
combinations = set()
This gives us 715 total combinations. We can also calculate this in close form: Picking four numbers in increasing order is the same as distributing 9 "+1s" between 5 slots^2.
• Any "+1s" we place in the first slot determines the first number.
• Any "+1s" we place in the second slot determines how much to add to the first number, to get the second number.
• Any "+1s" we place in the last slot are unused.
• Say "+1s" are denoted with *, and slots are divided by |. Then, the four numbers "1346" become *|**|*|**|***.
In other words, this is also known as a "Stars and Bars" setup. There are 9 stars and 4 bars, so we can use the Stars and Bars expression for $s$ stars and $b$ bars to get the same result 715.
$${s+b \choose b} = {9 + 4 \choose 4} = \frac{13!}{(13-4)!4!} = 715$$
In short, we now only have 715 possible combinations of numbers.
Simplification #2. Ignore ineffective parentheses. For example, say we have the expression 4*3+2-0. Parentheses around (4*3) are useless, since the order of operations already ensures that
multiplication happens first.
As a result, instead of counting possible parentheses using positions, we can count using the operations selected. Parentheses including just top-priority operation or just the top two priority
operations are ineffective. Parentheses including just the second operation, just the third operation, second and third, or first and third. This makes at most four possible parentheses placements.
Combining both simplifications, we now have 715 possible number combinations x 24 possible operation combinations x 4 parentheses placements = 183,040 options. We've now reduced the number of
possible expressions to search, by an order of magnitude — from 1.2 million to 68,640.
We can now search the above set of options exhaustively, to see how many of those expressions actually total 10. We generate all of those possible expressions and evaluate them naively.
return eval(equation) == 10
return False
def get_valid_equations(maximum=10, count=4, parens=True):
equations = []
combinations = get_possible_number_combinations(maximum=maximum, count=count)
for combination in combinations:
for equation in get_possible_equations(combination, parens=parens):
if is_equation_equal_to_ten(equation):
print(f"Possible {count}-number combinations: {len(combinations)}")
print(f"Possible equations: {len(equations)}")
return equations
import string
def get_numbers_from_equations(equation):
return tuple(item for item in equation if item in string.digits)
Running the above script then gives us a total of 597 total equations. In other words, there are 597 possible solutions. This is quite close to the 500 levels we observed in the game! With that said,
there's just one more hitch: The number above includes all possible equations, which is the number of possible solutions; we want the number of questions, where each question is just a set of four
numbers. This begs the question: How many sets of four numbers — excluding operations — can equal to 10?
Start by updating the script above to count the number of four-number sets that have at least one equation that evaluates to 10.
import string
def get_numbers_from_equations(equation):
return tuple(item for item in equation if item in string.digits)
def get_valid_levels(maximum=10, count=4):
remaining = set()
for equation in get_valid_equations(maximum=maximum, count=count):
print(f"Possible {count}-number sets: {len(remaining)}")
return remaining
def get_operations_from_equations(equation):
return tuple(sorted(item for item in equation if item in OPS))
This gives us 300 total, unique four-number sets. This is a little odd: How does 4=10 have 500 levels then? There simply aren't enough four-number combinations. An unsatisfying explanation would be
that some sets are repeated.
However, there's a clue: Some levels actually restrict the operations you can use. Let's see how many sets of four numbers and operations are, combined, unique.
def get_operations_from_equations(equation):
return tuple(sorted(item for item in equation if item in OPS))
def get_valid_levels(maximum=10, count=4, parens=True):
remaining = set()
for equation in get_valid_equations(maximum=maximum, count=count, parens=parens):
remaining.add(get_numbers_from_equations(equation) + get_operations_from_equations(equation))
print(f"Possible sets of numbers and operations: {len(remaining)}")
return remaining
import time
for count in range(4, 7):
start = time.time()
get_valid_levels(maximum=10, count=count, parens=False)
This gives us 507 total unique combinations of four numbers and operations. And that's it! There are 507 possible levels. This is awful close to the actual number of levels in 4=10 — namely, 500.
Odds are, the author wanted a nice even number.
To start, consider increasing the difficulty of the puzzles.
1. Restrict the set of options: We could restrict each operation to exactly one usage. However, after filtering all of our existing equations, we find there is only one possible solution: 3*3+7/7.
2. Find all possibilities: We could alternatively make this more difficult. Ask the player to find all the possible equations for a give combination of numbers. Unfortunately, it looks like only 41
sets of numbers have 4 or more possible equations attached.
One simple way to increase the number of levels is to add more numbers. Instead of using 4 numbers, use 5 or 6. Let's see how the number of possible levels scales with the number of numbers. Before
running the entire pipeline however, we can quickly compute the number of possible number combinations to get a rough idea of scaling:
Maximum Count Number Combinations
This grows decently quickly, at approximately 2x per count, even without factoring in the number of parentheses. Let's see how search time scales with count.
Maximum Count Number of Solutions Number of Levels Search Time (s)
10 4 597 507 3.7
10 5 9368 6438 82.5
Scaling is pretty poor, so let's run a few more counts but without factoring in the parentheses.
Maximum Count Number of Solutions Number of Levels Search Time (s)
10 4 576 420 0.3
10 5 4022 2573 5.4
10 6 38602 17032 45.9
This is certainly a large expansion of the game, from 500 levels to at least 17,000. Perhaps a future DLC for 4=10 could include bonus levels of this type.
To summarize, we arrived at this number by taking the following steps:
1. There are 3.2 million total permutations of 4 numbers, 4 arithmetic operations, and parentheses.
2. After accounting for permutations, reduce 10,000 permutations of four single-digit numbers to just 715 unique combinations. This reduces the total to 85,800 options.
3. After accounting for ineffective parentheses placement, we reduce from 5 to just 4 possible parentheses placements, reducing the total to just 68,640 options.
4. Evaluating all of these options naively, we find 597 possible equations that evaluate to 10. These are all the possible solutions.
5. Of those valid equations, there are 507 unique sets of four numbers and operations. These are all the possible levels that 4=10 could introduce, by additionally limiting the set of possible
This is extremely close to 4=10's number of levels — namely, 500. So, we declare victory. We also find that increasing the number of numbers grows the number of possible levels by an order of
magnitude each time. More possibilities for more levels.
1. 10,000 digits x 24 possible operations x 5 possible parentheses placements = 1.2 million options. For four numbers, there are 5 possible parenthetical statements: (AB)CD, (ABC)D, A(BC)D, A(BCD),
AB(CD). ↩
2. An alternative explanation from SO would be to see this as "voting" for one of 10 options for each digit — since there are 4 options, there are 4 votes. ↩
Want more tips? Drop your email, and I'll keep you in the loop. | {"url":"http://alvinwan.com/why-410-has-exactly-500-levels/","timestamp":"2024-11-05T19:03:56Z","content_type":"text/html","content_length":"25226","record_id":"<urn:uuid:99751f8f-ecd6-4d68-b8f3-571f92439217>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00526.warc.gz"} |
Computer System - Cp.2 - 1 Bit and Byte in C
We will go over data repersentation of computer systems this chapter. That is, when we need to think in a computer way, you should know how computer think at first (in binary).
Chapter. 2 Repersentation in C
1. Base
What is the base of a number?
Generally, the number that we use in daily life like 50 dollars, 100 pounds, 1,000,000 people are all base 10, which means that these number are all decimal.
And usually, there are different bases we use in studying system, like binary number (base 2), octal number (base 8), and hexadecimal number (base 16).
We can see them being used in computer world:
When the numebr is followed by a character "b", that means it is a binary number:
0101b (BIN) = 0 * 2^3 + 1 * 2^2 + 0 * 2^1 + 1 * 2^0 = 5 (DEX)
Number like '0X1E', is HEX number:
0X is the prefix of the hexadecimal number. We have:
0X1E (HEX) = 1 * 16^1 + 14 * 16^0 = 30 (DEX)
2. Bit and Byte
1 Byte = 8 Bits
And the important repersentations of every data type in C:
Specially, one another data type in C is called POINTER, which is also significant:
• 4 bytes in 32-bit-system
• 8 bytes in 64-bit-system
The knowledge that we are reviewing, is usually based on 32-bit systems
3. Conversion between different bases
There is the same way to do with other bases number except for BIN.
• BIN to DEC: mentioned above
• X base number to Y base number:
1. X base to DEC
2. And then DEC to Y base
4. Addition and multiplication of binary
5. One's Complement
When comes to 1100 1001 (which is equal to 201 in decimal):
THe 1's complement of 1100 1001 is 0011 0110, which turning 1 to 0, and turning 0 to 1.
6. Two's Complement
Two's complement is used to do the substraction or repersenting negative number in binary (Of course, adding a negative is substracting a positive number).
How to repersent -5 in binary, when we know 5 is 0101 in binary in a 8-bit machine?
1. Get the 0101's 1's complement in 8-bit machine, having 1111 1010.
2. Add one, having 1111 1011. Then, 1111 1011 is the number repersenting -5 in 8-bit machine.
That is:
2's complement = 1's complement + 1
7. Intro of Overflow and Underflow
They happen when sum of 2 positive (or negative) number but get the negative (positive) number in machine, due to the fact that when machine has no more bit to extend to repersent this number.
Overflow: Sum of 2 positive but get the negative
Underflow: Sum of 2 negative but get the positive
8. Extension and Truncation between high-bit and low-bit
This part would be the most important part of bit and byte chapter. I would like to go over this part in next post, because I think it would cost me few hours to design how to explain it in a
comfortable way.
Hope you like my post! You red heart or subsrcibtion are the energy of me! Thank you!
Top comments (0)
For further actions, you may consider blocking this person and/or reporting abuse | {"url":"https://dev.to/divide_r_conquer/computer-systems-principles-cp2-nfo","timestamp":"2024-11-09T07:00:38Z","content_type":"text/html","content_length":"72516","record_id":"<urn:uuid:f95d2814-9a24-46f9-a5db-6cce5c465e5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00460.warc.gz"} |
The impact of data errors on uncertainty analysis
Since there is much uncertainty in reservoir modelling, it makes sense to start with coarse-scale models, so that a wide range of scenarios can be assessed rapidly, before focussing on fewer, more
detailed models. The simplest model for reservoir analysis is the material balance equation, and this forms a good starting point for uncertainty appraisal. Although there are drawbacks with this
method, such as the assumption of pressure equilibration throughout the reservoir (or compartment), there is the advantage that a minimum number of a priori assumptions are made regarding the
reservoir volume and drive mechanism. As the first stage in a top-down reservoir evaluation procedure, we have applied stochastic history matching and uncertainty analysis to a material balance
problem, using a synthetic reservoir model which had aquifer influx and high rock compressibility. A truth case simulation was run and noise was added to the resulting fluid production and pressure
values to generate synthetic data sets. The parameters adjusted were the volume of oil (STOIIP), the initial aquifer size and the rock compressibility. A thorough analysis of the errors was
performed, including propagation of errors in the pressure data to determine their effect on the modelled production. The Neighbourhood Approximation (NA) method was used to home in on models with
low misfit. Then the posterior probability distributions and their correlations were assessed using a Bayesian approach. Results showed that the shape of the posterior probability distributions
(PPDs) depended on the assumed level of the noise. In particular, they indicated that, if the amount of noise is not assessed correctly, the position of the maximum likelihood value may be estimated
Conference 10th European Conference on the Mathematics of Oil Recovery, ECMOR 2006
Country/Territory Netherlands
City Amsterdam
Period 4/09/06 → 7/09/06
Dive into the research topics of 'The impact of data errors on uncertainty analysis'. Together they form a unique fingerprint. | {"url":"https://researchportalplus.anu.edu.au/en/publications/the-impact-of-data-errors-on-uncertainty-analysis","timestamp":"2024-11-13T15:35:01Z","content_type":"text/html","content_length":"54517","record_id":"<urn:uuid:d02c8d0b-c1cb-46ae-8dd9-d23bee7a3b30>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00153.warc.gz"} |
Direct and adaptive quantification schemes for extreme event statistics in complex dynamical systems
Mohamad, Mustafa A
Other Contributors
Massachusetts Institute of Technology. Department of Mechanical Engineering.
Themistoklis P. Sapsis.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
Quantifying extreme events is a central issue for many technological processes and natural phenomena. As extreme events, we consider transient responses that push the system away from its statistical
steady state and that correspond to large excursions. Complex systems exhibiting extreme events include dynamical systems found in nature, such as the occurrence of anomalous weather and climate
events, turbulence, formation of freak waves in the ocean and optics, and dynamical systems in engineering applications, including mechanical components under environmental loads, ship rolling and
capsizing, critical events in power grids, as well as chemical reactions and conformational changes in molecules. It has been recognized that extreme events occur more frequently than Gaussian
statistics suggest and thus occur often enough that they have practical consequences, and sometimes catastrophic outcomes, that are important to understand and predict. A hallmark characteristic of
extreme events in complex dynamical systems is non-Gaussian statistics (e.g. heavy-tails) in the probability density function (pdf) describing the response of their observables. For engineers and
applied mathematicians, a central issue is how to efficiently and accurately describe this non-Gaussian behavior. For random dynamical systems with inherently nonlinear dynamics, expressed through
intermittent events, nonlinear energy transfers, broad energy spectra, and large intrinsic dimensionality, it is largely the case that we are limited to (direct) Monte-Carlo sampling, which is too
expensive to apply in real-world applications. To address these challenges, we present both direct and adaptive (sampling based) strategies designed to quantify the probabilistic aspects of extreme
events in complex dynamical systems, effectively and efficiently. Specifically, we first develop a direct quantification framework that involves a probabilistic decomposition that separately
considers intermittent, extreme events from the background stochastic attractor of the dynamical system. This decomposition requires knowledge of the dynamical mechanisms that are responsible for
extreme events and partitions the phase space accordingly. We then apply different uncertainty quantification schemes to the two decomposed dynamical regimes: the background attractor and the
intermittent, extreme-event component. The background component, describing the 'core' of the pdf, although potentially very high-dimensional, can be efficiently described by uncertainty
quantification schemes that resolve low-order statistics. On the other hand, the intermittent component, related to the tails, can be described in terms of a low-dimensional representation by a small
number of modes through a reduced order model of the extreme events. The probabilistic information from these two regimes is then synthesized according to a total probability law argument, to
effectively approximate the heavy-tailed, non-Gaussian probability distribution function for quantities of interest. The method is demonstrated through numerous applications and examples, including
the analytical and semi-analytical quantification of the heavy-tailed statistics in mechanical systems under random impulsive excitations (modeling slamming events in high speed craft motion),
oscillators undergoing transient parametric resonances and instabilities (modeling ship rolling in irregular seas and beam bending), and extreme events in nonlinear Schrodinger based equations
(modeling rogue waves in the deep ocean). The proposed algorithm is shown to accurately describe tail statistics in all of these examples and is demonstrated to be many orders of magnitude faster
than direct Monte-Carlo simulations. The second part of this thesis involves the development of adaptive, sampling based strategies that aim to accurately estimate the probability distribution and
extreme response statistics of a scalar observable, or quantity of interest, through a minimum number of experiments (numerical simulations). These schemes do not require specialized knowledge of the
dynamics, nor understanding of the mechanism that cause or trigger extreme responses. For numerous complex systems it may not be possible or very challenging to analyze and quantify conditions that
lead to extreme responses or even to obtain an accurate description of the dynamics of all the processes that are significant. To address this important class of problems, we develop a sequential
algorithm that provides the next-best design point (set of experimental parameters) that leads to the largest reduction in the error of the probability density function estimate for the scalar
quantity of interest when the adaptively predicted design point is evaluated. The proposed algorithm utilizes Gaussian process regression to infer dynamical properties of the quantity of interest,
which is then used to estimate the desired pdf along with uncertainty bounds. We iteratively determine new design points through an optimization procedure that finds the optimal point in parameter
space that maximally reduces uncertainty between the estimated bounds of the posterior pdf estimate of the observable. We provide theorems that guarantee convergence of the algorithm and analyze its
asymptotic behavior. The adaptive sampling method is illustrated to an example in ocean engineering. We apply the algorithm to estimate the non-Gaussian statistics describing the loads on an offshore
platform in irregular seas. The response of the platform is quantified through three-dimensional smoothed particle hydrodynamics simulations. Because of the extreme computational cost of these
numerical models, quantification of the extreme event statistics for such systems has been a formidable challenge. We demonstrate that the adaptive algorithm accurately quantifies the extreme event
statistics of the loads on the structure through a small number of numerical experiments, showcasing that the proposed algorithm can realistically account for extreme events in the design and
optimization processes for large-scale engineering systems.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 171-183).
Date issued
Massachusetts Institute of Technology. Department of Mechanical Engineering
Massachusetts Institute of Technology
Mechanical Engineering. | {"url":"https://dspace.mit.edu/handle/1721.1/113542","timestamp":"2024-11-14T08:35:34Z","content_type":"text/html","content_length":"31582","record_id":"<urn:uuid:83bad54a-8053-4299-92a4-4f963f44c519>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00435.warc.gz"} |
We were discussing the various basic concepts such as Euler’s Equation of motion, Bernoulli’s equation from Euler’s equation, derivation of discharge through venturimeter and derivation of
discharge through Orifice meter, in the subject of fluid mechanics, in our recent posts.Â
We had already seen the application of Bernoulli’s equation in the working principle of Venturimeter and orifice meter. Now we will go ahead to find out the other practical applications of
Bernoulli’s equation, in the subject of fluid mechanics, with the help of this post.
Today we will see here the basic concept of Pitot tube and also we will secure here the expression of velocity of flow at any point in the pipe or channel with the help of this post.
Pitot TubeÂ
Pitot tube is basically defined as a device which is used for measuring the velocity of flow at any point in the pipe or a channel.
Working Principle of Pitot tube
Pitot tube works on the principle of Bernoulli’s equation. If the velocity of flow at a point decreases, pressure will be increased at that point due to the conversion of kinetic energy in to
pressure energy.
Pitot tube will be made of a glass tube bent at right angle as displayed here in following figure. Lower end of Pitot tube will be bent at right angle and will be directed in upstream direction as
displayed here.
Due to conversion of kinetic energy in to pressure energy, liquid will rise up in the glass rube. Rise of liquid level will provide the velocity of flow at any point in the pipe or a channel.
Derivation of veloctiy of flow through pitot tube
Let us consider one pitot tube as displayed here in following figure. Let us say that water is flowing through the horizontal pipe.
P[1]Â = Pressure at section 1 (Inlet section)
v[1]Â = Velocity of fluid at section 1 (Inlet section)
A[1]Â = Area of pipe at section 1 (Inlet section)
P[2]Â = Pressure at section 2
v[2]Â = Velocity of fluid at section 2
A[2]Â = Area at section 2
H = Depth of tube in the liquid
h = Rise of kiquid in the tube above the free surface.
Let us recall the Bernoulli’s equation and applying at section 1 and section 2.
According to Bernoulli’s theorem.....
In an incompressible, ideal fluid when the flow is steady and continuous, the sum of pressure energy, kinetic energy and potential energy will be constant along a stream line.
Assumptions made for deriving the expression for velocity of flow at any point in the pipe or channel is as mentioned here.
1. Fluid is ideal, i.e. inviscid and incompressible.
2. Fluid flow is steady and continuous
3. Fluid flow is irrotational
4. Frictionless inner surface
Do you have any suggestions? Please write in comment box.
Fluid mechanics, By R. K. Bansal
Image Courtesy: Google
Also read
No comments: | {"url":"https://www.hkdivedi.com/2018/06/pitot-tube-working-and-principle.html","timestamp":"2024-11-13T12:04:40Z","content_type":"application/xhtml+xml","content_length":"292104","record_id":"<urn:uuid:598ea908-2be7-428c-99e1-56c656db1090>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00751.warc.gz"} |
A union B Complement - Formula, Proof, Examples
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A day full of math games & activities. Find one near you.
A union B Complement
A union B complement is a formula in set theory that is equal to the intersection of the complements of the sets A and B. Mathematically, the formula for A union B Complement is given by, (A U B)' =
A' ∩ B' or (A U B)^c = A^c ∩ B^c, where ' or ^c denote the complement of a set. This formula of A union B complement is named after the mathematician De-Morgan as one of De-Morgan's Laws of Union of
Sets. The statement of this law is given as 'The complement of the union of two sets is equal to the intersection of the complements of the two sets.'
Further in this article, we will explore the A union B complement formula in detail with the help of its Venn diagram, and formula. We will also consider a few examples of sets and determine A union
B Complement for a better understanding of the concept.
1. What is A union B Complement?
2. A union B Complement Venn Diagram
3. A union B Complement Formula
4. Proof of A union B Complement
5. FAQs on A union B Complement
What is A union B Complement?
A union B complement is an important De-Morgan's Law of Union and is equal to the intersection of the complement of the set A and the complement of the set B. We have two laws of De-Morgan, namely A
union B complement and A intersection B complement. In this article, we will mainly focus on the A union B complement formula. The formal statement of this law is given as: The complement of the
union of two sets A and B is equal to the intersection of the complements of the two sets A and B.
A union B Complement Venn Diagram
Now that we know that A union B complement is equal to the intersection of A' and B', let us now understand its concept visually with the help of A union B complement Venn diagram. The diagram given
below shows the universal set U with two sets A and B in it. Now, the shaded portion in blue indicates the region covered by the A union B complement. The shaded portion in blue consists of elements
of the universal set U which does not include any element of the sets A and B. Hence, we can say that the blue region highlights the elements of the intersection of the complements of the two sets A
and B.
A union B Complement Formula
Next, we will determine the formula for A union B complement. As we know that A union B Complement consists of those elements of the universal set U which are not in A U B, therefore, the required
formula can be written in any of the following forms:
• (A U B)' = A' ∩ B'
• (A U B)^c = A^c ∩ B^c
, where ' or ^c denote the complement of a set.
Proof of A union B Complement
Now we know that A union B complement is equal to the intersection of the complement of the set A and the complement of the set B, that is, (A U B)' = A' ∩ B'. Therefore, we will now prove this
formula of A union B Complement by showing (A U B)' and A' ∩ B' as subsets of each other. For this, we will consider an arbitrary element in each of these sets.
Proof: Assume a to be an arbitrary element that belongs to (A U B)'
⇒ a ∈ (A U B)'
⇒ a ∉ (A U B) [Because an element belonging to the complement of a set cannot belong to the set]
⇒ a ∉ A and a ∉ B
⇒ a ∈ A' and a ∈ B' [Using complement of a set definition]
⇒ a ∈ A' ∩ B'
⇒ (A U B)'' ⊆ A' ∩ B' --- (1)
Next, let us assume b to be an arbitrary element in A' ∩ B'
⇒ b ∈ A' ∩ B'
⇒ b ∈ A' and b ∈ B'
⇒ b ∉ A and b ∉ B [Because an element belonging to the complement of a set cannot belong to the set]
⇒ b ∉ A U B
⇒ b ∈ (A U B)'
⇒ A' ∩ B' ⊆ (A U B)' --- (2)
From (1) and (2), we get (A U B)' = A' ∩ B'. Hence, we can say that A Union B Complement is equal to the intersection of the complements of the two sets A and B.
Important Notes on A Union B Complement
• A union B complement is named after the mathematician De-Morgan as one of De-Morgan's Laws of Union of Sets.
• A Union B Complement is equal to the intersection of the complements of the two sets A and B.
• The formula for A union B Complement is given by, (A U B)' = A' ∩ B' or (A U B)^c = A^c ∩ B^c
☛ Related Topics:
A union B Complement Examples
1. Example 1: Verify the A union B complement formula (A ∪ B)' = A' ∩ B' for the sets A = {10, 11, 12, 13, 15}, B = {10, 12, 14} and U = {10, 11, 12, 13, 14, 15, 16, 18}
Solution: We need to prove (A ∪ B)' = A' ∩ B'. For this,
A ∪ B = {10, 11, 12, 13, 14, 15}
(A ∪ B)' = U - (A ∪ B)
= {16, 18} --- (1)
A' = U - A
= {14, 16, 18}
B' = U - B
= {11, 13, 15, 16, 18}
A' ∩ B' = {16, 18} --- (2)
From (1), (2), we get (A ∪ B)' = A' ∩ B'
Answer: Hence, we have verified the A union B complement formula (A ∪ B)' = A' ∩ B'
2. Example 2: Determine the elements of A union B complement if U = {1, 2, 3, 4, 5, 6, 7}, A = {2, 4, 6}, and B = {1, 3, 5}
Solution: We have A = {2, 4, 6}, and B = {1, 3, 5}, then A U B is given by,
A U B = {1, 2, 3, 4, 5, 6}, then A union B complement is given by,
(A ∪ B)' = U - (A ∪ B)
= {1, 2, 3, 4, 5, 6, 7} - {1, 2, 3, 4, 5, 6}
= {7}
Answer: (A ∪ B)' = {7}
View Answer >
Great learning in high school using simple cues
Indulging in rote learning, you are likely to forget concepts. With Cuemath, you will learn visually and be surprised by the outcomes.
A union B Complement Questions
Check Answer >
FAQs on A union B Complement
What is A union B Complement in Math?
A union B complement is a formula in math that is equal to the intersection of the complements of the sets A and B. Mathematically, the formula for A union B Complement is given by, (A U B)' = A' ∩
What is the Formula of A union B Complement?
The formula for A union B Complement can be written in two ways:
• (A U B)' = A' ∩ B'
• (A U B)^c = A^c ∩ B^c
, where ' or ^c denote the complement of a set.
How to Find A union B Complement?
A union B Complement can be evaluated using its formula. As we know that the complement of the union of two sets is equal to the intersection of the complements of the two sets, therefore A union B
Complement is equal to the intersection of the complements of sets A and B.
Why is A union B Complement called the De-Morgan's Law of Union?
We have two main laws of De-Morgan, namely De-Morgan's Law of Union and De-Morgan's Law of Intersection. De-Morgan's Law of Union is nothing but A union B complement formula given by (A U B)' = A' ∩
B' and De-Morgan's Law of Intersection is A intersection B complement formula which is given by, (A ∩ B)' = A' U B'.
How to Do You Prove A union B Complement Formula?
We can prove the A union B Complement Formula (A U B)' = A' ∩ B' by proving both the sets on each side of the equality as subsets of each other. We can do this by considering an arbitrary element in
each set and showing that it belongs to the other set.
Math worksheets and
visual curriculum | {"url":"https://www.cuemath.com/algebra/a-union-b-complement/","timestamp":"2024-11-06T08:31:08Z","content_type":"text/html","content_length":"223702","record_id":"<urn:uuid:0e97656d-ea16-4c4c-9f76-a02f9b2105a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00593.warc.gz"} |
User Guide
The in-depth guide on how to use Anaqsim
Anaqsim is an analytic element software for simulating groundwater flow. It uses subdomains as described in Fitts (2010), which gives it strong capabilities with respect to heterogeneity and
anisotropy. It also employs high-order line elements, spatially-variable area sinks, and finite-difference time steps to allow multi-level aquifer systems and wide-ranging transient flow
simulations. Anaqsim is a product of Practical Groundwater and was developed by Dr Charlie Fitts. It is coded in C#.
Anaqsim Versions
There are two versions of Anaqsim: 'Student' and 'Licensed'. The installed software is the same for both, but the capabilities of the software vary depending on whether the software is Licensed or
By default the download provides you with an un-licensed version that will not solve any models.
The student version is not for commercial use, and incorporates a watermark. It can be used to run all of the tutorial models on the website.
The student version allows modeling of a wide array of situations, both steady and transient. When solving, its limits are as follows:
• Allows multi-level (3D) simulations with up to three aquifer levels.
• It allows steady state and limited transient simulations with up to two different time periods and up to five time steps in each period.
• The number of equations in the model's system of equations is limited to 2000. This allows fairly complex models, but users needing greater complexity, or for commercial use should purchase a
Activating a commercial license (either the free trial, monthly or annual subscriptions) allows you to solve larger, more complex models with these capabilities:
● Allows multi-level (3D) simulations with up to 15 aquifer levels.
● It allows steady state and fully transient simulations with any number of time periods and up to 20 time steps in each period.
● The number of equations in the model's system of equations is not limited, but somewhere around 40,000 to 50,000 equations is a practical upper limit based on computer memory and the time
required to solve.
Anaqsim is licensed through either a monthly or annual recurring subscription that can be cancelled at any time. Please refer to licensing section for further details, and see www.anaqsim.com for
license and pricing details.
It is possible to plot and analyse results from a model that has been solved on a licensed version of Anaqsim on an unlicensed version. See installation-manual for further details.
System Requirements
• 64-bit Windows operating system.
• At least 500 MB of available disk space.
• At least 1 GB memory. More memory is needed for larger problems and most users have 8+ Gb.
• Microsoft .NET v 4.8 or higher.
Installing Anaqsim
Please see the Installation documentation for details on how to install Anaqsim, and how to activate and de-activate licenses. Anaqsim licenses can be purchased as either a monthly or annually
recurring subscription via the website.
Release Version and History
The current Anaqsim release number and a history of Anaqsim releases, including lists of changes from one release to another, are posted on the Anaqsim website. The release numbering scheme gives
the year followed by the release number in that year. New releases are posted to the website and may be downloaded, installed, and run by anyone with a valid license. See the next topic about how
to update to a newer release.
You can see the release number of your installed Anaqsim by selecting About Anaqsim on the main menu.
Updating to a Newer Release
As long as you are within the term of your Anaqsim license, you may upgrade Anaqsim to the current release. To update, first check that there is a newer release available from www.anaqsim.com/
version-history. If there is and you want to update:
1. Download the newer release installation file from the www.anaqsim.com/version-history to your computer,
2. Uninstall the older release from your computer using the Windows Control Panel -> Add/Remove Programs dialog,
3. Install the newer release as described in the Installing Anaqsim section.
Updating will not affect your license, as the Anaqsim license file is not removed by the uninstall operation.
Help and Documentation
Help is available in many forms: this User Guide, tutorials, and example models available at the website.
The Anaqsim User Guide can be accessed by selecting Help from the main menu in Anaqsim.
Three tutorials that walk the user through construction of Anaqsim models of increasing complexity are available on www.anaqsim.com. A quick way to get familiar with Anaqsim is to work through these
tutorials in sequence, building your own Anaqsim models like the ones shown in the tutorials. The tutorials lead you through creating these models step-by-step. The tutorials are pdf files with
outlines at the first page that contain clickable links, so you can easily jump forward and backward to review, if needed. The completed tutorial models are included in the Documentation directory
of the Anaqsim software installation:
• tutor1.anaq is the input file for a simple one-level steady model with irregular boundary conditions, a recharge basin, a river, and a well, and anisotropic K.
• tutor2.anaq is the input file for a steady model with heterogeneity and a 3D area with multiple levels.
• tutor3.anaq is the input file for a transient dewatering simulation model within a 3D area.
All three tutorials may be done with either the student or licensed Anaqsim.
Contact and Support
Support is available for licensed Anaqsim users to make sure that Anaqsim is functioning properly on their computer. Please first check the Anaqsim User Guide (Help) and the tutorials on the website
to see if your question may be answered there.
The contact information for support is support @anaqsim.com
Anaqsim Modeling Concepts
This section does not repeat details that are published elsewhere in books and articles, but instead gives a quick outline of the techniques used in Anaqsim and cites the appropriate references for
those interested in the details.
Anaqsim employs the analytic element method (AEM), which superposes analytic solutions to yield a composite solution consisting of equations for head and discharge as functions of location and time.
The AEM is described in detail in books by Strack (1989) and Haitjema (1995). Shorter summaries of the method may be found in Fitts (2023) and Strack (2003). The AEM is fundamentally different
than numerical methods like finite elements and finite differences, where the domain is broken into small blocks or elements and simple head distributions (e.g. linear) are assumed within these
blocks or elements. In the AEM, boundaries of the domain are discretized, but the domain itself is not.
Anaqsim uses a variation of the AEM that divides the modeled region into subdomains, each with its own definition of aquifer parameters and its own separate AEM model (Fitts, 2010). The model for a
given subdomain (called a domain in Anaqsim) includes contributions from elements inside and on the external boundary of the subdomain; elements beyond the subdomain do not contribute. Each
subdomain model is written in terms of two-dimensional functions, but three-dimensional flow may be simulated using multiple levels in a model. In multi-level models, the resistance to vertical flow
is accounted for in the vertical leakage between levels.
This subdomain approach allows for a high degree of flexibility with respect to a model's heterogeneity, anisotropy, and layering. For example, it is possible for a subdomain that is anisotropic to
be adjacent to another subdomain that is isotropic or anisotropic with a different direction and ratio. The subdomain approach allows mixed layering schemes. For example, an area with multiple
levels (subdomains stacked vertically and leaking vertically to each other) can abut an area with subdomains in just a single level. This allows the model to focus layering and computational effort
in the area of interest, with a simpler single-level model for distant areas.
Another key aspect of Anaqsim is that it allows complete transient simulation capabilities by using finite difference time steps as suggested by Haitjema and Strack (1985). The transient term in the
flow equations is handled in essentially the same manner as it is in finite difference programs like MODFLOW.
Subdomains and Model Levels
Anaqsim uses one separate two-dimensional model for each subdomain. In Anaqsim, subdomain input is entered in the Model Input / Domains data tables (Domain in Anaqsim's menu system is short for
subdomain). In each subdomain model, the resistance to vertical flow is neglected and the head is independent of elevation within the subdomain (Dupuit assumption). Resistance to vertical flow and
three-dimensional flow are modeled by using multiple levels with vertical leakage between levels. To illustrate how subdomains and model levels are implemented in Anaqsim, we will go through several
examples, starting from simple and working toward complex.
The simplest model would be one level (two-dimensional), and only one subdomain (homogeneous). A plan view of such a simple model is shown below.
The properties of the subdomain (hydraulic conductivity, porosity, elevations, etc) are defined in the Model Input / Domains menu. The spatial extent of the subdomain (blue) is defined by the line
boundaries that are listed as being on the external perimeter of the subdomain. This is different from TWODAN or other analytic element programs where one domain is the infinite "background" domain,
and a heterogeneity (domain) lies inside a polygon dedicated to defining the heterogeneity boundary. The scheme in Anaqsim allows different kinds of boundaries to define the limits of a domain,
which is much more flexible. In the simple model shown above, there are two line boundaries that form the external boundary of the domain: one that is head-specified (red) and one that is
normal-flux specified (black).
The algorithm that determines what points lie inside the domain works as follows: a point is inside the subdomain if going in the positive x direction (right) from the point, you cross a boundary of
the domain an odd number of times. For most points inside the domain, there is only one boundary crossing in the positive x direction (like "a" above), but it is possible to have three or more
crossings (like "b" above).
In general, subdomain boundaries should combine to form a perfectly closed polygon, with exactly matching points where the line boundary end/start points meet. If the line boundaries defining the
boundary of the domain have a gap, it can lead to erroneous definitions of domain areas and odd model results. The boundary gap in the example below allows a strip to the left of the gap (white)
that is technically not inside that domain (blue); points in this strip have zero boundary crossings to the right. A similar result happens if there is overlap in the location of two end/start
points; where they overlap, the algorithm sees two crossings in the positive x direction.
Now consider a model that is still one level (two-dimensional), but is heterogeneous with three subdomains, blue, yellow, and green (below). This example has three types of line boundaries which are
labeled: head-specified (hs), normal flux-specified (nfs), and inter-domain (id). Each different line boundary is shown in a different color and is labeled hs, nfs, or id. At all the common
intersections where two or more line boundaries meet, the coordinates of the end points must match exactly, so that each subdomain region is defined without gaps or errors.
Along inter-domain boundaries, you specify which domain lies to the left and which domain lies to the right. For example, if the coordinates of the inter-domain boundary between blue and yellow are
listed in order from bottom to top, blue is to the left and yellow is to the right. If the coordinates of the boundary between yellow and green is defined in clockwise order, yellow is to the left
and green is to the right.
Now imagine that the model shown above is a single level in the blue and yellow areas (two-dimensional), but has four levels in the green area (three-dimensional). A vertical cross section along
A-A' for such a model is shown below.
The convention for naming levels in Anaqsim is to start at level 1 at the top and increase the level numbers with depth in a multi-level stack. In the green area, level 1 is at the top and level 4
is at the bottom. Anaqsim assumes that there is vertical leakage between subdomains that exist in the same area but at different vertical levels. It is possible to skip a level number in a stack of
subdomains; for example, the green area in the above section could have domains with levels 2,3,5 and 6 rather than levels 1,2,3 and 4. Anaqsim finds the next domains above and below, no matter what
the level numbers are and even if there are gaps in level numbers. This is helpful in cases with complex layering schemes such as where a domain has limited extent compared to the domains above and/
or below it. The resistance to vertical flow between levels is computed based on the vertical hydraulic conductivities and thicknesses specified for the domains involved.
The brown inter-domain boundary separating the blue and yellow subdomains would have one subdomain on the left (blue) and one on the right (yellow). The green inter-domain boundary separating the
yellow and green subdomains would have one subdomain on the left (yellow) and four on the right (shades of green), assuming that boundary is defined with coordinates in clockwise order. Across
inter-domain boundaries, there is approximate continuity of head and approximate continuity of the normal component of discharge. This approximation is discussed more in the Line Boundary Conditions
section. Interdomain boundaries can connect subdomains in different levels (e.g. level 1 on one side and levels 2 and 3 on the other).
Left / Right with Respect to Line Boundaries
With interdomain line boundaries and with normal flux-specified line boundaries, it is important to specify boundaries to the left and/or to the right of the line boundary. To explain what this
means, refer to the following figure.
Assume that north is up and south is down in this map-view plot of an Anaqsim model. The different colored areas represent different subdomains, which are bounded by various line boundaries.
Consider the red interdomain line boundary (id) that separates the blue and yellow subdomains. If the coordinates for this line boundary are in order from south to north, then the blue subdomain
would be to the left and the yellow subdomain would be to the right. Think of it as though you were walking along the line boundary from the first vertex towards the last vertex in the coordinates.
The "left" side is to your left as you walk along the boundary and the "right" side is to the right. Alternatively, if the coordinates of this interdomain boundary are listed in order from north to
south, then the yellow domain is to the left and the blue domain is to the right.
Now consider the green interdomain boundary separating the green and yellow subdomains. If the coordinates of that boundary are specified in clockwise order, the yellow domain is to the left and the
green domain is to the right. Alternatively, if the coordinates of that boundary are specified in counter-clockwise order, the green domain is to the left and the yellow domain is to the right.
When a normal flux-specified boundary is external or with a head-dependent normal flux boundary, the vertex coordinates must be specified in counter-clockwise order, with the domain to the left of
the boundary as you proceed along it. Consider the purple normal flux-specified boundary (nfs) in the figure. It's coordinates must be specified in counter-clockwise order (west to east) with the
yellow domain to the left. Likewise, for the blue normal flux-specified boundary, the coordinates must be specified counter-clockwise from southeast to northwest, keeping the yellow domain to the
left. The normal flux is positive for flow across the boundary from left to right as you proceed from the start toward the end (flow out of subdomain). Normal flux is negative for flow from right
to left as you proceed from the start toward the end (flow into subdomain).
Recharge, Leakage and Transient Storage
Like in any flow model, the flow equation in Anaqsim is based on Darcy's Law and conservation of mass (and volume, with constant density). The conservation equation, in its simplest is:
where ∇Q is the divergence of the two-dimensional aquifer discharge vector field and γ is the net extraction per area (sink term, units of L/T). The sink term γ may have contributions from leakage
out the top of the subdomain (L[t]), leakage out the bottom of the subdomain (L[b]), and transient discharge/area into storage(S ∂h/∂t) . See equations 4-6 of Fitts (2010).
The vertical leakages L[t] and L[b] are specific discharges proportional to the head difference between the domain and the head above or below (could be specified head or another domain at a
different level), and proportional to the equivalent vertical hydraulic conductivity K[e] that is based on the vertical conductivities and the saturated thicknesses at the average heads specified for
the domains involved. If there is leakage between two domains at different levels, the equation used for computing K[e] is
where b1 and b2 are the average saturated thicknesses of the two layers and K1 and K2 are the vertical hydraulic conductivities of the two layers. In Anaqsim, K[e] is assumed to be constant and
independent of head and actual saturated thickness.
Under restricted circumstances (uniform recharge, steady flow, and a single-level model), γ is constant and independent of location. In such cases, the uniform γ distribution may be modeled exactly
using a uniform area sink.
In many practical cases, the model needs spatially-variable extraction (γ varies with x, y) due to spatially-variable vertical leakage and/or spatially-variable storage changes. When that is the
case, the model needs spatially-variable area sinks to approximate the proper distribution of γ. The spatially-variable area sink functions in Anaqsim create a smooth, continuous, irregular γ
surface within a subdomain. The model is using equation 13 of Fitts (2010) as a model of the distribution of γ, with γ equal to the perfect extraction/area (computed by equation 6 of Fitts (2010))
at each basis point, but approximating equation 6 of Fitts (2010) between basis points. The modeled distribution of γ satisfies the flow equation perfectly at basis points and approximates the flow
equation between basis points. This approximation is more accurate if the basis point spacing is smaller.
To check the accuracy of this approximation, Anaqsim provides an analysis tool (Analysis Menu / Conditions Along a Line). If checks with this tool reveal a poor approximation, the basis point
spacing is too large. If the approximation is fine, you may be able to decrease the basis point spacing and save some computation.
Spatially-variable area sinks can rack up a large number of equations and computational burden on the system, so use them sparingly. Where possible (often the far-field), use a single level in the
model, which if it is steady means you can use the very efficient uniform area sinks instead of spatially-variable area sinks. Use the special well basis point spacings around wells to gain accuracy
with minimal computation.
The analytic elements used in Anaqsim include the elements described in the following references:
• The standard well element described by Strack [1989] is used in isotropic subdomains and the well element of Fitts [2006] is used in anisotropic subdomains.
• Line boundaries are represented by high-order line elements similar to those described by Jankovic and Barnes [1999]. Line elements are either linesinks, line dipoles, or line doublets. Line
elements may have up to ten unknown parameters (degrees of freedom). For example, a linesink with one parameter has a constant discharge/length along its length, a linesink with two parameters
has discharge/length that varies linearly along its length, and a linesink with three parameters has discharge/length that varies parabolically along its length. Strack [1989] gives a good
overview of line elements.
• Uniform area source/sinks (constant over a subdomain) are modeled with the uniform recharge functions described by Strack [1989]. For these, the extraction/area is uniform over the entire
subdomain. This is appropriate in cases with a single level, steady flow, and a uniform recharge rate or zero recharge.
• Spatially-variable area source/sinks are modeled with the multi-quadric radial basis function elements described by Strack and Jankovic [1999]. With these, the extraction/area varies with
location. Spatially-variable extraction occurs when the extraction = recharge + leakage + transient storage flux is spatially variable. This is generally the case in multi-level models where
spatially-variable leakage occurs and in transient models where spatially-variable storage flux occurs. See the previous section and Fitts (2010) for details of how these elements are used to
represent recharge, leakage and storage flux.
There are three possible kinds of pumping wells in Anaqsim:
1. Discharge-specified wells in just one subdomain. With this type of well, you specify the known discharge of the well.
2. Discharge-specified wells spanning multiple subdomains. This is for wells that are screened across multiple levels and subdomains in a multi-level part of a model. For example, a well could be
screened across the two lowest levels in a 4-level part of a model. You specify the known discharge of the well, and Anaqsim imposes these conditions: a) the heads at the well radii in each
screened subdomain are identical, and b) the total discharge of the well elements in the screened subdomains equals the specified discharge of the well.
3. Head-specified wells in just one subdomain. With this type of well, you specify the known head at the well radius. Anaqsim determines the discharge needed to meet this condition.
Interdomain Line Boundary Conditions
At the inter-domain boundaries, two different boundary conditions are enforced by Anaqsim. First, heads are matched across the boundary at specific points in all subdomains that intersect the
boundary. For example with the green inter-domain boundary shown in the figures below, heads are matched in the yellow domain and the four green domains at control points along the boundary.
Plan view (above) and section view (below) of a multi-domain model.
Secondly the normal component of discharge is matched across line segments on the inter-domain boundary. For the green inter-domain boundary, this means that the sum of the normal components of
discharge in the four green subdomains must equal the normal component of discharge in the yellow subdomain. This condition is enforced across intervals on a line segment. If each line segment on
the inter-domain boundary has n parameters, then the condition is enforced across n intervals per line segment.
A good way to check that you have enough parameters specified on any line boundary condition is to use right-click/ Check Line Boundary Conditions, which creates a plot of the boundary condition
accuracy along a line segment of the line boundary. If you don't have satisfactory accuracy, you can either (1) go under Line Boundaries and specify more parameters (control points) per boundary
line segment, and/or (2) break a long boundary line segment into multiple, shorter segments.
One consequence of the head-matching condition is that there will be no vertical hydraulic gradient between different levels on the multi-level side of an inter-domain boundary. This is consistent
with the fact that there is no vertical resistance to flow accounted for on the single-level side of an inter-domain boundary. Because of this, it is not appropriate to use one inter-domain boundary
with multiple levels on both sides of the boundary; doing so would remove vertical head gradients at the boundary, which defeats the purpose of having multiple levels.
With inter-domain boundaries, there should generally be just one domain (level) on one of the two sides of the boundary. Anaqsim checks the input and gives a warning if there are two or more levels
on both sides of an inter-domain boundary. To explain why, consider the case illustrated below, with two levels on the yellow side of the inter-domain boundary(s) and four on the green side. If
there was just one inter-domain boundary there, it would be as though an infinitely conductive thin vertical boundary were inserted between the yellow side and the green side, and heads would match
in all six domains that meet there, meaning there can be no vertical head gradient on either side of the boundary. The total normal component of discharge would match (total normal discharge in the
two yellow domains would match the total normal discharge of the four green domains), but that normal discharge may be distributed oddly between different levels (e.g. a lot of flow from the lower
yellow level going into the uppermost green level). A better solution would be to use two inter-domain boundaries:
1. one with the lower yellow domain on one side and the two lowest green domains on the other, and
2. one with the upper yellow domain on one side and the upper one or two green domains on the other.
This way, there can be vertical gradients between the yellow/green interface, and upper-level flows are matched with each other and lower-level flows are matched with each other.
Pathline Tracing
Pathlines are traced in the horizontal plane using the aquifer discharge vector function in the domain and a numerical tracing procedure outlined in section 26 of (Strack, 1989). Three-dimensional
pathline tracing is done using a finite-difference form of equation 24 in the paper by Strack (1984), which uses three-dimensional flow continuity to approximate the vertical component of pathlines.
For transient models, pathlines are traced through a transient flow field that evolves through the duration of the transient simulation (this is new in release 2016-2; earlier releases traced
pathlines through the flow field of the last time step in a transient model). From release 2016-2 on, pathlines in transient models start at the specified start time, and can be traced backward to
the beginning of the simulation or forward until the end of the simulation. In steady models, pathlines are traced forward or backward until they either exit the model or until the specified total
time is reached.
Where pathlines cross inter-domain boundaries, the elevation of the pathline is adjusted so that the fraction of the normal component (perpendicular to the boundary segment) of flow above and below
the pathline on each side of the inter-domain boundary match. For example, say you have an inter-domain boundary with one domain on the left and three on the right. If the pathline comes to the
boundary from the left with 45% of the normal component of discharge above the pathline and 55% below it, the domain and elevation of the pathline side will be determined so that this 45/55 ratio is
preserved on the right side as well. There is typically a jump in the elevation of the pathline as it crosses an inter-domain boundary for the following reasons:
• The elevations of the domains on left and right do not necessarily match
• Where there are multiple levels on one side of the boundary, the distributions of the normal components of flow vary from one level to another due to K and flow field variations
• The total normal components of flow match only approximately from left to right side (see Fitts (2010)). Anaqsim is not matching these normal components of flow perfectly at every point on the
boundary - it is matching the total normal discharge across segments of the boundary perfectly.
When the normal discharge components across an inter-domain boundary are in different directions in different levels (usually where flow is nearly stagnant) this algorithm breaks down, and under such
conditions pathlines may terminate at the inter-domain boundary.
When pathlines intersect an internal line-sink boundary such as a river boundary, there are two possibilities: 1) the pathline could be consumed by the linesink and terminate, or 2) the pathline
could continue on underneath the linesink. Whether option 1 or 2 occurs is determined based on the elevation of the pathline as it approaches the linesink, the discharge/length of the linesink at
the intersection, and the normal component of the domain discharge at the intersection. It is assumed that the linesink is at the top of the domain and draws its water from the upper portion of the
domain discharge. If the result is case 2 with the pathline continuing under the linesink, its elevation will generally jump as it crosses under the linesink, gaining elevation if the linesink is
extracting water and losing elevation if the linesink is injecting water.
In multi-level areas of models, pathlines can cross from one level to another vertically.
Where pathlines exit the bottom or top of the model due to recharge or leakage (e.g. at the water table for pathlines traced upstream), Anaqsim draws a circle at the end of the pathline and if you
move the cursor over the circle, it will display elapsed time, domain, elevation, etc.
User Interface
User Interface
The user interface consists of one main menu plus three different tabs:
• Plot: for graphically displaying map-based inputs and outputs
• Data: for editing input data. The data view is automatically displayed when you choose to edit something under the Model Input, Plot Input, or Analysis Input menus.
• Log: for displaying text outputs. The run log displays information when files are opened or saved, when the Solve menu is executed, and in response to a number of choices under the Analysis
More information about each of these tabs is given in the following pages. Change from one tab to another by clicking on the tab at the left. The current active tab is highlighted in blue. In the
following image, the Plot tab is selected.
User Interface
Plot Tab
The Plot tab shows the model inputs and results plotted in map view. This view is automatically shown after opening an existing model, and after making a plot under the Make Plot menu. Most of the
Plot view is a map view of the model that can display a basemap, model elements, simulated heads, flow vectors, pathlines, etc. The plot area has scroll bars that allow you to shift the view left/
right and up/down while retaining the same scale. The scroll wheel on most mouse devices will cause the view to zoom in and out. Pressing down on the scroll wheel and moving the mouse will allow
you to pan the view with most mouse devices.
On the upper left is a separate plot view menu (Plot File, View Manager,...Zoom Previous) that applies just to the plot. Choices in this menu allow you to zoom to a different view, save or print the
plot, digitize and edit coordinates, or add annotations to the plot. See the tutorial videos at the website to see the various functions in the plot view menu explained and demonstrated.
Right-clicking your mouse over the plot brings up a context menu with many choices for digitizing, editing line boundaries, and generating outputs related to the cursor location.
Context Menu (right click over plot)
When the cursor is over the plot, you may bring up a context menu by clicking your right mouse button. This menu allows the following options, which are handy short cuts while building, editing, and
analyzing a model:
• Edit Domain Properties - switches to the data view with a domain data table displayed and one highlighted row: the row of the domain where the cursor is located.
• Edit Nearest Well - switches to the data view with a well data table displayed and one highlighted row: the row of the well nearest the cursor location.
• Edit Nearest Line Boundary - switches to the data view with a line boundary data table displayed and one highlighted row: the row of the line boundary nearest the cursor location.
• Edit Area Sink that Applies Here - switches to the data view with an area sink data table displayed and one highlighted row: the row of the area sink that applies at the cursor location.
• Digitize - includes the most commonly used commands (Point, Polyline, Clear Digitizing Marks) from the Plot View/Digitize menu, but without the pop-up instructional windows. These are efficient
for experienced users.
• Edit Line Boundary - allows the same operations (Insert Vertex, Delete Vertex) as are found in the Plot View/Digitize menu, but without the pop-up instructional windows. These are efficient for
experienced users. One line boundary, spatially-variable area sink (SVAS) polygon boundary, or vertical leakage polygon must be selected before executing either of these menu items.
• Set Plot Window to Current View - this is the same as Plot Input/Set Plot Window to Current View, setting the window for subsequent plots to the current view.
• Set Plot Window to Entire Model - this is the same as Plot Input/Set Plot Window to Entire Model, setting the window for subsequent plots to the entire model.
• Check Nearest Well Head and Discharge - this writes the head and discharge of the nearest well to the Log tab.
• Check Nearest Line Boundary Condition - this allows you to check the accuracy of the approximation of boundary conditions along the particular segment of a line boundary that is closest to the
cursor when this is selected. This causes a graph to be made of the conditions on that boundary segment. The graphs vary depending on the type of line boundary. You may export a bitmap graphic
of the graph or the underlying data (see the Exporting X-Y Graphs topic). Prior to release 2016-2, this feature was under the Analysis menu.
• Graph Head Hydrograph(s) Here, All Levels - for a transient simulation, this causes hydrographs (graphs of head vs. time) to be made that show heads in all model levels at the location of the
cursor. These graphs do not contain the initial heads (time=0) at these locations. If you want hydrographs that include initial heads, use Analysis Input/Hydrograph Points and Analysis/Head
Hydrographs and Analysis/Drawdown Hydrographs.
Data at Cursor Location
To the left of the plot and below the plot menu is an area that displays the coordinates of the cursor, model information and model results (X,Y Coordinates, Model Level, Domain Name, Head,...) at
the location of the cursor. As you move the cursor, this data updates based on the cursor location.
Each item may be hidden or visible; to toggle this, click on the triangle to the left of the label. If the window is not tall enough to display all items, use the scrollbar immediately to the right
of this area.
In transient models, the values reflect the time step of the selected period, which ends at the time listed. Head and interface values are at the end of the time step at the time listed.
Discharge-related values apply over the duration of the time step listed.
The model results shown are described below.
• Head Above - Head is the difference in head from the level above to the level of the plot.
• Head - Head Below is the difference in head from the level of the plot to the level below.
• Interface Elevation is the elevation of the fresh-salt interface (only applies in models with fresh-salt interface domains).
• Domain Discharge is the specific discharge times the saturated thickness, which equals the discharge in the domain per unit length normal to the discharge.
• Flow Direction is the direction of the specific discharge or average linear velocity vector, assuming the positive x axis is zero degrees.
• Top of Model Condition is the condition at the top of the topmost domain of the model at this x,y, defined by the area sink input.
• Bottom of Model Condition is the condition at the bottom of the bottommost domain of the model at this x,y, defined by the area sink input.
• Modeled Extraction is the extraction per area γ [L/T] that Anaqsim is modeling at the point, using the spatially-variable area sink functions. This quantity is defined by equation 13 of Fitts
• Extraction from Heads is defined as:
where L[t] is leakage out the top of the subdomain computed using head differences and vertical hydraulic conductivities, L[b] is the leakage out the bottom of the subdomain computed similarly,
and S ∂h / ∂t is the transient discharge/area into storage computed using the head change over a time step and storativity. At spatially-variable basis points, this equals the modeled
extraction, but between basis points, the two are unequal, but close if the basis point spacing is small enough. This quantity is defined by equation 6 of Fitts (2010). In a transient
simulation, this cannot be computed for the first time step, since the initial head at time zero at the cursor location is not known and therefore ∂h can't be computed.
• Leakage out Top is the leakage out the top of the domain, L[t] as defined above [L/T].
• Leakage out Bottom is the leakage out the bottom of the domain, L[b] as defined above [L/T].
• Storage Flux is the flux into storage in a transient model, S ∂h / ∂t, as defined above [L/T].
• Leakage Factor Above is the leakage factor (LF) computed for upward leakage from the subdomain. If the subdomain is at the top of the model with no overlying level, the leakage factor is
computed as:
Where T is the transmissivity of the subdomain, b is the saturated thickness of the subdomain, K[v] is the vertical hydraulic conductivity of the upper half of the subdomain. If the subdomain
has another subdomain overlying it, the leakage factor is computed with the following equation:
Where T[a] is the transmissivity of the subdomain above, b[a] is the saturated thickness of the subdomain above, and K[va] is the vertical hydraulic conductivity of the lower half of the
subdomain above. Leakage factors are used as guidance for determining appropriate basis point spacing for spatially-variable area sinks. See the discussion under spatially-variable area sinks.
• Leakage Factor Below is the leakage factor (LF) computed for downward leakage from the subdomain. It is computed in a manner analogous to that described above for Leakage Factor Above.
User Interface
Data Tab
The data tab allows you to edit model inputs, plot inputs, and analysis inputs with a data grid component that displays data from the underlying data tables. To display and edit a table, make a
selection under the Model Input, Plot Input, or Analysis Input menus.
There is a context menu that pops up when you right-click over the grid. Options in this menu include
• Paste New Rows - this pastes in new rows of data that are tab-delimited between columns. This format allows you to paste data copied from spreadsheets like Excel.
• Copy Selected Rows - this copies the selected rows to the system clipboard, which can then be pasted into spreadsheets like Excel.
• Copy All Rows - this copies all rows in the spreadsheet to the system clipboard, which can then be pasted into spreadsheets like Excel.
Using the Data Grid
A data table is displayed in the grid when you select an item under the Model Input, Plot Input, or Analysis Input menus. The displayed data is linked to one of several database tables, and when you
edit the displayed data, the underlying data table is updated.
The table of data is displayed with headers that define each column, such as Label, Domain, Parameters_per_line, ... as shown below.
You can move from cell to cell with the arrow keys or using the mouse. The current row is highlighted blue. You can enter new values by navigating to a cell and typing a new entry, or you can
double-click on a cell to edit cell contents with a text editor, as shown below, where the contents of the cell with "101" is being edited.
When you enter a value in a grid cell, the underlying database is updated when you press enter or move to a different cell. At this point the cell value is checked to make sure it is compatible
(e.g. a positive real number for hydraulic conductivity). If the value is incompatible, an error message is displayed and you must correct the cell entry. Be sure to remember to press enter or move
to another cell after editing the value in a cell, otherwise the value will not be changed in the database.
New rows are created by editing the blank row at the bottom of the table. A new row of data is entered into the underlying data table only when you press enter after editing the row, at which point
a new blank line appears below the line just entered. The following two screen shots shows a new 2nd row before it has been entered in the database (no blank row shows below it), and after (blank
row below 2nd row).
In data tables that contain multiple rows, the leftmost field is often called Label, and it is always displayed even if you scroll far to the right. No entry is required in this field, and it
accepts any text. It is wise to fill in a text label in this field (e.g. “PW-103” for a pumping well). The label will help you know which feature this row represents, and many analysis outputs make
use of this label. Also you can sort the data based on entries in this column to easily find the row you want. The contents of the table can be sorted by clicking on the column header. Clicking a
second time reverses the sort order. It is a good idea to choose labels that easily allow you to sort features. For example, if you want to easily find a group of wells on property A, you could
give them labels such as "A_MW102", "A_MW105", "A_MW113"... so they would be grouped together after sorting by the label column.
Column widths are automatically adjusted to fit the contents. You can increase or shrink column widths by dragging the left or right the vertical line that separates columns in the header (top) row.
Double-clicking on this vertical line automatically resizes the column width to fit the contents.
Some columns, like the Parameters_per_line column in the table shown above, are edited using a drop-down list of choices. To see the list, double-click the cell, then select the item you want.
Other columns, like the Coordinates column, contain buttons to edit or select data; these are edited by clicking on the button.
Number Formats
All data grid cells that expect numerical input have common format constraints. You can input real numbers with formats such as the following:
The last one is scientific notation for 1.4x10-2.
You should not insert commas to mark thousands, millions, (e.g. 1,200,000) as the comma may be interpreted as a decimal mark. In North America, the convention is to use a period for the decimal
marker. In Europe, the comma is often used as a decimal marker. There is a Windows operating system setting to switch between these modes. Often, European users need to adjust these settings to use
Editing Coordinates
In many of the data input tables, there are columns and cells that display an "Edit" button in the Coordinates column. When you click the button, a text box window pops up and you enter coordinate
data there:
Often, you will digitize the coordinates in the plot tab and then paste the coordinates into this text box window. Alternately, you can just type coordinates in. The OK button records the edited
coordinates and the Cancel button does not.
Once input, coordinates can be edited graphically by selecting the line boundary and then moving the vertexes or inserting or deleting vertexes.
Deleting Data Rows
Delete one or more rows of data in the data table by selecting rows and then pressing the Delete key. Row(s) are selected by clicking (and dragging for multiple rows) in the leftmost column of the
grid. A dialog will ask you if you really want to delete those records from the data table.
Importing and Exporting Data
To import data from Excel into a data table, highlight a block of data in an Excel sheet that corresponds to row(s) of data in a data table, copy that block in Excel, then right-click over the data
grid and select Paste New Rows. This will add these copied rows to the data table. Make sure that the columns in the copied block match the columns in the data table. Data in Coordinates columns
cannot be be pasted in due to their multi-line structure, but all other columns can be pasted in. In the case of a Coordinates column, a paste operation leaves that blank and you must enter the
coordinates by clicking on the Edit button in that column.
To export rows of data to Excel or a text file, select rows of data (see section above) and then right-click over the data grid and select Copy Selected Rows. After doing this the rows of data are
in the computer’s clipboard as tab-delimited data, which can then be pasted into Excel or into text files.
User Interface
Log Tab
The Log tab holds the run log, which is an area that displays text output from the program. The run log continues to accrue more text as you execute various tasks such as updating license
information, opening a file, solving the system of equations, checking boundary conditions, checking calibration results, or closing a file. If the text in the run log gets long enough, a scroll bar
will appear to let you scroll through the entire log, as shown below. You can select all or a portion of the text in the run log and cut, copy, and paste this text. This is an easy way to move text
results to another document.
User Interface
Menu Keyboard Shortcuts
You can access common menu items with keyboard shortcuts by pressing the key sequences as listed below. Many are standard Windows shortcuts.
│Ctrl-O│File/Open │
│Ctrl-S│File/Save │
│F12 │File/SaveAs │
│Ctrl-W│File/Close │
│Alt-FE│File/Exit │
│Alt-S │Solve │
│Alt-PA│Make Plot/All Selected Features │
│Alt-PE│Make Plot/Elements Only │
│Alt-V │Switch View │
│Alt-H │Help │
General Modelling Sequence
Creating a model follows this general sequence:
1. You can start a new model either right after starting the program or after selecting File/Close which closes the current input and begins a new input data set. Once either of these steps is
taken, you may edit data tables under the Model Input, Plot Input, or Analysis Input menus.
2. Create model input using the Model Input menu. Make sure to define Domains before adding Well, Line Boundary, or Area Source/Sink elements. This sequence is necessary because the input for the
elements includes specification of the domain(s) they are in. When adding elements, it helps a lot to use a basemap and digitize coordinates on top of the basemap.
3. Define what you want displayed in plots with the Plot Input menu.
4. Define what analysis features you want with the Analysis Input menu.
5. Save your model frequently as you build your input.
6. When the model input is complete, select Solve to solve the system of equations. This is required after making any model input changes and before making output plots or using the Analysis menu.
7. View plots of the model results with Make Plot.
8. Examine model results with the Analysis menu.
9. Loop back through steps 2-8 to revise the model, re-solving the system after revision and before examining results.
File Menu
This selection opens a dialog that allows you to find and open existing input files (.anaq extension). These files store the data you edit under the Model Input, Plot Input, or Analysis Input menus
in XML file format. XML is a common ASCII database file format. You could edit these directly with a text or XML editor, but that is not recommended since it risks corrupting input with improper
values or format. When you open a file, the layout of elements in the model is drawn to the plot view.
If you want to be able to open .anaq files by double-clicking on them in Windows Explorer, in addition to opening them from the File/Open menu, the Windows operating system must associate .anaq files
with Anaqsim. In case this association was not established during installation you can manually do it with Windows Explorer. To do this, locate a .anaq file in Windows Explorer. Right click on the
file and then select Open with, then select Chose default program. In the dialog that pops up, check the box next to Always use the selected program to open this kind of file and then select the
Browse button and browse to find the Anaqsim.exe file in the Program Files / Fitts Geosolutions / Anaqsim software directory. Now Windows will associate .anaq files with Anaqsim.exe, and you can
open any .anaq file directly from Windows Explorer by double-clicking on it.
Save, Save As
The Save As option brings up a dialog that allows you to save your input to a file with a new name. Using Save saves the input to the same file name. If you have yet to save input and have no
filename, it will function like Save As. When you save, you save the input data tables to an XML format database file with the .anaq extension.
This closes the input you are working on and clears all the associated data tables in memory. After selecting Close, you may begin editing a new model.
Save locations for Initial Transient Heads
A transient model needs initial heads so it can compute the head change that occurs during the first time step. These values are needed at the location of each basis point in each spatially-variable
area sink, which account for storage fluxes. The initial head values come from a pre-existing model, which could be steady or transient. Initial heads are also retrieved for discharge-specified
wells, hydrograph points, and transient line conditions (see Analysis Input Menu for the last two items).
The initial conditions model must have the same number and extent of layers as the transient model, at least in areas with basis points, wells, hydrograph points, or transient line condition lines.
When the initial head locations are written, each location is identified by its x,y coordinates and its layer. This is new in release 2020-1; prior releases wrote the x,y coordinates and the
internal domain number. The change made in release 2020-1 allows different domain configurations between the initial and the transient models, which can be helpful. Because of this change, you must
not mix initial head location or initial head files created prior to release 2020-1 with a release 2020-1 or later model. To avoid incompatibility when you switch to release 2020-1, re-create the
initial head location file and the initial heads file as outlined below using release 2020-1.
To create a transient model that has a proper set of initial heads, these steps are necessary:
1. Make sure the transient model you begin to create has been saved to it's own unique file name, different from the file that contains the input for the model that will provide the initial
condition heads.
2. Set up the transient model (i.e. uncheck Steady under Model Input/General, establish the time step sequence under Model Input/Time Steps, adjust input for the transient case, set up all
spatially-variable area sinks, make the appropriate settings under Analysis Input/Hydrograph Points and Analysis Input/Transient Line Conditions, etc.
3. Select File/Save Locations for Initial Transient Heads, which saves the locations for transient starting heads from the transient model (these are the locations of all basis points associated
with spatially-variable area sinks in the transient model and locations of hydrograph points, wells, and transient line conditions. This saves the level and the coordinates of each of these to a
binary file with the .ihl extension.
4. Close the transient model.
5. Open the initial conditions model (the one that represents conditions at the start of the transient run). Solve it. Select File/Write Initial Transient Heads, click on the initial heads file
you would like to use for this simulation, and click Open. This reads in the locations saved from the .ihl file saved in step 3.
6. A dialog opens asking you to name the binary initial heads file. The default name is the same as the initial conditions input file but with the .hds ending. Anaqsim then writes the initial
heads at the locations in the .ihl file to a binary file with the .hds file extension.
7. Close the initial conditions model.
8. Open the transient model. After checking that all model parameters are set correctly for the transient run, select Solve. At this point Anaqsim will ask you to select the .hds file containing
the initial heads created in step 6.
When you solve the transient model, the heads are read in and used to determine the head change in the first time step at each basis point.
Write Initial Transient Heads
See the discussion under Save Locations for Initial Transient Heads for an overview of setting up initial heads for transient models.
Save Solution
This allows you to save the model solution after you have solved. Later, you can open the model input file, then load the saved solution, and avoid the "Solve" step. This is particularly handy for
large models that have longer solve times, and allows you to save your solution and come back to it later for making plots or doing analysis of the solution. All model objects with their strengths
are saved in a binary format, to a file that has the same name as the input file, but with the ".solu" filename extension instead of ".anaq".
Load Saved Solution
This allows you to load in a previously saved model solution. To make use of this, first open the model input file for the model, then load the saved solution, which avoids the need for the "Solve"
step. This is particularly handy for large models that have longer solve times, and allows you to save your solution and come back to it later for making plots or doing analysis of the solution.
All model objects and their strengths are read in from a binary file that has the same name as the input file, but with the ".solu" filename extension instead of ".anaq".
Export Input Data to Excel File
This causes the entire input database to be written to one excel file in the same directory as the input file (*.anaq), written to a file with the same name but the excel suffix (*.xlsx). The Excel
file has multiple sheets, one for each data table. Each sheet contains the same headers as the data tables plus all rows of input. This is a handy way to document model inputs all in one readable
This exits Anaqsim. A dialog asks if you want to save the current input before exiting. The same is achieved by clicking on the red "x" at the upper right corner of the Anaqsim window.
Edit Menu
This is like edit menus in most other Windows applications with Cut, Copy, and Paste menu choices. These functions are also available with the usual keyboard shortcuts: control-x (cut) control -c
(copy) control -v (paste).
Model Input Menu
This item has only one line of input. The first item is checked if the model is steady-state and not checked if the model is transient. To simulate storage fluxes in transient models,
spatially-variable area sinks (SVAS) are required. If you try to solve a transient model without any SVAS, Anaqsim gives an error message.
The other three items are text values that document the length and time units used in the model, and provide comments to document the run. The model uses consistent length and time units throughout.
For example, if you chose meters and days, then hydraulic conductivity, specific discharge, and average linear velocity are in m/day, well discharges are in m3/day, and time markers on pathlines are
in days.
The two data tables under this menu define settings involved in solving the system of equations in your model. The first defines parameters involved in solver iterations, and the second lays out
the solution accuracy needed before iteration ceases.
Solve Settings
• Underrelaxation is a parameter that governs how unknown strength parameters are updated at each iteration. If underrelaxation is 1.0, the new strength parameter equals the parameter from the
most recent iteration. If this value is 0.7, the new strength parameter is weighted 70% by the parameter at the most recent iteration and 30% by the parameter at the previous iteration. Lower
values help damp out oscillations in parameter values from iteration to iteration and may improve convergence in some situations. Higher values close to 1.0 speed convergence when oscillation is
not a problem. A good range for most cases is 0.9 to 1.0.
• Maximum Iterations. When solving, iteration continues until the tolerances specified in Check Settings are met at all boundary condition control points or basis points, or if those criteria are
not met, iteration ceases when this maximum number of iterations is reached.
• Starting Heads. For transient runs, the first time step needs initial heads from some source. There are two possibilities here:
1. Assign a constant value of head for each domain. The starting head at a point is set to the domain's average head, which is specified under Domains input. In the case of a
Discharge-Specified (Multi-Domain) well, the starting head is set to the average head of the first domain listed. This option should only be used for very simple transient runs where the
initial condition is a uniform, flat, constant head.
2. Read initial heads from a file. This file is written by the initial (time zero) model, which is another Anaqsim model of the same area that may be steady or transient (see discussion of this
file under the File menu).
• Almost_dry_fraction. This parameter affects the solution in cases with unconfined, confined/unconfined, unconfined interface, and confined interface domains, where the heads can drop to low
enough levels that the freshwater saturated thickness of the domain approaches zero. Instead of letting the domain have zero or negative saturated thickness, Anaqsim has a minimum saturated
thickness. In an unconfined, confined/unconfined, or unconfined interface domain, the minimum saturated thickness = (average head - base elevation) * Almost_dry_fraction. In a confined
interface domain, the minimum saturated thickness = (top elevation - bottom elevation) * Almost_dry_fraction. When heads drop to or below a level that would create this minimum saturated
thickness, the aquifer is treated like a confined aquifer with this fixed minimum saturated thickness. This prevents domains from actually going completely dry and helps models converge despite
portions of some domains approaching "dry" conditions. Setting
• Almost_dry_fraction to a low value means that very little simulated horizontal discharge will occur in "'dry" portions of the domain. Heads in these "dry"' areas will drop to unrealistically low
levels (below the base of an unconfined domain, for example), and have little real meaning. When contour plots are made of head, these unrealistically low values are neglected. These low heads
do appear in other outputs such as the panel to the left of the plot and in profiles. Using higher values of Almost_dry_fraction may help convergence by limiting the magnitude of head gradients
in "dry" areas, at the expense of allowing more actual discharge in "dry" areas. You can check the amount of discharge in a "dry" area by using the Analysis/Conditions Along a Line and making a
profile of domain discharge along or across a line. Chose a low enough value of Almost_dry_fraction so the discharge in these "dry" areas is acceptably small.
• Interface_leakage_option. This specifies one of two options for computing vertical leakage at a spatially-variable area sink basis point where a fresh/salt interface is present. If unchecked
(default), the vertical leakage is computed just like it is in all other cases: the vertical leakage rate is proportional to the difference in head from one level to the next. If this column is
checked and an interface is present in the overlying layer, the head difference is computed by using the freshwater head that is at pressure equilibrium with static salt water at the base of the
overlying layer; the head in the overlying layer is not used to compute the head difference. The head that is used to represent the overlying layer is the same as the head at the toe of the
interface in the overlying layer. The unchecked mode is appropriate when the resistance to vertical flow between levels is due to the K3 of the domains themselves. The checked mode is
appropriate when the resistance to vertical flow between levels is due to an aquitard between the levels that is not explicitly modeled (in this case the resistance of the aquitard must be
incorporated into the K3 values specified in the upper and/or lower domain). See Fitts et al (2015) for comparisons of these methods and discussion.
Check Settings
These settings define the accuracy of boundary conditions required of the solution; iteration continues until these conditions are met. If when Solve is pressed the solution converged before
reaching the maximum number of iterations, all boundary conditions were met within the tolerances specified here.
These settings also affect the function of Analysis/Check Boundary Conditions at Latest Iteration, which is used to check how well the solution meets specified boundary conditions. Such conditions
include heads at head-specified wells and linesinks, extraction at spatially-variable area sink basis points, etc. When you select Analysis/Check Boundary Conditions at Control Points, each boundary
condition is checked, and if the discrepancy between the specified condition and the model-simulated condition is greater than a threshold you specify here, the program prints the discrepancy to the
run log. If the discrepancy is less than the threshold, nothing is printed. In cases where the solution did not converge to within these tolerances during the Solve process, the offending boundary
conditions are listed along with their accuracy. This can be a help to home in on which boundary conditions are holding up the Solve process.
Four kinds of boundary condition tolerances are defined as follows.
• Head_check_tolerance is the threshold for the magnitude of discrepancy between specified and modeled heads, used at head-specified wells and line boundaries.
• Qn_check_tolerance is the threshold for discrepancies in the computed discharge per length along River line elements. It units are discharge/length [(L^3/T)/L = L^2/T].
• Q_check_tolerance is the threshold for discrepancies in discharge, which are used along inter-domain line boundaries and normal-flux specified boundaries. In both cases, the condition being
checked is the total discharge across a segment along the line [L^3/T]. For inter-domain boundaries, it is the comparison of discharges on opposite sides of the boundary. For normal-flux
specified boundaries it is the difference between modeled and specified discharges based on the specified discharge per length times the length of the segment.
• Extraction_check_tolerance is the threshold for discrepancies in the modeled extraction (Equation 13 of Fitts (2010)) and the extraction computed from head values (Equation 6 of Fitts (2010)).
The units of extraction and this threshold are discharge/area [(L^3/T)/L^2 = L/T].
Time Steps
This input is used only for transient models. If you are doing a transient model, make sure you uncheck Steady under Model Input/General. Each row of input in the Time Steps table defines a time
period, during which all boundary conditions are constant. For example, a model could have three time periods with different recharge rates, river stages, or well discharge rates in each of the three
periods, but within each period the values remain constant.
For each time period, you specify the total length of the time period (Period_Length), the number of time steps the period is divided into (Steps_in_period), and the time step multiplier
(Step_multiplier). The multiplier causes the length of successive time steps to grow by a factor equal to the time step multiplier. The following table illustrates the lengths of time four time
steps for a period that is 100 time units long, using various time step multipliers.
│Time Step │Multiplier = 1.0│Multiplier = 1.5│Multiplier = 2.0│
│1 │25.00 │12.31 │6.67 │
│2 │25.00 │18.46 │13.33 │
│3 │25.00 │27.69 │26.67 │
│4 │25.00 │41.54 │53.33 │
│Total Time:│100.00 │100.00 │100.00 │
In all cases, the total time of the period is 100, but the lengths of the four steps change as the time step multiplier changes. This scheme is the same as employed in MODFLOW. Using a multiplier
larger than 1.0 helps concentrate computing power early in the time period when there is more transient change occurring. Transient storage fluxes, which are part of spatially-variable extraction,
are computed for each time step using a finite-difference approximation of the governing equation (Equation 6 of Fitts, 2010).
The properties of each domain (called a subdomain in Fitts, 2010) are set with data tables under this menu. Different tables define the properties of different kinds of domains. A domain is a
polygonal region of the model in a certain model level. Inside a domain, the aquifer properties (hydraulic conductivities, base elevation, storativity, porosity, etc) are homogeneous.
The boundary of a particular domain is defined by a combination of line boundaries that, in their input, are listed as external to the domain. Head-specified, normal flux-specified, and inter-domain
boundaries can be external boundaries for domains. Other line boundaries like river and discharge-specified line boundaries are internal to domains and do not define domain boundaries. See the
discussion under Subdomains and Model Levels for more detail and some examples.
All domain input data tables may be accessed through the main menu or by using a pop-up context menu when the cursor is over the plot.
Boundaries of Domains
The geometry of the boundary of each domain is not specified in the domain data tables, but is determined by the distribution line boundaries that define the external domain boundaries. Line
boundaries that can be external domain boundaries are head-specified, normal flux-specified, and inter-domain. The data that is input for these types of line boundaries include information about
which domain(s) that they bound. All domains should be completely bounded by such line boundaries, so that their geometry is unambiguous. See the discussion under Subdomains and Model Levels for
more detail and some examples.
For the best accuracy, make sure the coordinates of the starting and ending points of adjacent external line boundaries match exactly (copy them). For example, if a domain boundary has two line
boundaries defining it - one head-specified and the other inter-domain, make sure that the start/end points where these two line boundaries join have the exact same coordinates.
Input Common to All Domains
Several parameters are common and required input for any type of domain:
• The label_unique allows you and the model keep track of which domain is which. When adding wells, line boundaries, etc. to the model, this label defines which domain these features are in.
These labels are required and must be unique (no two domains should have the same label). Once the labels have been declared and other features have been added to the model, do not change the
labels because doing so would require changing the domain label of each well, line boundary, and area source/sink that is in that domain.
• The level of a domain refers to where this domain fits in the vertical sequence of model levels. In a multi-layered part of a model, the level begins at 1 at the uppermost level and increases in
deeper levels. The level may be from 1 to a maximum of 15. If there are vertically stacked domains in a multi-level part of the model, vertical leakage is assumed to occur between domains that
are at different levels but occur at the same x,y coordinate. For example, if three domains in a three-level part of the model were assigned levels 1, 2, and 4, there could be vertical leakage
between levels 1 and 2 and between levels 2 and 4, if this area of the model has spatially-variable area source/sinks. When making plots of model results, plots show one level at a time.
• For Average_head, list your best estimate of the average head in this domain for your simulation. This then defines a constant that is added to the discharge potential for this domain. More
details are given about this in the next topic.
• The value of Porosity is used in computing average linear velocity and advection travel times along pathlines. The average linear velocity = specific discharge/porosity. For a solute that
adsorbs to the porous medium, the solute travel time will be be longer than the pure water advective travel time. If you want to plot travel times that factor in retardation of a solute, you may
input Retardation Factor * Porosity instead of Porosity in this column and the travel times will reflect travel of a retarded solute plume.
• K1_horizontal is the horizontal hydraulic conductivity. In a domain that is anisotropic in the horizontal plane, K1 differs from K2, and these represent the principle hydraulic conductivities in
the horizontal plane. K1 > K2, K1 < K2, and K1 = K2 are all possible.
• K2_horizontal is the horizontal hydraulic conductivity. In a domain that is anisotropic in the horizontal plane, K2 is in the direction normal to the K1_horizontal direction. To simplify
inputs, you can enter "=K1" if K2=K1 and the domain is isotropic in the plane of the domain. If you want a fixed ratio of anisotropy, you can enter "=K1*D" where D is a real number. Using "=
K1" or "=K1*D" is particularly handy for parameter estimation, to limit the number of parameters being estimated.
• Angle_K1_to_x is the angle, in degrees, between the x axis and the direction of K1_horizontal. Positive angles are measured counter-clockwise from the x axis.
• K3_vertical_top defines the vertical hydraulic conductivity of the upper half of the domain. This parameter is only used if there is vertical leakage with spatially-variable area source/sinks.
To simplify inputs, you can enter "=K1" if K3=K1 and the domain is isotropic normal to the plane of the domain. If you want a fixed ratio of anisotropy, you can enter "=K1*D" where D is a real
number less than or equal to 1.0. For example, make K1/K3 = 10 by entering "=K1*0.1". Using "=K1" or "=K1*D" is particularly handy for parameter estimation, to limit the number of parameters
being estimated.
• K3_vertical_bottom defines the vertical hydraulic conductivity of the lower half of the domain. This parameter is only used if there is vertical leakage with spatially-variable area source/
sinks. To simplify inputs, you can enter "=K1" if K3=K1 and the domain is isotropic normal to the plane of the domain. If you want a fixed ratio of anisotropy, you can enter "=K1*D" where D is
a real number less than or equal to 1.0. For example, make K1/K3 = 10 by entering "=K1*0.1". Using "=K1" or "=K1*D" is particularly handy for parameter estimation, to limit the number of
parameters being estimated.
Details about Average_head
In other two-dimensional AEM programs such as TWODAN, the flow region is open to infinity, and one unknown that needs to be solved for is the amount of flow that goes between the modeled area and
infinity. In these programs, to generate an equation to solve for that additional unknown, you specify a head at one location ("reference head" in TWODAN).
In Anaqsim, each domain model is closed and finite, so there is not that extra unknown. You specify the average head in each domain. which in turn defines a constant that is added to the potential
for that domain. Since there are linesinks that bound each subdomain, the flow field outside those linesinks does not matter (the flow to/from infinity doesn't affect the solution inside the domain
boundary). You could specify a variety of different average head values, within a reasonable range (close to the actual average), and get essentially identical results.
Say you have a simple Anaqsim one-domain model that has head-specified boundaries all around the external boundary, with h=100. There is zero recharge, so h should be 100 everywhere inside the
domain. If you specify the average domain h=100, the program adds a constant to the potential that is the potential corresponding to h=100. On solving, it will turn out that the boundary conditions
are met perfectly everywhere on the boundary and the boundary linesinks all have zero discharge; the analytical model will boil down to the simple equation h(x,y)=100. With zero discharges in the
boundary linesinks, there is no flow to or from infinity to the model boundary from the outside (even though you never see this part of a domain model, it exists).
Now imagine that instead you set the average domain head to 110, which adds a larger constant to the potential for this domain. Now, to achieve the boundary h=100, the boundary linesinks need to
extract water to pull the head surface down. In this case the solution on and inside the boundary will still be approximately h=100, but there will be flow to the outside of the boundary linesinks
from infinity. Likewise, if you set the average domain head to 90, the solution on and inside the boundary will be approximately h=100, but there will be flow from the outside of the boundary
linesinks to infinity. When you change the average head for a domain, it changes the part of the domain solution that you never see - the part that lies outside the external boundary of the domain.
If you use long line elements with few parameters and the average head is not close to the actual average, the differences in the external, unseen part of the model may have some visible impact on
the model within the domain boundary. The most likely manifestation will be some lumpiness in the head surface near those boundary elements. Correct this by choosing a more representative average
head and/or shortening line boundary elements and increasing the number of parameters per line.
Confined and/or Unconfined
Starting with release 2015-1, confined, unconfined, and confined/unconfined domains are in the same data table, which allows the user to quickly switch between these domain types. For confined and/
or unconfined domains, these parameters are needed in addition to those that are common to all domains:
• Domain_Type determines which type of domain is to be modeled. Confined, Unconfined, and Confined/Unconfined are the three options. For Confined, the domain is always a fixed saturated thickness
equal to the top elevation minus the bottom elevation, and the domain's transmissivity is independent of head. Confined domains generate linear equations, while the other two options can
generate nonlinear equations. It is often wise to begin your modeling with confined domains which tend to converge faster and be less prone to numerical issues associated with nonlinearity and
drying up. Later, it is easy to switch to unconfined or confined/unconfined domain types. With the Unconfined domain type, the domain is always unconfined and the Top_elevation is not used
(although some value must be input). The Confined/Unconfined domain type behaves like a confined aquifer when the head equals or exceeds the top elevation, but like an unconfined aquifer where
the saturated thickness depends on head, when head drops below the top elevation (see Strack, 1989, section 8; Haitjema, 1995, section 3.1.3; Strack, 2003, equation 3).
Never put an unconfined domain beneath an overlying domain, because the unconfined domain saturated thickness is always computed as head minus base elevation. If you think an underlying domain may
become unconfined, use confined/unconfined rather than unconfined
• Top_elevation defines the elevation of the top of the domain. Not used for unconfined domain type.
• Bottom_elevation defines the elevation of the bottom of the domain.
• Storativity (S) is the dimensionless elastic storage parameter. S normally is the saturated thickness times specific storage (Ss). Starting with release 2021-1, there is an option to input the
value of specific storage instead of storativity. This can simplify input by avoiding the step of multiplying by saturated thickness. To input specific storage, enter "Ss:" in front of the
value. For example, if you entered "Ss: 0.0037", the value of storativity in this confined domain would be set to 0.0037 * b, where b is the saturated thickness (top elevation - bottom
elevation). See Storage Parameter Details for more on how different storage parameters apply for different domain types.
• Specific_yield (Sy) is the dimensionless storage parameter for the unconfined domain type. See Storage Parameter Details for more on how these parameters apply for different domain types.
For numerical stability where the saturated thickness of an unconfined or confined/unconfined domain approaches zero, Anaqsim imposes a minimum saturated thickness. When heads drop near or below the
bottom, the domain reverts to a confined-type domain with a fixed minimum saturated thickness. This facet of Anaqsim is governed by a parameter called Almost_dry_fraction under Solution/Solve
Confined Interface
For confined interface domains, these parameters are needed in addition to those that are common to all domains:
• Top_elevation defines the elevation of the top of the domain.
• Bottom_elevation defines the elevation of the bottom of the domain. The saturated thickness is the difference between the top and bottom elevations.
• Storativity is the dimensionless storage parameter that applies in the confined portion of this type of aquifer, equal to saturated thickness times specific storage. As described under Confined
and/or Unconfined domains, you may enter specific storage values here instead of storativity values.
• Salt_elevation defines the elevation of the surface of the salt water, which is assumed to be static. Typically this is about the elevation of sea level.
• DensityRatio is the ratio of the salt water density to fresh water density. This varies from place to place, but is often near 1.025.
Interface domains in Anaqsim are based on the Ghyben-Herzberg approximation:
• The salt water is hydrostatic - pressure is proportional to depth below Salt_elevation. This assumption is reasonable when the flow is roughly steady.
• The fresh/salt water interface is sharp, with no mixing.
• Fresh water within a domain is hydrostatic (Dupuit approximation); there is no vertical resistance to flow.
The confined interface domains are based on the techniques presented by Strack (1989) on pages 101-106 and in Fitts et al (2015). These domains are confined with fresh water from top to bottom when
heads are high enough that there is no interface, or they are confined with an interface and some salt water if heads are low enough. Confined interface domains would go to zero fresh water
saturated thickness when the fresh water head drops to a level where the fresh water pressure at the top of the domain equals the salt water pressure at that elevation. This occurs where the
freshwater head = Top_elevation + (Salt_elevation - Top_elevation) * DensityRatio. For numerical stability where the fresh water saturated thickness approaches zero, Anaqsim imposes a minimum
saturated thickness. When heads are low enough, the domain reverts to a confined-type domain with this minimum saturated thickness. This facet of Anaqsim is governed by a parameter called
Almost_dry_fraction under Solution/Solve Settings.
Generally transient simulations with interface domains will be inaccurate because it is assumed that the salt water has a hydrostatic distribution of pressure on the interface. In most transient
situations, the salt water is moving, and when that movement has a vertical component, the hydrostatic pressure assumption is violated. If you feel the hydrostatic salt water assumption is still
reasonable, you may proceed with a transient simulation but Anaqsim will issue a warning. See Storage Parameter Details for more on how storage parameters apply to this domain type.
Unconfined Interface
For unconfined interface domains, these parameters are needed in addition to those that are common to all domains:
• Bottom_elevation defines the elevation of the bottom of the domain. The saturated thickness is the difference between the head and the bottom elevation.
• Specific_yield is the dimensionless storage parameter for an unconfined aquifer.
• Salt_elevation defines the elevation of the surface of the salt water, which is assumed to be static. Typically this is about the elevation of sea level.
• DensityRatio is the ratio of the salt water density to fresh water density. This varies from place to place, but is often near 1.025.
Interface domains in Anaqsim are based on the Ghyben-Herzberg approximation:
• The salt water is hydrostatic - pressure is proportional to depth below Salt_elevation.
• The fresh/salt water interface is sharp, with no mixing.
• Fresh water within a domain is hydrostatic (Dupuit approximation); there is no vertical resistance to flow.
The unconfined interface domains are based on the techniques presented by Strack (1989) on pages 108-111 and in Fitts et al (2015). These domains are unconfined with fresh water from the water table
to the bottom when heads are high enough that there is no interface, or they are unconfined with an interface and some salt water if heads are low enough. Unconfined interface domains would go to
zero fresh water saturated thickness when the fresh water head drops to Salt_elevation. To avoid dry conditions, keep all heads in the domain above Salt_elevation. For numerical stability where the
fresh water saturated thickness approaches zero, Anaqsim imposes a minimum saturated thickness. When heads are low enough, the domain reverts to a confined-type domain with this minimum saturated
thickness. This facet of Anaqsim is governed by a parameter called Almost_dry_fraction under Solution/Solve Settings.
Generally transient simulations with interface domains will be inaccurate because it is assumed that the salt water has a hydrostatic distribution of pressure on the interface. In most transient
situations, the salt water is moving, and when that movement has a vertical component, the hydrostatic pressure assumption is violated. If you feel the hydrostatic salt water assumption is still
reasonable, you may proceed with a transient simulation but Anaqsim will issue a warning. See Storage Parameter Details for more on how storage parameters apply to this domain type.
Storage Parameter Details
Storage parameters are defined differently for different domain types as explained below.
• In confined domains, the storage parameter is always S (storativity) regardless of head.
• In unconfined domains, the storage parameter Sy (specific yield) applies when h > the head at minimum saturated thickness (see Almost_dry_fraction under Solve Settings). When head is lower than
the level at minimum saturated thickness, the storage parameter equals S * Almost_dry_fraction (usually a very small value). This scheme helps with convergence in cases where heads fall below
the bottom of the domain, but makes storage changes small under these conditions.
• In confined/unconfined domains, storage is like a confined domain when h >= top elevation, and like an unconfined domain when h < top elevation.
• In confined interface domains, the storage parameter = S + (porosity / (DensityRatio - 1)) where an interface is present. Where the domain is confined with no interface present (higher heads),
the storage parameter = S. Generally, storage contributed by interface shifts greatly exceed elastic storage; S << porosity / (DensityRatio - 1).
• In unconfined interface domains, the storage parameter = Sy + (porosity / (DensityRatio - 1)) where an interface is present. Where head is high enough that there is no interface (inland from
the toe of the interface), the storage parameter = Sy. Generally the storage contributed by interface shifts is larger than water table storage; Sy < porosity / (DensityRatio - 1).
In all but confined domains, the storage parameter is a function of head. The head at the start of a time step is used to determine the storage parameter that applies for the time step, even though
the head at the end of the time step may correspond to a different storage parameter. This approximation helps convergence and is minor if time steps are small enough.
Pumping Wells
Pumping wells may be either discharge-specified or head-specified. The discharge-specified type may be screened in one domain or across multiple domains if the well screen spans multiple model
levels. All well input data tables may be accessed through the main menu or by using a pop-up context menu when the cursor is over the plot.
Input Common to all Pumping Wells
With all types of pumping wells, the following parameters are required.
• Label is a text label that helps you keep track of multiple wells.
• X,Y defines the horizontal coordinates of the well. To graphically edit a well's coordinates, left click to select it. Once selected, the well will be enclosed in a purple square box as shown
When highlighted in this way, the well can be moved by clicking on the purple box and dragging it. This will automatically alter the coordinates of the well in the model input table. To stop graphic
editing, press Esc to de-select the well.
• Radius defines the radius of the well. If there is a high conductivity filter pack around the well screen, the radius should be the radius of the borehole, not the radius of the screen.
With discharge-specified wells, negative rates are used for extraction from the aquifer and positive rates are used for injection into the aquifer. Additional parameters defined here are:
• Domain defines the domain that the well screen is in.
• Discharge is the discharge of the well in units of [L^3/T]. In transient simulations, this parameter may vary from one time period to the next.
Discharge-Specified (Multi-Domain)
Use this type of well to simulate a well with a screen that spans multiple domains and levels in the vertical direction. Anaqsim computes the appropriate discharge from each domain spanned so that
the total discharge equals the specified discharge, and the heads at the well radius in each domain match each other. With discharge-specified wells, negative rates are used for extraction from the
aquifer and positive rates are used for injection into the aquifer.
• Domains defines the domains that the well screen spans.
• Discharge is the discharge of the well in units of [L^3/T]. In transient simulations, this parameter may vary from one time period to the next.
With head-specified wells, you specify a head that applies at the well radius and Anaqsim computes the discharge needed to achieve that head. The discharge of a head-specified well may be checked
after solving from the Analysis menu.
• Domain defines the domain that the well screen is in.
• Head_at_well defines the head at the well radius. In transient simulations, this parameter may vary from one time period to the next.
• Off_Periods is used in transient simulations if you want the well discharge to be zero during certain periods. The periods you want the well off are delimited with comma(s). For example, the
following input is for a transient model with five time periods. The well pumps at rates such that at the end of period 1 the head at the well is 80, the well is off during periods 2 and 3, and
the well pumps so that the head at the well is 115 and 118, respectively, at the ends of periods 4 and 5.
Line Boundaries
A variety of line boundary conditions are available in Anaqsim. Each line boundary is a multi-segmented line (polyline) and the user inputs a list of vertexes in sequence from one end of the
polyline to the other. The line boundary condition is approximated using linesink elements similar to those described by Jankovic and Barnes (1999).
Most line boundaries have a parameter that varies from one value at the starting vertex to another value at the ending vertex. The interpolation scheme between the end points is described in the
next topic.
Anaqsim approximates the specified boundary conditions along line boundaries, as discussed by Fitts (2010). You may check the accuracy of line boundary condition approximations under the Analysis
For internal line boundaries, the coordinates of all polyline points must not be outside the subdomain boundary, otherwise numerical havoc will be wreaked! It is possible for the start or end point
of an internal line boundary to coincide exactly with a corner point of an external line boundary.
All line boundary input data tables may be accessed through the main menu or by using a pop-up context menu when the cursor is over the plot.
Input Common to all Line Boundaries
The following input items are common to all of the line boundaries:
• Label is a text label that helps you keep track of multiple line boundaries. Please see River Line Boundary Discharges for special labeling of River line boundaries so that you may sum the
discharges of groups of river boundaries.
• Each line segment has a line element with a certain number of unknown parameters that is defined in the Parameters_per_line column. If you choose 1 for this, the strength of the element is
constant along the line, if you chose 2 for this, the strength of the element varies linearly along the line, if you choose 3 for this, the strength of the element varies parabolically along the
line, etc. Using more parameters per line can increase the accuracy of the boundary condition approximation along the line, but at the cost of additional equations in the system of equations that
is solved. You only need a high number of parameters per line if the heads or discharges along the element are expected to vary complexly. In many cases, 3 or fewer parameters per line is
plenty. Experiment with this and reduce the number of parameters to the minimum that gives you reasonable boundary condition accuracy, which you can check using right-click / Check Line Boundary
Conditions. In the case of a Discharge-Specified line boundary, the user does not define Parameters_per_line because it is always set to 1; the discharge/length of these line elements is
constant and equal to the specified discharge of the line boundary divided by its total length.
• Coordinates defines the x,y coordinates of the polyline vertexes. When you click on a cell in this column, a small text box window appears. In this text box, enter/edit lines of coordinate
data, one line per vertex. Each line should have x and y values separated by either a comma or a tab character. You may digitize the coordinates of a polyline in the plot view and then paste
the coordinates into this text box. In the case of normal flux-specified external boundaries, the coordinates must be listed in counter-clockwise order with the domain to the left of the
boundary as you proceed along it. Once input, coordinates can be edited graphically by selecting the line boundary and then moving the vertexes or inserting or deleting vertexes. To graphically
edit a line boundary's coordinates, left click to select it. Once selected, the vertexes will be enclosed in purple square boxes as shown below
When highlighted in this way, the vertexes can be moved by clicking on the purple box and dragging it. This will automatically alter the coordinates of the line boundary in the model input table. To
de-select a line boundary, press Esc. Procedures for graphic editing of line boundaries are covered in the tutorial videos on the website. To reverse the order of the vertexes (this can be handy if
you digitized in the wrong direction), click the Reverse button.
Most line boundaries have additional parameters defined at the start and end points, such as heads for head-specified line boundaries. With all such parameters, the same algorithm is employed to
interpolate the specified values along the line boundary:
1. Apportion the intermediate values to the vertexes based on the number of line segments in the polyline. For example, if there are 4 segments and the end values are 100 and 110, the values at the
vertexes would be 100, 102.5, 105, 107.5, and 110. If there are 5 segments, the values at the vertexes would be 100, 102, 104, 106, 108, and 110.
2. Linearly interpolate values to the control points within a line segment, assuming a linear distribution from one end to the other.
The additional parameters specific to each type of line boundary are listed in the following topics.
With head-specified line boundaries you specify these additional items:
• Domain defines the domain that the line boundary is in. Double-click on this cell to open a drop-down list and make a selection.
• Check the Domain_Boundary check box if this line boundary is on the external boundary of the domain.
• The h_start and h_end are the specified head values at the starting and ending vertexes. In transient simulations, these parameters may vary from one time period to the next.
• Off_Periods is used in transient simulations if you want the line boundary discharge to be zero during certain periods. This could be handy for a dewatering trench that is turned off and on.
The periods you want off are delimited with comma(s). For example, the following input is for a transient model with three time periods. The line boundary discharges at rates such that at the
end of period 1 the heads along it are 104, the line boundary is zero during periods 2, and the line boundary discharges at rates such that at the end of period 3 the heads along it are 106.
Boundary condition equations are written at each control point on each line segment. The equation specifies that the modeled head = specified head (interpolated between endpoints of the line
boundary). The specified head condition is approximated between control points and may be checked graphically.
Normal Flux-Specified
With normal flux-specified line boundaries you specify these additional items:
• Domain defines the domain that the line boundary is in. Double-click on this cell to open a drop-down list and make a selection.
• Check the Domain_Boundary check box if this line boundary is on the external boundary of the domain.
When the boundary is external, it is necessary to list the coordinates of the polyline in counter-clockwise order with the domain to the left of the boundary as you proceed along it. If coordinates
are specified in the wrong (clockwise) order, there is a check in the program that will detect and report this error. This error can occur if you mistakenly specified the coordinates in clockwise
order around the outside of the domain. This message can also result if you have made errors in specifying additional external line boundaries either at these same coordinates (e.g. have two
external line boundaries in the input that share the same vertex coordinates), or have erroneous external line boundaries to the right of this one (see discussion of left/right algorithm).
• The Normal_flux_start and Normal_flux _end are the specified normal flux values at the starting and ending vertexes. Normal flux is the component of domain discharge normal to the line segment,
and has units of discharge per length [L^3/T/L] = [L^2/T]. This may also be thought of as the normal component of specific discharge times saturated thickness. The normal flux is positive for
flow across the boundary from left to right as you proceed from the start toward the end. Normal flux is negative for flow from right to left as you proceed from the start toward the end. In
transient simulations, the normal flux parameters may vary from one time period to the next.
Boundary condition equations are written for sub-intervals in each line segment (e.g. three subintervals for 3 parameters/line). The equation specifies that the total discharge across the line over
the subinterval equals the total to interpolated specified fluxes over the interval. The accuracy of this approximation may be checked graphically.
Head-Dependent Normal Flux (3rd type)
This allows head-dependent normal flux into or out of the model, depending on the modeled head at the boundary. Boundary conditions like this are sometimes called 3rd type, Robin, general head (GHB
in MODFLOW), or mixed boundary conditions; they involve both head and flux. The boundary must be an external boundary of a domain. It is necessary to list the coordinates of the polyline in
counter-clockwise order with the domain to the left of the boundary as you proceed along it. If coordinates are specified in the wrong order, there is a check in the program that will detect and
report this error. This error can occur if you mistakenly specified the coordinates in clockwise order around the outside of the domain. This message can also result if you have made errors in
specifying additional external line boundaries either at these same coordinates (e.g. have two external line boundaries in the input that share the same vertex coordinates), or have erroneous
external line boundaries to the right of this one (see discussion of left/right algorithm).
This kind of boundary is illustrated conceptually in the vertical profile sketched below. It is as though there is a fictional domain beyond the boundary (orange) outside of the domain (blue) , and
at a distance b outside the boundary, there is a fixed head h*. The head difference between h* and the head at the boundary (h) drives a component of discharge normal to the boundary. | {"url":"https://docs.anaqsim.com/books/user-guide/export/html","timestamp":"2024-11-03T16:12:58Z","content_type":"text/html","content_length":"1050251","record_id":"<urn:uuid:c4499ec1-06aa-4121-a373-88763a7a6233>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00883.warc.gz"} |
The Parallel Propagator and using it on the surface of a sphere
The three page Appendix I is on the parallel propagator which gives a general solution to the parallel transport equation of a vector. The appendix shows how to get this from the parallel transport
equation which is, for a vector ##V^\nu## transported along a curve ##x^\mu\left(\lambda\right)## $$\frac{dx^\mu}{d\lambda}\partial_\mu V^\nu+\frac{dx^\mu}{d\lambda}\Gamma_{\mu\sigma}^\nu V^\sigma=
0$$First we note that the transported vector can be calculated by$$V^\mu\left(\lambda\right)=P_{\ \ \rho}^\mu\left(\lambda,\lambda_0\right)V^\rho\left(\lambda_0\right)$$where ##V^\rho\left(\lambda_0\
right)## is the vector at the start point ##x^\mu\left(\lambda_0\right)## and ##P_{\ \ \rho}^\mu\left(\lambda,\lambda_0\right)## is some matrix which is called the parallel propagator. Next we define
another matrix $$A_{\ \ \rho}^\mu\left(\lambda\right)=-\Gamma_{\sigma\rho}^\mu\frac{dx^\sigma}{d\lambda}$$and then show that (dropping indices on the matrices)$$P\left(\lambda,\lambda_0\right)=I+\
sum_{n=1}^{n=\infty}T_n$$where ##I## is the identity matrix and $$T_n=\int_{\lambda_0}^{\lambda}{\int_{\lambda_0}^{\eta_n}\int_{\lambda_0}^{\eta_{n-1}}{\ldots\int_{\lambda_0}^{\eta_2}A\left(\eta_n\
right)A\left(\eta_{n-1}\right)\ldots A\left(\eta_1\right)}d^n\eta}$$$$=\frac{1}{n!}\int_{\lambda_0}^{\lambda}{\int_{\lambda_0}^{\lambda}\int_{\lambda_0}^{\lambda}{\ldots\int_{\lambda_0}^{\lambda}\
mathcal{P}\left[A\left(\eta_n\right)A\left(\eta_{n-1}\right)\ldots A\left(\eta_1\right)\right]}d^n\eta}$$where ##\mathcal{P}## orders the matrices ##A\left(\eta_i\right)## so that ##\eta_n\geq\eta_
The first integral is over ##n##-dimensional equilateral right triangles, or ##n##-simplices and is quite hard to calculate but ##n!## ##n##-simplices make an ##n##-cube which makes the second
integral which is much easier to calculate. I had a bit of trouble getting my head round all that and I tested it on a few examples including vectors transported along lines of constant latitude. It
all works!
Read about it here Commentary App I Parallel Propagator.pdf (13 pages).
Vectors transported round latitudes calculated by parallel propagator
Note vector length never changes. ##T_n## was evaluated to 5 decimal places.
Transport at 80°N. Vector barely
changes because it's nearly flat up there.
##T_n=0## at ##n=25##
Transport round equator. Vector remains
parallel because equator is geodesic.
##T_n=0## at ##n=1##
Transport at 15°N.
Vector rotates down by 87°.
##T_n=0## at ##n=11##
Transport at 15°S.
Vector rotates up by 87°.
##T_n=0## at ##n=11## | {"url":"https://www.general-relativity.net/2021/10/the-parallel-propagator-and-using-it-on.html","timestamp":"2024-11-11T06:56:49Z","content_type":"application/xhtml+xml","content_length":"70136","record_id":"<urn:uuid:9696bea0-baf8-49ac-984a-6ef7251f7cd8>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00712.warc.gz"} |
Advent Of Code 2020 - Solutions
These are my solutions. I'm trying to do as many as I can in both Python and in Javascript. Plus I'm going to scope out people's answers in C and see what I can glean.
Check out the solutions on GitHub
• 20201201: Solved the first challenge, starting up number 2
• 20201202: Solved day 2
• 20201203: Solved day 3 and 4
• 20201204: Solved day 5
• 20201205: Solved day 6
• 20201206: Solved day 7. Definitely noticing the desire to be on the leaderboards and "cool" or "fast" or whatever and trying to go against that feeling. I would rather have effective solutions
"slowly" than flail around and hope stuff works.
• 20201207: Solved day 8. Worked much more purposefully and without (as much) regard to the clock; still working on focusing on the process over the product.
• 20201208: Solved day 9. Brute forcing a problem I should probably use an algorithm for, but don't know enough. Now is the time to really dig! Look through solutions on Reddit, Mastodon, etc.
• 20201209: Revised my work for day 9 to find a better solution. Ended up using a solution from 'neelakantankk' I found on Reddit, utilizing a deque, which I have very little experience with. It is
a very interesting solution that utilizes a truth I missed in the iteration. When iterating through the contiguous numbers, if the sum of those contiguous numbers is higher than the target, you
can safely remove the first number in the contiguous numbers. I'm having a hard time fully grokking it but it intuitively feels right, so I'm going to try and work it out on paper until it really
makes sense.
• 20201209: Solved day 10 part 1 but do not have the know how for part 2 without just stealing an answer without any understanding. Will come back later.
• 20201210: Solved day 11
• 20201211: Solved day 12
• 20201211: Solved day 13 part 1 and have a brute force solution for part 2; however this solution has been spinning for a half hour at least and is showing no signs of progress.
• 20201213: Went and searched for solutions and found one that I was really able to understand from gravitar64, using a "sieving" method. This didn't use some intense number theory, like other
solutions which used the Chinese Remainder Theorem. I did cheat, but I understand the answer and how it works and that's all I wanted anyway.
• 20201213: Made a write up of how the sieve method works to repent for what feels like cheating.
• 20201215: Solved day 14 and 15. Day 15 was really tough to understand while I was doing it, as there was a lot to juggle, but it was a very fun one to solve. Ended up refactoring quite a bit by
reading the code of Scarymagi I found on Reddit.
• 20201216: Solved day 16, and parsing was the majority of the work. It was kind of its own challenge in itself. The problem for part 1 after that was pretty straightforward with a set. Part 2
presented an interesting challenge of essentially trying to solve a logic grid puzzle using programming, which I hadn't done before. That was a good time!
• 20201218: Started on day 17 and realized I need to finally learn how to use Numpy and deal with 3 dimensional arrays, as this is essentially Conway's Game of Life but in 3 dimensions.
• 20201219: Solved day 17 and learned how to use Numpy fairly well with it. Ended up doing Game of Life as a 3 dimensional and _4 dimensional_ version, which was wild. 3-dimensions required a lot
from me, a guy who doesn't ever do anything like this and probably solved it in a very long-winded inelegant way, but dammit I solved it.
• 20201219: Solved day 10 part 2 finally. Tried a recursive solution but it was blasting my computer and taking way too long, so I went searching. The solutions I found using `defaultdict` was
super elegant, so went with that and am pretty happy with it.
• 20201221: Solved day 18, which was all about essentially making math that solved in the wrong order (i.e. not PEMDAS).
• 20201222: Solved day 19 part 1, which was really hard to grok, but I got it. A recursion puzzle that was hard to wrap my head around the initial parts. Part 2 is beyond the scope of what my late
night brain can handle.
• 20201225: Solved day 20 and it took me a few hours yesterday and most of today. This was really fun, essentially automating putting together a puzzle, which was a really satisfying project.
• 20201228: Solved day 21 and it was much, much easier than 20. Essentially logic grid puzzle solving again, like in day 16, which is so satisfying to implement!
a solution from 'neelakantankk' I found on Reddit
found one that I was really able to understand from gravitar64
Made a write up of how the sieve method works to repent for what feels like cheating.
Scarymagi's solution for day 15 | {"url":"https://milofultz.com/2020-12-01-advent-of-code.html","timestamp":"2024-11-11T00:27:52Z","content_type":"text/html","content_length":"8461","record_id":"<urn:uuid:3ea23052-a409-42e0-825b-82e8eb437bf5>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00362.warc.gz"} |
Friday Funnies – Feb. 4, 2022 Edition
A math professor, John, is having problems with his sink so he calls a plumber. The plumber comes over and quickly fixes the sink. The professor is happy until he gets the bill. He tells the plumber,
“How can you charge this much? This is half of my paycheck.” But he pays it anyways.
The plumber tells him, “Hey, we are looking for more plumbers. You could become a plumber and triple your salary. Just make sure you say you only made it to 6th grade, they don’t like educated
The professor takes him up on the offer and becomes a plumber. His salary triples and he doesn’t have to work nearly as hard. But the company makes an announcement that all of their plumbers must get
a 7th grade education. So they all go to night school.
On the first day of night school they all attend math class. The teacher wants to gauge the class so he asks John, “What is the formula for the area of a circle?”
John walks up to the board and is about to write the formula when he realizes he has forgotten it. So he begins to attempt to derive the formula, filling the board with complicated mathematics. He
ends up figuring out it is negative pi times radius squared. He thinks the minus doesn’t belong so he starts over, but again he comes up with the same equation. After staring at the board for a
minute he looks out at the other plumbers and sees that they are all whispering, “Switch the limits on the integral!” | {"url":"https://chia.owly.net/friday-funnies-feb-4-2022-edition/","timestamp":"2024-11-01T22:34:45Z","content_type":"text/html","content_length":"160079","record_id":"<urn:uuid:cd3df7c0-7324-4bb6-bd2e-fdc9bfd4922d>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00452.warc.gz"} |
Celestial Focus, Haste, and ICC Gearing
The WotLK expansion has been out for quite a while, and over the past year and a half we "experts" have said a lot of different things regarding haste and such. That is why I try to be understanding
when it comes to questions where the answer seems obvious to me.
The question of the week seems to be centered on Celestial Focus and Haste. I've seen several questions asking what is the value of Haste? Should I drop Celestial Focus for another talent since I am
way over the Haste Cap? Should I Gem Haste or Crit?
In this post I hope to answer some of these questions or show you where to find the answer and also explain why we "experts" seem to be contradicting ourselves.
The Value of Haste:
I can't give you a precise value for Haste Rating. That depends on a lot of things that are different for everyone. How I value Haste will be slightly different then how you value haste. If you do
want a precise value of Haste Rating I suggest you read
this post
I wrote a few months ago.
That said, I can try and give you the big picture. As a general rule Haste Rating is better the Crit rating for ICC geared moonkin. Moonkin that are geared with ToC and Ulduar level epics will value
Crit Rating more then Haste because it is unlikely that they have reached the
Lunar Crit Cap
. Prior to that it doesn't really matter because you should be running as many instances as possible to get badges and 5man epics.
Why is Haste > Crit Now?
I know what some of you are thinking. Last year everyone was telling you that Crit is better then Haste. What changed? What did we "experts" screw up? Well, we didn't mess anything up. The truth is
that two things changed.
Lunar Crit Cap
is the first thing that changed. Having some of your Crit Rating rendered useless for 30% of the fight does diminish the marginal value of Crit Rating. That said, the Lunar crit cap is probably over
credited for the late rise of Haste Rating. The fact is that the marginal value of Crit Rating was already pretty low if you have a 99% crit rating during Lunar Eclipse. Taking it over 100% doesn't
change a lot.
The less obvious reason is Spell Power. Many people forget just how well Haste scaled with Spell Power for Starfire in TBC. Haste Rating was and still is more valuable on a point for point basis then
Spell Power for well geared moonkin if you only consider Starfire. So, with the massive amounts of Spell power we are receiving by upgrading to ICC level gear, haste is improving dramatically even
with the soft cap.
I won't detail the math but take a look at this graph.
What I did here was find the marginal gain in DPS by adding 1 point of Haste Rating and Crit Rating to various levels of Spell Power for the spell Starfire. The base levels of Crit and Haste were
held constant at 55% and 31% respectively.
Notice that the gap between the lines at the start of the graph is much smaller then the gap at the end of the graph. This shows that the slope of the +1 Haste line is bigger then that of +1 Crit,
and therefore Haste scales better with Spell Power then Crit Rating does when not held back by a soft cap. When you combine this with the fact that we've been holding our Haste levels relatively low
and the dramatic increase in Spell Power we have received from ICC, it is easy to see why Haste Rating is surging in value relative to Crit Rating.
Yes, You Do Want Celestial Focus.
I can understand why some new moonkin would consider dropping
Celestial Focus
from their talent build. We "experts" tell everyone that Haste drops significantly in value after you reach 400 Haste Rating, and any moonkin who makes the right gear choices will be well over 400
haste rating by the time they are raiding ICC. Logically, if the soft cap diminishes the value of Haste Rating it also diminishes the value of Celestial Focus. If you don't have a lot of experience
with the balance tree you may question if there is a better talent option past 400 haste. The answer is no, there is not.
First, as I showed above Haste Rating is very good for Starfire well past the Soft Cap of 400 haste rating. While that extra 1%-3% of haste that CF provides may be useless to Wrath and the instants,
it is still very good for a good portion of your rotation with Starfire.
Second, if you drop Celestial Focus you are ignoring the value of pushback protection. In today's raiding environment it isn't a huge need, but it does help your DPS. Therefore, it is important to
remember that if you drop CF from your talent build you are loosing more then just the Haste.
Finally, where else are you going to put the points. Genesis sucks. If haste is unimportant to you then Mana Regen should be as well. Brambles and Owlkin Frenzy are only minimally useful. Typhoon and
Gale Winds are only situationally use full. Fact of the matter is, even though Celestial Focus is diminishing in value with our increasing level of Haste Rating, there is no better place to put the
talent points.
If you want to find a precise valuation of Haste Rating you will need to use a tool like WrathCalcs or SimulationCraft, but I am confident in saying that Haste Rating is better then Crit Rating for
ICC geared moonkin. I realize this may confuse some people given the number of times we "experts" have flip flopped on the value of Haste over the course of this expansion, but it is true.
When you combine the huge amounts of Spell Power we receive ICC with the Lunar Crit cap, haste surges in value as we close out this expansion, though it is still lower then it would be with out the
Soft Cap.
This also shows that the Haste provided by Celestial Focus has value even past the soft cap. Given that there are no better options on how to spend those 3 talent points and Haste is still very
valuable for Starfire, Celestial Focus should remain a part of the core moonkin talent build.
43 comments:
Maestro said...
I don't think that it's going to make much of a difference here, but increasing haste by 1 point and increasing crit by 1 point gives different percent increased in the values of haste and crit.
Given that each point of haste is worth more of a percent than each point of crit, i can see where it would seem to scale better on a per point basis. I suppose then the question becomes whether
one point of haste is equivalent to one point of crit in terms of item budget. I honestly don't recall the answer to that. I was just wondering.
Indeed, for Starfire, Haste is the way to go, but keep in mind that by stacking crit, you increase the value of Wrath in your rotation. Since Wrath and Starfire are at about the same percent
damage-wise during a fight, and also the fact that more crit=more Nature's Grace uptime, i'd say don't value Haste so much over Crit. In my opinion, until we get to levels of 90-100% Nature's
Grace uptime, crit is very close or even slighty higher than Haste. But of course i didn't test this using some simulation, it's just my logical answer to Haste vs Crit :)
A lot of people also don't understand that the "haste soft cap" is binary. It doesn't really matter if you are a little bit over or a lot over it.
I think you are overconsidering haste.
Im running at 40%+ crit unbuffed in night elf form, and 544 haste.
during a whole boss fight starfire crits 90% of the times and wrath 70%
If you add starfall , natures grace is up probably 80% of the time.
That means that wrath benefits of more haste only when i dont land a crit (30% of the times or less)
Stacking haste and losing crit means NG is up much less.
Im still convinced crit>haste besides what wrathcalcs says.
That said, it seems that some dudus with high haste skip solar eclipse and spam starfire all the time. I am definetely not one of them!
Graylo said...
Yes, Haste and crit have the same itemization value. People have worked it out with green items, but you can also tell with gems.
Sorry but math trumps guesses. I don't know how you are geared but Most ICC25geared moonkin I know are already close to 90% NG uptime. You also missed the main point of the post.
Haste scales better than Crit.
Crit does increase your NG uptime but not much. That is a much overstated benefit. Crit also is does not scale as well with spell power. So at the high levels of ICC SP Crit is inferior.
Ignoring the math is dumb. Your only hurting your DPS. You may be getting 90% SF crits but your gettting them slower.
If you guys want to argue with what I've said, then bring math, otherwise your arguement means nothing.
Dentex - Dragonblight said...
The only dumb thing i see is having a main cast spell (wrath) constantly below the gcd just to spam starfire faster.
I dont do spreadsheets but take a look at this:
This is your latest DBS kill graylo (the kind of fight where you can measure spell dmg without "disturbing" things like malleable goo, impaling or sindragosa debuff)
This is my most recent kill:
Look at numbers (my wrath vs your starfire) here's my math.
You should write in the title of the post "for those who love to spam starfire, haste is better than crit"
With 1000+ for both, I am still socketing potent rather than reckless.
Where there is target swapping, I'm often missing 3% crit since we don't run with an ellie shaman, and there's rarely a muti/ret on my target.
I'm also often missing a further 5% unless the warlock happens to be on my target.
Haste wins on a boss, yes. If we're target swapping it generally means that something needs burned down fast, hence why I've been working out my gemming assuming I'm lacking crit buffs.
Graylo, is it worth it to consider the value of crit vs. haste wrt to eclipse procs?
In a very movement intensive fight, there are a lot of times when you are dpsing without an eclipse currently up. During this time, your dps is fairly low; I would imagine (but don't have math to
back this up) that crit, point for point, improves the average time it takes to proc eclipse, but haste also improves the starfire dps when you're specifically trying to proc solar eclipse.
Of course, theoretically,if you're casting wrath and NOT proccing Lunar eclipse, then most likely you're aren't critting, hence NG isn't up, hence haste cap is different.
OK, let me back up... this is too complicated. Are there models that take in account random segments of your rotation that are interrupted by 3-7 seconds of movement?
IMO maxing dps around movement is essential for anyone concerned with maxing out their characters potential. However, there's a lot more simulations and theory crafting around the idea of
standing in place and nuking.
Dentex - Dragonblight said...
I want to add something:
What is the purpose on getting shitloads of haste to lower starfire to a 1.5 sec cast, adding another cast in a 12 sec buff duration (when probably your next starfire is gonna crit anyway) and
not considering stacking crit for chain critting 20k/25k wrath during solar eclipse?
Refreshing your mind:
Lunar eclipse = 40 % crit to starfire (60% chance to proc from wrath critical strikes. benefits marginally from haste, and fully from spell power procs)
Solar eclipse = 40 % dmg to wrath (100% ONEHUNDREDPERCENT chance to proc from starfire critical strikes. it doesnt benefit from haste, but it does from IIS, critical stike rating and spell power
Guess who wins
Boomkins are made to crit. Leave haste to mages
Maragon said...
Starfire is our primary nuke for a reason. Starfire has a 1.0 +spelldamage coefficient whereas wrath only has a coefficient of .57.
I don't know what you're hoping to prove by posting your logs and Graylo's - I have parses of me ruining your DPS while severely favoring haste over critical strike rating.
Here's my world 60-something Saurfang kill:
I'm running with 981 haste and less than 32% critical strike out of form. And if you peruse the higher DPSing moonkins, you'll see similar haste values.
Unless you can provide us with mathematical data explaining why crit should be valued over haste, then we'll have to dismiss this as merely your opinion.
Maragon said...
You state - "Refreshing your mind:
Lunar eclipse = 40 % crit to starfire (60% chance to proc from wrath critical strikes. benefits marginally from haste, and fully from spell power procs)"
Please explain how you've come to the conclusion that lunar eclipse(aka starfire) benefits "marginally" from haste.
In reality, starfire benefits FULLY from haste. Hence why haste is such a powerful stat.
I'm perfectly fine with the math saying haste wins. I prefer to have a balance of my stats - so I don't completely factor out haste. I believe I'm at 880 or something in my gear.
However, I do favor crit. Experience has told me time and time again that crit has always helped me with Eclipse fluidity. I don't ignore the math - however, I don't follow it completely.
It's always going to come down to a matter of personal preference - for now - until a MAJOR gap between the two is made apparent. I do not believe the gap between Haste and Crit for Balance
druids is that major (think Shadow Priest, Affliction Warlock, Elemental Shaman gaps, etc).
@Maragon: The highest DPS moonkin - me - favors crit: http://www.worldoflogs.com/reports/wr7fk090zbdidk3k/sum/damageDone/?s=1974&e=2160#Silveria
It's just personal preference, for now. I prefer a balance of the two with a slight lean towards crit.
Everyone that posted a log:
Your log is ONE possible thing to happen with your favor for crit or hast. Run the fight like 9999999999 times and maybe you get some better facts to share... statistics is a bitch!
(I'm really sorry for my probalby crappy english)
Maragon said...
@ Silvera:
It's hard to say that you "favour crit" when you have 883 haste and only 36.51% crit. You may GEM crit, but you could certainly pick out pieces that would afford you more crit than haste - which
you're not doing. Your pants are a good example of you choosing better stats - haste, spellpower and hit - over more critical strike.
And I should hope you're out-dpsing me in your gear which is practically BiS - and nice PI, brah.
It's not a hard decision to choose a piece that is itemized better over the tier 277. One more socket, more overall spellpower - I would be an idiot not to choose them considering how close I
value them. If they had crit on them I would like them more, but they don't.
Your hostility is cute. I have some fantastic gear, yeah, I'm lucky. I got one PI, great. I wasn't trying to make this a DPS competition, all I was doing was stating MY opinion, which is an
OPINION - I prefer crit. It has done me well (evidenced by my logs). I consistently perform well with it, PI's or not, and although the math states that haste has a higher value, which I do not
doubt, I feel that the values are close enough to where you could prefer one or the other - or simply a balance, which is mostly what I go for. Like I said, I lean SLIGHTLY towards crit - which
is why I made the better decision regarding those pants. I gem crit because that's what I prefer.
@ Maragon:
60k more dmg over almost 3 million is not "ruining" my log. I could do better on a lucky RNG try and your 277 trinket with half your haste.
Anyway i didnt post the link to the dps section to showoff, but i did post the link to the specific spell dmg just to show that you can do better and/or similar results with different stats.
I still believe haste is overconsidered, because spreadsheets are one thing. the real fight is another thing.
Spell dmg coefficient is directly proportional to the cast time. with a base slight bonus to wrath. solar eclipse is the only one adding a damage bonus, which doubles every crit. thats why is win
in my opinion
Im sure in the next future i will need to raise haste because of the gear itemization, i will not cry about it.But for now i keep my crit, and encourage people to stack it before starting to
stack haste.
Maragon said...
"I could do better on a lucky RNG try and your 277 trinket with half your haste."
Prove it - or it's heresay, which is as worthless as your other opinions on haste and critical strike values.
Just because you SAY you could match my numbers doesn't mean you can anymore than my saying I could match Silvera's numbers if I had his gear and a PI means that I could.
@ Silvera
"Your hostility is cute. I have some fantastic gear, yeah, I'm lucky. I got one PI, great. I wasn't trying to make this a DPS competition, all I was doing was stating MY opinion, which is an
OPINION - I prefer crit. "
I hardly think I'm being hostile - I'm merely reacting to your direct challenge to my log by posting your log. You could have stated your opinion without posting a log.
If you really want to dispute the current theories, bring some math to do so.
There simply isn't that much theorycrafting regarding how crit can affect fluidity of Eclipse transitions, especially where you are cycling through different targets that are not fully raid
debuffed (i.e. 70% of ICC fights, the most important of which being LK) Those sort of factors can close the 'gap' between crit vs haste, and even end up favoring crit substantially.
I choose to gem sp/crit rather than sp/haste just for that edge. The rest of my pieces are essentially chosen because they're better pieces (like the tier pants vs plaguebringer's discussion that
happened earlier).
WTB Balance dots scaling w/ crit. :( I think that would end the 'debate'.
- Dignam, Crushridge US
Also, forgot to mention this regarding the crit vs haste debate.
When it comes to Lunar Eclipse 'clairvoyance', there is a level of haste where watching your natures grace and predicting (I normally predict at 1-2 simultaneous refreshes) where you end up
finishing your Starfire before you actually gain the Eclipse buff, screwing you over pretty hard. Generally, being excellent at predicting the transition has always been one thing that separated
good Moonkin from horrible ones, and now its suddenly become incredibly hard to do.
- Dignam, Crushridge US
The case being argued here is flawed from both perspectives. The only time when you should ever be considering whether to stack crit or haste over one another is when gemming. How many yellow
sockets do you have on your gear where the socket bonus is worth taking? (im assuming all yellow sockets are worth gemming reckless/potent except in situations where you need 2 yellow gems to get
the bonus...) 4? 6? possibly 8 at a pinch? So you're talking about a difference of upto 80 haste or crit here. 2.6% haste or 1.9% crit. In the grand scheme of things, a trivial amount.
When gearing, you should never be considering "do I want that piece cos it has more haste or do I want that piece cos it has more crit?" Instead, all you should be thinking is "where can I put my
hit so that I can maximise the amount of crit AND haste on my gear. Spirit is never really an option unless there is no drop with haste+crit, both are worth about 1.3 times the value of spirit.
(you can argue wrist slot all you like, hit wrists are still currently the best except when only capping hit to heroic presence, and come 3.3.5 there will be a haste/crit alternative that drops
from the Ruby Sanctum.)
So how many alternatives with haste+crit are there in each slot and at each ilvl? Most of the time only one, occasionally 2 - one leather one cloth with identical stats - and very rarely 2, one
10 man heroic mode item and one normal 25 man item, one with more crit, and one with more haste, so at heroic level, generally there is only ever one choice in every slot anyway. When you are
faced with a choice? Never go for 'the one that has more crit' or 'the one that has more haste', but always the one that has the greatest accumulation of the two and the most spellpower.
(although they should have the same sp at the same ilvl)
The only real dilemma should be (as I said above) where to put your hit in order to maximise both crit and haste.
If you read Graylos post properly, you should have noticed that he specifically states that haste scales better with spellpower, not 'haste is better than crit'. If you took the graph he
presented and changed it to Wrath casts instead of Starfire, you would see the lines reversed (with haste levelling off much earlier) but with the top value of crit being lower than that of haste
on the SF graph due to Wraths lower coefficient. This would show overall that haste scales marginally better with SF than crit does with Wrath and as such is the reason why you're better off
gemming reckless ametrines than potent ones.
Overall tho its such a minor gain that you would probably see it swallowed up by RNG most of the time.
There is barely ever a choice to make when gearing. Haste+crit in every slot, except where you want to put hit. Spirit is acceptable whilst waiting for a haste+crit upgrade, but won't be BiS. Gem
reckless over potent, although the difference could be considered preferential.
Yes, this 'debate' all ends entirely with whatever you find preferential. The more interesting debate is how to actually interpret data you read from theorycrafters and applying it to actual
fights where you don't just stand there and nuke away the entire time with a fully debuffed target.
To me, that is more important than the solid math since Boomkins and 98% of fights are not as two dimensional as the math makes them out to be.
- Dignam, Crushridge US
I respect Graylo and his math and I'd just like to say thank you for taking the time to do all that and for making it available to us.
We have been blessed with options here and everyone fails to see this. You can be a haste Moonkin, a Crit Moonkin or a Balanced Moonkin past the soft caps and do good DPS either way you go. This
is what cataclysm will be about. Choosing your path. So you have the option to do that now with little difference in your DPS.
Did Rawr, Wrathcalc and all the theory-crafter think of starfall-buff in patch 3.3.3.? I hope you consider, that starfall scales with crit-rating but not with haste-rating. And for me, starfall
does 15%+ of my overall dmg.
Graylo, your graph would be right for pre 3.3.3. situation, but now crit- and haste-rating should be almost equal.
mfg Olli, EU/Zuluhed
"Prove it - or it's heresay, which is as worthless as your other opinions on haste and critical strike values."
Your damage:
Deathbringer Saurfang 2664188 93.6 %
Blood Beast 1
82524 6.4 %
My damage:
Deathbringer Saurfang 2148626 77.2 %
Blood Beast 634216 22.8 %
You have been full time on boss, With the obvious benefits.
And my opinion is not so worthless if this thread is becoming someway interesting.
Graylo said...
Wow, I go away for one day and the comments on my blog explode.
Lets all take a step back and realize what I say in my blog and not what you guys think I was saying.
There seems to be a common belief around the interwebz that "haste sucks for moonkin." That is not completely true. Prior to 400 points Haste rating is god. After 400 is clearly inferior to Crit
until about 950-1050 crit rating, but it still isn't horrible. After the Lunar Crit Cap haste is better then Crit again on a point for point bases but the reasoning is complicated.
Part of the reason is that the Lunar Crit cap pulls the Value of Crit down, but that is only part of the reason. The other par is that haste scales with Spell Power much better then Crit does,
and that is what my graph shows. This is how Haste as caught up to Crit in ICC. The LCC deminishes the value of Crit, but the extra 500 SP we pick up by going from ToC to ICC boosts Haste a lot
More then it does Crit.
You can argue against that all you want with your feelings, but the math proves it. Until you can provide math to say otherwise, your arguement doesn't have much of a leg to stand on.
All that said, I have NEVER said "Crit sucks." In truth we are arguing about less then 1% DPS here. I wouldn't be surprised if it was less then 0.5% DPS. The marginal values of Haste and Crit are
very similar, but haste is better. Are you going to do horrible DPS if you go with crit instead? No! Are you going to top the DPS charts if you use Haste? Not necessarily! Crit's not optimal but
it is effective.
That said, it's absolutly laughable that you guys are throwing around WoL parces to prove your points.
Crit vs Haste is the only thing that matters am I right?
The fact of the matter is that there are hundreds of variables that go into determining how well any individual player does on one specific fight. So to pull out a parse and say I think X and I
did better then you there for X is correct is ludacrous. Those results could be different because of Gear, RNG, fight strat, technical issues, or just overall skill of the player to name a few
The simulators and spreadsheets are the only things we have to come close to accurately testing these theorys. A one off parse says almost nothing.
Graylo said...
SimulationCraft and WrathCalcs takes Eclipse into account, and you can set up Simulation Craft to consider movement like activity as well. When I was using the movement function I wasn't seeing a
huge shift in results.
I wouldn't say SF is our primary nuke. I think Wrath and SF are fairly equal now. That said, Wrath does scale with Spell Power better then Wrath does. You can thank Starlight Wrath for that.
Since Eclipse procs off of Crits, people tend to over estimate the value of that relationship. Gaining or loosing a few percent of Crit doesn't dramatically shift your chance to proc Eclipse on
average. At the same time people forget that haste has an impact on how fast you proc eclipse as well. So, doing theory on "how crit can affect fluidity of Eclipse transitions" isn't as important
as you may think.
I have linked the parse with detailed dmg done(not dps or whatever), and i asked you to compare wrath vs starfire.
Since i dont do calculations, the only thing i can link is my spell dmg list on an average fight like DBS.
Then someone else started the showoff, being also aggressive in some way.
Btw /agree with Silveria
k, i see, i will be ignored without a prove, so here are my spreadsheets based on WrathCalcs 100402.xls. I put in my stats (http://eu.wowarmory.com/character-sheet.xml?r=Zuluhed&cn=Ollidrood):
After that i increased the amount of stars from starfall (formula in cooldowns->D10 ...*10). In a single-target situation that number would be correct.
Now i increased number of stars to 20 and as you can see in character sheet tab in cell D5 and D7, that critrating overhauls hasterating in scaling.
I test a little bit and with 17 stars i see crit-rating = haste-rating.
Now the question is, how often u can use all 20 stars of starfall in a boss-fight. And if u think a little bit of that situation, u will only find 3 bosses (rotface, festergut, blood-prince
council), where u couldn't benefit from all stars (although u can push your dps at rotface and council, when your stars falls at 'ineffective' targets).
regards Olli, EU/Zuluhed
Ah, i forgot to mention, that wrathcalc doesn't calculate with starfall splash dmg (that also scales with crit-rating). So in some special 'bombing' situations crit-rating should be even better.
regards Olli, EU/Zuluhed
Graylo said...
"I have linked the parse with detailed dmg done(not dps or whatever), and i asked you to compare wrath vs starfire."
So what was the point and what do you think you proved. Yes, for you you did more damage with Wrath then Starfire. Does that mean Crit > Haste? Does that prove you right?What you linked told us
almost nothing.
I am still wondering if your graph incorporates Starfall in any manner. Looking back over it, Starfall is never mentioned.
Once again, I am not trying to be disrespectful in any manner and I truly appreciate the work you put in to explaining these numbers.
- Dignam, Crushridge US
Graylo said...
I don't think you understand what the graph is saying. The graph shows how Haste and Crit scale with spell power. I used Starfire as the bases for the comparison because it gives the clearest
picture since it is not limited by the GCD and is not an instant cast.
This post is purely an explination on how haste has improved so much since patch 3.3.
Yes, Starfall does improve the marginal value of Crit more then it does Haste because haste has little impact on the DPS of Starfall. Same can be said for Wrath, or Moonfire. However, if you run
a reasonable rotation through a spreadsheet or a Sim you are likely to find that Haste > Crit for your gear level. This is because the Lunar Crit cap limits the value of Crit, and Haste scales
with Spell Power for one of our primary spells so much better then it does with Crit rating.
@ Olli
Are you including glyph of focus in your WrathCalcs sheet? If so, this is probably not as valid a comparison as the general consensus on Glyphs doesn't include it.
I feel I need to reiterate some of what I said above as the discussion still seems to be heading in the direction of haste v crit.
There should never be a point, (nor should there really ever have been), where you start deciding to 'stack' either crit OR haste. You should be trying to increase both simultaneously. The only
time where you should even be contemplating whether one is greater than the other is when gemming, and that difference is going to be minute! Haste AND crit in every slot, except where you have
to have hit. And you do have to have hit.
Linking parses proves nothing except how big your ego is.
"So what was the point and what do you think you proved. Yes, for you you did more damage with Wrath then Starfire. Does that mean Crit > Haste? Does that prove you right?What you linked told us
almost nothing."
I have proven nothing, you are right. But i have seen that i can do much more damage with wrath with a high critical strike rating.
"Having some of your Crit Rating rendered useless for 30% of the fight does diminish the marginal value of Crit Rating."
That is true, but you did not add that the extra crit applies during solar, when you have a +40% bonus damage (which doubles each time you crit). And the extra haste that benefits starfire is
useless on wrath in the same %
"What I did here was find the marginal gain in DPS by adding 1 point of Haste Rating and Crit Rating to various levels of Spell Power for the spell Starfire"
This is the main reason why i am posting here. You made a statement "Haste>Crit", you made a graph that applies only to starfire spell. That is obvious
Can you pls explain how critical strike rating scales with wrath and starfall?
Thats why i posted a direct link to my spreadsheets (download them and try yourself) - no, i didn't use focus glyph.
I only wanted to mention, that Graylo's haste>crit thesis is based on single-target view done with his own analyis or simcraft/wrathcalc-tools etc. But ICC isn't only to be single-targeting,
quite the opposite, it's more multi-targeting, sometimes nearly bombing situations (lichking). And the missing splash-dmg in the spreadsheet-tools will do an additional pro-scaling for haste.
Yes, haste scales better with sp, than crit does, but you have to be realistically. An ICC-25hm euipped moonkin couldn't reach that high sp levels, if you take 20 falling stars in account.
regards Olli, EU/Zuluhed
Graylo said...
"I have proven nothing, you are right. But i have seen that i can do much more damage with wrath with a high critical strike rating."
That's like saying I can go faster if I put a V6 in my Jetta. True, but you can go even faster if you just get a Ferrari.
"That is true, but you did not add that the extra crit applies during solar, when you have a +40% bonus damage (which doubles each time you crit). And the extra haste that benefits starfire is
useless on wrath in the same %"
True, but that was not the point f the post. I wrote this post to show why Haste is still good, and why it has caught up with Crit rating when you've past both caps. I only made this comment
because in every one of your comments you seem to ignore the negative aspects of crit, while focusing on them for Haste. Both stats scale in the 55%-65% range, and that is bad for druids in
general, but one still better then the other.
"This is the main reason why i am posting here. You made a statement "Haste>Crit", you made a graph that applies only to starfire spell. That is obvious"
I ran a SimulationCraft on you. It had Haste valued at 63% and crit valued at 55% per point for you. Haste is greater then Crit, especially for you given your encredibly large Crit Rating levels
and very low Haste Rating levels. That is a fact.
"Can you pls explain how critical strike rating scales with wrath and starfall?"
Crit scales with Wrath and Starfall in very similar ways. Wrath gets a slight edge because it's higher spell coeffiecent and lower average crit rating because of ImpIS, but that difference is
very small. So the like in the Starfire graph is applicable to Wrath as well and likely very similar to a line that you would finde for Starfall. However, that is the wrong way to look at it.
It's not that Crit scales better for Wrath and Starfall but Haste scales worse for those individual spells. Everyone knows it, but some people take that info and use it in an incorrect manner.
they assuming that since Haste is not good for 50-60% of my rotation then Haste is not good. Wrong!
If you're over the Crit cap, then Crit rating is going to be useless for about 40-50% of your rotation. Some would take this as a sign that crit is still better since Haste does not impact more
of your rotation, but that ignores how the two stats scale with spell power.
My post above shows that Haste Rating Scales with spell Power much more then Crit rating does. This is how Haste has closed the gap between the two stats. Haste scales so much better for STarfire
then Crit does for Wrath and Starfall.
Post is not including 4 piece tier 10. IMO with 4pt10 Crit > Haste
"However, if you run a reasonable rotation through a spreadsheet or a Sim you are likely to find that Haste > Crit for your gear level."
Did that with wrathcalc and with 20 falling stars from starfall (and missing splash-dmg, which can hit critically too) it shows crit>haste with an ICC-hm euipped moonkin.
regards, Olli EU/Zuluhed
Graylo said...
While I did not include 4T10 in the graph I did run the numbers with it. It does improve the marginal value of Crit a little, but not as much as you obviously think. Not to mention that all the
sims and spreadsheets would include the set bonus. Sorry, Haste > Crit even with 4T10.
Tried looking you up on the armory but couldn't find you so I can't verify what you said. However, I do know that not every fight has adds, and even on the fights that do not all of them are
significant. Don't overestimate the value of what you are seeing.
Here's my initial post with armory- and spreadsheet-links: http://graymatterwow.blogspot.com/2010/06/celestial-focus-haste-and-icc-gearing.html?showComment=1276186134046#c6850394069605064783
However, I do know that not every fight has adds, and even on the fights that do not all of them are significant.
Can't agree at all, see post above.
Don't overestimate the value of what you are seeing.
With my initial post i only wanted to note, that it doesn't matter, if you prefer crit or haste as an ICC25hm raiding moonkin, because
1. the difference is quite small
2. it depends on encounter, whether haste or crit would be a little bit more usefull.
3. you can't reach that high sp-levels with ICC25hm gear, where you can definitely say: haste>crit.
regards Olli, EU/Zuluhed
WoL: http://www.worldoflogs.com/guilds/25472/
I like your ferrari/jetta comparison. Made me sincerely smile :)
Lets say ferrari has low torque, but great top speed. Jetta has lower top speed but higher acceleration (torque). What would you pick up for a uphill road and what would you pick up for a
straight highway?
Besides those jokes, i respect your calculations and probably some more haste will benefit me when my +crit choices are over..
But i wanted to ask you something else:
Do you know how the +40% solar bonus damage scales with the various spell power procs we have?
So far, every respectful moonkin in the world has the Ashen Verdict ring. Almost everyone has at least one BiS trinket (Phylacery or Dislodged or maybe both)
On an average fight (lets' say 4 minutes) we have at least 3(or 6 for the lucky guys having both BiS) trinket procs, and let's say 3 Ring procs.
Those procs are all pure Spellpower, and there is no haste or crit proc as far as i know.
How do they interact during solar and lunar eclipse?
Could this be a variable to be taken into consideration in the various spreadsheets?
Thanks in advance
Three things.
The marginal value of crit increases slower then haste as spellpower only increases (graylos chart)
The marginal value of crit decreases as Int increases since int gives crit also.
The marginal value of haste increases as int increases.
Thus as higher levels of gear with all their int and spellpower are gotten... at some point haste overtakes crit, end of story. You can argue we're not at that point yet but simulations and logs
can settle that.
I know this post is old, but I feel the need to speak up in Graylo's defense.
He is correct, at the soft crit cap, haste beats out crit by about .2-.3 dps per point in full raid buffs and 30% ICC buff.
This is not enough of an increase to regem all your 264 gear with haste, but as I start replacing my gear with 277 gear, I gem spell/haste in yellow sockets while maintaining around 960 crit. I
keep it over the crit cap, because you can't always be sure to get every buff.
Again, like all SOFT caps, they are break points to keep in mind, not to stress over. If you go over it, don't change your gear and gems to get back down to it.
The reality is, in heroic 25 man gear, you are going to be way over both soft caps, and keeping them balanced is a better solution.
http://www.worldoflogs.com/reports/72i58np7rpte996y/details/7/?s=2434&e=2650 is the link to my moonkin, if you want a comparison | {"url":"https://graymatterwow.blogspot.com/2010/06/celestial-focus-haste-and-icc-gearing.html","timestamp":"2024-11-10T05:03:24Z","content_type":"application/xhtml+xml","content_length":"149918","record_id":"<urn:uuid:21775072-9423-4a04-827f-80ff37c1f1be>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00744.warc.gz"} |
Is the Landlord’s Rent Increase Lawful? Trust But Verify!
Beverly Hills is a rent control city which means the allowed annual rent increase for rent-stabilized tenants is regulated by a local rent stabilization ordinance. The ordinance determines the
allowable rent increase and the city posts that percentage online. Sometimes the landlord raises the rent and a tenant will wonder, Is the landlord’s rent increase correct or even allowed by law?
Here we explain how to find the allowed percentage and verify that the rent increase imposed by the landlord is correct and lawful.
The Maximum Allowed Rent Increase in a Nutshell
First the basics: what the Beverly Hills rent stabilization ordinance does and does not allow for rent-stabilized households. Residents of newer multifamily buildings, condominiums and single-family
homes are not rent-stabilized but may find some protection under state rent control. For rent-stabilized tenants:
• The rent can be increased only once in any 12-month period.
• The maximum allowed maximum increase percentage for Chapter 6 tenants is determined once annually in June, and posted that month, with new percentage taking effect on July 1st.
• The Chapter 5 maximum allowed annual increase is recalculated each month, is posted monthly, and that percentage is applicable only to a Chapter 5 rent increase that will take effect in that
• The landlord can demand the maximum allowed percentage or demand no increase at the landlord’s discretion.
• Only failing to register a unit with the city’s rental unit registry will prevent the landlord from demanding a rent increase.
• Notice of rent increase is formal change in the terms-of-tenancy which requires a dated written notice which must then be served personally to the tenant no fewer than 30 days before the rent
increase takes effect; OR mailed no fewer than 35 days before it takes effect (out-of-state mailings are 40 days).
• An improperly-served notice (for example a notice of rent increase by email, text, simple posting or even verbal) is not valid and should be corrected with a new notice served properly that
starts the clock anew;
• If an improper notice is not called-out by the tenant then it is presumed to be lawful notice.
This post goes on to explain how to calculate and verify the percentage rent increase and how to calculate the lawful rent over a period of years to ensure that you are not being overcharged.
Calculating the Percentage Change Between Old and New Rents
After receiving notice of a rent increase a tenant will often ask, Can my landlord raise the rent by that much? To verify that the rent increase is lawful we must calculate the percentage change
between the old rent and the new rent. First determine the dollar difference between the original rent and the rent after the increase; then compare that to the original rent. How much has it
increased? The steps:
1. Subtract the new rent and from the current rent prior to the increase. Example: $2,062 – $2,000 = $62.
2. Divide the monthly dollar difference by the original rent. Example: $62 / $2,000 = .031
3. Multiply the numeric increase by 100 to arrive at the percentage: .031 X 100 = 3.1%.
If the answer is a negative number then that would be a percentage decrease. We expect no tenant will find a rent decrease!
Calculating the New Rent Using the Announced Percentage
Once the city establishes the maximum allowable rent increase percentage a tenant will often asked, What is my new rent after the increase? That percentage difference is determined by multiplying the
original rent by the amount of the increase and then adding the dollar different to the original rent. The steps:
1. Convert the percentage figure (3.1%) into a decimal by dividing it by 100. Example: 3.1 / 100 = .031
2. Multiply the original rent by the rent increase to get the monthly dollar increase. Example: $2,000 x .031 = $62
3. Add the dollar amount of the increase to the original rent to get the new rent. Example: $2,000 + $62 = $2,062
Calculating the Allowed Rent Increase Over Time
The principle for calculating the lawful rent through successive periods of rent increases is really the same as for calculating the correct new rent for any one period: 1) identify the multiplier
that corresponds to the percentage increase at the time of the rent increase; 2) determine the additional dollar amount in rent as a result of the increase; and 3) add that monthly dollar amount to
the current rent to determine the new lawful rent. Then rinse-and-repeat through each period where the rent was increased.
Take for example the annual reporting of our rent amount by the landlord. (The figure is shown on the ‘notice of rent reported by the landlord’ that we receive every spring.) To determine whether the
reported rent is the lawful rent we should look farther back than the most recent rent increase; we can look to the original base rent at the beginning of the tenancy (what is on the lease) and in
our calculation apply each successive rent increase.
If for example the base rent at the beginning of the tenancy was $2,000 then we simply step through the subsequent rent increases:
• Year 2 allowed a 3.1% increase, according to the city, so after the first rent increase the new rent would have been $2,062.
• Year 3 allowed a 1.6% increase, according to the city, so after the second increase the new rent would have been $2,095.
And so on. We work our way through each rent increase period through to the current period.
When might this kind of calculation be useful? When we want to verify whether the current rent is actually the lawful rent relative to the base year taking into account successive rent increases. One
or more excessive rent increases can have a significant effect due to cumulative payments in excess of the lawful rent as well as annual compounding of the rent increases each year.
Here is an example where the current rent is not the lawful rent relative to the base year. This hypothetical nine year tenancy shows how two early unlawfully high rent increases (in red) over time
add up to $51 in excess rent each month.
The cumulative impact of those early excessive rent increases is very significant! In the final year the tenant has overpaid the landlord by $612 but over the entire nine year tenancy the tenant
overpaid the landlord by $4,080…all because those early two rent increases were excessive.
We encourage every tenant to review the entire rent record at least once when the city sends out the ‘notice of rent amount reported by the landlord.’ That is a great time to document over-payment
and take action to claw-back the excess payment of rent. To do that on paper download our rent increase worksheet. For more about the notice of rent reported by the landlord see our explainer: Rental
Unit Registration: What You Need to Know. | {"url":"https://bhrentersalliance.org/2023/07/is-the-landlords-rent-increase-lawful/","timestamp":"2024-11-02T19:02:45Z","content_type":"text/html","content_length":"86411","record_id":"<urn:uuid:5338e0a5-cfba-4ace-b3d5-da9159800af1>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00115.warc.gz"} |
The official Formula 1 thread
Hey TPR!
Some of you may, but Im sure most of you don't know that F1 is about to kick off in Australia tonight for the first race of the season. I remember there being a little interest in the sport on here,
but hey what the hell I thought I'd start a thread about it anyway!
I'm gambling with the fact that some of you might be interested, but I do want to keep this a regularly updated page if its something people find interesting. So here it is for the Australian GP!
Starting Grid:
Pos No Driver Team Q1 Q2 Q3 Laps
1 Lewis Hamilton McLaren-Mercedes 1:26.572 1:25.187 1:26.714 14
2 Robert Kubica BMW 1:26.103 1:25.315 1:26.869 15
3 Heikki Kovalainen McLaren-Mercedes 1:25.664 1:25.452 1:27.079 13
4 Felipe Massa Ferrari 1:25.994 1:25.691 1:27.178 12
5 Nick Heidfeld BMW 1:25.960 1:25.518 1:27.236 16
6 Jarno Trulli Toyota 1:26.427 1:26.101 1:28.527 17
7 Nico Rosberg Williams-Toyota 1:26.295 1:26.059 1:28.687 21
8 David Coulthard Red Bull-Renault 1:26.381 1:26.063 1:29.041 18
9 Timo Glock Toyota 1:26.919 1:26.164 1:29.593 17
10 Sebastian Vettel STR-Ferrari 1:26.702 1:25.842 No time 18
11 Rubens Barrichello Honda 1:26.369 1:26.173 13
12 Fernando Alonso Renault 1:26.907 1:26.188 10
13 Jenson Button Honda 1:26.712 1:26.259 13
14 Kazuki Nakajima Williams-Toyota 1:26.891 1:26.413 13
15 Mark Webber Red Bull-Renault 1:26.914 No time 8
16 Kimi Räikkönen Ferrari 1:26.140 3
17 Giancarlo Fisichella Force India-Ferrari 1:27.207 9
18 Sebastien Bourdais STR-Ferrari 1:27.446 10
19 Adrian Sutil Force India-Ferrari 1:27.859 9
20 Takuma Sato Super Aguri-Honda 1:28.208 9
21 Nelsinho Piquet Renault 1:28.330 6
22 Anthony Davidson Super Aguri-Honda 1:29.059
Also, be sure to check out these websites!
Offical F1 website
Really awesome Australian F1 podcast!
So if there is interest in this, I'll keep the page updated. If not, hey atleast I gave it a shot!
Oh and go Ferrari.
Top Posters In This Topic
Hopefully this will be a good season for McLaren, from what i hear the car is a lot better than last season and they didn't do to badly then
Any predictions for the drivers/constructors champions? I'm being the optimistic brit and backing Hamilton to win on his second season! At least it's not as optimistic as backing Coulthard (who i
supported through my childhood because we shared the same first name ).
Dave "yay for F1 thread" Wilson
The only kind of racing I like involves dirt and ovals (local track is about to reopen and I am helping), but f1 is neat to watch and listen to also.
I see two Japanese drivers. I hope they are both men. Japanese women are the worst drivers.
Well Sato is already out, and I don't know how well Nakajima is doing (commercial break!) but Kimi and Massa arn't doing well.
My prediction is that Hamilton will take the championship this yeah, but I really dont like McLaren-Mercedes. I'm HOPING that Ferrari will win it again, but they need to get it together.
20 to go!
ugh....i watched half the race and it's the same boring thing as last year. I was hoping that the no traction control would help overtaking, but unfortunately....
I'm thinking John Force will win it all this year.
^No way, man! This season belongs to Richard Petty, like it should be!
Watched the whole race yesterday. It was one of the most intereesting grand prix's i've seen for a while, it was exciting all the way to the chequered flag. You just didnt know what was coming next,
which was a major plus in my view.
Where's Ricky Bobby? Or Cole Trickle?
^It's Dick Trickle!
Here are the results from the Australian GP:
Pos No Driver Team Laps Time/Retired Grid Pts
1 22 Lewis Hamilton McLaren-Mercedes 58 1:34:50.616 1 10
2 3 Nick Heidfeld BMW 58 +5.4 secs 5 8
3 7 Nico Rosberg Williams-Toyota 58 +8.1 secs 7 6
4 5 Fernando Alonso Renault 58 +17.1 secs 11 5
5 23 Heikki Kovalainen McLaren-Mercedes 58 +18.0 secs 3 4
6 8 Kazuki Nakajima Williams-Toyota 57 +1 Lap 13 3
7 14 Sebastien Bourdais STR-Ferrari 55 +3 Laps 17 2
8 1 Kimi Räikkönen Ferrari 53 Engine 15 1
Ret 4 Robert Kubica BMW 47 Accident 2
Ret 12 Timo Glock Toyota 43 Accident 18
Ret 18 Takuma Sato Super Aguri-Honda 32 Transmission 19
Ret 6 Nelsinho Piquet Renault 30 Accident damage 20
Ret 2 Felipe Massa Ferrari 29 Engine 4
Ret 9 David Coulthard Red Bull-Renault 25 Accident 8
Ret 11 Jarno Trulli Toyota 19 Electrical 6
Ret 20 Adrian Sutil Force India-Ferrari 8 Hydraulics 22
Ret 10 Mark Webber Red Bull-Renault 0 Accident 14
Ret 16 Jenson Button Honda 0 Accident 12
Ret 19 Anthony Davidson Super Aguri-Honda 0 Accident 21
Ret 15 Sebastian Vettel STR-Ferrari 0 Accident 9
Ret 21 Giancarlo Fisichella Force India-Ferrari 0 Accident 16
DSQ 17 Rubens Barrichello Honda 58 +52.4 secs 10
Lots of retired racers, only 7 finished, but Rubens was DQ'd for exiting the pits under Red.
There usually are alot of cars dropping out over the first couple of races, but Ive rarely seen it that bad.
Did you hear DCs rant about Massa? "If he doesnt apologise, Im going the kick 10 colours of S*** out of the little B******!". Thats almost as funny as the time he told Louise he would have to treat
the speedlimiter button gently by pretending it was her nipple. lol.
It was nice to see the drivers really struggle for a change. Its about time their skill was really tested. Its looking like its levelling the field a little more.
I think Lewis is going to run away with it unless Kovy gets his act togather. Ferrari look to be all at sea, with ferrari engines failing for two teams. They have pace, but its not a good sign that
other teams ferrari engines have been blowing as well.
Is there rubbing in F1? Afterall, "rubbin', is racin."
^It's Dick Trickle!
I think he was referencing Tom Cruise's character in Days of Thunder.
Dick Trickle is by far the all-time best NASCAR driver's name.
Is there rubbing in F1? Afterall, "rubbin', is racin."
There's more chance of "rubbin" in F1 than in NASCAR! Its open wheel racing! Touch a tire to ANY part of another car, and you are pretty much screwed!
^It's almost as good as Dick Withers. He lives in Coldwater Michigan.
"Do you know Dick Withers in Coldwater?"
^It's Dick Trickle!
I think he was referencing Tom Cruise's character in Days of Thunder.
Dick Trickle is by far the all-time best NASCAR driver's name.
Exactly. Top Gun on a Nascar track.
I have a 1/32 model of a F1 Ferrari somewhere. It's massive and looks like a projects I'll get to around 2040. I have always loved the looks of F1 cars.
I like all the whacked out things the designers were trying in the late 70s and early 80s. Downforce was pretty new and cars were sprouting all sorts of wings and ground effect systems.
Did you know that Williams tested a 6 wheeled car, as an experiment to improve traction? it had 4 driven wheels at the back instead of just two. It worked, but never raced as the FIA banned 4 wheel
drive, and even though all 4 driven wheels were at the back the car still fell foul of the rule.
Tyrrel also developed a 6 wheeled car which did race, with four wheels at the front. It did rather well until the tyre manufacturer failed to develop the small front tyres properly. Performance
dropped off and it was scrapped.
Im sure there was a third six wheeler but I cant be bothered to go look it up.
EDIT: Actually yes I can. It was the March 2-4-0 but it never raced in an F1 race.
I always watch the F1 races.
This will be the first year I will miss a race.
Going to Dorney Park On may 25 during our USA trip when the Monaco GP will be held.
Well it's an acceptable reason to miss a GP.
There were two things that surprised me during the Australian GP.
1. The bad performance of Ferrari.
2. The race pace of Honda.
I mean Kimi Raikonen was not able to overtake Barrichello for 10 laps or even more. Can't say anything about Jenson Button since he was pushed out of the race on lap 1.
Don't you guys think we need another dutchman in F1.
Anyhow I'm looking forward to the Malaysian GP held on march 23.
^ Watching the Monaco GP will ALWAYS be > than a visit to Dorney Park.
Remembering Senna drive the streets of Monaco.....even better. He may of been a McLaren guy, but he's still my second favorite racer of all time, on any circuit. With quotes like this, how could you
not love the guy.
"Winning is like a drug, I cannot justify in any circumstances coming second or third."
Anyway, I missed the Aussie GP. And considering the results, I'm not really saddened by that, I guess.
Ferrari WILL rebound.
^ Watching the Monaco GP will ALWAYS be > than a visit to Dorney Park.
I agree that monaco has an unique atmosphere but most of the times it's just a parade of cars.
Here is the vid of the perect lap on monaco by Senna.
Honda finally have a better car to work with this year, so I expected a better pace from them. Its still not great but with Ross Brawn on board now, the team should start to pick up. Especially when
the circus comes back to Europe. I would really like to see button get a car which does his talent justice.
Personally Im looking forward to the Singapore race. Not just because its at night but because its a proper, well designed street course. Ive seen a plan and a CG trip of the track and it looks
Here is the vid of the perect lap on monaco by Senna.
Wow. Awesome clip. Thanks for posting that! What's great is watching him have to reach down and shift. Ah, the good 'ol days.
Awww poor Senna, he was pretty much the most amazing driver ever.
Monaco is one race I really wan't to see in my lifetime. That, Monza, and the GP of Europe are the three races I really really want to attend (plus there will be a credit there soon at the Ring!)
On a different note, Ferrari believes that the ECU was the problem that caused two DNF's in the opening race. The ECU's are now standard among all F1 cars so as to better improve driver equality, but
who makes them? McLaren. McLaren was accused last year of stealing Ferrari data and found guilty, fined $100,000,000.00, and stripped of all constructors points. This has started a BIG rivalry
between the two teams, so something like this is not at all surprising.
Oh, and Friday Practices kicks off in about 22 hours at Malaysia!
Awww poor Senna, he was pretty much the most amazing driver ever.
That was a grand prix to remember...
The bad performance of Ferrari.
I think there is something wrong with Ferrari this year: Both Ferrari's went out of the race with engine problems, and Sebastian Bourdais (Toro Rosso) had a Ferrari engine, and it failed. Not to
mention Sebastian Vettel's Hydraulics problem.
I think this year will be much closer then the past few years, which we saw last weekend. I predict big things for BMW this year, as well as Williams returning to the standard they had a few years
ago. I also predict that we shall really see who the better drivers in the comp are, now that we don't have traction control.
On a different note, Ferrari believes that the ECU was the problem that caused two DNF's in the opening race. The ECU's are now standard among all F1 cars so as to better improve driver equality,
but who makes them? McLaren. McLaren was accused last year of stealing Ferrari data and found guilty, fined $100,000,000.00, and stripped of all constructors points. This has started a BIG
rivalry between the two teams, so something like this is not at all surprising.
Yeah that's true but those ECU's are given to the teams at random by the FIA not by McLaren and it's not affecting any of the other cars.
We will see it in the next couple of races.
BTW there is rain predicted for this weekend in Sepang so that could be an extra interesting factor. | {"url":"https://themeparkreview.com/forum/topic/21617-the-official-formula-1-thread/","timestamp":"2024-11-02T08:09:19Z","content_type":"text/html","content_length":"400409","record_id":"<urn:uuid:45684bed-5266-4443-8ae8-246d565cc10f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00651.warc.gz"} |
Absolute Breadth Index
ABSOLUTE BREADTH INDEX
The Absolute Breadth Index ("ABI") is a market momentum indicator that was developed by Norman G. Fosback.
The ABI shows how much activity, volatility, and change is taking place on the New York Stock Exchange while ignoring the direction prices are headed.
You can think of the ABI as an "activity index." High readings indicate market activity and change, while low readings indicate lack of change.
In Fosback's book, Stock Market Logic, he indicates that historically, high values typically lead to higher prices three to twelve months later. Fosback found that a highly reliable variation of the
ABI is to divide the weekly ABI by the total issues traded. A ten-week moving average of this value is then calculated. Readings above 40% are very bullish and readings below 15% are bearish.
The following chart shows the S&P 500 and a 5-week moving average of the ABI.
Strong rallies occurred every time the ABI's moving average rose above 310.
The Absolute Breadth Index is calculated by taking the absolute value of the difference between NYSE Advancing Issues and NYSE Declining Issues.
Absolute value (i.e., ABS) means "regardless of sign." Thus, the absolute value of -100 is 100 and the absolute value of +100 is also 100. | {"url":"https://www.marketinout.com/technical_analysis.php?t=Absolute_Breadth_Index&id=19","timestamp":"2024-11-05T04:24:57Z","content_type":"text/html","content_length":"23260","record_id":"<urn:uuid:7d67ca1a-15c7-4ae0-8c01-b452e9a5a144>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00773.warc.gz"} |
Modeling and simulation of the human cardiovascular system
Moura, Alexandra; Sequeira, Adélia
CIM Bulletin (International Center for Mathematics), 31 (2012), 27-34
The use of mathematical modeling and numerical simulation to study blood circulation and related pathologies is an active interdiciplinary field of research. It has a great social and economical
impact mainly due to cardiovascular diseases, that represent one of the leading causes of death and morbidity in industrialized countries. Due to the complexity of the human cardiovascular system,
the use of computational models to study blood flow in healthy and pathological situations is a challenge to mathematicians and engineers. Nevertheless, it constitutes nowadays a reliable tool which
is increasingly used in clinical applications, such as the placement of stents in arteries with atherosclerotic plaques, or the understanding of aneuerysm growth and rupture. In this article some of
the fundamental aspects of mathematical modeling and numerical simulation of blood circulation will be described, highlighting in particular the pathological case of cerebral aneurysms. | {"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=6&member_id=83&doc_id=1576","timestamp":"2024-11-07T08:47:24Z","content_type":"text/html","content_length":"9051","record_id":"<urn:uuid:da4a336e-2811-4634-83d6-12bd879ccf1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00777.warc.gz"} |
Kernel-based Approximation Methods Using Matlab Pdf 53 where
Kernel-based Approximation Methods Using Matlab Pdf 53
where \(\mathsf A=(\mathsf A_i,j)=\kappa (\varvecx_i,\varvecx_j)\), \(i,j=1,\ldots ,n\), is the so-called interpolation (or collocation or simply kernel) matrix. The uniqueness of the solution of (1)
is guaranteed as long as \(\kappa \) is strictly positive definite. For a more general formulation of the interpolant that involves conditionally positive definite kernels, and for a complete
overview concerning kernel-based approximation, we refer the reader e.g. to [2].
kernel-based approximation methods using matlab pdf 53
As a consequence, the power function gives information about how the interpolation error relates to the node distributions. Indeed, as for polynomial and spline interpolation, the approximation
quality strongly depends on the distribution of the scattered data. In view of this, starting with an initial set of nodes, many adaptive strategies have been studied in order to construct
well-behaved interpolation designs, i.e. interpolation sets which provide an accurate reconstruction and, preferably, affordable computational costs. In particular, the so-called greedy approaches
match such purposes by iteratively adding new points to the interpolation set. The iterative rule is based on minimizing a pointwise upper bound for the interpolant. Precisely, those strategies rely
on the residual of the interpolant (f-greedy) or on the value of the power function (p-greedy); refer to [6,7,8,9] for a general overview. Despite the fact that we only focus on these two schemes,
other strategies that combine the two above, known as \(f \cdot p\) and f/p greedy, are available; we refer the reader to [10,11,12]. These methods fall under the context of knot insertion
algorithms, which have been studied also in the context of adaptive least-squares approximation [1, 21].
Kernel-based classification and regression methods have been successfully applied to modelling a wide variety of biological data. The Kernel-based Orthogonal Projections to Latent Structures (K-OPLS)
method offers unique properties facilitating separate modelling of predictive variation and structured noise in the feature space. While providing prediction results similar to other kernel-based
methods, K-OPLS features enhanced interpretational capabilities; allowing detection of unanticipated systematic variation in the data such as instrumental drift, batch variability or unexpected
biological variation.
The Kernel-OPLS method [21] is a recent reformulation of the original OPLS method to its kernel equivalent. K-OPLS has been developed with the aim of combining the strengths of kernel-based methods
to model non-linear structures in the data while maintaining the ability of the OPLS method to model structured noise. The K-OPLS algorithm allows estimation of an OPLS model in the feature space,
thus combining these features. In analogy with the conventional OPLS model, the K-OPLS model contains a set of predictive components Tp and a set of Y-orthogonal components To. This separate
modelling of Y-predictive and Y-orthogonal components does not affect the predictive power of the method, which is comparable to KPLS and least-squares SVMs [22]. However, the explicit modelling of
structured noise in the feature space can be a valuable tool to detect unexpected anomalies in the data, such as instrumental drift, batch differences or unanticipated biological variation and is not
performed by any other kernel-based method to the knowledge of the authors. Pseudo-code for the K-OPLS method is available in Table 1. For further details regarding the K-OPLS method, see Rantalainen
et al. [21].
Implementations of various kernel-based methods are available in the literature for the R and MATLAB environments. Among the R packages available on CRAN [23], a few relevant examples include kernlab
(kernel-based regression and classification), e1071 (including SVMs) and PLS (implementing a linear kernel-based implementation of the PLS algorithm). kernlab provides a number of kernel-based
methods for regression and classification, including SVMs and least-squares SVMs, with functionality for n-fold cross-validation. The e1071 package contains functions for training and prediction
using SVMs, including (randomised) n-fold cross-validation. The PLS package includes an implementation of both linear PLS as well as a linear kernel-based PLS version. This enables more efficient
computations in situations where the number of observations is very large in relation to the number of features. The PLS package also provides a flexible cross-validation functionality.
The K-OPLS method can be used for both regression as well as classification tasks and has optimal performance in cases where the number of variables is much higher than the number of observations.
Typical application areas are non-linear regression and classification problems using omics data sets. Properties of the K-OPLS method make it particularly helpful in cases where detecting and
interpreting patterns in the data is of interest. This may e.g. involve instrumental drift over time in metabolic profiling applications using e.g. LC-MS or when there is a risk of dissimilarities
between different experimental batches collected at different days. In addition, structured noise (Y-orthogonal variation) may also be present as a result of the biological system itself and can
therefore be applied for the explicit detection and modelling of such variation. This is accomplished by interpretation of the Y-predictive and the Y-orthogonal score components in the K-OPLS model.
The separation of Y-predictive and Y-orthogonal variation in the feature space is unique to the K-OPLS method and is not present in any other kernel-based method.
The K-OPLS algorithm has been implemented as an open-source and platform-independent software package for MATLAB and R, in accordance with [21]. The K-OPLS package provides functionality for model
training, prediction and evaluation using cross-validation. Additionally, model diagnostics and plot functions have been implemented to facilitate and further emphasise the interpretational strengths
of the K-OPLS method compared to other related methods.
Kernel methods have previously been applied successfully in many different pattern recognition applications due to the strong predictive abilities and availability of the methods. The K-OPLS method
is well suited for analysis of biological data, foremost through its innate capability to separately model predictive variation and structured noise. This property of the K-OPLS method has the
potential to improve the interpretation of biological data, as was demonstrated by a plant NMR data set where interpretation is enhanced compared to the related method KPLS. In conjunction with the
availability of the outlined open-source package, K-OPLS provides a comprehensive solution for kernel-based analysis in bioinformatics applications. | {"url":"https://www.techniquejiujitsu.com/group/young-ninja-group-ages-3-5/discussion/1d8f4b45-6dd5-48cf-80e0-a3709c2dff90","timestamp":"2024-11-02T14:33:32Z","content_type":"text/html","content_length":"1050600","record_id":"<urn:uuid:4ed770d9-a89a-4b0f-bddc-3ddb37cbc8bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00377.warc.gz"} |
Scarola Research Group
Vito Scarola
Professor of Physics
Work in our group spans several subfields of theoretical quantum physics with the aim of fostering quantum state engineering in the laboratory. The pristine environments we study typically allow for
close connection with experiment in, e.g., two dimensional materials as well as atomic, molecular, and optical (AMO) systems. Recent research directions include algorithms for quantum simulation,
modelling of quantum computing hardware, quantum analogue simulation, and topological states of matter.
Algorithms for Quantum Simulation:
Exact results on otherwise intractable quantum problems have been long sought in a diverse array of fields including: quantum chemistry, materials science, high energy physics, and solid-state
physics. Unfortunately, the complexity of quantum many-body problems has left long-standing open problems. Quantum algorithms can be used to speed up calculations on quantum many-body problems.
Work in our group constructs and examines quantum algorithms designed to solve important quantum many-body problems.
An algorithm designed to solve the Hubbard model using measurement-based quantum computing. In this scheme a resource quantum state is prepared, and the algorithm is run using just measurements.
Phys. Rev. Research 4, L032013 (2022).
Quantum Computing Hardware:
Semiconductor Devices-Semiconductor devices offered early and promising platforms for quantum state engineering because they leverage expertise in the semiconductor industry. This led to quantum
computing proposals that blossomed into large international efforts. For example, electron spins in semiconductor quantum dots can yield tunable qubits. Our work examines the role of
electron-electron interactions in harnessing and potentially even harming quantum information stored in quantum dot qubits.
Phys. Rev. A 71, 032340 (2005)
Phys. Rev. Lett. 93, 120503 (2004)
Phys. Rev. Lett. 91, 167903 (2003)
Atomic and Molecular Qubits-Progress in atomic clock technology established ions and neutral atoms as competitive qubit technologies. Long lifetimes, high precision, and optical addressing set up
these platforms as ideal candidates for quantum computing. Our work models atomic and molecular systems for use in quantum information processing. For example, we proposed a route to create high
fidelity quantum registers from neutral atom Mott insulators. More recently, the community has begun to explore the possibility of using ultra-cold molecules as qubits. Work in our group is
currently modelling molecules trapped in optical tweezers to support ongoing laboratory efforts to use these systems to build large quantum states.
Quantum Analogue Simulation
Quantum analogue simulation complements digital quantum computation by using continuous degrees of freedom to reconstruct, and in some cases, replicate otherwise intractable models and
phenomena. Running the simulator and extracting an observable is tantamount to solving the problem being explored by, for example, mapping out phase diagrams. One way to build a simulator of an
important but intractable model is to ensure that a physical system is very well parameterized by the model of interest. AMO systems offer excellent platforms for quantum analogue simulation because
of precise control and tunability of system parameters. Our work has shown how the precise control over parameters in AMO systems allows simulation of large classes of models of interest to many
areas of physics.
Topological States of Matter
Topologically ordered many-body states of matter carry unique properties such as fractionalized excitations and chiral edge currents, properties that are intimately connected to the topology of the
underlying state. Examples of topologically ordered many-body states include fractional quantum Hall states or topological superconductors.
Some topological states can serve as platforms for a robust route to quantum computing. In topologically ordered systems, excitations are anyons which can be thought of as qubits. Braiding anyons
in time and space executes quantum gates that do not necessarily need conventional error correction overhead. Topological quantum computing research is therefore a rich topic of fundamental
scientific importance with concomitant applications to quantum information science.
Work in our group on this topic centers on establishing topologically ordered ground states as viable in the laboratory. We have focused on two platforms in particular: the fractional quantum Hall
regime and in atomic systems.
Selected Works
Quantum Phases of the Extended Bose-Hubbard Hamiltonian:
The Possibility of a Supersolid State of Cold Atoms in Optical Lattices,
V. W. Scarola and S. Das Sarma,
Phys. Rev. Lett. 95 , 033003 (2005).
Dispersion of the Excitations of Fractional Quantum Hall States,
I. V. Kukushkin, J. H. Smet, V. W. Scarola, V. Umansky, and K. von Klitzing,
Science 324, 1044 (2009).
Cooper Instability of Composite Fermions,
V. W. Scarola, K. Park, and J.K. Jain,
Nature 406, 863 (2000) | {"url":"https://scarola.phys.vt.edu/content/scarola_phys_vt_edu/en/index.html","timestamp":"2024-11-04T01:12:30Z","content_type":"text/html","content_length":"35963","record_id":"<urn:uuid:8ce7160c-c8c7-4ffd-b43c-8024711e0479>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00234.warc.gz"} |
S. CAN Et Al. , "On The Properties of k+1 dimentional time like ruled surfaces with the space like generating Space in the Minkowski space.," Mathematical and Computational Applications , pp.393-398,
CAN, S. Et Al. 2004. On The Properties of k+1 dimentional time like ruled surfaces with the space like generating Space in the Minkowski space.. Mathematical and Computational Applications , 393-398.
CAN, S., Bahar, U., & Aydemir, İ., (2004). On The Properties of k+1 dimentional time like ruled surfaces with the space like generating Space in the Minkowski space.. Mathematical and Computational
Applications , 393-398.
CAN, SANİYE, Uyar Bahar, And İsmail Aydemir. "On The Properties of k+1 dimentional time like ruled surfaces with the space like generating Space in the Minkowski space.," Mathematical and
Computational Applications , 393-398, 2004
CAN, SANİYE Et Al. "On The Properties of k+1 dimentional time like ruled surfaces with the space like generating Space in the Minkowski space.." Mathematical and Computational Applications ,
pp.393-398, 2004
CAN, S. Bahar, U. And Aydemir, İ. (2004) . "On The Properties of k+1 dimentional time like ruled surfaces with the space like generating Space in the Minkowski space.." Mathematical and Computational
Applications , pp.393-398.
@article{article, author={SANİYE CAN Et Al. }, title={On The Properties of k+1 dimentional time like ruled surfaces with the space like generating Space in the Minkowski space.}, journal=
{Mathematical and Computational Applications}, year=2004, pages={393-398} } | {"url":"https://avesis.comu.edu.tr/activitycitation/index/1/9b539a90-d41d-4224-ac5f-8198a754cde2","timestamp":"2024-11-14T10:50:21Z","content_type":"text/html","content_length":"12388","record_id":"<urn:uuid:24b768b4-cd74-4304-ab01-c54f15ae6159>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00159.warc.gz"} |
Algebra in Ancient and Modern Timessearch
Item Successfully Added to Cart
An error was encountered while trying to add the item to the cart. Please try again.
Please make all selections above before adding to cart
Algebra in Ancient and Modern Times
A co-publication of the AMS and Hindustan Book Agency
Softcover ISBN: 978-0-8218-0989-1
Product Code: MAWRLD/12
List Price: $49.00
MAA Member Price: $44.10
AMS Member Price: $39.20
eBook ISBN: 978-1-4704-2478-7
Product Code: MAWRLD/12.E
List Price: $45.00
MAA Member Price: $40.50
AMS Member Price: $36.00
Softcover ISBN: 978-0-8218-0989-1
eBook: ISBN: 978-1-4704-2478-7
Product Code: MAWRLD/12.B
List Price: $94.00 $71.50
MAA Member Price: $84.60 $64.35
AMS Member Price: $75.20 $57.20
Click above image for expanded view
Algebra in Ancient and Modern Times
A co-publication of the AMS and Hindustan Book Agency
Softcover ISBN: 978-0-8218-0989-1
Product Code: MAWRLD/12
List Price: $49.00
MAA Member Price: $44.10
AMS Member Price: $39.20
eBook ISBN: 978-1-4704-2478-7
Product Code: MAWRLD/12.E
List Price: $45.00
MAA Member Price: $40.50
AMS Member Price: $36.00
Softcover ISBN: 978-0-8218-0989-1
eBook ISBN: 978-1-4704-2478-7
Product Code: MAWRLD/12.B
List Price: $94.00 $71.50
MAA Member Price: $84.60 $64.35
AMS Member Price: $75.20 $57.20
• Mathematical World
Volume: 12; 1998; 142 pp
MSC: Primary 01; 12
This text offers a special account of Indian work in diophantine equations during the 6th through 12th centuries and Italian work on solutions of cubic and biquadratic equations from the 11th
through 16th centuries. The volume traces the historical development of algebra and the theory of equations from ancient times to the beginning of modern algebra, outlining some modern themes
such as the fundamental theorem of algebra, Clifford algebras, and quarternions. It is geared toward undergraduates who have no background in calculus.
For other wonderful titles written by this author see: Euler through Time: A New Look at Old Themes, Supersymmetry for Mathematicians: An Introduction, The Mathematical Legacy of
Harish-Chandra: A Celebration of Representation Theory and Harmonic Analysis, and The Selected Works of V.S. Varadarajan.
This book is copublished with the Hindustan Book Agency (New Delhi) and is distributed worldwide, except in India, Sri Lanka, Bangladesh, Pakistan, and Nepal by the American Mathematical Society.
Undergraduate mathematics majors, graduate students, research mathematicians and historians interested in the history of mathematics.
□ Some history of early mathematics
□ 1. Eucild–Archimedes–Diophantus
□ 2. Pythagoras and the Pythagorean triplets
□ 3. Āryabhaṭa–Brahmagupta–Bhāskara
□ 4. Irrational numbers: construction and approximation
□ 5. Arabic mathematics
□ 6. Beginnings of algebra in Europe
□ 7. The cubic and biquadratic equations
□ Solution of the cubic and biquadratic equations
□ 8. Solution of the cubic equation
□ 9. Solution of the biquadratic equation
□ Some themes from modern algebra
□ 10. Numbers, algebra, and the real world
□ 11. Complex numbers
□ 12. Fundamental theorem of algebra
□ 13. Equations of degree greater than four
□ 14. General number systems and the axiomatic treatment of algebra
□ The book was written for freshmen students who should learn algebra by its history. So the topics mentioned above are treated from a mathematical as well as a historical point of view. The
material is presented in a way that students should see how ideas have emerged. In some cases a rough look forward to the modern development is given. Many sections are supplemented by notes
and exercises, which contain a lot of mathematics as well as additional historical facts. The book is completed by a very short list of references and an index.
Zentralblatt MATH
□ This is a fine book on two counts. First ... there is the singularly excellent treatment of the solution of biquadratic equations. Second, it paints a strong picture of mathematics as a very
long sequence of accomplishments, each building on the ones before, in a way that beginning mathematicians can understand and appreciate it. It paints the picture in a concise and economical
style, the style that mathematicians find elegant. I would particularly recommend Algebra in Ancient and Modern Times to strong high school students, to high school algebra teachers, to
people who want a history of mathematics with a lot of mathematics in the history, and to anyone who needs to know how to find an analytic solution to a nasty fourth degree polynomial.
MAA Online
□ Varadarajan spins a captivating tale, and the mathematics is first-rate. The book belongs on the shelf of any teacher of algebra ... The great treasure of this book is the discussion of the
work of the great Hindu mathematicians Aryabhata (c.476–550), Brahmagupta (c.598–665), and Bhaskara (c.1114–1185). Teachers of mathematics history will be especially interested in
Varadarajan's exposition of the remarkable cakravala, an algorithm for solving \(X^2 - NY^2= \pm 1\). The book contains many exercises that enhance and supplement the text and that also
include historical information. Many of the exercises ask readers to apply the historical techniques. Some of the exercises are quite difficult and will challenge any student.
Mathematics Teacher
□ Varadarajan gives us nice treatment of the work of Indian mathematicians on the so-called Pell equation as well as a very detailed yet teachable discussion of the standard story of the
solution of cubic and quartic equations by del Ferro, Tartaglia, Cardano, and Ferrari in sixteenth-century Italy.
Mathematical Reviews
• Book Details
• Table of Contents
• Additional Material
• Reviews
• Requests
Volume: 12; 1998; 142 pp
MSC: Primary 01; 12
This text offers a special account of Indian work in diophantine equations during the 6th through 12th centuries and Italian work on solutions of cubic and biquadratic equations from the 11th through
16th centuries. The volume traces the historical development of algebra and the theory of equations from ancient times to the beginning of modern algebra, outlining some modern themes such as the
fundamental theorem of algebra, Clifford algebras, and quarternions. It is geared toward undergraduates who have no background in calculus.
For other wonderful titles written by this author see: Euler through Time: A New Look at Old Themes, Supersymmetry for Mathematicians: An Introduction, The Mathematical Legacy of
Harish-Chandra: A Celebration of Representation Theory and Harmonic Analysis, and The Selected Works of V.S. Varadarajan.
This book is copublished with the Hindustan Book Agency (New Delhi) and is distributed worldwide, except in India, Sri Lanka, Bangladesh, Pakistan, and Nepal by the American Mathematical Society.
Undergraduate mathematics majors, graduate students, research mathematicians and historians interested in the history of mathematics.
• Some history of early mathematics
• 1. Eucild–Archimedes–Diophantus
• 2. Pythagoras and the Pythagorean triplets
• 3. Āryabhaṭa–Brahmagupta–Bhāskara
• 4. Irrational numbers: construction and approximation
• 5. Arabic mathematics
• 6. Beginnings of algebra in Europe
• 7. The cubic and biquadratic equations
• Solution of the cubic and biquadratic equations
• 8. Solution of the cubic equation
• 9. Solution of the biquadratic equation
• Some themes from modern algebra
• 10. Numbers, algebra, and the real world
• 11. Complex numbers
• 12. Fundamental theorem of algebra
• 13. Equations of degree greater than four
• 14. General number systems and the axiomatic treatment of algebra
• The book was written for freshmen students who should learn algebra by its history. So the topics mentioned above are treated from a mathematical as well as a historical point of view. The
material is presented in a way that students should see how ideas have emerged. In some cases a rough look forward to the modern development is given. Many sections are supplemented by notes and
exercises, which contain a lot of mathematics as well as additional historical facts. The book is completed by a very short list of references and an index.
Zentralblatt MATH
• This is a fine book on two counts. First ... there is the singularly excellent treatment of the solution of biquadratic equations. Second, it paints a strong picture of mathematics as a very long
sequence of accomplishments, each building on the ones before, in a way that beginning mathematicians can understand and appreciate it. It paints the picture in a concise and economical style,
the style that mathematicians find elegant. I would particularly recommend Algebra in Ancient and Modern Times to strong high school students, to high school algebra teachers, to people who want
a history of mathematics with a lot of mathematics in the history, and to anyone who needs to know how to find an analytic solution to a nasty fourth degree polynomial.
MAA Online
• Varadarajan spins a captivating tale, and the mathematics is first-rate. The book belongs on the shelf of any teacher of algebra ... The great treasure of this book is the discussion of the work
of the great Hindu mathematicians Aryabhata (c.476–550), Brahmagupta (c.598–665), and Bhaskara (c.1114–1185). Teachers of mathematics history will be especially interested in Varadarajan's
exposition of the remarkable cakravala, an algorithm for solving \(X^2 - NY^2= \pm 1\). The book contains many exercises that enhance and supplement the text and that also include historical
information. Many of the exercises ask readers to apply the historical techniques. Some of the exercises are quite difficult and will challenge any student.
Mathematics Teacher
• Varadarajan gives us nice treatment of the work of Indian mathematicians on the so-called Pell equation as well as a very detailed yet teachable discussion of the standard story of the solution
of cubic and quartic equations by del Ferro, Tartaglia, Cardano, and Ferrari in sixteenth-century Italy.
Mathematical Reviews
You may be interested in...
Please select which format for which you are requesting permissions. | {"url":"https://bookstore.ams.org/mawrld-12","timestamp":"2024-11-04T00:50:15Z","content_type":"text/html","content_length":"135702","record_id":"<urn:uuid:f2f6fb2a-0b86-4e36-a0c6-e5671b0e3835>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00351.warc.gz"} |
Bootstrapping methods – Machine Learning Fundamentals – MLS-C01 Study Guide
Bootstrapping methods
Cross-validation is a good strategy to validate ML models, and you should try it in your daily activities as a data scientist. However, you should also know about other resampling techniques
available out there. Bootstrapping is one of them.
While cross-validation works with no replacement, a bootstrapping approach works with replacement. With replacement means that, while you are drawing multiple random samples from a population
dataset, the same observation might be duplicated across samples.
Usually, bootstrapping is not used to validate models as you do in the traditional cross-validation approach. The reason is simple: since it works with replacement, the same observation used for
training could potentially be used for testing, too. This would result in inflated model performance metrics since the estimator is likely to be correct when predicting an observation that was
already seen in the training set.
Bootstrapping is often used by ML algorithms in an embedded way that requires resampling capabilities to process the data. In this context, bootstrapping is not used to validate the model but to
create the model. Random forest, which will be covered in Chapter 6, Applying Machine Learning Algorithms, is one of those algorithms that use bootstrapping internally for model building.
Designing a good data splitting/sampling strategy is crucial to the success of the model or the algorithm. You should come up with different approaches to split your data, check how the model is
performing on each split, and make sure those splits represent the real scenario where the model will be used.
The variance versus bias trade-off
Any ML model is supposed to contain errors. There are three types of errors that you can find in models: bias errors, variance errors, and unexplained errors. The last one, as expected, cannot be
explained. It is often related to the context of the problem and the relationships between the variables (you can’t control it).
The other two types of errors can be controlled during modeling. You can say that there is a trade-off between bias and variance errors because one will influence the other. In this case, increasing
bias will decrease variance and vice versa.
Bias errors relate to assumptions taken by the model to learn the target function, the one that you want to solve. Some types of algorithms, such as linear algorithms, usually carry over that type of
error because they make a lot of assumptions during model training. For example, linear models assume that the relationship present in the data is linear. Linear regression and logistic regression
are types of algorithms that, in general, contain high bias. Decision trees, on the other hand, are types of algorithms that make fewer assumptions about the data and contain less bias.
Variance relates to the difference in estimations that the model performs on different training data. Models with high variance usually overfit the training set. Decision trees are examples of
algorithms with high variance (they usually rely a lot on specifics of the training set, failing to generalize), and linear and logistic regression are examples of algorithms with low variance. It
does not mean that decision trees are bad estimators; it just means that you need to prune (optimize) them during training.
That being said, the goal of any model is to minimize both bias and variance. However, as already mentioned, each one will impact the other in the opposite direction. For the sake of demonstration,
consider a decision tree to understand how this trade-off works.
Decision trees are nonlinear algorithms and often contain low bias and high variance. In order to decrease variance, you can prune the tree and set the max_depth hyperparameter (the maximum allowed
depth of the tree) to 10. That will force a more generic model, reducing variance. However, that change will also force the model to make more assumptions (since it is now more generic) and increase | {"url":"https://www.acedexam.com/bootstrapping-methods-machine-learning-fundamentals-mls-c01/","timestamp":"2024-11-10T11:56:58Z","content_type":"text/html","content_length":"11898","record_id":"<urn:uuid:1bf209c7-2556-4e33-be08-7f4c8a4c5914>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00191.warc.gz"} |
Euler’s Method-Definition, Properties, Applications, and Examples
Euler’s Method is a cornerstone in numerical approximation, offering a simple yet powerful approach to solving differential equations.
Named after the esteemed mathematician Leonhard Euler, this technique has revolutionized scientific and engineering disciplines by enabling researchers and practitioners to tackle complex
mathematical problems that defy analytical solutions.
Euler’s Method allows for approximating solutions to differential equations by breaking them down into smaller, manageable steps. This article delves into the intricacies of Euler’s Method by
highlighting the crucial interplay between numerical computation and the fundamental concepts of calculus.
We journeyed to uncover its underlying principles, understand its strengths and limitations, and explore its diverse applications across various scientific domains.
Definition of Euler’s Method
Euler’s Method is a numerical approximation technique used to numerically solve ordinary differential equations (ODEs). It is named after the Swiss mathematician Leonhard Euler, who made significant
contributions to the field of mathematics.
The method provides an iterative approach to estimating the solution of an initial value problem by breaking the continuous differential equation into discrete steps. Euler’s Method advances from one
point to the next by approximating the derivative at each step, gradually constructing an approximate solution curve.
The method is based on the concept of the tangent line to an ODE at a given point and employs simple calculations to estimate the next point on the solution trajectory. Below we present a generic
representation of Euler’s method approximation in figure-1.
Although Euler’s Method is relatively straightforward, it is a foundation for more advanced numerical techniques and has immense practical significance in various scientific and engineering fields
where analytic solutions may be challenging or impossible to obtain.
Evaluating Euler’s Method
Evaluating Euler’s Method involves following a systematic process to approximate the solution of an ordinary differential equation (ODE). Here is a step-by-step description of the process:
Formulate the ODE
Start by having a given ODE in the form dy/dx = f(x, y), along with an initial condition specifying the value of y at a given x-value (e.g., y(x₀) = y₀).
Choose the Step Size
Determine the desired step size (h) to divide the interval of interest into smaller intervals. A smaller step size generally yields more accurate results but increases computational effort.
Set up the Discretization
Define a sequence of x-values starting from the initial x₀ and incrementing by the step size h: x₀, x₁ = x₀ + h, x₂ = x₁ + h, and so on, until the desired endpoint is reached.
Initialize the Solution
Set the initial solution value to the given initial condition: y(x₀) = y₀.
Repeat the Iteration
Continue iterating the method by moving to the next x-value in the sequence and updating the solution using the computed derivative and step size. Repeat this process until reaching the desired
Output the Solution
Once the iteration is complete, the final set of (x, y) pairs represents the numerical approximation of the solution to the ODE within the specified interval.
Iterate the Method
For each xᵢ in the sequence of x-values (from x₀ to the endpoint), apply the following steps:
□ Evaluate the derivative: Compute the derivative f(x, y) at the current xᵢ and y-value.
□ Update the solution: Multiply the derivative by the step size h and add the result to the previous solution value. This yields the next approximation of the solution: yᵢ₊₁ = yᵢ + h * f(xᵢ,
It is important to note that Euler’s Method provides an approximate solution, and the accuracy depends on the chosen step size. Smaller step sizes generally yield more accurate results but require
more computational effort. Higher-order methods may be more appropriate for complex or highly curved solution curves to minimize the accumulated error.
Approximation of Solutions
Euler’s Method provides a numerical approximation of the solution to an ordinary differential equation (ODE). It breaks down the continuous ODE into discrete steps, allowing for the estimation of the
solution at specific points.
Local Linearity Assumption
The method assumes that the behavior of the solution between two adjacent points can be approximated by a straight line based on the slope at the current point. This assumption holds for small step
sizes, where a tangent line can closely approximate the solution curve.
The method employs a step size (h) to divide the interval over which the solution is sought into smaller intervals. This discretization allows for evaluating the derivative at each step and the
progression toward the next point on the solution curve.
Global Error Accumulation
Euler’s Method is prone to accumulating errors over many steps. This cumulative error arises from the linear approximation employed at each step and can lead to a significant deviation from the true
solution. Smaller step sizes generally reduce the overall error.
Iterative Process
Euler’s Method is an iterative process where the solution at each step is determined based on the previous step’s solution and the derivative at that point. It builds the approximation by
successively calculating the next point on the solution trajectory.
Euler’s Method follows a simple algorithm for each step: (a) Evaluate the derivative at the current point, (b) Multiply the derivative by the step size, (c) Update the solution by adding the product
to the current solution, (d) Move to the next point by increasing the independent variable by the step size.
First-Order Approximation
Euler’s Method is a first-order numerical method, meaning its local truncation error is proportional to the square of the step size (O(h^2)). Consequently, it may introduce significant errors for
large step sizes or when the solution curve is highly curved.
Versatility and Efficiency
Despite its limitations, Euler’s Method is widely used for its simplicity and efficiency in solving initial value problems. It serves as the foundation for more sophisticated numerical methods, and
its basic principles are extended and refined in higher-order methods like the Improved Euler Method and Runge-Kutta methods.
Understanding the properties of Euler’s Method helps to appreciate its strengths and limitations, aiding in selecting appropriate numerical methods based on the specific characteristics of the
Despite its simplicity, Euler’s method finds applications in various fields where numerical approximation of ordinary differential equations (ODEs) is required. Here are some notable applications of
Euler’s Method in different fields:
Euler’s Method is extensively used in physics for simulating the motion of objects under the influence of forces. It allows for the numerical solution of ODEs arising from physical laws such as
Newton’s laws of motion or thermodynamics. Applications range from simple projectile motion to complex celestial bodies or fluid dynamics simulations.
Euler’s Method plays a vital role in modeling and analyzing dynamic systems. It enables the numerical solution of ODEs that describe the behavior of systems such as electrical circuits, control
systems, mechanical structures, and fluid flow. Using Euler’s Method, engineers can understand and predict system responses without relying solely on analytical solutions.
Computer Science
Euler’s Method forms the foundation for many numerical algorithms used in computer science. It is crucial for solving differential equations that arise in areas like computer graphics, simulation,
and optimization. Euler’s Method is employed to model physical phenomena, simulate particle dynamics, solve differential equations in numerical analysis, and optimize algorithms through iterative
Biology and Medicine
In biological and medical sciences, Euler’s Method models biological processes, such as population growth, pharmacokinetics, and drug-dose response relationships. It allows researchers to investigate
the dynamics of biological systems and simulate the effects of interventions or treatment strategies.
Economics and Finance
Euler’s Method is utilized in economic and financial modeling to simulate and analyze economic systems and financial markets. It enables the numerical solution of economic equations, asset pricing
models, portfolio optimization, and risk management. Euler’s Method facilitates the study of complex economic dynamics and the assessment of economic policies and investment strategies.
Environmental Science
Environmental scientists utilize Euler’s Method to model ecological systems and analyze the dynamics of environmental processes. It enables the simulation of population dynamics, ecosystem
interactions, climate modeling, and pollutant dispersion. Euler’s Method aids in predicting the effects of environmental changes and understanding the long-term behavior of ecosystems.
Astrophysics and Cosmology
Euler’s Method is employed in astrophysics and cosmology to model the evolution and behavior of celestial objects and the universe. It helps study the dynamics of planetary orbits, stellar evolution,
galaxy formation, and cosmological phenomena. Euler’s Method allows researchers to simulate and analyze complex astronomical systems and investigate the universe’s origins.
Euler’s Method is a versatile and foundational tool in numerous fields, providing a practical approach to numerically solve ODEs and gain insights into dynamic systems lacking analytical solutions.
Its applications span scientific research, engineering design, computational modeling, and decision-making processes.
Example 1
Approximating a First-Order Differential Equation
Consider the differential equation dy/dx = x^2 with the initial condition y(0) = 1. Use Euler’s Method with a step size of h = 0.1 to approximate the solution at x = 0.5.
Using Euler’s Method, we start with the initial condition y(0) = 1 and iteratively calculate the next approximation using the formula:
y_i+1 = y_i + h * f(x_i, y_i)
where f(x, y) represents the derivative.
Step 1: At x = 0, y = 1.
Step 2: At x = 0.1, y = 1 + 0.1 * (0^2) = 1.
Step 3: At x = 0.2, y = 1 + 0.1 * (0.1^2) = 1.001.
Step 4: At x = 0.3, y = 1 + 0.1 * (0.2^2) = 1.004.
Step 5: At x = 0.4, y = 1 + 0.1 * (0.3^2) = 1.009.
Step 6: At x = 0.5, y = 1 + 0.1 * (0.4^2) = 1.016.
Therefore, the approximation of the solution at x = 0.5 is y ≈ 1.016.
Example 2
Approximating a Second-Order Differential Equation
Consider the differential equation d^2y/dx^2 + 2dy/dx + 2y = 0 with initial conditions y(0) = 1 and dy/dx(0) = 0. Use Euler’s Method with a step size of h = 0.1 to approximate the solution at x = 0.4
We convert the second-order equation into a system of first-order equations to approximate the solution using Euler’s Method.
Let u = dy/dx. Then, the given equation becomes a system of two equations:
du/dx = -2u – 2y
dy/dx = u
Using Euler’s Method with a step size of h = 0.1, we approximate the values of u and y at each step.
Step 1: At x = 0, y = 1 and u = 0.
Step 2: At x = 0.1, y = 1 + 0.1 * (0) = 1 and u = 0 + 0.1 * (-2 * 0 – 2 * 1) = -0.2.
Step 3: At x = 0.2, y = 1 + 0.1 * (-0.2) = 0.98 and u = -0.2 + 0.1 * (-2 * (-0.2) – 2 * 0.98) = -0.242.
Step 4: At x = 0.3, y = 0.98 + 0.1 * (-0.242) = 0.9558 and u = -0.242 + 0.1 * (-2 * (-0.242) – 2 * 0.9558) = -0.28514.
Step 5: At x = 0.4, y = 0.9558 + 0.1 * (-0.28514) = 0.92729 and u = -0.28514 + 0.1 * (-2 * (-0.28514) – 2 * 0.92729) = -0.32936.
Therefore, the approximation of the solution at x = 0.4 is y ≈ 0.92729.
Example 3
Approximating a System of Differential Equations
Consider the differential equations dx/dt = t – x and dy/dt = x – y with initial conditions x(0) = 1 and y(0) = 2. Use Euler’s Method with a step size of h = 0.1 to approximate x and y values at t =
Using Euler’s Method, we approximate the values of x and y at each step using the given system of differential equations.
Step 1: At t = 0, x = 1 and y = 2.
Step 2: At t = 0.1, x = 1 + 0.1 * (0 – 1) = 0.9 and y = 2 + 0.1 * (1 – 2) = 1.9.
Step 3: At t = 0.2, x = 0.9 + 0.1 * (0.1 – 0.9) = 0.89 and y = 1.9 + 0.1 * (0.9 – 1.9) = 1.89.
Step 4: At t = 0.3, x = 0.89 + 0.1 * (0.2 – 0.89) = 0.878 and y = 1.89 + 0.1 * (0.89 – 1.89) = 1.88.
Step 5: At t = 0.4, x = 0.878 + 0.1 * (0.3 – 0.878) = 0.8642 and y = 1.88 + 0.1 * (0.878 – 1.88) = 1.8692.
Step 6: At t = 0.5, x = 0.8642 + 0.1 * (0.4 – 0.8642) = 0.84758 and y = 1.8692 + 0.1 * (0.8642 – 1.8692) = 1.86038.
Therefore, the approximation of the x and y values at t = 0.5 is x ≈ 0.84758 and y ≈ 1.86038.
All images were created with MATLAB. | {"url":"https://www.storyofmathematics.com/eulers-method/","timestamp":"2024-11-08T08:22:44Z","content_type":"text/html","content_length":"164989","record_id":"<urn:uuid:d45b5aa4-74d4-4756-9298-e6fde8a4d25d>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00402.warc.gz"} |
Impedance Calculation in Surface Acoustic Wave Simulation
Discussion Closed This discussion was created more than 6 months ago and has been closed. To start a new discussion with a link back to this one, click here.
Impedance Calculation in Surface Acoustic Wave Simulation
Posted 2024年2月21日 GMT-5 12:24 MEMS & Piezoelectric Devices Version 6.1 3 Replies
Please login with a confirmed email address before reporting spam
I am using a simulation for Surface Acoustic Waves in the frequency domain in order to obtain impedance. Initially, I am employing two boundary probes, one to measure electric potential and the other
to measure the norm of surface current density multiplied by the out-of-plane thickness. Then, in the results section for the 1D plot group, it was chosen to use the point graph where I calculate the
impedance ratio (V/A) through both probes. The issue is that I am not obtaining satisfactory results. Am I approaching this incorrectly?
In the physics I am applying, a terminal is used as a source of AC voltage. I have a question regarding this. In the time domain study, I made a global definition (analytical) as a cosine wave
dependent on time, but in the frequency domain study, this is not possible. Therefore, I decided to define a waveform (global definition), but I am uncertain if it is correct for obtaining impedance.
What should I do instead of using this waveform?
Could someone help me with this problem?
Thank you, Regards, TM
3 Replies Last Post 2024年2月29日 GMT-5 15:27
Please login with a confirmed email address before reporting spam
Posted: 9 months ago
In frequency domain impedance is a complex quantity.
Edgar J. Kaiser
emPhys Physical Technology
Please login with a confirmed email address before reporting spam
Posted: 8 months ago
Updated: 8 months ago
In frequency domain impedance is a complex quantity.
I appreciate your reply, but I'm unsure about the changes I should make. I've found a section in the solving where I can select to split complex variables into real and imaginary parts. Is this the
change you are referring to? Can you help me?
Thank you.
Please login with a confirmed email address before reporting spam
Posted: 8 months ago
It was meant as a hint that impedance is quantified in different ways in time and frequency domain. Your approach and your intentions are not really clear. So it is difficult to tell what to change.
In frequency domain it is not required to define any waveform. The load is always a harmonic waveform with the frequency you set in the study step and the amplitude you set in the load node, e.g. the
terminal voltage.
Edgar J. Kaiser
emPhys Physical Technology | {"url":"http://cn.comsol.com/forum/thread/335902/impedance-calculation-in-surface-acoustic-wave-simulation","timestamp":"2024-11-12T12:42:17Z","content_type":"text/html","content_length":"51910","record_id":"<urn:uuid:324998e1-d177-4e07-9117-b5daa9c48e48>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00165.warc.gz"} |
Linear regression analysis of coastal processes
From Coastal Wiki
Linear regression analysis of beach level data is demonstrated here using a set of beach profile measurements carried out at locations along the Lincolnshire coast (UK) by the National Rivers
Authority (now the Environment Agency) and its predecessors between 1959 and 1991, as described in Sutherland et al. 2007^[1]. A short introduction to linear regression theory is given in Appendix A.
Use of trend line for prediction
In order to better understand and predict beach level lowering in front of a seawall, beach levels were measured in front of seawalls along the Lincolnshire coast (UK). Figure 1 shows the bed level
change measured in front of the seawall at Mablethorpe Convalescent Home.
Straight lines fitted to beach level time series give an indication of the rate of change of elevation and hence of erosion or accretion. The measured rates of change are often used to predict future
beach levels by assuming that the best-fit rate from one period will be continued into the future. Alternatively, long-term shoreline change rates can be determined using linear regression on
cross-shore position versus time data.
Genz et al. (2007)^[2] reviewed methods of fitting trend lines, including using end point rates, the average of rates, ordinary least squares (including variations such as jackknifing, weighted least
squares and least absolute deviation (with and without weighting functions). Genz et al. recommended that weighted methods should be used if uncertainties are understood, but not otherwise. The
ordinary least squares, jackknifing and least absolute deviation methods were preferred (with weighting, if appropriate). If the uncertainties are unknown or not quantified then the least absolute
deviation methods is preferred.
The following question then arises: how useful is a best-fit linear trend as a predictor of future beach levels? In order to examine this, the thirty years of Lincolnshire data have been divided into
sections: from 1960 to 1970, from 1970 to 1980, from 1980 to 1990 and from 1960 to 1990, for most of the stations. In each case a least-squares best-fit straight line was fitted to the data and the
rates of change in elevation from the different periods are shown below:
• From 1960 to 1970 the rate of change was -17mm/year;
• From 1970 to 1980 the rate of change was -63mm/year;
• From 1980 to 1990 the rate of change was +47mm/year.
• From 1960 to 1990 the rate of change was -25mm/year.
The data above indicates that 10-year averages provide little predictive capability for estimating the change in elevation for the next 10-years, let alone for the planning horizon that might need to
be considered for a coastal engineering scheme. Few of the 10-year averages are close to the 30-year average.
A prediction horizon is defined as the average length of time over which a prediction (here an extrapolated trend) produces a better level of prediction of future beach levels than a simple baseline
prediction. Sutherland et al. (2007)^[1] devised a method of determining the prediction horizon for an extrapolated trend using the Brier Skill Score (Sutherland et al., 2004^[3]). Here the baseline
prediction was that future beach levels would be the same as the average of the measured levels used to define the trend. A 10 year trend was found to have a prediction horizon of 4 years at
Mablethorpe Convalescent Home (Fig. 2). Similar values have been found at other sites in Lincolnshire.
For a discussion of beach lowering in front of seawalls see also the article Seawalls and revetments.
Conditions for application
Assumptions underlying regression analysis and least-square fitting are:
• the deviations from the trend curve can be equated with Gauss-distributed random noise;
• the deviations are uncorrelated.
In the example of Mablethorpe beach the distribution of residual (i.e. de-trended) beach levels seems to follow the common assumption of a Gaussian (normal) distribution, as shown in Fig. 3.
If the data have random fluctuations which are significantly correlated over some distance, other regression methods must be used. Frequently used analysis methods in this case are:
Appendix A: Introduction to linear regression theory
Although regression analysis is explained in many textbooks, a short mathematical introduction is given below. The object of linear regression is to analyze the correlation of a measurable property
[math]h[/math] (the so-called 'dependent variable' or 'target') with a number [math]K-1[/math] measurable factors [math]y_k, \; k=2, .., K[/math] (the 'independent variables' or 'regressors'). Linear
regression is based on the assumption that [math]h[/math] can be estimated through a linear relationship:
[math]h = \hat{h} +\epsilon \; , \quad \hat{h} = a_1 + \sum_{k=2}^K a_k y_k = \sum_{k=1}^K a_k y_k \; , \qquad (A1)[/math]
where [math]\hat{h}[/math] is the estimator, [math]a_k, \; k=2, .., K[/math] are the regression coefficients, [math]a_1[/math] is the intercept and [math]y_1=1[/math]. The term [math]\epsilon[/math]
is the 'error', i.e., the difference between the true value [math]h[/math] and the estimator [math]\hat{h}[/math]. (Note: Linear regression means linear in the regression coefficients; the regressors
[math]y_k[/math] can be nonlinear quantities, for example [math]y_3=y_2^2[/math].) Now assume that we have observations of [math]N[/math] different instances of the [math]K-1[/math] regressors,
[math]y_{ki}, \; i=1, .., N[/math], and corresponding observations of the dependent variable, [math]h_i[/math]. If [math]N\gt K[/math] these observations can be used to estimate the values of the
regression coefficients [math]a_k[/math] by looking for the best solution to the linear system
[math]h_i = \sum_{k=1}^K a_k y_{ki} + \epsilon_i \; , \qquad (A2)[/math]
where [math]\epsilon_i[/math] are errors related to: (i) the approximate validity of the linear estimator (sometimes called 'epistemic uncertainty') and (ii) measurement inaccuracies, which are often
statistical ('aleatoric') uncertainties. The errors are considered to be stochastic variables with zero mean, [math]E[\epsilon_i]=0[/math] and variance [math]\sigma_i^2 \equiv E[\epsilon_i^2][/math]
(we use the notation [math]E[x][/math]= the mean value from a large number of trials of the stochastic variable [math]x[/math]).
What is 'the best solution"? Different criteria can be used. Least square optimization is the most often used criterium, i.e., the solution for which the sum of the squared errors [math]\Phi[/math]
is minimum,
[math]\Phi = ½ \sum_{i=1}^N \epsilon_i^2 = ½ \sum_{i=1}^N \sum_{j=1}^N \big(h_i - \sum_{k=1}^K a_k y_{ki}\big) \big(h_j - \sum_{k'=1}^K a_{k'} y_{k'j}\big) . \qquad (A3)[/math]
At minimum, [math]\Phi[/math] increases for any change in one of the coefficients [math]a_k[/math], which implies that the partial derivatives are zero:
[math]\Large\frac{\partial \Phi}{\partial a_k}\normalsize = 0 , \; k=1, ..,K . \qquad (A4)[/math]
This condition yields the set of [math]K[/math] linear equations
[math] - \sum_{i=1}^N y_{ki} h_i + \sum_{i=1}^N \sum_{k'=1}^K y_{ki} y_{k'i} a_{k'} = 0 . \qquad (A5)[/math]
from which the regression coefficients [math]a_k[/math] can be solved.
In the more compact matrix notation we may write: [math]H[/math] is the [math]N[/math]-vector with elements [math]h_i[/math], [math]\; A[/math] is the [math]K[/math]-vector with elements [math]a_k[/
math] and [math]Y[/math] is the [math]K \times N[/math] matrix with elements [math]y_{ki}[/math]. The linear equations (A5) can then be written
[math]Y^T \, H = Y^T \, Y \, A . \qquad (A6)[/math]
The solution is found by inversion of the [math]N \times N[/math] matrix [math]Y^T \, Y[/math], giving
[math]A = (Y^T \, Y)^{-1} Y^T \, H . \qquad (A7)[/math]
Least squares is the best solution if the regressors [math]y_k[/math] are uncorrelated and if the errors [math]\epsilon_i[/math] are uncorrelated and identically Gaussian-distributed, i.e., they all
have the same variance [math]\sigma_i^2[/math]. This is often not the case in practice. A few other cases are discussed below.
Case 1. The data have errors that scale with their magnitude. Then a log transform of the data produces errors approximately independent of the magnitude. Linear regression with least squares is
applied to the log-transformed data.
Case 2. The errors differ among the data ('non-homoscedasticity'), [math]\sigma_i \ne \sigma_j , \, i \ne j[/math]. If estimates of the variances are known, one can use weighted least squares instead
of least squares, i.e., replace (A3) with [math]\Phi = ½ \sum_{i=1}^N \big( \epsilon_i / \sigma_i \big)^2[/math].
Case 3. Multicollinearity, meaning a linear relationship exists between two or more regressor variables. In this case the non-independent regressor variable should be removed from the data.
Case 4. The errors of the [math]N[/math] observations are correlated, [math]c_{ij} \equiv E[\epsilon_i \, \epsilon_j] \ne 0[/math]. If an estimate of the covariances [math]c_{ij} [/math] is known,
the generalized least squares method can be used. The solution (A7) is then replaced by
[math]A = (Y^T \, C^{-1} \, Y)^{-1} Y^T \, C^{-1} H , \qquad (A8)[/math]
where [math]C[/math] is the covariance matrix with elements [math]c_{ij}[/math]. Another option is to use kriging, see the article Data interpolation with Kriging.
Related articles
1. ↑ ^1.0 ^1.1 ^1.2 Sutherland, J., Brampton, A.H., Obhrai, C., Motyka, G.M., Vun, P.-L. and Dunn, S.L. 2007. Understanding the lowering of beaches in front of coastal defence structures, Stage 2.
Defra/EA Joint Flood and Coastal Erosion Risk Management R&D programme Technical Report FD1927/TR
2. ↑ Genz, A.S., Fletcher, C.H., Dunn, R.A., Frazer, L.N. and Rooney, J.J. 2007. The predictive accuracy of shoreline change rate methods and alongshore beach variation on Maui, Hawaii. Journal of
Coastal Research 23(1): 87 – 105
3. ↑ Sutherland, J., Peet, A.H. and Soulsby, R.L. 2004. Evaluating the performance of morphological models. Coastal Engineering 51, pp. 917-939. | {"url":"https://www.coastalwiki.org/wiki/Linear_regression_analysis_of_coastal_processes","timestamp":"2024-11-06T01:22:23Z","content_type":"text/html","content_length":"43530","record_id":"<urn:uuid:410bf353-9507-449d-b1f1-865ccd66aef9>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00354.warc.gz"} |
pinaki's homepage
Starting from my PhD thesis Towards a Bezout-type Theory of Affine Varieties written at University of Toronto under the supervision of Pierre Milman, my research in math falls under two broad themes:
• Compactification of affine varieties
• Affine Bézout problem
I am in particular interested in one of the simplest cases of the first problem:
• Compactification of $\mathbb{C}^2$
Compactification of affine varieties
Compactification of $\mathbb{C}^2$
• Normal equivariant compactifications of $\mathbb{G}^2_a$ of Picard rank one, arXiv:1610.03563.
• Mori dream surfaces associated with curves with one place at infinity, arXiv:1312.2168.
• Analytic compactifications of $\mathbb{C}^2$ part II – one irreducible curve at infinity, arXiv:1307.5577.
• Is the intersection of two finitely generated subalgebras of a polynomial ring also finitely generated? Arnold Mathematical Journal, 3 (3), 2017; arXiv:1301.2730.
• How to determine the sign of a valuation on $\mathbb{C}[x,y]$, Michigan Mathematics Journal 66 (4), 2017; arXiv:1301:3172.
• Algebraicity of normal analytic compactifications of $\mathbb{C}^2$ with one irreducible curve at infinity, Algebra & Number Theory, 10 (8), 2016; arXiv:1510.00998.
• Analytic Compactifications of $\mathbb{C}^2$ part I – curvettes at infinity, C. R. Math. Acad. Sci. Soc. R. Canada, 38 (2), 2016; arXiv:1110.6905.
• How fast do polynomials grow on semialgebraic sets?, with Tim Netzer, Journal of Algebra, 413, 2014; arXiv:1305.1215.
Note: I had signed the pledge to boycott Elsevier. I came to know of the submission of this article to the Journal of Algebra only after the fact (Tim made this submission, and he was not aware
of my pledge). I was too shy to ask him to withdraw the submission – this I regret now.
• Normal analytic compactifications of $\mathbb{C}^2$, Automorphisms in birational and affine geometry, Springer Proc. Math. Stat. 79, 2014; arXiv:1308.3286.
Affine Bézout problem
• Intersection multiplicity, Milnor number and Bernstein’s theorem, arXiv:1607.04860
Publications (primarily announcements) | {"url":"https://pinakimondal.org/math-research-overview/","timestamp":"2024-11-04T07:08:41Z","content_type":"text/html","content_length":"39752","record_id":"<urn:uuid:7f3782ba-d5b4-48e1-987e-65011c5475fa>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00157.warc.gz"} |
Connectivity properties of random subgraphs of the cube
The n‐dimensional cube Qn is the graph whose vertices are the subsets of {1, n} where two such vertices are adjacent if and only if their symmetric difference is a singleton. Clearly Qn is an
n‐connected graph of diameter and radius n. Write M = n2n−1 = e(Qn) for the size of Qn. Let Q̃ = (Qt)M0 be a random Q̃‐process. Thus Qt is a spanning subgraph of Qn of size t, and Qt is obtained from
Qt‐1 by the random addition of an edge of Qn not in Qt‐1. Let tk = τ(Q̃ δ ≧ k) be the hitting time of the property of having minimal degree at least k. It is shown in [5] that, almost surely, at time
t1 the graph Qt becomes connected and that in fact the diameter of Qt at this point is n + 1. Here we generalize this result by showing that, for any fixed k≧2, almost surely at time tk the graph Qt
acquires the extremely strong property that any two of its vertices are connected by k internally vertex‐disjoint paths each of length at most n, except for possibly one, which may have length n + 1.
In particular, the hitting time of k‐connectedness is almost surely tk. © 1995 John Wiley & Sons, Inc. Copyright © 1995 Wiley Periodicals, Inc., A Wiley Company
Publication Title
Random Structures & Algorithms
Recommended Citation
Bollobás, B., Kohayakawa, Y., & Luczak, T. (1995). Connectivity properties of random subgraphs of the cube. Random Structures & Algorithms, 6 (2-3), 221-230. https://doi.org/10.1002/rsa.3240060210 | {"url":"https://digitalcommons.memphis.edu/facpubs/4420/","timestamp":"2024-11-05T11:02:44Z","content_type":"text/html","content_length":"36368","record_id":"<urn:uuid:42b10302-5793-4d92-8ebd-22959c2ab93b>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00122.warc.gz"} |
Dr Gareth Moore
Sudoku Xtra 24 is now finally available! It’s packed with 130 puzzles of a wide range of types, including a huge variety of sudoku variants.
This issue I’ve included some new sudoku types such as Two-grid Interconnected Sudoku, Mystery Multiple Sudoku and Blackout Sudoku. I’ve also made an effort to include all of the most popular
variants as requested by readers, such as Consecutive Sudoku, Inequality Sudoku, Odd/Even Sudoku – and of course many more.
There’s also a range of non-sudoku puzzles, including Light-up/Akari, Hashi, Slitherlink, Battleships, Skyscrapers, Calcudoku, Futoshiki, No Four in a Row, and more!
It’s available either as a PDF to print yourself (every page is self-contained, so you can print only the pages you want), or as a professionally-printed book direct from Amazon – there are links for
all of these on the Sudoku Xtra site.
Sum-Skyscraper 5×5 puzzleI made this puzzle just before Christmas, and it’s been waiting on my desktop to be posted here ever since! Well, now it finally has been.
This is a Sum Skyscraper. Place the digits 1 to 5 once each into every row and column in the grid. Numbers outside the grid provide the total (i.e. sum) of ‘visible’ grid digits along that row or
column, if you imagine each digit as a building of that many storeys. Taller buildings always obscure shorter ones. So, for example, a clue for 21354 from the top of such a column would be 10, since
the 2, 3 and 5 are visible (the 1 and 4 are obscured by the 2 and 5 respectively), and 2+3+5 = 10.
Sudoku Christmas Star puzzleA Sudoku, in a star shape.
Just that. (Place 1-9 once each into every row, column and bold-lined 3×3 box).
Weaved Bridge Maze 2
Weaved Bridge Maze 1
I’ve recently been making some material for a book of kids’ mazes, and so I thought it would be fun to post a harder version of some of those puzzles here.
First-up, here’s a weave maze, so-called because the paths weave over and under each other. In this puzzle I’ve drawn narrow bridges where one path crosses over another.
If the first one is too easy for you, try the second! It needs to be printed full-page in order to have enough space to solve it.
Just enter at the top of the maze and follow paths until you exit at the bottom of the maze.
I’m currently working on a forthcoming book (The Mammoth Book of Brain Workouts, published next year in the UK by Constable & Robinson, and in the US by Running Press), and have been experimenting
with something I wrote about briefly a few years ago but hadn’t really tried since – variants on Numberlink.
Numberlink puzzles have proved popular recently in various guises, including Flow Free and other apps on mobile devices such as the iPhone. There are quite a few such apps available, but none of them
force a unique solution on the user (and generally the puzzles do indeed have many different solutions), which when you’re playing against a computer that grades you isn’t necessarily a problem since
you can at least be marked correct/wrong automatically.
For a logic puzzle solver, such vague puzzles are perhaps a bit disappointing because you will reach a point during solving where you can’t eliminate any options because they may all be valid,
despite being contradictory.
I’ve made a printed book of 200 of these Flow Free puzzles – you can get it from Amazon.com (currently $5.36) or Amazon.co.uk (£3.95) – and they’re actually quite fun to solve despite the multiple
solutions (not all the puzzles have multiple solutions, but some do). Unlike traditional Numberlink the puzzles include an explicit rule that every cell in the grid must be used, which eliminates a
lot of potential solutions and means the puzzles usually require some thought.
Toroidal numberlink 6×6 puzzle
Toroidal numberlink 5×5 puzzle
But such multi-solution puzzles are not the kind of puzzle I usually post, so I’m going to stick to logic puzzles with unique solutions on this blog.
It turns out that if you allow lines on a Numberlink puzzle to wrap around one edge and come back on the other – so if a line goes off one end of a row or column it comes back on at the other end of
the same row or column – that the puzzles get very difficult very quickly. In fact, even at 5×5 many such puzzles are very challenging. Once you get to 6×6, I have real trouble with them.
Here’s a 5×5 and a 6×6 puzzle for you to try. Let me know how you get on! There’s no explicit rule that every cell must be used – but as a hint I can tell you that they are anyway. There’s a unique
solution to each puzzle.
I’ve recently launched a new series of ‘101 Giant Sudoku’ books, to cater for those who like their Sudoku to be considerably larger than normal!
You can see the entire series at PuzzleBooks.org (scroll to the bottom) or visit Amazon and search for “101 giant sudoku”.
There are currently 12 books in the series: 14×14, 15×15, 16×16, 18×18, 20×20, 21×21, 22×22, 24×24, 25×25, 28×28, 30×30 and 36×36.
The larger puzzles work just as you’d expected, so in Sudoku 36×36, for example, you must place 0-9 and A-Z into every one of the 36 rows, 36 columns and 36 6×6 boxes!
These puzzles are designed so they don’t need any advanced logic – just scan the rows and columns and boxes to see what’s missing and what can fit where.
All of the puzzles are designed with attractive 8-way symmetry patterns.
Following-up yesterday’s Skyscraper puzzles, I thought I’d post a couple of Sum Skyscraper variant puzzles.
Sum Skyscraper puzzles are very similar to Skyscraper puzzles, so no number can repeat in any row or column and external ’skyscraper’ clues reveal information about the numbers in the main grid. In
5×5 puzzles place 1-5, and in 6×6 puzzles place 1-6.
Each number in the completed grid represents a building of that many storeys, and place the buildings in such a way that each given number outside the grid represents the sum of the number of
buildings that can be seen from that point, looking only at that number’s row or column. A building with a higher value always obscures a building with a lower value, while a building with a lower
value never obscures a building with a higher value. So the clue ‘6′ in a 5×5 puzzle would indicate that the buildings ‘1′ and ‘5′ can be seen (’5′ is always visible in 5×5 puzzles), so the solution
to a row might be 15234.
I haven’t posted here for a while, but to celebrate the advent of reduced-clue skyscraper puzzles on PuzzleMix.com earlier today I thought I’d post a few Skyscraper puzzles here.
Skyscraper puzzles combine the no-repeat row and column constraints of sudoku with novel additional clues. In these 5×5 puzzles, place the numbers 1-5 once each into every row and column. Each number
in the completed grid represents a building of that many storeys.
Place the buildings in such a way that each given number outside the grid represents the number of buildings that can be seen from that point, looking only at that number’s row or column. A building
with a higher value always obscures a building with a lower value, while a building with a lower value never obscures a building with a higher value.
Sudoku Xtra 21 is now available, both as a PDF download to print yourself and also as a pre-printed book from your local Amazon store. Follow the links on the Sudoku Xtra website to get hold of it
in your preferred form.
Sudoku Xtra 21 is packed full of 144 top-quality logic puzzles covering a wide range of types. There is a particular emphasis on Sudoku and new varieties appearing for the first time in this volume
include Quad-Max Sudoku, Anti-Knight Sudoku, Slashed Sudoku, Minus Little Killer, Product Frame Sudoku, Headless Worm Sudoku, Extra Region Windmill Sudoku, Non-Consecutive Diagonal Sudoku, Mystery
Calcudoku Zero and a giant Trio 13-grid Samurai Sudoku.
The very first page features a large Arrow Samurai Sudoku, and other returning variants that were recently introduced to the series include Worm Sudoku, Quad Clue Sudoku, Offset Sudoku, Sudoku XV
and Kropki Sudoku.
Not only that, but there’s Hanjie, Futoshiki, Hashi, Yajilin, Calcudoku, Dominoes, Hitori, Slitherlink and many more logic puzzles.
Pre-printed copies are on top-quality, 8.5×11 inch paper ideal for solving on, while download PDFs are designed to fit both A4 and Letter paper for printing.
Just a quick heads-up that PuzzleMix, my site where you can play a wide range of puzzles online, now supports touch screen play for all of the number entry puzzles – so that’s Sudoku, Killer Sudoku,
Futoshiki, Calcudoku, Skyscraper, Sudoku X, Kropki Sudoku, Killer Sudoku Pro, Jigsaw Sudoku, Consecutive Sudoku, Wraparound Sudoku, Sudoku XV, Killer Sudoku X, Odd Pair Sudoku and more.
It’s pretty darn awesome, even if I do say so myself! It handles the screen touch events directly so it’s just as fast as running a native application on the iPad or iPhone. It also works on other
Recent Comments
• Marilyn on Trio Odd-Even Sudoku
• Marilyn on Diagonal Non-Consecutive Sudoku | {"url":"https://www.garethmoore.co.uk/page/2/","timestamp":"2024-11-07T09:46:48Z","content_type":"application/xhtml+xml","content_length":"61485","record_id":"<urn:uuid:ea095370-b0b5-4818-a621-fd960d530d13>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00420.warc.gz"} |
Does 32 ounces equal 1 quart? - Liquid Image
Does 32 ounces equal 1 quart?
No, 32 ounces does not equal 1 quart. While there are 32 ounces in a quart, 1 quart is actually equivalent to 32 fluid ounces, which is different. A fluid ounce is a unit of volume equal to about 0.
0296 liters (1.
041 imperial fluid ounces). One quart is equal to about 0. 946 liters (4 cups or 2 pints or 32 fluid ounces). Therefore, 32 ounces does not equal 1 quart.
Does 2 quarts equal 32 oz?
No, 2 quarts does not equal 32 oz. A quart is a unit of volume that is equal to 4 cups, which is equivalent to 32 fluid ounces. Therefore, 2 quarts would be equal to 64 fluid ounces, which is double
the amount of 32 ounces.
Additionally, it is important to note that a quart is a dry volume measurement, which is different from a fluid volume measurement. In dry measurements, 1 quart is equal to about 1. 1 liters, or 4. 2
However, in fluid measurements, 1 quart is equal to 32 fluid ounces, so two quarts would total 64 fluid ounces.
What is 1 quart equal to in ounces?
One quart is equal to 32 fluid ounces in the US Customary system of measurement. Quart is a unit of volume and ounce is a unit of weight or mass. A quart is equal to two pints and 4 cups, which is 32
US customary ounces.
A quart is also equal to 33. 6 imperial ounces or 29. 6 metric ounces.
How many 8oz cups are in a quart?
There are 4 cups in a quart, so if you have 8 ounce cups, there would be 8 of them in a quart. To put it another way, there are 32 fluid ounces (oz) in a quart, and since 8 fluid ounces fill a cup, 4
cups would fill a quart.
Therefore, 8 8oz cups would be equal to a quart.
What makes up 1 quart?
One quart is equal to two pints, four cups, and 32 fluid ounces. A quart is a unit of volume used mainly in the United States and the United Kingdom. It is equivalent to 0. 946352946 liters. A quart
is half a gallon, a U.
S. customary unit of liquid measurement. In dry capacity, a quart is equal to a quarter of a gallon, or 57. 75 cubic inches. In terms of weight, one quart is equivalent to 2 pounds (or 32 ounces),
though this will vary depending on the density of the liquid or material being measured.
Examples of materials typically measured in quarts include fruits and vegetables, grains, water, and other liquids. Dry ingredients such as salt, sugar, and flour can also be measured in quarts.
Is 1 gallon the same as 2 quarts?
No, 1 gallon is not the same as 2 quarts. One gallon is 4 quarts, so 2 quarts would equal half a gallon. A gallon is also equal to 16 cups, 32 ounces, or 128 tablespoons. The imperial gallon is 4.
54609 liters and the US gallon is 3.
785 liters. In comparison, a quart is equal to 2 pints, 4 cups, 32 ounces, or 64 tablespoons. The imperial quart is 1. 13652 liters and the US quart is 0. 946 liters.
Is 8oz equal to 2 cups?
Yes, 8oz is equal to 2 cups. A cup is a unit of volume typically used for measuring foods like flour, sugar, and water. 1 cup is equivalent to 8 fluid oz, and therefore 2 cups is equal to 16 fluid
So 8 oz is equal to 2 cups. However, the exact conversion can vary depending on the substance you are measuring, as the density of different substances can vary. For example, 1 cup of flour doesn’t
necessarily weigh the same as 1 cup of water.
How many quarts makes a gallon?
A gallon is a measure of volume in the US customary and imperial systems of measurement. It is equal to 128 US fluid ounces, 4 quarts, or 3. 785 liters. In other words, one gallon is equal to four
To answer the question more simply, one gallon is equal to four quarts.
Is it okay to drink a gallon of water a day?
It depends on the individual and their activity levels, but generally speaking, drinking a gallon of water per day is inadvisable and potentially dangerous. While drinking plenty of water is
beneficial for your health, drinking more than necessary can cause water intoxication, where the body has too much water and too little salt in the blood.
This can lead to a series of health complications, such as low sodium levels, headaches, seizures, confusion, and possibly even death in some cases.
It is generally recommended that the average person drink between 8 to 12 cups of water (or 64-96 ounces) per day, depending on factors such as age and activity level. However, some individuals may
need more or less water depending on personal medical conditions or other factors.
It is important to consult with a doctor before making any large changes to one’s overall water intake, such as drinking a gallon of water per day.
Is 1 quart more than 8 cups?
Yes, 1 quart is more than 8 cups of liquid. A quart is equal to 2 pints, which is equal to 4 cups. Therefore, 1 quart is 4 cups larger than 8 cups. If you need to convert between quarts and cups, you
can use the following conversion: 1 quart is equal to 4 cups.
How many ounces does it take to make a quart?
A quart is a unit of measurement that is equal to two pints, or 32 fluid ounces. Therefore, it takes 32 ounces to make a single quart.
Is 1 quart the same as 16 oz?
Yes, 1 quart is equal to 16 ounces. There are 8 ounces in a cup, so 1 quart is equal to 2 cups or 16 ounces. A quart is a commonly used unit of volume measurement in the US customary system, which is
equal to a quarter of a gallon.
When measuring liquids such as milk, water, juice, and other beverages, it is common to use quarts to do so. Therefore, if a recipe calls for 1 quart of liquid, it is the same as 16 oz. Additionally,
1 quart is also the equivalent to 4 cups, 32 tablespoons, or 64 teaspoons.
How many 8oz glasses should you drink a day?
The amount of 8 oz glasses of water you should drink a day varies and is based on a variety of factors, such as your age, weight, sex, activity level and the climate you live in. Generally speaking,
an adult should aim for 6-8 8 oz glasses of water every day, totaling around 48-64 oz of water.
However, if you are an active individual who lives in a hot or humid climate, you may need to drink up to 10 8 oz glasses of water per day. In addition, women who are pregnant or breastfeeding may
need to consume higher amounts of water on a daily basis, depending on their individual needs.
Furthermore, it is important to note that other beverages, such as coffee and tea, also contribute to your daily water intake and should be taken into account when calculating how much water to
Is 64 oz good to drink a day?
No, 64 ounces of water per day is not necessarily a good recommendation for everyone. Depending on your health, lifestyle, and activity levels, your water intake requirements may vary significantly.
Generally, most doctors recommend drinking approximately half your body weight in ounces.
For example, if you weigh 160 pounds then aim for 80 ounces of water a day. That said, there are certain factors that may affect your daily water needs, such as temperature, elevation, exercise, and
If you are living in a hot climate or exercise regularly, it’s best to drink more than the recommended amount to make up for fluid lost during sweat. Additionally, if you are pregnant, or are
suffering from any health ailment such as fever, diarrhea, or vomiting, then you should drink more water to replace lost fluids.
Ultimately, it is best to speak to your doctor to determine what amount of water is best for you.
How many water bottles is 8 glasses a day?
Generally, the recommended amount of water that should be consumed by adults is 2 liters or 8 glasses per day. Since it is commonly known that a standard water bottle holds half a liter of liquid,
that would mean that the recommended daily intake would be about 4 water bottles per day.
However, this is just a general estimate as 8 glasses of water don’t necessarily have to come from water bottles. Glasses of water could also be filled from pitchers, drinking fountains, cups or
other containers.
Therefore, it would depend on the size and amount of liquid contained in these objects to get an exact equivalent for 8 glasses of water. | {"url":"https://www.liquidimageco.com/does-32-ounces-equal-1-quart/","timestamp":"2024-11-09T22:51:37Z","content_type":"text/html","content_length":"109466","record_id":"<urn:uuid:14f79cd6-a00a-4b3f-8539-82edf34e8192>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00411.warc.gz"} |
Unveiling the Number of Integers Between Two Given Numbers: A Comprehensive Guide
Unveiling The Number Of Integers Between Two Given Numbers: A Comprehensive Guide
Delve into the fascinating world of bounds and intervals, where you’ll explore the concepts of lower and upper bounds, intervals (closed, open, half-open), and range. Discover how these mathematical
tools help us define and explore sets of numbers. We’ll investigate how to determine the number of integers between two given numbers, unlocking practical applications in various fields.
Unlocking the Secrets of Bounds and Intervals: A Journey into Mathematical Precision
In the realm of mathematics, understanding the concepts of bounds and intervals is crucial for deciphering the intricate language of numbers. Bounds serve as invisible barriers that confine a set of
values, while intervals delineate the specific range within which those values reside. Together, they lay the foundation for comprehending the structure and behavior of the numerical world.
In this exploration, we will embark on a journey to unravel the significance of bounds, intervals, and their interplay. We will delve into the concepts of lower and upper bounds, embracing the range
as the difference between these bounds. We will discover the various types of intervals, recognizing their distinct characteristics and relationships with bounds. We will also delve into the
intricacies of half-open intervals, understanding their unique properties and practical applications.
Along this mathematical odyssey, we will encounter real-life scenarios where these concepts find their utility. We will learn how to determine the number of integers nestled between two given numbers
and calculate the cardinality of a set of numbers within a specific range or interval. Join us on this enlightening adventure as we unlock the secrets of bounds and intervals, illuminating the path
to mathematical precision.
Defining Bounds: Essential Concepts for Understanding Intervals and Ranges
Understanding the concepts of bounds, ranges, and intervals is crucial for navigating mathematical problems and real-life applications effectively. In this post, we’ll embark on a storytelling
journey to unravel the complexities of these concepts, making them accessible to all.
Bounds: The Boundaries of a Set
Bounds are limits that define the extent of a set of numbers. The lower bound represents the smallest possible value in the set, while the upper bound represents the largest possible value. These
boundaries help us understand the range and distribution of a set.
For instance, if we have the set {2, 4, 6, 8}, the lower bound is 2 and the upper bound is 8. These bounds define the range of the set, which is the difference between the upper and lower bounds. In
this case, the range is 8 – 2 = 6.
Range: The Span of a Set
The range of a set is the total distance between the lower and upper bounds. It measures the spread or variability of the set. A larger range indicates a wider spread of values, while a smaller range
indicates a narrower spread.
Returning to our previous example, the range of the set {2, 4, 6, 8} is 6. This tells us that the values in the set are spread out over a range of 6 units.
Intervals: Defining Subsets
Intervals are subsets of the real numbers defined by the relationship between their lower and upper bounds. Different types of intervals include:
• Closed intervals: Include both the lower and upper bounds. Represented as [a, b].
• Open intervals: Exclude both the lower and upper bounds. Represented as (a, b).
• Half-open intervals: Include one bound and exclude the other. Represented as [a, b) or (a, b].
For example, the interval [2, 8] includes all numbers between 2 and 8, including 2 and 8 themselves. The interval (2, 8) includes all numbers between 2 and 8, but excludes 2 and 8. The interval [2,
8) includes all numbers between 2 and 8, including 2 but excluding 8.
Understanding bounds, ranges, and intervals is essential for solving mathematical problems and interpreting real-world data. By grasping these concepts, you’ll gain a deeper appreciation for the
structure and organization of numbers, empowering you to tackle more complex mathematical challenges with confidence.
Understanding Range: The Difference Between Bounds
In the realm of mathematics, bounds and intervals play a crucial role in defining the limits within which values can reside. Among these concepts, range stands out as a fundamental measure that
quantifies the spread of a dataset or the extent of a mathematical expression.
Defining Range
The range of a set of numbers or an interval is the difference between its upper bound and lower bound. The upper bound is the highest value that the set or interval can attain, while the lower bound
is the lowest.
For example, if we have the set of numbers {2, 5, 7, 10}, the upper bound is 10 and the lower bound is 2. The range of this set is therefore 10 – 2 = 8, indicating that the numbers in the set span a
difference of 8 units.
Upper and Lower Bounds as Starting and Endpoint
The upper bound and lower bound serve as essential reference points for an interval. The lower bound marks the starting point of an interval, below which no values are included. Conversely, the upper
bound represents the endpoint of an interval, beyond which no values are included.
Consider the closed interval [2, 10]. The lower bound of 2 indicates that the interval starts at 2, and the upper bound of 10 indicates that the interval ends at 10. All numbers between 2 and 10,
including 2 and 10, are included in this interval.
In summary, the range of a set of numbers or an interval measures the distance between the highest and lowest values it can contain. The upper bound and lower bound define the starting point and
endpoint of an interval, respectively, serving as important parameters for understanding mathematical expressions and data analysis.
Intervals: Types and Boundaries
In the realm of mathematics, intervals play a crucial role in describing the boundaries and ranges of numbers. Intervals serve as the building blocks for both closed and open sets, essential concepts
in calculus, analysis, and beyond. Understanding the different types of intervals is fundamental to comprehending the behavior of functions, sets, and sequences.
Closed Intervals
Closed intervals are characterized by both their lower bound and upper bound being included in the set. These intervals are denoted using square brackets, such as [a, b]. The lower bound, a,
represents the starting point, while the upper bound, b, represents the ending point. Every number between a and b, including a and b themselves, belongs to the closed interval [a, b].
Open Intervals
Open intervals, on the other hand, exclude both their lower and upper bounds from the set. They are denoted using parentheses, such as (a, b). The lower bound, a, is not included in the interval,
while the upper bound, b, is not included either. Only numbers between a and b, but not equal to a or b, belong to the open interval (a, b).
Half-Open Intervals
Half-open intervals combine the characteristics of both closed and open intervals. They include one boundary point while excluding the other. There are two types of half-open intervals:
• [a, b): This half-open interval includes the lower bound, a, but excludes the upper bound, b. It is often read as “a to b, not including b.”
• (a, b]: This half-open interval includes the upper bound, b, but excludes the lower bound, a. It is often read as “a to b, not including a.”
Relationship to Bounds
The type of interval is determined by the relationship between the lower and upper bounds. Closed intervals occur when both bounds are included, open intervals occur when both bounds are excluded,
and half-open intervals occur when one bound is included and the other is excluded.
Half-open Intervals: A Simple Guide
In the realm of mathematics, understanding the nuances of concepts like bounds and intervals is crucial. Particularly, half-open intervals, with their unique structure, hold a special significance.
Defining Half-open Intervals
A half-open interval is a set of numbers defined by its starting point and excluded endpoint. It’s represented using brackets on one side and parentheses on the other. For instance, the interval (2,
5] includes all numbers greater than 2 but less than or equal to 5.
Structure and Significance
Half-open intervals are a specific type of interval that lies between closed and open intervals. Their distinctive feature is the excluded endpoint. This exclusion creates a sense of directionality,
indicating that the interval continues indefinitely in one direction.
Practical Applications
Half-open intervals play a vital role in various practical scenarios. One common application is finding the number of integers between two given numbers. For example, to determine the number of
integers between 2 and 5 (excluding 5), we use the half-open interval (2, 5], which encompasses three integers: 3, 4, and 5.
Another practical use is in determining the cardinality of a set of numbers within a given interval. For instance, the set {3, 4, 5, 6} has a cardinality of 4 within the half-open interval (2, 7].
Half-open intervals are a fundamental concept in mathematics, offering a precise way to describe sets of numbers. Understanding their structure and significance is essential for various practical
applications, such as counting integers and determining the cardinality of sets. Embrace the power of half-open intervals and unlock a deeper understanding of mathematical concepts!
Practical Applications of Bounds and Intervals: Unlocking the Secrets of Numbers
As we delve into the realm of mathematics, understanding concepts like bounds and intervals becomes crucial for navigating the intricacies of numbers. These concepts go beyond mere theoretical
knowledge; they find practical applications in various real-life scenarios.
One such application lies in determining the number of integers within a specified range. Consider a scenario where you need to count the number of positive integers between 10 and 50 inclusive. By
understanding the bounds and the concept of range, you can easily calculate that the range is 40 (50-10=40). Since we’re including both 10 and 50, the number of integers within the range is 41 (0 to
40 inclusive).
Another practical application involves determining the cardinality of a set of numbers within a certain range or interval. The cardinality of a set refers to the number of elements within it. Suppose
you have a set of numbers {2, 4, 6, 8, 10} and you wish to find the cardinality of the set that lies within the closed interval [2, 8]. By identifying the bounds as 2 and 8, you can quickly count
that there are four elements (2, 4, 6, 8) in the set that belong to this interval.
The concepts of bounds and intervals are not just limited to these examples; they extend to a wide range of practical applications across various disciplines. Whether you’re working with data
analysis, probability, or physics, understanding these concepts becomes imperative for interpreting and comprehending the numerical aspects of the world around us. | {"url":"https://www.pattontheedge.ca/unveiling-integers-between-given-numbers/","timestamp":"2024-11-04T11:15:39Z","content_type":"text/html","content_length":"157172","record_id":"<urn:uuid:74b6d49a-fb84-4f35-88c9-cdb2a3bf60de>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00666.warc.gz"} |
Re: ARIMA differences in R and SAS
Hi everyone,
I am working to understand some of the differences between SAS and R (using the fable, forecasting, or basic stats packages) to be able to run the same model in SAS and R. After being unable to
replicate a more complex model, I tried to run a basic ARIMA (1,1,1) and work my way up. I know that this is not the right model for my data, the point of this is not to fit a model, rather to get
the same forecasted values using SAS and R.
However, I am not understanding the SAS outcome - although the data has seasonality, I have not indicated that to SAS. But the forecasted values make me think that something is happening in SAS that
I don't know about. Interestingly, when I add an intercept to my R model, I get similar values as the SAS model where I specify no intercept.
I would appreciate any insight anyone has into this! I have read through a lot of R and SAS documentation, but have not found anything that sheds light on why this would happen. I have attached a
screenshot of the forecasted values in SAS, R without an intercept, and R with an intercept. Thanks!
SAS Code:
proc arima data=cld;
identify var=lgs1(1) nlag=13 minic scan esacf
estimate p=(1)q=(1) input=(sspben)
noint method=ml plot;
forecast lead=61 interval=month id=month alpha=0.05 out=s1out;
R (fable) code without an intercept (with an intercept, replace ~0 with ~1):
model(arima = ARIMA(lgs1 ~ 0+ sspben +pdq(1,1,1)
+PDQ(0,0,0), method="ML"))
s1_forecast<-forecast(arima111, fc_data)%>%
select(month, lgs1, forecast)
09-07-2023 11:40 AM | {"url":"https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/ARIMA-differences-in-R-and-SAS/m-p/893787/highlight/true","timestamp":"2024-11-12T06:00:43Z","content_type":"text/html","content_length":"244168","record_id":"<urn:uuid:19d2cfec-a426-4222-a05c-ddda344c4368>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00080.warc.gz"} |