https://pypi.org/project/partial-dependence/0.0.1/
|
PartialDependence is a library for visualizing input-output relationships of machine learning models.
## Project description
partial_dependence
==================
A library for plotting partial dependency patterns of machine learning classifiers.
Partial dependence measures the prediction change when changing one or more input features.
We focus only on 1D partial dependency plots.
For each instance in the data we can plot the prediction change as we change a single feature over a defined sample range.
Then we cluster similar plots, i.e., instances reacting similarly to value changes, to reduce clutter.
The technique is a black-box approach for recognizing sets of instances where the model makes similar decisions.
You can install *partial_dependence* via
.. code:: bash
pip install partial_dependence
and import it in python using:
.. code:: python
import partial_dependence as pdp_plot
****************************************
Plotting clustering of partial dependence
****************************************
In the following we show how the pipeline of functions works. Please refer to the inline documentation of the methods for full information.
You can also run the jupyter notebook file to have a running example.
Initialization
##############
Required arguments:
*******************
* df_test: a pandas.DataFrame containing only the feature values for each instance in the test set.
* model: a trained classifier, given as an object with the following property:
the object must have a method predict_proba(X) which takes a numpy.array of shape (n, num_feat) as input and returns a numpy.array of shape (n, len(class_array)).
* class_array: a list of strings with all the class names, in the same order
as the predictions returned by predict_proba(X).
* class_focus: a string with the class name of the desired partial dependence.
Optional arguments:
*******************
* num_samples: number of desired samples. Sampling a feature is done with:
numpy.linspace(min_value, max_value, num_samples)
where the bounds are the min and max values of that feature in the test set.
* scale: scale parameter vector for normalization.
* shift: shift parameter vector for normalization.
If you need to provide your data to the model in normalized form,
you have to define scale and shift such that:
transformed_data = (original_data + shift)*scale
where shift and scale are both numpy.array of shape (1, num_feat).
If the model uses the raw data in df_test directly, without any transformation,
do not pass any scale and shift parameters.
.. code:: python
my_pdp_plot = pdp_plot.PartialDependence( my_df_test,
my_model,
my_labels_name,
my_labels_focus,
my_number_of_samples,
my_scale,
my_shift )
Creating the matrix of instance vectors
########################################
By choosing a feature and varying it over the sample range, for each row in the test set we can create num_samples different versions of the original instance.
pdp() returns a 3D matrix numpy.array of shape (num_rows,num_samples,num_feat) storing all those different versions.
Required argument:
******************
* fix: a string with the name of the chosen feature, as reported in a column of df_test.
.. code:: python
the_matrix = my_pdp_plot.pdp(chosen_feature)
Computing prediction changes
############################
By feeding the_matrix to pred_comp_all() we are able to compute prediction values for each of the different vectors.
.. code:: python
preds = my_pdp_plot.pred_comp_all(the_matrix)
In preds, a numpy.array of shape (num_rows, num_samples), each element is a prediction linked to an original instance of the test set and a specific sample of the chosen_feature.
Clustering the partial dependence
#################################
.. code:: python
labels_clusters = my_pdp_plot.compute_clusters(preds,chosen_cluster_number)
Plotting the results
####################
.. code:: python
my_pdp_plot.plot(preds,labels_clusters)
.. image:: plot_alcohol.png
:width: 750px
:align: center
:height: 421px
:alt: alternate text
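Putting it all together, here is a minimal end-to-end sketch of the pipeline above. The synthetic data, feature names and cluster count are illustrative assumptions, not part of the library; only the PartialDependence, pdp, pred_comp_all, compute_clusters and plot calls come from its documented API.

.. code:: python

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    import partial_dependence as pdp_plot

    # Illustrative stand-in data: 200 instances, 4 numeric features.
    rng = np.random.default_rng(0)
    my_df_test = pd.DataFrame(rng.normal(size=(200, 4)),
                              columns=["alcohol", "acidity", "sugar", "ph"])
    y = np.where(my_df_test["alcohol"] > 0, "good", "bad")

    # Training and plotting on the same frame purely for brevity.
    my_model = RandomForestClassifier(random_state=0).fit(my_df_test, y)
    my_labels_name = list(my_model.classes_)  # order matches predict_proba(X)

    my_pdp_plot = pdp_plot.PartialDependence(my_df_test, my_model,
                                             my_labels_name, "good", 100)

    the_matrix = my_pdp_plot.pdp("alcohol")        # (num_rows, num_samples, num_feat)
    preds = my_pdp_plot.pred_comp_all(the_matrix)  # (num_rows, num_samples)
    labels_clusters = my_pdp_plot.compute_clusters(preds, 5)
    my_pdp_plot.plot(preds, labels_clusters)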
|
https://everything.explained.today/Binary_search_algorithm/
|
# Binary search algorithm explained
Class: Search algorithm. Data structure: Array. Worst-case performance: O(log n). Best-case performance: O(1). Average performance: O(log n). Worst-case space complexity: O(1). Optimal: Yes.
In computer science, binary search, also known as half-interval search,[1] logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.
Binary search runs in logarithmic time in the worst case, making $O(\log n)$ comparisons, where $n$ is the number of elements in the array.[2] Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search.
## Algorithm
Binary search works on sorted arrays. Binary search begins by comparing an element in the middle of the array with the target value. If the target value matches the element, its position in the array is returned. If the target value is less than the element, the search continues in the lower half of the array. If the target value is greater than the element, the search continues in the upper half of the array. By doing this, the algorithm eliminates the half in which the target value cannot lie in each iteration.
### Procedure
Given an array $A$ of $n$ elements with values or records $A_0, A_1, A_2, \ldots, A_{n-1}$ sorted such that $A_0 \leq A_1 \leq A_2 \leq \cdots \leq A_{n-1}$, and target value $T$, the following subroutine uses binary search to find the index of $T$ in $A$.
1. Set $L$ to $0$ and $R$ to $n - 1$.
2. If $L > R$, the search terminates as unsuccessful.
3. Set $m$ (the position of the middle element) to the floor of $\frac{L+R}{2}$, which is the greatest integer less than or equal to $\frac{L+R}{2}$.
4. If $A_m < T$, set $L$ to $m + 1$ and go to step 2.
5. If $A_m > T$, set $R$ to $m - 1$ and go to step 2.
6. Now $A_m = T$, the search is done; return $m$.
This iterative procedure keeps track of the search boundaries with the two variables $L$ and $R$. The procedure may be expressed in pseudocode as follows, where the variable names and types remain the same as above, floor is the floor function, and unsuccessful refers to a specific value that conveys the failure of the search.
```
function binary_search(A, n, T) is
    L := 0
    R := n - 1
    while L ≤ R do
        m := floor((L + R) / 2)
        if A[m] < T then
            L := m + 1
        else if A[m] > T then
            R := m - 1
        else:
            return m
    return unsuccessful
```
Alternatively, the algorithm may take the ceiling of $\frac{L+R}{2}$. This may change the result if the target value appears more than once in the array.
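For concreteness, the pseudocode translates directly to Python; this is a sketch, with -1 standing in for unsuccessful:

```python
def binary_search(A, T):
    """Return an index of T in the sorted list A, or -1 if T is absent."""
    L, R = 0, len(A) - 1
    while L <= R:
        m = (L + R) // 2              # floor((L + R) / 2)
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    return -1                         # unsuccessful
```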
#### Alternative procedure
In the above procedure, the algorithm checks whether the middle element ($m$) is equal to the target ($T$) in every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when $L = R$). This results in a faster comparison loop, as one comparison is eliminated per iteration, while it requires only one more iteration on average.[3]
Hermann Bottenbruch published the first implementation to leave out this check in 1962.
1. Set $L$ to $0$ and $R$ to $n - 1$.
2. While $L \neq R$:
    1. Set $m$ (the position of the middle element) to the ceiling of $\frac{L+R}{2}$, which is the least integer greater than or equal to $\frac{L+R}{2}$.
    2. If $A_m > T$, set $R$ to $m - 1$.
    3. Else, $A_m \leq T$; set $L$ to $m$.
3. Now $L = R$, the search is done. If $A_L = T$, return $L$. Otherwise, the search terminates as unsuccessful.
Where ceil is the ceiling function, the pseudocode for this version is:
```
function binary_search_alternative(A, n, T) is
    L := 0
    R := n - 1
    while L != R do
        m := ceil((L + R) / 2)
        if A[m] > T then
            R := m - 1
        else:
            L := m
    if A[L] = T then
        return L
    return unsuccessful
```
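The same translation for this variant (again a sketch, with -1 for unsuccessful; the guard for an empty array is an addition, since the pseudocode assumes n ≥ 1):

```python
def binary_search_alternative(A, T):
    """Check equality only at the end; returns -1 if T is absent."""
    if not A:
        return -1
    L, R = 0, len(A) - 1
    while L != R:
        m = -((L + R) // -2)          # ceil((L + R) / 2) in integer arithmetic
        if A[m] > T:
            R = m - 1
        else:                         # A[m] <= T
            L = m
    return L if A[L] == T else -1     # unsuccessful when A[L] != T
```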
### Duplicate elements
The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was $[1, 2, 3, 4, 4, 5, 6, 7]$ and the target was $4$, then it would be correct for the algorithm to return either the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3) in this case. It does not always return the first duplicate (consider $[1, 2, 3, 4, 4, 5, 6, 7]$, which still returns the 4th element). However, it is sometimes necessary to find the leftmost element or the rightmost element for a target value that is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, while the 5th element is the rightmost element of the value 4. The alternative procedure above will always return the index of the rightmost element if such an element exists.
#### Procedure for finding the leftmost element
To find the leftmost element, the following procedure can be used:
1. Set $L$ to $0$ and $R$ to $n$.
2. While $L < R$:
    1. Set $m$ (the position of the middle element) to the floor of $\frac{L+R}{2}$, which is the greatest integer less than or equal to $\frac{L+R}{2}$.
    2. If $A_m < T$, set $L$ to $m + 1$.
    3. Else, $A_m \geq T$; set $R$ to $m$.
3. Return $L$.
If $L < n$ and $A_L = T$, then $A_L$ is the leftmost element that equals $T$. Even if $T$ is not in the array, $L$ is the rank of $T$ in the array, or the number of elements in the array that are less than $T$.
Where floor is the floor function, the pseudocode for this version is:
```
function binary_search_leftmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] < T:
            L := m + 1
        else:
            R := m
    return L
```
#### Procedure for finding the rightmost element
To find the rightmost element, the following procedure can be used:
1. Set $L$ to $0$ and $R$ to $n$.
2. While $L < R$:
    1. Set $m$ (the position of the middle element) to the floor of $\frac{L+R}{2}$, which is the greatest integer less than or equal to $\frac{L+R}{2}$.
    2. If $A_m > T$, set $R$ to $m$.
    3. Else, $A_m \leq T$; set $L$ to $m + 1$.
3. Return $R - 1$.
If $R > 0$ and $A_{R-1} = T$, then $A_{R-1}$ is the rightmost element that equals $T$. Even if $T$ is not in the array, $n - R$ is the number of elements in the array that are greater than $T$.
Where floor is the floor function, the pseudocode for this version is:
```
function binary_search_rightmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] > T:
            R := m
        else:
            L := m + 1
    return R - 1
```
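Python translations of the two procedures (sketches mirroring the pseudocode above):

```python
def binary_search_leftmost(A, T):
    """Return the rank of T: the number of elements of A less than T."""
    L, R = 0, len(A)
    while L < R:
        m = (L + R) // 2
        if A[m] < T:
            L = m + 1
        else:                 # A[m] >= T
            R = m
    return L                  # A[L] == T iff T occurs in A (and L < len(A))

def binary_search_rightmost(A, T):
    """Return the index of the rightmost element equal to T, if present."""
    L, R = 0, len(A)
    while L < R:
        m = (L + R) // 2
        if A[m] > T:
            R = m
        else:                 # A[m] <= T
            L = m + 1
    return R - 1              # A[R-1] == T iff T occurs in A (and R > 0)
```

These are exactly `bisect.bisect_left(A, T)` and `bisect.bisect_right(A, T) - 1` from Python's standard library.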
### Approximate matches
The above procedure only performs exact matches, finding the position of a target value. However, it is trivial to extend binary search to perform approximate matches because binary search operates on sorted arrays. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries seeking the number of elements between two values can be performed with two rank queries.
• Rank queries can be performed with the procedure for finding the leftmost element. The number of elements less than the target value is returned by the procedure.
• Predecessor queries can be performed with rank queries. If the rank of the target value is $r$, its predecessor is the element at position $r - 1$.
• For successor queries, the procedure for finding the rightmost element can be used. If the result of running the procedure for the target value is $r$, then the successor of the target value is the element at position $r + 1$.
• The nearest neighbor of the target value is either its predecessor or successor, whichever is closer.
• Range queries are also straightforward. Once the ranks of the two values are known, the number of elements greater than or equal to the first value and less than the second is the difference of the two ranks. This count can be adjusted up or down by one according to whether the endpoints of the range should be considered to be part of the range and whether the array contains entries matching those endpoints.
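With the leftmost/rightmost procedures (Python's bisect module), the queries above are one-liners; the array contents here are illustrative:

```python
import bisect

A = [1, 2, 4, 4, 5, 8]
T = 4

rank = bisect.bisect_left(A, T)               # 2: number of elements < 4
pred = A[rank - 1] if rank > 0 else None      # predecessor of 4 is 2
i = bisect.bisect_right(A, T)                 # index one past the rightmost 4
succ = A[i] if i < len(A) else None           # successor of 4 is 5

# Range query: count elements x with 2 <= x < 5 (adjust the endpoints
# to include or exclude them as needed).
count = bisect.bisect_left(A, 5) - bisect.bisect_left(A, 2)   # 3
```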
## Performance
In terms of the number of comparisons, the performance of binary search can be analyzed by viewing the run of the procedure on a binary tree. The root node of the tree is the middle element of the array. The middle element of the lower half is the left child node of the root, and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. Starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration.
In the worst case, binary search makes $\lfloor \log_2 (n) + 1 \rfloor$ iterations of the comparison loop, where the $\lfloor \rfloor$ notation denotes the floor function that yields the greatest integer less than or equal to the argument, and $\log_2$ is the binary logarithm. This is because the worst case is reached when the search reaches the deepest level of the tree, and there are always $\lfloor \log_2 (n) + 1 \rfloor$ levels in the tree for any binary search.
The worst case may also be reached when the target element is not in the array. If $n$ is one less than a power of two, then this is always the case. Otherwise, the search may perform $\lfloor \log_2 (n) + 1 \rfloor$ iterations if the search reaches the deepest level of the tree. However, it may make $\lfloor \log_2 (n) \rfloor$ iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree.
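This bound is easy to check empirically; the sketch below counts comparison-loop iterations while forcing the deepest path by searching for a value larger than every element:

```python
from math import floor, log2

def worst_case_iterations(n):
    """Iterations of the comparison loop when the search goes right every time."""
    L, R, count = 0, n - 1, 0
    while L <= R:
        count += 1
        L = (L + R) // 2 + 1      # target exceeds A[m], so L := m + 1
    return count

for n in (1, 7, 8, 1000, 1 << 20):
    assert worst_case_iterations(n) == floor(log2(n)) + 1
```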
On average, assuming that each element is equally likely to be searched, binary search makes $\lfloor \log_2(n) \rfloor + 1 - \left(2^{\lfloor \log_2(n) \rfloor + 1} - \lfloor \log_2(n) \rfloor - 2\right)/n$ iterations when the target element is in the array. This is approximately equal to $\log_2(n) - 1$ iterations. When the target element is not in the array, binary search makes $\lfloor \log_2(n) \rfloor + 2 - 2^{\lfloor \log_2(n) \rfloor + 1}/(n+1)$ iterations on average, assuming that the range between and outside elements is equally likely to be searched.
In the best case, where the target value is the middle element of the array, its position is returned after one iteration.
In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. The comparison tree representing binary search has the fewest levels possible as every level above the lowest level of the tree is filled completely. Otherwise, the search algorithm can eliminate few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons, as while they may work faster on some target values, the average performance over all elements is worse than binary search. By dividing the array in half, binary search ensures that the size of both subarrays are as similar as possible.
### Space complexity
Binary search requires three pointers to elements, which may be array indices or pointers to memory locations, regardless of the size of the array. Therefore, the space complexity of binary search is $O(1)$ in the word RAM model of computation.
### Derivation of average case
The average number of iterations performed by binary search depends on the probability of each element being searched. The average case is different for successful searches and unsuccessful searches. It will be assumed that each element is equally likely to be searched for successful searches. For unsuccessful searches, it will be assumed that the intervals between and outside elements are equally likely to be searched. The average case for successful searches is the number of iterations required to search every element exactly once, divided by $n$, the number of elements. The average case for unsuccessful searches is the number of iterations required to search an element within every interval exactly once, divided by the $n + 1$ intervals.
#### Successful searches
In the binary tree representation, a successful search can be represented by a path from the root to the target node, called an internal path. The length of a path is the number of edges (connections between nodes) that the path passes through. The number of iterations performed by a search, given that the corresponding path has length $l$, is $l + 1$, counting the initial iteration. The internal path length is the sum of the lengths of all unique internal paths. Since there is only one path from the root to any single node, each internal path represents a search for a specific element. If there are $n$ elements, which is a positive integer, and the internal path length is $I(n)$, then the average number of iterations for a successful search is $T(n) = 1 + \frac{I(n)}{n}$, with the one iteration added to count the initial iteration.
Since binary search is the optimal algorithm for searching with comparisons, this problem is reduced to calculating the minimum internal path length of all binary trees with $n$ nodes, which is equal to:

$$I(n) = \sum_{k=1}^{n} \left\lfloor \log_2(k) \right\rfloor$$
For example, in a 7-element array, the root requires one iteration, the two elements below the root require two iterations, and the four elements below require three iterations. In this case, the internal path length is:

$$\sum_{k=1}^{7} \left\lfloor \log_2(k) \right\rfloor = 0 + 2(1) + 4(2) = 2 + 8 = 10$$

The average number of iterations would be $1 + \frac{10}{7} = 2\frac{3}{7}$ based on the equation for the average case. The sum for $I(n)$ can be simplified to:

$$I(n) = \sum_{k=1}^{n} \left\lfloor \log_2(k) \right\rfloor = (n+1)\left\lfloor \log_2(n+1) \right\rfloor - 2^{\left\lfloor \log_2(n+1) \right\rfloor + 1} + 2$$
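A quick numeric check of this simplification (a sketch; for an integer $k \geq 1$, $\lfloor \log_2 k \rfloor$ equals `k.bit_length() - 1` exactly):

```python
def I_direct(n):
    # Sum floor(log2(k)) for k = 1..n, using exact integer arithmetic.
    return sum(k.bit_length() - 1 for k in range(1, n + 1))

def I_closed(n):
    e = (n + 1).bit_length() - 1      # floor(log2(n + 1))
    return (n + 1) * e - 2 ** (e + 1) + 2

assert I_direct(7) == 10              # the 7-element example above
assert all(I_direct(n) == I_closed(n) for n in range(1, 2000))
```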
Substituting the equation for $I(n)$ into the equation for $T(n)$:

$$T(n) = 1 + \frac{(n+1)\left\lfloor \log_2(n+1) \right\rfloor - 2^{\left\lfloor \log_2(n+1) \right\rfloor + 1} + 2}{n} = \lfloor \log_2(n) \rfloor + 1 - \left(2^{\lfloor \log_2(n) \rfloor + 1} - \lfloor \log_2(n) \rfloor - 2\right)/n$$

For integer $n$, this is equivalent to the equation for the average case on a successful search specified above.
#### Unsuccessful searches
Unsuccessful searches can be represented by augmenting the tree with external nodes, which forms an extended binary tree. If an internal node, or a node present in the tree, has fewer than two child nodes, then additional child nodes, called external nodes, are added so that each internal node has two children. By doing so, an unsuccessful search can be represented as a path to an external node, whose parent is the single element that remains during the last iteration. An external path is a path from the root to an external node. The external path length is the sum of the lengths of all unique external paths. If there are $n$ elements, which is a positive integer, and the external path length is $E(n)$, then the average number of iterations for an unsuccessful search is $T'(n) = \frac{E(n)}{n+1}$, with the one iteration added to count the initial iteration. The external path length is divided by $n + 1$ because there are $n + 1$ external paths, representing the intervals between and outside the elements of the array.
This problem can similarly be reduced to determining the minimum external path length of all binary trees with $n$ nodes. For all binary trees, the external path length is equal to the internal path length plus $2n$. Substituting the equation for $I(n)$:

$$E(n) = I(n) + 2n = \left[(n+1)\left\lfloor \log_2(n+1) \right\rfloor - 2^{\left\lfloor \log_2(n+1) \right\rfloor + 1} + 2\right] + 2n = (n+1)(\lfloor \log_2(n) \rfloor + 2) - 2^{\lfloor \log_2(n) \rfloor + 1}$$
Substituting the equation for $E(n)$ into the equation for $T'(n)$, the average case for unsuccessful searches can be determined:

$$T'(n) = \frac{(n+1)(\lfloor \log_2(n) \rfloor + 2) - 2^{\lfloor \log_2(n) \rfloor + 1}}{n+1} = \lfloor \log_2(n) \rfloor + 2 - \frac{2^{\lfloor \log_2(n) \rfloor + 1}}{n+1}$$
#### Performance of alternative procedure
Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target in each iteration. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search. On average, this eliminates half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only $\lfloor \log_2 (n) + 1 \rfloor$ times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but very large $n$.[4]
### Running time and cache use
In analyzing the performance of binary search, another consideration is the time required to compare two elements. For integers and strings, the time required increases linearly as the encoding length (usually the number of bits) of the elements increases. For example, comparing a pair of 64-bit unsigned integers would require comparing up to twice as many bits as comparing a pair of 32-bit unsigned integers. The worst case is achieved when the integers are equal. This can be significant when the encoding lengths of the elements are large, such as with large integer types or long strings, which makes comparing elements expensive. Furthermore, comparing floating-point values (the most common digital representation of real numbers) is often more expensive than comparing integers or short strings.
On most computer architectures, the processor has a hardware cache separate from RAM. Since caches are located within the processor itself, they are much faster to access but usually store much less data than RAM. Therefore, most processors store memory locations that have been accessed recently, along with memory locations close to them. For example, when an array element is accessed, the element itself may be stored along with the elements that are stored close to it in RAM, making it faster to sequentially access array elements that are close in index to each other (locality of reference). On a sorted array, binary search can jump to distant memory locations if the array is large, unlike algorithms (such as linear search and linear probing in hash tables) which access elements in sequence. This adds slightly to the running time of binary search for large arrays on most systems.[5]
## Binary search versus other schemes
Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking $O(n)$ time for each such operation. In addition, sorted arrays can complicate memory use especially when elements are often inserted into the array. There are other data structures that support much more efficient insertion and deletion. Binary search can be used to perform exact matching and set membership (determining whether a target value is in a collection of values). There are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in $O(\log n)$ time regardless of the type or structure of the values themselves.[6] In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array.
### Linear search
Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand. All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least $O(n \log n)$ comparisons in the worst case. Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array.
### Trees
A binary search tree is a binary tree data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries.
However, binary search is usually more efficient for searching as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to balanced binary search trees, binary search trees that balance their own nodes, because they rarely produce the tree with the fewest possible levels. Except for balanced binary search trees, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching $n$ comparisons. Binary search trees take more space than sorted arrays.
Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can be efficiently structured in filesystems. The B-tree generalizes this method of tree organization. B-trees are frequently used to organize long-term storage such as databases and filesystems.
### Hashing
For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records. Most hash table implementations require only amortized constant time on average.[7] However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record.[8] Binary search is ideal for such matches, performing them in logarithmic time. Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables.
### Set membership algorithms
A related problem to search is set membership. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A bit array is the simplest, useful when the range of keys is limited. It compactly stores a collection of bits, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only $O(1)$ time. The Judy1 type of Judy array handles 64-bit keys efficiently.
For approximate results, Bloom filters, another probabilistic data structure based on hashing, store a set of keys by encoding the keys using a bit array and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with $k$ hash functions, membership queries require only $O(k)$ time. However, Bloom filters suffer from false positives.[9]
### Other data structures
There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time or space consuming for keys that lack that attribute. As long as the keys can be ordered, these operations can always be done at least efficiently on a sorted array regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching.
## Variations
### Uniform binary search
See main article: Uniform binary search. Uniform binary search stores, instead of the lower and upper bounds, the difference in the index of the middle element from the current iteration to the next iteration. A lookup table containing the differences is computed beforehand. For example, when searching an array of seven elements, the middle element ($m$) is at index 3; the middle element of the left subarray is at index 1 and the middle element of the right subarray is at index 5, so uniform binary search stores the value 2, as both indices differ from 3 by that same amount. To reduce the search space, the algorithm either adds or subtracts this change from the index of the middle element. Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers.
### Exponential search
See main article: Exponential search. Exponential search extends binary search to unbounded lists. It starts by finding the first element with an index that is both a power of two and greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes $\lfloor \log_2 x + 1\rfloor$ iterations before binary search is started and at most $\lfloor \log_2 x \rfloor$ iterations of the binary search, where $x$ is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array.
### Interpolation search
See main article: Interpolation search. Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as length of the array. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array.
A common interpolation function is linear interpolation. If $A$ is the array, $L, R$ are the lower and upper bounds respectively, and $T$ is the target, then the target is estimated to be about $(T - A_L) / (A_R - A_L)$ of the way between $L$ and $R$. When linear interpolation is used, and the distribution of the array elements is uniform or near uniform, interpolation search makes $O(\log \log n)$ comparisons.[10]
In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than binary search, but this only compensates for the extra computation for large arrays.
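A Python sketch of interpolation search using the linear estimate above (illustrative, not taken from the cited sources):

```python
def interpolation_search(A, T):
    """Search the sorted numeric list A for T; return its index or -1."""
    L, R = 0, len(A) - 1
    while L <= R and A[L] <= T <= A[R]:
        if A[L] == A[R]:              # all remaining values equal; avoid /0
            break
        # Estimate the position (T - A[L]) / (A[R] - A[L]) of the way along.
        m = L + (T - A[L]) * (R - L) // (A[R] - A[L])
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    return L if L <= R and A[L] == T else -1
```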
### Fractional cascading
See main article: Fractional cascading. Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires $O(k \log n)$ time, where $k$ is the number of arrays. Fractional cascading reduces this to $O(k + \log n)$ by storing specific information in each array about each element and its position in the other arrays.[11] [12]
Fractional cascading was originally developed to efficiently solve various computational geometry problems. Fractional cascading has been applied elsewhere, such as in data mining and Internet Protocol routing.
### Generalization to graphs
Binary search has been generalized to work on certain types of graphs, where the target value is stored in a vertex instead of an array element. Binary search trees are one such generalization - when a vertex (node) in the tree is queried, the algorithm either learns that the vertex is the target, or otherwise which subtree the target would be located in. However, this can be further generalized as follows: given an undirected, positively weighted graph and a target vertex, the algorithm learns upon querying a vertex that it is equal to the target, or it is given an incident edge that is on the shortest path from the queried vertex to the target. The standard binary search algorithm is simply the case where the graph is a path. Similarly, binary search trees are the case where the edges to the left or right subtrees are given when the queried vertex is unequal to the target. For all undirected, positively weighted graphs, there is an algorithm that finds the target vertex in $O(\log n)$ queries in the worst case.[13]
### Noisy binary search
Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Every noisy binary search procedure must make at least $(1 - \tau)\frac{\log_2(n)}{H(p)} - \frac{10}{H(p)}$ comparisons on average, where $H(p) = -p \log_2(p) - (1-p) \log_2(1-p)$ is the binary entropy function and $\tau$ is the probability that the procedure yields the wrong position.[14] [15] [16] The noisy binary search problem can be considered as a case of the Rényi-Ulam game,[17] a variant of Twenty Questions where the answers may be wrong.[18]
### Quantum binary search
Classical computers are bounded to the worst case of exactly $\lfloor \log_2 n + 1 \rfloor$ iterations when performing binary search. Quantum algorithms for binary search are still bounded to a proportion of $\log_2 n$ queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for a lower time complexity on quantum computers. Any exact quantum binary search procedure—that is, a procedure that always yields the correct result—requires at least $\frac{1}{\pi}(\ln n - 1) \approx 0.22 \log_2 n$ queries in the worst case, where $\ln$ is the natural logarithm.[19] There is an exact quantum binary search procedure that runs in $4 \log_{605} n \approx 0.433 \log_2 n$ queries in the worst case.[20] In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires $O(\sqrt{n})$ queries.[21]
## History
The idea of sorting a list of items to allow for faster searching dates back to antiquity. The earliest known example was the Inakibit-Anu tablet from Babylon dating back to . The tablet contained about 500 sexagesimal numbers and their reciprocals sorted in lexicographical order, which made searching for a specific entry easier. In addition, several lists of names that were sorted by their first letter were discovered on the Aegean Islands. Catholicon, a Latin dictionary finished in 1286 CE, was the first work to describe rules for sorting words into alphabetical order, as opposed to just the first few letters.
In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal and foundational college course in computing. In 1957, William Wesley Peterson published the first method for interpolation search.[22] Every published binary search algorithm worked only for arrays whose length is one less than a power of two until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays.[23] In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration. The uniform binary search was developed by A. K. Chandra of Stanford University in 1971. In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method to solve numerous search problems in computational geometry.[24]
## Implementation issues
When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases. A study published in 1988 shows that accurate code for it is only found in five out of twenty textbooks.[25] Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years.[26]
In a practical implementation, the variables used to represent the indices will often be of fixed size (integers), and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as $\frac{L+R}{2}$, then the value of $L + R$ may exceed the range of integers of the data type used to store the midpoint, even if $L$ and $R$ are within the range. If $L$ and $R$ are nonnegative, this can be avoided by calculating the midpoint as $L + \frac{R-L}{2}$.[27]
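The failure mode is easy to demonstrate by emulating 32-bit signed arithmetic (Python's own integers never overflow; the masking below only models a fixed-width language such as C or Java):

```python
def midpoint_unsafe(L, R):
    """floor((L + R) / 2) as a wrapping 32-bit signed machine computes it."""
    s = (L + R) & 0xFFFFFFFF
    if s >= 2**31:                     # reinterpret the bit pattern as signed
        s -= 2**32
    return s // 2

def midpoint_safe(L, R):
    return L + (R - L) // 2            # stays within [L, R], cannot overflow

L, R = 1_500_000_000, 2_000_000_000    # both fit in a signed 32-bit int
print(midpoint_unsafe(L, R))           # -397483648: L + R wrapped around
print(midpoint_safe(L, R))             # 1750000000
```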
An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once $L$ exceeds $R$, the search has failed and must convey the failure of the search. In addition, the loop must be exited when the target element is found, or in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed at the end must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.
## Library support
Many languages' standard libraries include binary search routines:
• C provides the function bsearch in its standard library, which is typically implemented via binary search, although the official standard does not require it.[28]
• C++'s Standard Template Library provides the functions binary_search, lower_bound, upper_bound and equal_range.
• D's standard library Phobos, in the std.range module, provides a type SortedRange (returned by the sort and assumeSorted functions) with methods contains, equalRange, lowerBound and trisect, that use binary search techniques by default for ranges that offer random access.[29]
• COBOL provides the SEARCH ALL verb for performing binary searches on COBOL ordered tables.
• Go's sort standard library package contains the functions Search, SearchInts, SearchFloat64s, and SearchStrings, which implement general binary search, as well as specific implementations for searching slices of integers, floating-point numbers, and strings, respectively.[30]
• Java offers a set of overloaded binarySearch static methods in the classes Arrays and Collections in the standard java.util package for performing binary searches on Java arrays and on Lists, respectively.[31] [32]
• Microsoft's .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes. An example would be System.Array's method BinarySearch<T>(T[] array, T value).[33]
• For Objective-C, the Cocoa framework provides the NSArray method -indexOfObject:inSortedRange:options:usingComparator: in Mac OS X 10.6+.[34] Apple's Core Foundation C framework also contains a CFArrayBSearchValues function.[35]
• Python provides the bisect module.[36]
• Ruby's Array class includes a bsearch method with built-in approximate matching.
• Bisection method – the same idea used to solve equations in the real numbers
## Notes and References
1. Williams, L. F., Jr. "A modification to the half-interval search (binary search) method." Proceedings of the 14th ACM Southeast Conference, April 1976, pp. 95–101. doi:10.1145/503561.503582.
2. Flores, I.; Madpis, G. "Average binary search length for dense ordered lists." Communications of the ACM 14(9), September 1971, pp. 602–603. doi:10.1145/362663.362752.
3. Bottenbruch, H. "Structure and use of ALGOL 60." Journal of the ACM 9(2), April 1962, pp. 161–221. doi:10.1145/321119.321120. The procedure is described at p. 214 (§43), titled "Program for Binary Search".
4. Rolfe, T. J. "Analytic derivation of comparisons in binary search." ACM SIGNUM Newsletter 32(4), 1997, pp. 15–19. doi:10.1145/289251.289255.
5. Khuong, P.-V.; Morin, P. "Array layouts for comparison-based searching." Journal of Experimental Algorithmics 22, 2017, Article 1.3. doi:10.1145/3053370. arXiv:1509.05053.
6. Beame, P.; Fich, F. E. "Optimal bounds for the predecessor problem and related problems." Journal of Computer and System Sciences 65(1), 2001, pp. 38–72. doi:10.1006/jcss.2002.1822.
7. Dietzfelbinger, M.; Karlin, A.; Mehlhorn, K.; Meyer auf der Heide, F.; Rohnert, H.; Tarjan, R. E. "Dynamic perfect hashing: upper and lower bounds." SIAM Journal on Computing 23(4), August 1994, pp. 738–761. doi:10.1137/S0097539791194094.
8. Morin, P. "Hash tables", p. 1. Retrieved 28 March 2016.
9. Bloom, B. H. "Space/time trade-offs in hash coding with allowable errors." Communications of the ACM 13(7), 1970, pp. 422–426. doi:10.1145/362686.362692.
10. Perl, Y.; Itai, A.; Avni, H. "Interpolation search—a log log n search." Communications of the ACM 21(7), 1978, pp. 550–553. doi:10.1145/359545.359557.
11. Chazelle, B.; Liu, D. "Lower bounds for intersection searching and fractional cascading in higher dimension." 33rd ACM Symposium on Theory of Computing, 2001, pp. 322–329. doi:10.1145/380752.380818.
12. Chazelle, B.; Liu, D. "Lower bounds for intersection searching and fractional cascading in higher dimension." Journal of Computer and System Sciences 68(2), March 2004, pp. 269–284. doi:10.1016/j.jcss.2003.07.003.
13. Emamjomeh-Zadeh, E.; Kempe, D.; Singhal, V. "Deterministic and probabilistic binary search in graphs." 48th ACM Symposium on Theory of Computing, 2016, pp. 519–532. doi:10.1145/2897518.2897656. arXiv:1503.00805.
14. Ben-Or, M.; Hassidim, A. "The Bayesian learner is optimal for noisy binary search (and pretty good for quantum as well)." IEEE Symposium on Foundations of Computer Science, 2008, pp. 221–230. doi:10.1109/FOCS.2008.58.
15. Pelc, A. "Searching with known error probability." Theoretical Computer Science 63(2), 1989, pp. 185–202. doi:10.1016/0304-3975(89)90077-7.
16. Rivest, R. L.; Meyer, A. R.; Kleitman, D. J.; Winklmann, K. "Coping with errors in binary search procedures." 10th ACM Symposium on Theory of Computing. doi:10.1145/800133.804351.
17. Pelc, A. "Searching games with errors—fifty years of coping with liars." Theoretical Computer Science 270(1–2), 2002, pp. 71–109. doi:10.1016/S0304-3975(01)00303-6.
18. Rényi, A. "On a problem in information theory" (in Hungarian). Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei 6, 1961, pp. 505–516. MR 0143666.
19. Høyer, P.; Neerbek, J.; Shi, Y. "Quantum complexities of ordered searching, sorting, and element distinctness." Algorithmica 34(4), 2002, pp. 429–448. doi:10.1007/s00453-002-0976-3. arXiv:quant-ph/0102078.
20. Childs, A. M.; Landahl, A. J.; Parrilo, P. A. "Quantum algorithms for the ordered search problem via semidefinite programming." Physical Review A 75(3), 2007, 032335. doi:10.1103/PhysRevA.75.032335. arXiv:quant-ph/0608161.
21. Grover, L. K. "A fast quantum mechanical algorithm for database search." ACM Symposium on Theory of Computing, Philadelphia, PA, 1996, pp. 212–219. doi:10.1145/237814.237866. arXiv:quant-ph/9605043.
22. Peterson, W. W. "Addressing for random-access storage." IBM Journal of Research and Development 1(2), 1957, pp. 130–146. doi:10.1147/rd.12.0130.
23. Lehmer, D. "Teaching combinatorial tricks to a computer." Proceedings of Symposia in Applied Mathematics 10, 1960, pp. 180–181. doi:10.1090/psapm/010.
24. Chazelle, B.; Guibas, L. J. "Fractional cascading: I. A data structuring technique." Algorithmica 1(1–4), 1986, pp. 133–162. doi:10.1007/BF01840440.
25. Pattis, R. E. "Textbook errors in binary searching." SIGCSE Bulletin 20, 1988, pp. 190–194. doi:10.1145/52965.53012.
26. Bloch, J. "Extra, extra – read all about it: nearly all binary searches and mergesorts are broken." Google Research Blog, 2 June 2006. Retrieved 21 April 2016.
27. Ruggieri, S. "On computing the semi-sum of two integers." Information Processing Letters 87(2), 2003, pp. 67–71. doi:10.1016/S0020-0190(03)00263-1.
28. "bsearch – binary search a sorted table." The Open Group Base Specifications, Issue 7. The Open Group, 2013. Retrieved 28 March 2016.
29. "std.range - D Programming Language." dlang.org. Retrieved 29 April 2020.
30. "Package sort." The Go Programming Language. Retrieved 28 April 2016.
31. "java.util.Arrays." Java Platform Standard Edition 8 Documentation. Oracle Corporation. Retrieved 1 May 2016.
32. "java.util.Collections." Java Platform Standard Edition 8 Documentation. Oracle Corporation. Retrieved 1 May 2016.
33. "List.BinarySearch method (T)." Microsoft Developer Network. Retrieved 10 April 2016.
34. "NSArray." Mac Developer Library. Apple Inc. Retrieved 1 May 2016.
35. "CFArray." Mac Developer Library. Apple Inc. Retrieved 1 May 2016.
36. "8.6. bisect — Array bisection algorithm." The Python Standard Library. Python Software Foundation. Retrieved 26 March 2018.
|
https://encyclopediaofmath.org/index.php?title=Action_of_a_group_on_a_manifold
|
# Action of a group on a manifold
The best-studied case of the general concept of the action of a group on a space. A topological group $G$ acts on a space $X$ if to each $g \in G$ there corresponds a homeomorphism $\phi _ {g}$ of $X$ (onto itself) satisfying the following conditions: 1) $\phi _ {g} \cdot \phi _ {h} = \phi _ {gh}$; 2) for the unit element $e \in G$ the mapping $\phi _ {e}$ is the identity homeomorphism; and 3) the mapping $\phi : G \times X \rightarrow X$, $\phi (g, x) = \phi _ {g} (x)$, is continuous. If $X$ and $G$ have supplementary structures, the actions of $G$ which are compatible with such structures are of special interest; thus, if $X$ is a differentiable manifold and $G$ is a Lie group, the mapping $\phi$ is usually assumed to be differentiable.
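Purely as an illustration of conditions 1)–3) (with all topology ignored), the axioms can be checked mechanically for a finite cyclic group acting on $n$ points by rotation; this toy sketch is an editorial addition, not part of the article:

```python
n = 6
G = range(n)                 # the cyclic group Z/6Z under addition mod n
X = range(n)                 # six points arranged on a circle

def phi(g, x):
    return (x + g) % n       # the action: rotation by g steps

# 1) phi_g . phi_h = phi_{gh}
assert all(phi(g, phi(h, x)) == phi((g + h) % n, x)
           for g in G for h in G for x in X)
# 2) the unit element acts as the identity mapping
assert all(phi(0, x) == x for x in X)
# This action is transitive: there is a single orbit, so X/G is one point.
assert {frozenset(phi(g, x) for g in G) for x in X} == {frozenset(X)}
```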
The set $\{ \phi _ {g} ( x _ {0} ) \} _ {g \in G }$ is called the orbit (trajectory) of the point $x _ {0} \in X$ with respect to the group $G$; the orbit space is denoted by $X/G$, and is also called the quotient space of the space $X$ with respect to the group $G$. An important example is the case when $X$ is a Lie group and $G$ is a subgroup; then $X/G$ is the corresponding homogeneous space. Classical examples include the spheres $S ^ {n-1} = \textrm{ O } (n) / \textrm{ O } (n-1)$, the Grassmann manifolds $\textrm{ O } (n) / ( \textrm{ O } (m) \times \textrm{ O } (n-m) )$, and the Stiefel manifolds $\textrm{ O } (n) / \textrm{ O } (m)$ (cf. Grassmann manifold; Stiefel manifold). Here, the orbit space is a manifold. This is usually not the case if the action of the group is not free, e.g. if the set $X ^ {G}$ of fixed points is non-empty. A free action of a group is an action for which $g = e$ follows if $gx = x$ for any $x \in X$. By contrast, $X ^ {G}$ is a manifold if $X$ is a differentiable manifold and the action of $G$ is differentiable; this statement is valid for cohomology manifolds over $\mathbf Z _ {p}$ for $G = \mathbf Z _ {p}$ as well (Smith's theorem).
If $G$ is a non-compact group, the space $X/G$ is usually inseparable, and this is why a study of individual trajectories and their mutual locations is of interest. The group $G = \mathbf R$ of real numbers acting on a differentiable manifold $X$ in a differentiable manner is a classical example. The study of such dynamical systems, which in terms of local coordinates is equivalent to the study of systems of ordinary differential equations, usually involves analytical methods.
If $G$ is a compact group, it is known that if $X$ is a manifold and if each $g \in G$, $g \neq e$, acts non-trivially on $X$ (i.e. not according to the law $(g, x) \rightarrow x$), then $G$ is a Lie group [8]. Accordingly, the main interest in the action of a compact group is the action of a Lie group.
Let $G$ be a compact Lie group and let $X$ be a compact cohomology manifold. The following results are typical. A finite number of orbit types exists in $X$, and the neighbourhoods of an orbit look like a direct product (the slice theorem); the relations between the cohomology structures of the spaces $X$, $X/G$ and $X ^ {G}$ are of interest.
If $G$ is a compact Lie group, $X$ a differentiable manifold and if the action
$$\phi : G \times X \rightarrow X$$
is differentiable, then one naturally obtains the following equivalence relation: $(X, \phi ) \sim ( X ^ { \prime } , \phi ^ \prime )$ if and only if it is possible to find an $( X ^ { \prime\prime } , \phi ^ {\prime\prime} )$ such that the boundary $\partial X ^ { \prime\prime }$ has the form $\partial X ^ { \prime\prime } = X \cup X ^ { \prime }$ and such that $\phi ^ {\prime\prime} \mid _ {X} = \phi$, $\phi ^ {\prime\prime} \mid _ {X ^ { \prime } } = \phi ^ \prime$. If the group $G$ acts freely, the equivalence classes can be found from the one-to-one correspondence with the bordisms $\Omega _ {*} ( B _ {G} )$ of the classifying space $B _ {G}$ (cf. Bordism).
Recent results (mid-1970s) mostly concern: 1) the determination of types of orbits with various supplementary assumptions concerning the group $G$ and the manifold $X$ ([6]); 2) the classification of group actions; and 3) finding connections between global invariants of the manifold $X$ and local properties of the group actions of $G$ in a neighbourhood of fixed points of $X ^ {G}$. In solving these problems an important part is played by: methods of modern differential topology (e.g. surgery methods); $K _ {G}$-theory [1], which is the analogue of $K$-theory for $G$-vector bundles; bordism and cobordism theories [3]; and analytical methods of studying the action of the group $G$ based on the study of pseudo-differential operators in $G$-bundles [2], [7].
#### References
[1] M.F. Atiyah, "$K$-theory: lectures", Benjamin (1967)
[2] M.F. Atiyah, I.M. Singer, "The index of elliptic operators" Ann. of Math. (2), 87 (1968) pp. 484–530
[3] V.M. Bukhshtaber, A.S. Mishchenko, S.P. Novikov, "Formal groups and their role in the apparatus of algebraic topology" Russian Math. Surveys, 26 (1971) pp. 63–90; Uspekhi Mat. Nauk, 26:2 (1971) pp. 131–154
[4] P.E. Conner, E.E. Floyd, "Differentiable periodic maps", Springer (1964)
[5] G. Bredon, "Introduction to compact transformation groups", Acad. Press (1972)
[6] W.Y. Hsiang, "Cohomology theory of topological transformation groups", Springer (1975)
[7] D.B. Zagier, "Equivariant Pontryagin classes and applications to orbit spaces", Springer (1972)
[8] Proc. conf. transformation groups, Springer (1968)
[9] Proc. 2nd conf. compact transformation groups, Springer (1972)
|
http://answers.gazebosim.org/question/22523/instabilities-with-ode-engine/
# Instabilities with ODE engine
This actually is a follow-up of still-runaway-simulations-in-gazebo. There I thought that the solution would be to use the DART engine, but I don't know how to set Coulomb friction there, which I really need. So I am back to using ODE.
Please find here the boot.sdf and instable.mp4 that show the instability (but no crashing) in the model. It contains 5 revolute joints, like a shoulder and elbow. The instability shown in the mp4 is very reproducible.
The included model arms.urdf is a little more complicated, but does not exhibit the instabilities. The complete model, to be found here in boot.sdf, crashes almost immediately.
The question is: is there something wrong in ODE, or are the current ODE settings not suitable for my use case?
UPDATE: Even setting the max step size to 0.0001 does not help much.
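For reference, a minimal sketch of where the ODE time step and solver parameters live in an SDF world file (tag names follow the SDF spec; the values shown are illustrative assumptions, not a known fix for this model):

    <physics type="ode">
      <max_step_size>0.0001</max_step_size>
      <ode>
        <solver>
          <type>quick</type>   <!-- iterative solver; "world" selects the direct solver -->
          <iters>200</iters>   <!-- raising the iteration count can sometimes improve stability -->
        </solver>
      </ode>
    </physics>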
2nd UPDATE: Getting a bit desperate after 6 months of trying to find a solution. Changing to Gazebo 10.1, built from source, does not help either.
It would help me very much to have at least some confirmation that someone else does (or does not) see the same problem (instable.mp4) with my boot.sdf mentioned above.
Thanks again, Sietse
https://www.eduzip.com/ask/question/in-the-figure-what-value-of-x-will-make-aob-a-straight-line-521100
Mathematics
# In the figure, what value of $x$ will make $AOB$, a straight line?
##### SOLUTION
We know that
$AOB$ will be a straight line only if the adjacent angles form a linear pair:
$\angle BOC+\angle AOC=180^{\circ}$
$(4x-36)^{\circ}+(3x+20)^{\circ}=180^{\circ}$
$\Rightarrow 4x-36+3x+20=180$
$\Rightarrow 7x=180-20+36$
$\Rightarrow 7x=196$
$\Rightarrow x=28$
Therefore the value of $x$ is $28$
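As a quick check, substituting $x=28$ back into the two angles gives $(4\cdot 28-36)^{\circ}+(3\cdot 28+20)^{\circ}=76^{\circ}+104^{\circ}=180^{\circ}$, so $AOB$ is indeed a straight line.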
https://www.fresheconomicthinking.com/2021/11/
## Monday, November 22, 2021
### Alice in housing economics wonderland
“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”
’The question is,” said Alice, “whether you can make words mean so many different things.”
― Lewis Carroll, Through the Looking Glass
An increasing academic and policy focus on housing supply has unfortunately not brought with it an increase in clarity over the meaning of words. Words like housing supply, demand, zoning, price, and affordability have come to mean whatever the authors want them to mean.
I've put together in the below table various potential meanings of these catch-all economic terms, and the words I think we should use instead to increase clarity.
For example, economists should be crystal clear that the economic price of housing is the market rental price (i.e. the price that measures the amount of goods and services given up to get that good). The sale price of a dwelling is merely the market's judgement of the asset value of buying a rental income stream in perpetuity and is heavily affected by prevailing interest rates (capitalisation rates), land and property taxes, and expectations of changes to the asset value (capital gains expectations).
With such a variety of definitions, what can the phrase “an increase in the supply of dwellings relative to demand will reduce dwelling prices” actually mean?
The obvious and strictly true definition is when the terms mean
supply → market willingness to sell housing in this period,
demand → market willingness to pay for housing in this period, and
price → the sale price of a dwelling in this period.
This merely describes asset market bid and offer schedules. When there is a "supply shift" in these schedules that massively reduces prices, we call that a market crash (a topic that is also poorly understood). But the economics of housing stock change is more sophisticated than this.
Supply and demand might also mean number of dwellings and total population—completely different ideas from bid and offer schedules in asset markets, instead focussed on material quantities that are expected to move together.
Sometimes supply is used to mean zoned capacity (how much can be built within current zoning rules) and is often assumed to be synonymous with the absorption rate (how quickly the market will develop new housing subject to asset market conditions). Changing zoned capacity does not necessarily change the absorption rate, yet they are often described as one and the same thing.
Perhaps if we describe what we mean more specifically, we can start to allow the evidence to support or disprove theories about how the housing and property markets operate.
I'm open to improving and expanding the table with better, or more widely used, terminology and will update and refine it over time.
## Tuesday, November 16, 2021
### Opening remarks to housing inquiry
A video of my testimony to the inquiry is here (from 11:00:00)
My full written submission is here and my follow-up response to questions raised is here.
_____________
I believe I am one of the few witnesses who has worked for residential and industrial property developers, in government departments dealing with infrastructure charges and regulation, and now as a housing researcher.
To be clear, Australia has more, bigger, and better dwellings per capita than at any point in history. We are also building new dwellings at a near-record pace in a period where population growth is the lowest in decades.
More housing is better than less housing. Absolutely. I agree.
The argument I disagree with is that private landowners want to build faster, but only pesky council and state government red tape is slowing things down.
While I certainly have many ideas for improving and simplifying the planning system I see this as a separate topic to housing affordability.
We’ve heard from previous witnesses that housing developers have a lot of trouble building on unzoned land. No doubt. The whole point of unzoned land is to not have development at that location and get it located in the zoned land. It’s hardly evidence of anything except that these developers are bad at their jobs, always buying the wrong land for what they want to build.
Indeed, they certainly appear to be terrible lobbyists. If what they claim is true about zoning keeping prices up, then lobbying for mass rezoning is financial suicide. It would vastly increase the number of competitors in their market and reduce prices, wiping billions in value from their balance sheets. What sort of industry lobbies for that?
Perhaps this story is a lie.
In 2003, the AFR ran the headline “Brisbane running out of land for housing”, with land for housing expected to run out by 2015 according to the same expert lobbyists who have attended this inquiry. Yet detached housing lot production was 30% higher in the 6 years since 2015 than in the 6 years prior.
Are they terrible at their jobs? Or just telling stories that conceal the true nature of property markets?
Remember, only landowners can choose to make planning applications. Only landowners can choose when to build homes. Councils don’t do this. There is no speed limit to building new housing in the planning system. Planning regulates the location of different uses and densities, like road lanes regulate locations on the road. Density (dwellings per unit of land) and supply (new dwellings per period of time) are completely different concepts. More density does not equal faster supply.
The key issue at stake in this debate is that land is an asset. It is therefore priced like one. This is why when it is a good time to buy it is also a good time not to sell.
This is true for developed and undeveloped land. Stocks of undeveloped land sit on the balance sheet of developers, earning a return by growing in value while undeveloped.
The trade-off between the return from delaying developing land, versus developing now, creates a built-in market speed limit on the rate at which private landowners develop. As a previous witness mentioned, “… if you are a property owner or developer that had land that was consented and you hadn’t sold it a year ago then you are in a very strong position.” Builders might like to build faster, but landowners prefer to maximise returns on their assets.
We can see how this pays off with a case study of Jordan Springs, a Lendlease subdivision of around 2,000 housing lots in Sydney that took a decade to sell. I looked at the sales rates over time and saw that the average rate was only 45% of the peak rate (3-month average), though some periods had sales at just 12% of the peak rate. The speed of developing new housing lots was far below the capability of developers and the capacity of zoned land. By selling at this slower rate, and capturing overall market gains in the form of higher prices, they made an additional $137 million on the project. Prices were 31% higher at the end than at the start for land lots on a per sqm basis. It would have been financially irresponsible of them to develop faster than they did.

I'm not saying that this behaviour is wrong, or a conspiracy, or even that it has major price effects. The stock of housing only changes a couple of percent a year at best, and small changes to those small changes make tiny price differences. This is just normal market behaviour. This is why for the century prior to the existence of zoning we had the same issues of unequal access to land and housing ownership, only much worse.

The current housing asset price boom is a global one. Average prices are up 20% in the last year in the US, the same as Australia, and many places that were previously lauded as having flexible zoning, like Germany and cities in Texas, have had the highest price growth. What we are seeing is a period of global asset re-pricing, as intended by monetary policy.

If you really want more homes, build them. Flood the market with a public housing developer, you know, just in case the private developers don't do what they said they would. It might be a sensible insurance policy. No doubt the property lobbyists will find something wrong with this, even though it is exactly the outcome they pretend to be lobbying for: more competitors and lower prices.

## Wednesday, November 3, 2021

### Public housing is way cheaper than rental subsidies

A discussion about the best way to provide below-market-priced housing popped up on Twitter recently. Peter Tulip noted many of the limitations of such systems (queuing, quotas, qualifying criteria, etc.), concluding that a cash payment to help pay market rents is an economically-efficient way to get the policy outcome of reducing housing costs to low-income households.

I am not against providing such cash payments. They are clearly better than nothing. But the reason I believe governments should build and own some housing is that it provides a better bang for your housing subsidy buck.

Consider the two alternatives over a "tenant life" of say 30 years.

With cash rental assistance, the government pays, say for the sake of argument, $13,000 per year the first year. But to have a meaningful effect this must grow over time to reflect growth in rents and incomes. At a 2% growth rate, by year 30, the subsidy is $23,000, and over 30 years the total subsidy paid is $527,000. The present value of this 30-year flow of subsidy payments at a 2% discount rate is $374,000.

With public housing ownership, the government builds or buys a dwelling worth $500,000 today to supply that dwelling at a rent that is currently $13,000 below market rent per year (i.e. the same rental subsidy to the resident). The remaining rent paid by the tenant covers ongoing costs only. Like the cash rental subsidy, the gap grows over time to be $23,000 in the 30th year.
Instead of $374,000 in present value terms, this option costs $500,000 today to build or buy the dwelling (much less if built on under-utilised publicly-owned land).
However, with public housing ownership, a government agency owns the property at the end of the 30 years. Over this period, the asset value grows. Even if it grows in line with the 2% growth of local incomes, it means that the property is worth $890,000. In reality, because incomes at a location rise faster than the average (because cities expand), it is likely to be more. For reference, this is only a 76% rise in three decades; a conservative figure when compared to the 143% price rise seen in Australia's capital cities in the past 18 years. The table below shows a comparison of the two alternative ways of providing the same value of housing subsidy to a resident over 30 years. Although the public housing ownership option costs $500,000 upfront, today's value of the final sale price is $490,000, leaving a net economic cost of just $10,000. This approach gets 40x better value for the budgetary spend. If capital growth is closer to historical norms then public housing can more than pay for itself.
What we learn from this is that
• The cost of rental assistance over the long term is not much different from simply buying a dwelling and giving it to the household ($374k vs $500k).
• The cost of rental assistance over the long term is much more than providing the same rental subsidy via owning the property ($374k vs $10k).
• Getting out of the housing ownership game over the past three decades and shifting towards rental subsidies has cost government budgets billions.
[UPDATE] I've updated the figures to reflect 2% growth of incomes and rents and 2% interest and made the spreadsheet available here. Play around with the numbers.
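For those who prefer code to spreadsheets, here is a minimal Python sketch of the same comparison (my own reconstruction; the exact figures depend on payment-timing assumptions, so it lands near, not exactly on, the numbers above):

    g, r, T = 0.02, 0.02, 30            # growth of rents/incomes, discount rate, years
    subsidy0, price = 13_000, 500_000   # first-year subsidy, dwelling cost

    # Option 1: cash rental assistance paid yearly, growing at g, discounted at r
    pv_cash = sum(subsidy0 * (1 + g) ** (t - 1) / (1 + r) ** t for t in range(1, T + 1))

    # Option 2: buy the dwelling now, sell it after T years of capital growth
    sale_value = price * (1 + g) ** (T - 1)
    pv_public = price - sale_value / (1 + r) ** T   # net cost after the discounted sale

    print(f"cash assistance, present value:   ${pv_cash:,.0f}")    # ~ $382,000
    print(f"public ownership, net cost today: ${pv_public:,.0f}")  # ~ $10,000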
[UPDATE] People seem to think that interest payments need to be taken into account somewhere. They do not. Prevailing interest rates are incorporated via discounting.
[UPDATE] Thanks to Jago Dodson for letting me know that a 1993 review by the Industry Commission (now Productivity Commission) ranked public housing first in terms of efficiency and cost-effectiveness out of a variety of alternative housing subsidy approaches they assessed.
[UPDATE] Thanks to Vivienne Milligen for letting me know that the 1989 National Housing Policy Review found similarly—that public ownership of housing is the lowest-cost strategy for housing poverty relief.
https://confluence.cornell.edu/pages/diffpages.action?originalId=326370827&pageId=327091735
...
The purpose of this tutorial is to illustrate the setup and solution of an unsteady flow past a circular cylinder and to study the vortex shedding phenomenon. Flow past a circular cylinder is one of the classical problems of fluid mechanics. For this problem, we will be looking at Reynolds number of 150.
$$Re = \frac{\rho V D}{\mu}$$
We know D = 2 m. To obtain Re = 150, we can arbitrarily set ρ, V and μ. For our case, let's set ρ = 75 kg/m³, V = 1 m/s and μ = 1 kg/(m·s).
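As a quick check, $Re = \frac{75 \times 1 \times 2}{1} = 150$, as required.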
...
http://openstudy.com/updates/511fb90ee4b06821731c7b77
## Dodo1: How do you find a horizontal asymptote?
1. ash2326 Group Title
Suppose you have a function $y=f(x)$. Express $x$ in terms of $y$, then find the value of $y$ for which $x$ goes to infinity or minus infinity. That value of $y$ is your horizontal asymptote.
2. pottersheep Group Title
This website helped me a lot when I was learning about horizontal aymptotes. http://www.purplemath.com/modules/asymtote2.htm
3. pottersheep Group Title
Btw, your line CAN touch a horizontal aymptote. "Whereas vertical asymptotes are sacred ground, horizontal asymptotes are just useful suggestions. Whereas you can never touch a vertical asymptote, you can (and often do) touch and even cross horizontal asymptotes. Whereas vertical asymptotes indicate very specific behavior (on the graph), usually close to the origin, horizontal asymptotes indicate general behavior far off to the sides of the graph. "
4. ash2326 Group Title
@Dodo1 do you understand?
5. Dodo1 Group Title
This is the question: "A function is said to have a horizontal asymptote if either the limit at infinity exists or the limit at negative infinity exists. Show that each of the following functions has a horizontal asymptote by calculating the given limit." $\lim_{x \rightarrow \infty}10+\frac{ 3x }{ x^2-12x+3 }$
6. Dodo1 Group Title
Yes, the basic concept! thank you i will take note.
7. pottersheep Group Title
Let me get my grade 12 notes
8. pottersheep Group Title
If y = [ax^n + ...............]/[Ax^N +.................] the horizontal asymptote = 0
9. pottersheep Group Title
If the exponent on the bottom is greater than the one on the top, then it approaches zero
10. ash2326 Group Title
Yes this is a way too, let's find the limit here $\lim_{x \rightarrow \infty}10+\frac{ 3x }{ x^2-12x+3 }$ Can you find the limit?
11. pottersheep Group Title
Because eventually, the number on the bottom will become HUGEEEEE compared to the top. And a number / a hugeeeeeeeee number is close to zero, a veryy small decimal!
12. pottersheep Group Title
That's how I understood it :)
13. Dodo1 Group Title
ok, thank you pottersheep. :) how do I find the limit?
14. ash2326 Group Title
$\lim_{x \rightarrow \infty}10+\frac{ 3x }{ x^2-12x+3 }$ First divide the numerator and denominator of the fraction by $x$; we'll get $\lim_{x\to \infty} 10+\frac 3 {x-12+\frac 3 x }$. Since $3/x\to 0$, this becomes $\lim_{x\to \infty} 10+\frac 3 {x-12}$, and as $x\to \infty$ we get $10+\frac 3 \infty = 10+0 = 10$, so that's the limit.
15. Dodo1 Group Title
I see, but why is $x-12$ infinity?
16. ash2326 Group Title
$x-12$ becomes $\infty-12$. Infinity is a very big number; subtracting 12 won't change it: $\infty-12 \longrightarrow \infty$
17. Dodo1 Group Title
Oh i see thank you!! how about $\lim_{x \rightarrow -\infty} \frac{ 5-8x }{ 7+x }+\frac{ (6x^2+8) }{ (14x-12)^2 }$
18. Dodo1 Group Title
Do I multiply out the squared term first and then add?
19. ash2326 Group Title
$\lim_{x \rightarrow -\infty} \frac{ 5-8x }{ 7+x }+\frac{ (6x^2+8) }{ (14x-12)^2 }$ Let's split the limits $\lim_{x \rightarrow -\infty} \frac{ 5-8x }{ 7+x }$ First step divide by x , numerator and denominator $\lim_{x \rightarrow -\infty} \frac{\frac 5 x -8}{\frac 7 x +1}$ 1/x terms will become 0, when x goes to + or - infinity. We'll get $\frac{-8}{1}$ Do you get this?
20. Dodo1 Group Title
OK, i got it so far.
21. Dodo1 Group Title
:) its fun!
22. ash2326 Group Title
Great, now the second part $\lim_{x \rightarrow -\infty}\frac{ (6x^2+8) }{ (14x-12)^2 }$ Let's expand the denominator $\lim_{x \rightarrow -\infty}\frac{ (6x^2+8) }{ (196x^2-336x+144) }$ Divide numerator and denominator by x^2, $\lim_{x \rightarrow -\infty}\frac{ (6+\frac8{x^2}) }{ (196-336\frac{x}{x^2}+\frac{144}{x^2}) }$Can you find the limit from here?
23. Dodo1 Group Title
mmm, 6/(196-336)?
24. ash2326 Group Title
$x/x^2=1/x$; as $x\to -\infty$, $\frac 1 x \to 0$ and $\frac 1 {x^2} \to 0$
25. ash2326 Group Title
Try again now :)
26. Dodo1 Group Title
6/196?
27. ash2326 Group Title
yes, limit is the combined limit
28. Dodo1 Group Title
what's the combined limit?
29. ash2326 Group Title
$-8+\frac 6 {196}$
30. Dodo1 Group Title
Oh I see! Do $-\infty$ and $\infty$ really matter? Because it seems that it doesn't matter.
31. ash2326 Group Title
It does matter, in this question it doesn't
32. Dodo1 Group Title
oh, so it doesn't matter for horizontal asymptotes?
33. ash2326 Group Title
nope
34. Dodo1 Group Title
got it! thank you, I have 4 other questions but I will try them, and if I get stuck can I ask you?
35. ash2326 Group Title
Okay, try them. If I'm here, I'll help you. But ask them in a new question. Close this one.
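(A quick check of both limits, not part of the original thread; this assumes sympy is installed.)

    import sympy as sp

    x = sp.symbols('x')
    print(sp.limit(10 + 3*x / (x**2 - 12*x + 3), x, sp.oo))        # 10
    print(sp.limit((5 - 8*x) / (7 + x)
                   + (6*x**2 + 8) / (14*x - 12)**2, x, -sp.oo))    # -781/98, i.e. -8 + 6/196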
https://math.stackexchange.com/questions/1354442/what-are-the-ways-to-solve-trig-equations-of-the-form-sinfx-cosgx
What are the ways to solve trig equations of the form $\sin(f(x)) = \cos(g(x))$?
Say I have the following trig equation:
$$\sin(10x) = \cos(2x)$$
I take the following steps to solve it:
• I rewrite $\cos(2x)$ as $\sin\left(\frac{\pi}{2} + 2x\right)$ or as $\sin\left(\frac{\pi}{2} - 2x\right)$ because $\sin\left(\frac{\pi}{2} - a\right) = \sin\left(\frac{\pi}{2} + a\right) = \cos(a)$;
• Let's say that I have chosen $\sin\left(\frac{\pi}{2} + 2x\right)$, the equation becomes:
$$\sin(10x) = \sin\left(\frac{\pi}{2} + 2x\right).$$
• Then, I know that:
$$\sin(f(x)) = \sin(g(x)) \Leftrightarrow f(x) = g(x) + 2\pi n, n \in \mathbb{Z} \lor f(x) = (\pi - g(x)) + 2\pi k, k \in \mathbb{Z}$$
This means that (for $f(x) = 10x$ and $g(x) = \frac{\pi}{2} + 2x$):
$$\sin(10x) = \cos(2x) \Leftrightarrow 10x = \frac{\pi}{2} + 2x + 2\pi n, n \in \mathbb{Z} \lor 10x = (\pi - (\frac{\pi}{2} + 2x)) + 2\pi k, k \in \mathbb{Z}$$
Solving, I get the following results:
$$x_{1} = \frac{\pi}{16} + \frac{\pi}{4}n,\,\,x_{2} = \frac{\pi}{24} + \frac{\pi}{6}k,\,\,\,\,n,k \in \mathbb{Z}$$
Now, are there any other methods for solving such equations or could this one be just fine?
• This is the most efficient method. – André Nicolas Jul 8 '15 at 21:55
• draw extremely careful graphs of $\cos 2x$ and $\sin 10x$ for, say, $0 \leq x \leq 2 \pi,$ and see if the intersections agree with your calculations printablepaper.net/category/graph – Will Jagy Jul 8 '15 at 21:55
• @Zach466920 Thanks! – user3019105 Jul 8 '15 at 21:57
• @AndréNicolas All right, got it! – user3019105 Jul 8 '15 at 21:57
• @WillJagy Are you saying that there is an error? – user3019105 Jul 8 '15 at 21:57
You may prefer to transform the sine into cosine: $$\cos\left(\frac{\pi}{2}-10x\right)=\cos(2x)$$ This splits into two: $$\frac{\pi}{2}-10x=2x+2k\pi$$ or $$\frac{\pi}{2}-10x=-2x+2k\pi$$ The trick is that $\cos\alpha=\cos\beta$ if and only if $\alpha=\beta+2k\pi$ or $\alpha=-\beta+2k\pi$ (with integer $k$).
You have $$\sin (f(x))-\sin \left(\frac\pi2-g(x)\right)=0$$ so from the identity
$$\sin(a)-\sin(b)=2\sin\frac {a-b}{2}\cos\frac{a+b}{2}$$
it follows that $$2\sin\frac{f(x)+g(x)- \frac{\pi}{2}}{2}\cos\frac{f(x)-g(x)+\frac{\pi}{2}}{2}=0,$$ hence $$f(x)+g(x)=2n\pi+\frac\pi2$$ or $$f(x)-g(x)=2n\pi+\frac\pi2$$
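Either way, a quick numerical spot-check of the two solution families from the question (my addition, assuming numpy is available) confirms the result:

    import numpy as np

    for n in range(8):
        x1 = np.pi/16 + np.pi/4 * n   # first family
        x2 = np.pi/24 + np.pi/6 * n   # second family
        assert np.isclose(np.sin(10*x1), np.cos(2*x1))
        assert np.isclose(np.sin(10*x2), np.cos(2*x2))
    print("both families satisfy sin(10x) = cos(2x)")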
https://stacks.math.columbia.edu/tag/08CC
Lemma 20.45.2. Let $(X, \mathcal{O}_ X)$ be a ringed space. Let $E$ be an object of $D(\mathcal{O}_ X)$.
1. If there exists an open covering $X = \bigcup U_ i$, strictly perfect complexes $\mathcal{E}_ i^\bullet$ on $U_ i$, and maps $\alpha _ i : \mathcal{E}_ i^\bullet \to E|_{U_ i}$ in $D(\mathcal{O}_{U_ i})$ with $H^ j(\alpha _ i)$ an isomorphism for $j > m$ and $H^ m(\alpha _ i)$ surjective, then $E$ is $m$-pseudo-coherent.
2. If $E$ is $m$-pseudo-coherent, then any complex representing $E$ is $m$-pseudo-coherent.
Proof. Let $\mathcal{F}^\bullet$ be any complex representing $E$ and let $X = \bigcup U_ i$ and $\alpha _ i : \mathcal{E}_ i^\bullet \to E|_{U_ i}$ be as in (1). We will show that $\mathcal{F}^\bullet$ is $m$-pseudo-coherent as a complex, which will prove (1) and (2) simultaneously. By Lemma 20.44.8 we can after refining the open covering $X = \bigcup U_ i$ represent the maps $\alpha _ i$ by maps of complexes $\alpha _ i : \mathcal{E}_ i^\bullet \to \mathcal{F}^\bullet |_{U_ i}$. By assumption $H^ j(\alpha _ i)$ are isomorphisms for $j > m$, and $H^ m(\alpha _ i)$ is surjective whence $\mathcal{F}^\bullet$ is $m$-pseudo-coherent. $\square$
## Comments (2)
Comment #2770
There is a typo in the first line of the proof. It should be $\alpha_i\colon\mathcal{E}^\bullet_i\rightarrow E|_{U_i}$ ($\bullet$ is missing on $\mathcal{E}_i$).
https://math.stackexchange.com/questions/1243822/use-cauchys-integral-formula-to-evaluate-the-following-integrals
# Use Cauchy's Integral Formula to evaluate the following integrals.
Use Cauchy's Integral Formula to evaluate the following integral:
$$\int\limits_\Gamma \frac{1}{{(z-1)^3}{(z-2)^2}}dz$$ where $$\Gamma$$ is a circle of radius $4$ centered at $-2+i$, traversed once in the positive (with respect to the interior of the disk) direction.
My thoughts on the problem:
I HAVE to use the Cauchy Integral Formula. I've been trying to decide the best way to change the expression in the integral. If I change it to:
$$\int\limits_\Gamma \frac{\frac{1}{{(z-2)^2}}}{{(z-1)^3}}\,dz$$
The point $2$ is on the boundary of $\Gamma$, which means I can NOT use the formula. Are there any other ways I could rewrite this integral to make it friendly enough to use the formula?
• Are you sure that $z=2$ is on the boundary..? (See here.) – Cameron Williams Apr 20 '15 at 18:49
• Just observe the point $z=2$ is outside of $\Gamma$ since $|z-z_0|=\sqrt{(2-(-2))^2+1^2}=\sqrt{17}>4$. Where $z_0=-2+i$ – Ángel Mario Gallegos Apr 20 '15 at 18:50
• Okay thank you! I was looking at a picture that I drew. This is a much better way of verifying if 2 is outside the boundary. – Kristin Apr 20 '15 at 18:53
• @Kristin No problem! Drawings can be very unreliable at times. I've definitely fallen into that trap before. – Cameron Williams Apr 20 '15 at 18:56
• I have a note that says: If a function f is analytic in a domain D and on the boundary of D then the integral is equal to the value of the function evaluated at z not, multiplied by two pi (i). Am I able to use this in my situation? – Kristin Apr 20 '15 at 19:02
Be careful: $2$ is not on the boundary of $\Gamma$. Your approach is then the correct one: letting $f(z)=1/(z-2)^2$, by the general Cauchy integral formula $$\int_{\Gamma} \frac{f(z)}{(z-z_0)^3}\,dz = 2\pi i \,\frac{f''(z_0)}{2!} = \frac{6 \pi i}{(z_0-2)^4} = 6\pi i$$ since $f''(z)=6/(z-2)^4$ and $z_0=1$.
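(Not part of the answer: the value can be confirmed numerically by parametrizing $\Gamma$ as $z(t) = -2 + i + 4e^{it}$ and integrating, assuming numpy is available.)

    import numpy as np

    t = np.linspace(0, 2*np.pi, 200_001)
    z = -2 + 1j + 4*np.exp(1j*t)
    dz_dt = 4j*np.exp(1j*t)
    integrand = dz_dt / ((z - 1)**3 * (z - 2)**2)
    print(np.trapz(integrand, t))   # ~ 18.85j
    print(6j*np.pi)                 # 6*pi*i ~ 18.85j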
https://www.biostars.org/p/157832/#157971
Is this the right way to parse var.vcf files?
7.1 years ago
jsgounot ▴ 170
Hi everyone,
I have to parse var.vcf files for my work and I'm wondering if I'm doing this correctly. For example, with this line:
Chr1 204346 . G A,C 138 . DP=360;VDB=0.0143;AF1=1;AC1=2;DP4=1,1,208,126;MQ=35;FQ=-282;PV4=1,0.01,0.48,1 PL 171,255,0,172,35,25
First I check all genotype possibilities: GG, GA, AA, GC, AC, CC.
After that I look at the PL field and choose the genotype at the index of the lowest value in this field.
So in my case it will be AA.
Is this correct ?
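For what it's worth, here is a minimal Python sketch of that logic (my own illustration, not a robust VCF parser; the genotype ordering follows the VCF spec, which places genotype (j, k) with j <= k at index k*(k+1)/2 + j):

    line = ("Chr1\t204346\t.\tG\tA,C\t138\t.\t"
            "DP=360;VDB=0.0143;AF1=1;AC1=2;DP4=1,1,208,126;MQ=35;FQ=-282;PV4=1,0.01,0.48,1\t"
            "PL\t171,255,0,172,35,25")
    fields = line.split("\t")
    alleles = [fields[3]] + fields[4].split(",")    # [REF] + ALTs -> ['G', 'A', 'C']
    pl = [int(v) for v in fields[-1].split(",")]    # Phred-scaled genotype likelihoods

    # enumerate genotypes in VCF order: GG, GA, AA, GC, AC, CC
    genotypes = [alleles[j] + alleles[k] for k in range(len(alleles)) for j in range(k + 1)]
    print(genotypes[pl.index(min(pl))])             # 'AA' -- the lowest PL wins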
snp • 1.7k views
Ummm, what are var.vcf files? Are they a special type of VCF file?
7.1 years ago
vassialk ▴ 200
Perhaps you can use vcftools or the VCF software libraries available for R, Python, Ruby, Java, and C++. See SourceForge for details.
Thanks
vcftools (with its documentation) is now on GitHub. Avoid SourceForge because they bundle in junkware with their software installers.
I was referring to the libraries. I don't know many libraries in Java to process VCF files. Do you know any? Until now I have only used htsjdk.
Sorry, I'm not a Java person. Maybe Pierre Lindenbaum or Brian Bushnell can help.
https://www.gap-system.org/Manuals/pkg/PatternClass-2.4.2/doc/chap8_mj.html
### 8 Properties of Permutations
It has been of interest to the authors to compute different properties of permutations. These include plus- and minus-decomposable permutations and block-decompositions of permutations, as well as the computation of the direct and skew sums of permutations. First, a couple of definitions on which some of the properties are based:
An interval of a permutation $$\sigma$$ is a set of contiguous values which in $$\sigma$$ have consecutive indices.
A permutation of length $$n$$ is called simple if it only contains intervals of length 0, 1 and $$n$$.
#### 8.1 Intervals in Permutations
As mentioned above, an interval of a permutation $$\sigma$$ is a set of contiguous numbers which in $$\sigma$$ have consecutive indices. For example in $$\sigma = 4 6 8 7 1 5 2 3$$ the following is an interval $$\sigma(2)\sigma(3)\sigma(4)=6 8 7$$ whereas $$\sigma(4)\sigma(5)\sigma(6)=7 1 5$$ is not.
##### 8.1-1 IsInterval
‣ IsInterval( list ) ( function )
Returns: true, if list is an interval.
IsInterval takes any list of unique elements that can be ordered lexicographically and checks whether it is an interval.
gap> IsInterval([3,6,9,2]);
false
gap> IsInterval([2,6,5,3,4]);
true
gap>
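For readers outside GAP, the same test is a couple of lines in, say, Python (a sketch assuming a list of distinct integers):

    def is_interval(vals):
        # a nonempty set of distinct integers is an interval iff its values are contiguous
        return len(vals) > 0 and max(vals) - min(vals) + 1 == len(vals)

    print(is_interval([3, 6, 9, 2]))     # False
    print(is_interval([2, 6, 5, 3, 4]))  # True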
#### 8.2 Simplicity
As mentioned above a permutation is said to be simple if it only contains intervals of length 0, 1 or the length of the permutation.
##### 8.2-1 IsSimplePerm
‣ IsSimplePerm( perm ) ( function )
Returns: true if perm is simple.
To check whether perm (length of perm = $$n$$) is a simple permutation, IsSimplePerm uses the basic algorithm proposed by Uno and Yagiura in [UY00], comparing perm against the identity permutation of the same length.
gap> IsSimplePerm([2,3,4,5,1,1,1,1]);
true
gap> IsSimplePerm([2,4,6,8,1,3,5,7]);
true
gap> IsSimplePerm([3,2,8,6,7,1,5,4]);
false
gap>
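Outside GAP, simplicity can also be checked by brute force; a sketch that, unlike IsSimplePerm above, takes a permutation in plain one-line notation rather than its rank encoding:

    def is_simple(perm):
        # simple iff no contiguous window of length 2..n-1 is an interval of values
        n = len(perm)
        for i in range(n):
            for j in range(i + 2, n + 1):
                if j - i < n and max(perm[i:j]) - min(perm[i:j]) + 1 == j - i:
                    return False
        return True

    print(is_simple([2, 4, 6, 8, 1, 3, 5, 7]))  # True, the exceptional permutation
    print(is_simple([3, 2, 8, 6, 7, 1, 5, 4]))  # False: "3 2" is an interval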
#### 8.3 Point Deletion in Simple Permutations
In [PR12] it is shown how one can get chains of permutations by starting with a simple permutation and then removing either a single point or two points, such that the resulting permutation is still simple. We have applied this theory to create functions that find the set of simple permutations that are shorter by one such deletion.
##### 8.3-1 OnePointDelete
‣ OnePointDelete( perm ) ( function )
Returns: A list of simple permutations with one point less than perm.
OnePointDelete removes single points from the simple permutation and returns a list of the resulting simple permutations, in their rank encoding.
gap> OnePointDelete([5,2,3,1,2,1]);
[ [ 2, 3, 1, 2, 1 ], [ 4, 1, 2, 2, 1 ] ]
gap> OnePointDelete([5,2,4,1,6,3]);
[ [ 2, 3, 1, 2, 1 ], [ 4, 1, 2, 2, 1 ] ]
gap>
##### 8.3-2 TwoPointDelete
‣ TwoPointDelete( perm ) ( function )
Returns: The exceptional permutation with two points less than perm.
TwoPointDelete removes two points of the input exceptional permutation and returns the list of the unique resulting permutation, in its rank encoding.
gap> TwoPointDelete([2,4,6,8,1,3,5,7]);
[ [ 2, 3, 4, 1, 1, 1 ] ]
gap> TwoPointDelete([2,3,4,5,1,1,1,1]);
[ [ 2, 3, 4, 1, 1, 1 ] ]
gap>
##### 8.3-3 PointDeletion
‣ PointDeletion( perm ) ( function )
Returns: A list of simple permutations with of shorter length than perm.
PointDeletion takes any simple permutation, whether exceptional or not, and removes the appropriate number of points.
gap> PointDeletion([5,2,3,1,2,1]);
[ [ 2, 3, 1, 2, 1 ], [ 4, 1, 2, 2, 1 ] ]
gap> PointDeletion([5,2,4,1,6,3]);
[ [ 2, 3, 1, 2, 1 ], [ 4, 1, 2, 2, 1 ] ]
gap> PointDeletion([2,4,6,8,1,3,5,7]);
[ [ 2, 3, 4, 1, 1, 1 ] ]
gap> PointDeletion([2,3,4,5,1,1,1,1]);
[ [ 2, 3, 4, 1, 1, 1 ] ]
gap>
#### 8.4 Block-Decomposition
Given a permutation $$\pi$$ of length $$m$$ and nonempty permutations $$\alpha_{1},\ldots,\alpha_{m}$$ the inflation of $$\pi$$ by $$\alpha_{1},\ldots,\alpha_{m}$$, written as $$\pi[\alpha_{1},\ldots,\alpha_{m}]$$, is the permutation obtained by replacing each entry $$\pi(i)$$ by an interval that is order isomorphic to $$\alpha_{i}$$ [Bri08]. Conversely, a block-decomposition of $$\sigma$$ is any expression of $$\sigma$$ as an inflation $$\sigma=\pi[\alpha_{1},\ldots,\alpha_{m}]$$. The block-decomposition of a permutation is unique if and only if $$\sigma,\pi,\alpha_{1},\ldots,\alpha_{m}$$ all are in the same pattern class and $$\pi$$ is simple and $$\pi\neq 1 2,\ 2 1$$ [AA05].
For example, the inflation $$24513[21,1,1,1,2413]=3 2 8 9 1 5 7 4 6$$; written in GAP this is [[2,4,5,1,3],[2,1],[1],[1],[1],[2,4,1,3]]. This decomposition of $$3 2 8 9 1 5 7 4 6$$ is not unique. The unique block-decomposition, as described above, is $$3 2 8 9 1 5 7 4 6=2413[21,12,1,2413]$$, or in GAP notation [3,2,8,9,1,5,7,4,6]=[[2,4,1,3],[2,1],[1,2],[1],[2,4,1,3]].
##### 8.4-1 Inflation
‣ Inflation( list_of_perms ) ( function )
Returns: A permutation that represents the inflation of the list of permutations, taking the first permutation to be $$\pi$$, as described in the definition of inflation.
Inflation takes a list of permutations that stands for a block-decomposition of a permutation, and calculates that permutation by replacing the entry at position $$i$$ of the first permutation by an interval order isomorphic to the permutation at index $$i+1$$ of the list.
gap> Inflation([[3,2,1],[1],[1,2],[1,2,3]]);
[ 6, 4, 5, 1, 2, 3 ]
gap> Inflation([[1,2],[1],[4,2,1,3]]);
[ 1, 5, 3, 2, 4 ]
gap> Inflation([[2,4,1,3],[2,1],[3,1,2],[1],[2,4,1,3]]);
[ 3, 2, 10, 8, 9, 1, 5, 7, 4, 6 ]
gap>
##### 8.4-2 BlockDecomposition
‣ BlockDecomposition( perm ) ( function )
Returns: A list of permutations representing the block-decomposition of perm. In the list, the first permutation is $$\pi$$, as described in the definition of block-decomposition above.
BlockDecomposition takes a plus- and minus-indecomposable permutation and decomposes it into its maximal intervals, preceded by the simple permutation that represents the positions of the intervals. If a plus- or minus-decomposable permutation is input, then the decomposition will not be the unique decomposition, by the definition of plus- or minus-decomposable permutations; see below.
gap> BlockDecomposition([3,2,10,8,9,1,5,7,4,6]);
[ [ 2, 4, 1, 3 ], [ 2, 1 ], [ 3, 1, 2 ], [ 1 ], [ 2, 4, 1, 3 ] ]
gap> BlockDecomposition([1,2,3,4,5]);
[ [ 1, 2 ], [ 1, 2, 3, 4 ], [ 1 ] ]
gap> BlockDecomposition([5,4,3,2,1]);
[ [ 2, 1 ], [ 4, 3, 2, 1 ], [ 1 ] ]
gap>
#### 8.5 Plus-Decomposability
A permutation $$\sigma$$ is said to be plus-decomposable if it can be written uniquely in the following form,
$\sigma = 12 [\alpha_{1},\alpha_{2}]$
where $$\alpha_{1}$$ is not plus-decomposable.
The subset of a rational class containing all permutations that are plus-decomposable and in the class has also been found to be rational under the rank encoding.
##### 8.5-1 IsPlusDecomposable
‣ IsPlusDecomposable( perm ) ( function )
Returns: true if perm is plus-decomposable.
To check whether perm is a plus-decomposable permutation, IsPlusDecomposable uses the fact that there has to be a prefix $$1..x$$, with $$x<n$$ ($$n$$ = length of perm), of the rank encoded permutation that is itself a valid rank encoding.
gap> IsPlusDecomposable([3,3,2,3,2,2,1,1]);
true
gap> IsPlusDecomposable([3,4,2,6,5,7,1,8]);
true
gap> IsPlusDecomposable([3,2,8,6,7,1,5,4]);
false
gap>
#### 8.6 Minus-Decomposability
Minus-decomposability is essentially the same as plus-decomposability, the difference is that if a permutation $$\sigma$$ is minus-decomposable, it can be written uniquely in the following form,
$\sigma = 21 [\alpha_{1},\alpha_{2}]$
where $$\alpha_{1}$$ is not minus-decomposable.
Here also, the subset of a rational class, containing all permutations that are minus-decomposable and in the class, has been found to be rational under the rank encoding.
##### 8.6-1 IsMinusDecomposable
‣ IsMinusDecomposable( perm ) ( function )
Returns: true if perm is minus-decomposable.
To check whether perm (length of perm = $$n$$) is a minus-decomposable permutation, IsMinusDecomposable uses the fact that the first $$n-x$$ letters, where $$x<n$$, in the rank encoding of perm have to be $$>x$$, and that the letters from position $$n-x+1$$ until the last one have to be $$\leq x$$.
gap> IsMinusDecomposable([3,3,3,3,3,3,2,1]);
true
gap> IsMinusDecomposable([3,4,5,6,7,8,2,1]);
true
gap> IsMinusDecomposable([3,2,8,6,7,1,5,4]);
false
gap>
#### 8.7 Sums of Permutations
The direct sum of two permutations $$\sigma=\sigma_{1} \ldots \sigma_{k}$$ and $$\tau=\tau_{1}\ldots\tau_{l}$$ is defined as,
$\sigma \oplus \tau = \sigma_{1}\ \sigma_{2}\ldots\sigma_{k}\ (\tau_{1}+k)\ (\tau_{2}+k)\ldots(\tau_{l}+k)\ .$
In a similar fashion the skew sum of $$\sigma, \tau$$ is
$\sigma \ominus \tau = (\sigma_{1}+l)\ (\sigma_{2}+l)\ldots(\sigma_{k}+l)\ \tau_{1}\ \tau_{2}\ldots\tau_{l}\ .$
The calculation of the direct and skew sums of permutations using the rank encoding is also straightforward, and is used in the functions described below. The direct sum of two permutations $$\sigma,\tau$$ represented as their rank encoded sequences is the permutation whose rank encoding is the concatenation of the rank encodings of $$\sigma$$ and $$\tau$$. The skew sum of two permutations $$\sigma,\tau$$ encoded by the rank encoding is the concatenation of the rank encodings of $$\sigma$$ and $$\tau$$, where in the sequence corresponding to $$\sigma$$ each element has been increased by $$l$$, with $$l$$ being the length of $$\tau$$.
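On plain (one-line) permutations both sums are equally direct; a Python sketch mirroring the GAP examples below:

    def perm_direct_sum(sigma, tau):
        # direct sum: append tau with every entry shifted up by len(sigma)
        k = len(sigma)
        return sigma + [t + k for t in tau]

    def perm_skew_sum(sigma, tau):
        # skew sum: shift sigma's entries up by len(tau), then append tau
        l = len(tau)
        return [s + l for s in sigma] + tau

    print(perm_direct_sum([2, 4, 1, 3], [2, 5, 4, 1, 3]))  # [2, 4, 1, 3, 6, 9, 8, 5, 7]
    print(perm_skew_sum([2, 4, 1, 3], [2, 5, 4, 1, 3]))    # [7, 9, 6, 8, 2, 5, 4, 1, 3]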
##### 8.7-1 PermDirectSum
‣ PermDirectSum( perm1, perm2 ) ( function )
Returns: A permutation resulting from perm1 $$\oplus$$ perm2.
PermDirectSum returns the permutation corresponding to perm1 $$\oplus$$ perm2 if perm1 and perm2 are both not rank encoded. If both perm1 and perm2 are rank encoded, then PermDirectSum returns a rank encoded sequence.
gap> PermDirectSum([2,4,1,3],[2,5,4,1,3]);
[ 2, 4, 1, 3, 6, 9, 8, 5, 7 ]
gap> PermDirectSum([2,3,1,1],[2,4,3,1,1]);
[ 2, 3, 1, 1, 2, 4, 3, 1, 1 ]
gap>
##### 8.7-2 PermSkewSum
‣ PermSkewSum( perm1, perm2 ) ( function )
Returns: A permutation resulting from perm1 $$\ominus$$ perm2.
PermSkewSum returns the permutation corresponding to perm1 $$\ominus$$ perm2 if perm1 and perm2 are both not rank encoded. If both perm1 and perm2 are rank encoded, then PermSkewSum returns a rank encoded sequence.
gap> PermSkewSum([2,4,1,3],[2,5,4,1,3]);
[ 7, 9, 6, 8, 2, 5, 4, 1, 3 ]
gap> PermSkewSum([2,3,1,1],[2,4,3,1,1]);
[ 7, 8, 6, 6, 2, 4, 3, 1, 1 ]
gap>
https://tex.stackexchange.com/questions/268775/how-to-typeout-dimensions-in-mm
# How to typeout dimensions in mm?
\typeout{***This vertical space will be \the\textheight}
prints
***This vertical space will be 153.64488pt
Is it possible to print \the\textheight in mm?
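For a manual sanity check: TeX points convert at 1 in = 72.27 pt = 25.4 mm, so 153.64488 pt × (25.4 / 72.27) ≈ 54.0 mm.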
## 2 Answers
With some help from How to print a length accurately and with user-controlled rounding?, here is a way:
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
% https://tex.stackexchange.com/a/123283/5764
\DeclareExpandableDocumentCommand { \printlengthas } { m m }
{ \dim_to_decimal_in_unit:nn {#1} { 1 #2 } #2 }
\ExplSyntaxOff
\begin{document}
\newlength{\advertwidth}
\setlength{\advertwidth}{2.5in}
\printlengthas{\advertwidth}{in}
\printlengthas{\advertwidth}{mm}
\typeout{The length \string\advertwidth\space is \printlengthas{\advertwidth}{mm}.}
\end{document}
The .log includes
The length \advertwidth is 63.50034mm.
Wouldn't this be much easier?
\documentclass{article}
\usepackage{lengthconvert}
\begin{document}
\Convert[unit = mm]{\textheight}
\end{document}
EDIT: Oh, I see, this is not a typeout ...
• \textheight = 1564.90157mm = 156.490157cm !! Possibly there is inconsistency between recent l3 packages and lengthconvert.sty. Oct 2 '15 at 13:44
• @AkiraKakuto I didn't even look at what the number said. This is worth a new question, I think. Oct 3 '15 at 9:37
• New question opened at tex.stackexchange.com/questions/270857/… Oct 3 '15 at 9:46
http://math.stackexchange.com/questions/362457/existence-of-dominating-measure-for-weak-compact-set-of-measures
# Existence of dominating measure for weak*-compact set of measures
Let $(\Omega,\mathcal F)$ be a measurable space and $\mathcal P$ a weak*-compact subset of the set of all probability measures $\mathcal M_1(\Omega)$. Does there always exist a probability measure $\mathbb Q\in\mathcal M_1(\Omega)$ such that every $\mathbb P\in\mathcal P$ is absolutely continuous with respect to $\mathbb Q$, i.e. such that $\mathbb Q$ dominates all measures in $\mathcal P$?
-
Can you precisely state what you mean by weak$^*$-compact? – Davide Giraudo Apr 15 '13 at 19:55
the weak*-topology is usually taken to be the weakest topology such that the linear functionals $l_Z:\mathcal M_1(\Omega)\rightarrow\mathbb R$, defined by $l_Z(\mu)=\int_\Omega Z\,d\mu$, are continuous for every bounded and measurable function $Z:\Omega\rightarrow\mathbb R$. – Andy Teich Apr 15 '13 at 20:34
https://wiki.iac.isu.edu/index.php?title=Physics_Catalog
# Physics Catalog
Department of Physics
Chair: Beezhold
Professors: Cole, Dale, Forest, Shropshire
Associate Professors: McNulty, Tatar
Research Associate Professor:
Assistant Professor: McNulty
Research Assistant Professor:
Senior Lecturer:
Instructor: Bernabee
Affiliate Faculty: Burgett, Blackburn, DeVeaux, Franckowiak, Gesell, Harker, Harris, Hill, Jones, K. Kim, Millward, Nigg, Nieschmidt, Roney
Professors Emeriti: Beezhold, Harmon, Parker, Vegors
The objectives of our graduate degrees, which are the M.S., M.N.S., and the Ph.D. in Applied Physics, are to develop a core competence in the fundamental physical sciences that is appropriate for the level of the degree, to develop more generalized skills of quantitative reasoning that are applicable to any discipline, and to understand the nature and influence of physics in particular, and science in general, upon our society. Additional objectives include the development of (1) broad, fundamental technical skills and knowledge, (2) strong communication skills, and (3) the capability to think critically and work independently. The expectations for each of these objectives have a level that is appropriate for the degree.
The learning objectives of the M.S. degree are mastery of the core subjects of electromagnetism, non-relativistic quantum mechanics, and theoretical methods of classical physics (principally mechanics).
The purpose of the M.N.S. degree is to provide a broad spectrum of knowledge in the physical sciences for teachers of secondary education. The technical learning objectives are flexible in order to accommodate the interests of the student, so long as the subject area is physical science. There is no thesis requirement or expectation of one for this degree.
The communication objectives for these degrees are writing and speaking skills that are sufficient for students to represent themselves, their projects, and their organizations at regional, national, or international scientific meetings. Our expectations are that these students will obtain critical thinking skills and an ability to work independently at a level that will require minimal or no supervision by a more senior scientist or management.
The educational objectives of the Ph.D. degree in Applied Physics include all of those of the M.S. programs, plus mastery of additional graduate level classes of the student’s choosing, plus completion of an original doctoral research thesis project with the objective of mastery of planning, executing, and publishing original research in physics at the highest level of the discipline. The communication objectives for students at this level are writing and speaking skills that are sufficient to teach in higher education, attract interest and funding to their projects, and to represent themselves, their projects and their organizations at regional, national, or international scientific meetings. Our expectations are that these students will develop critical thinking skills and an ability to work independently such that they are capable of initiating and leading their own scientific projects, and can work at a level that requires no supervision.
Doctor of Philosophy in Applied Physics
Program Goals
• Prepare graduates to conduct and disseminate independent scholarly research in applied physics.
Program Objectives
• Increase knowledge of graduates in their chosen field of applied physics.
• Enhance the ability of graduates to contribute to their chosen field of applied physics.
• Enhance effective written and oral communication skills of graduates.
The Ph.D. program in Applied Physics is an interdisciplinary program offered by the Department of Physics that allows for a broad range of research topics. Areas of emphasis in the department include nuclear physics applications, medical physics, radiation effects in materials, biological systems and devices, accelerator physics and applications, materials science, homeland security applications, and other areas of fundamental and applied nuclear science.
To attain a degree in this program, a student must demonstrate scholarly achievement and the ability to conduct independent investigation. The program will normally require approximately five years of full-time study beyond the bachelor’s degree (or three years beyond the master’s degree), including class work, research, and preparation of the dissertation.
Admission requirements All applicants must meet Idaho State University Graduate School admission requirements for doctoral programs. In addition, applicants must have attained a minimum of a bachelor’s degree in physics or a closely related field (engineering, chemistry, etc.). The student’s course of study will be determined in consultation with the department chair or the department’s graduate advising committee. Students may be required to complete any missing course material that is required for the B.S. degree in physics at Idaho State University. Continued enrollment in the program is contingent upon maintaining a 3.0 grade point average, and upon making satisfactory progress toward a degree.
A complete graduate application for classified status in the Idaho State University Physics Department Ph.D. program consists of:
a. GRE scores (normally, a minimum of 50th percentile on verbal, quantitative, or analytical sections is required for classified students);
b. An Idaho State University Graduate School application form, fee, and official copies of transcripts;
c. Three letters of recommendation;
d. A statement of career goals.
General Requirements The Ph.D. degree requires completion of at least 84 credits at the 500-course level or greater. Of these, at least 32 credits, but no more than 44 credits, must be doctoral dissertation credits (PHYS 8850). At least 4 credits must be graduate seminar (or equivalent, as determined by the department). The remaining required credits consist of electives and the required courses listed below. Students entering the program with a master’s degree may receive credit for up to 30 credits toward the Ph.D., subject to the department chair’s approval. Students should complete the required courses as listed below (or their equivalent, as determined by the department) at Idaho State University.
Required Courses
• PHYS 6602 Theoretical Methods of Physics, 3 cr
• PHYS 6611-6612 Electricity and Magnetism, 6 cr
• PHYS 6621 Classical Mechanics, 3 cr
• PHYS 6624-6625 Quantum Mechanics, 6 cr
• PHYS 6649 Graduate Seminar, 4 cr
Program of Study A departmental advisory committee consisting of graduate faculty will guide each student in establishing his or her program of course and laboratory study based upon the student’s background and research interests. The advisory committee has the responsibility of ensuring that the student has adequate knowledge to support research in his or her area of research.
At the beginning of a full-time student’s second year, the student will sit for a written Qualifying Examination. Exceptions to this schedule may be made when a student has academic deficits to make up, in which case the student will have an additional year. These examinations are offered in January and September. The student will be allowed two attempts to pass the examination, and the second attempt must be the next available examination. The student will be admitted to candidacy upon passing the qualifying examination.
A dissertation committee of four departmental members and a Graduate Faculty Representative (GFR), chaired by the candidate’s major professor, must be appointed within six months of passing the qualifying examination. Within one year of passing the qualifying exam the full-time candidate, with guidance from the major professor, must satisfactorily complete the Preliminary Examination, which consists of an oral presentation and defense of a written proposal for dissertation research to the student’s dissertation committee.
The research and dissertation preparation must be done under the close supervision of the committee and must include at least one full year of work performed under the supervision of Idaho State University graduate faculty.
Dissertation Examination approval requires a public presentation of the dissertation and a satisfactory oral defense to the dissertation committee. Doctoral oral examinations are open to all regular members of the graduate faculty as observers. Further, oral presentations are open to the public until questioning by the dissertation committee begins.
Master of Science Programs
Admission Requirements The student must apply to, and meet all criteria for, admission to the Graduate School. In addition to the general requirements of the Graduate School, the student must comply with departmental requirements:
A complete graduate application for classified status in the Idaho State University Physics Department consists of:
a. GRE aptitude scores;
b. An ISU Graduate School Application form and official copies of transcripts;
c. Three letters of recommendation;
d. A brief statement of career goals.
Applicants must hold the degree of Bachelor of Science or Bachelor of Arts in Physics or a closely related field as determined by the department. The student’s course of study will be determined in consultation with the chair and the student’s major advisor. In some circumstances, a placement examination will be given. Students will normally be required to complete as deficiencies any course required for the B.S. in Physics at Idaho State University which they have not already taken. Continued enrollment in the program is contingent upon maintaining a 3.0 grade point average and making satisfactory progress toward a degree.
Master of Science – Thesis Option
A satisfactory score on physics examination(s) may be required before admission to candidacy. A total of 30 credits is required for the Master of Science degree with a Physics emphasis.
Required Courses
• PHYS 6602 Theoretical Methods of Physics, 3 cr
• PHYS 6611 Electricity and Magnetism, 3 cr
• PHYS 6624-6625 Quantum Mechanics, 6 cr
• PHYS 6650 Masters Thesis, 6 cr
A public presentation of the thesis is required, along with a satisfactory oral defense to the thesis committee consisting of two departmental members and one GFR.
Master of Science – Non-thesis Option
There are two mechanisms by which a student may attain a non-thesis M.S. degree. First, students in the Ph.D. program who do not pass the qualifying examination at the Ph.D. level after two attempts may complete a non-thesis M.S. degree. The required core courses for the non-thesis M.S. degree are the same as those for the Ph.D., i.e. those listed above. In addition, a non-thesis M.S. student must pass the qualifying examination at a level appropriate for the M.S., and he or she must complete an oral presentation and defense of a written proposal for a research project to the student’s graduate committee.
Second, students in the Ph.D. program who have completed all required courses for the Ph.D. and have passed both their qualifying examination and their oral presentation and defense of a written proposal for a research project are eligible for a non-thesis M.S. degree.
Master of Natural Science in Physics
The Master of Natural Science (MNS) in Physics is designed primarily for teachers and prospective teachers who want to improve their understanding of the subject matter of physics. Emphasis is upon the subject matter and the M.N.S. is generally not a thesis program. Individuals interested in this degree should hold a teaching certificate or be working toward one. The student’s program will be determined in consultation with the student’s advisor and committee. The program requires a minimum of 30 credits, 22 of which must be in residence. A final oral examination is required, with the thesis committee consisting of two departmental members and one GFR.
Admission Requirements The student must apply to, and meet all criteria for, admission to the Graduate School.
General Requirements The student’s program will be determined in consultation with the student’s advisor and committee. The program requires a minimum of 30 credits, 22 of which must be in residence. A final oral examination is required.
PHYS 5503 - 5504 Advanced Modern Physics 3 credits. Study of the elementary principles of quantum mechanics and an introduction to atomic, solid state, and nuclear physics. Quantum mechanics will be used as much as possible. PHYS 5503 is a PREREQ for 5504. PREREQ: MATH 3360 OR EQUIVALENT, AND PHYS 3301.
PHYS 5505 Advanced Laboratory 2 credits. Experiments in radiation detection and measurement, nuclear spectroscopy including x-ray and gamma spectroscopies, neutron activation and ion beam methods. Available to Geology, Engineering, and Physics majors. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 5509 Introductory Nuclear Physics 3 credits. A course in Nuclear Physics with emphasis upon structural models, radioactivity, nuclear reactions, fission and fusion. PREREQ: KNOWLEDGE OF ELEMENTARY QUANTUM MECHANICS AND DIFFERENTIAL EQUATIONS OR PERMISSION OF INSTRUCTOR.
PHYS 5510 Science in American Society 2 credits. Observational basis of science; technology’s historical influences on scientific developments; perceptions of science in contemporary America; tools/strategies for teaching science. Cross-listed as GEOL 5510. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 5515 Statistical Physics 3 credits. Topics covered may include kinetic theory, elementary statistical mechanics, random motion and the theory of noise. Choice of topics will depend upon the interest of the students and instructor. PREREQ: PHYS 2212, MATH 3360.
PHYS 5516 Radiation Detection and Measurement 3 credits. Lecture/laboratory course emphasizing practical measurement techniques in nuclear physics. PREREQ: CHEM 1111, CHEM L1111, CHEM 1112, CHEM L1112, AND EITHER (PHYS 1111 AND PHYS 1113) OR (PHYS 2211 AND PHYS 2213).
PHYS 5521-5522 Electricity and Magnetism 3 credits. Intermediate course in fundamental principles of electrical and magnetic theory. Free use will be made of vector analysis and differential equations. PHYS 5521 is a PREREQ for 5522. PREREQ: PHYS 2212 AND MATH 3360.
PHYS 5542 Solid State Physics 3 credits. Introduction to the field of solid state physics emphasizing the fundamental concepts. Topics usually covered are crystal structure, X-ray diffraction, crystal binding energies, free electron theory of solids, energy bands. PREREQ: PHYS 3301, PHYS 5583, MATH 3360 OR PERMISSION OF INSTRUCTOR.
PHYS 5552 Intermediate Optics 3 credits. Wave theory, e/m waves, production of light, measurement of light, reflection, refraction, interference, diffraction, polarization, optical systems, matrix methods, Jones vectors, Fourier optics, propagation of e/m waves in materials, atmospheric optics. PREREQ: PHYS 2212. COREQ: MATH 3360.
PHYS 5553 Topics in Astrophysics 2 credits. Applications of physics to astronomy or cosmology. May include lab exercise. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 5561-5562 Introduction to Mathematical Physics 3 credits. Introduction to the mathematics most commonly used in physics with applications to and practice in solving physical problems; includes vector analysis, ordinary and partial differential equations. PHYS 5561 is a PREREQ for 5562. PREREQ: PHYS 2212 AND MATH 3360.
PHYS 5583 Theoretical Mechanics 4 credits. Detailed study of the motion of particles, satellites, rigid bodies and oscillating systems. Develop and apply Lagrangian and Hamiltonian methods. PREREQ: PHYS 2212 AND MATH 3360.
PHYS 5592 Colloquium in Physics 1 credit. Faculty and student lectures in current research topics in physics. Open to upper division and graduate students in physics. May be repeated to a maximum of 4 credits.
PHYS 5597 Professional Education Development Topics. Variable credit. A course for practicing professionals aimed at the development and improvement of skills. May not be applied to graduate degrees. May be repeated. May be graded S/U.
PHYS 5599 1-6 Credits. This is an experimental course. The course title and number of credits are noted by course section and announced in the class schedule by the scheduling department. Experimental courses may be offered no more than three times. May be repeated.
PHYS 6602 Theoretical Methods of Physics 3 credits. Calculus of variations, Lagrangian and Hamiltonian formalisms of classical mechanics, some classical scattering theory, methods of solving PDEs, Green’s functions, functions of complex variables, vector and tensor analysis, matrix, group and operator theory, and numerical methods integrated throughout each topic.
PHYS 6603 Particle Physics 3 credits. Basic constituents of the standard model, experimental methods, particle interactions: weak, gravitational, strong and electromagnetic, conservation laws, hadron structure and interactions, unification of interactions, physics beyond the standard model. PREREQ: PHYS 6624 OR PERMISSION OF INSTRUCTOR.
PHYS 6609 Advanced Nuclear Physics 3 credits. Nucleon-nucleon interaction, bulk nuclear structure, microscopic models of nuclear structure, collective models of nuclear structure, nuclear decays and reactions, electromagnetic interactions, weak interactions, strong interactions, nucleon structure, nuclear applications, current topics in nuclear physics. PREREQ: PHYS 6624 OR PERMISSION OF INSTRUCTOR.
PHYS 6611 Electricity and Magnetism 3 credits. Maxwell’s equations and methods of solution, plane wave propagation and dispersion, wave guides, antennas and other simple radiating systems, relativistic kinematics and dynamics, classical interaction of charged particles with matter, classical radiation production mechanisms.
PHYS 6612 Advanced Electricity and Magnetism 3 credits. Advanced topics in application of Maxwell’s equations to wave guides, antennas and other simple radiating systems. Particular emphasis upon the relativistic interaction of charged particles with matter, energy loss, and classical radiation production and absorption mechanisms. PREREQ: PHYS 6611 OR PERMISSION OF INSTRUCTOR.
PHYS 6615 Activation Analysis 3 credits. Theory and use of activation methods for quantitative chemical analysis of natural and synthetic materials. Applications will be emphasized. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 6621 Classical Mechanics 3 credits. Lagrange equations, small vibrations; Hamilton’s canonical equations; Hamilton’s principle, least action; contact transformation; Hamilton-Jacobi equation, perturbation theory; nonlinear mechanics. PREREQ: PHYS 5583, PHYS 5561-5562, OR PERMISSION OF INSTRUCTOR.
PHYS 6624-6625 Quantum Mechanics 3 credits. Schrodinger wave equation, stationary state solutions; operators and matrices; perturbation theory, non-degenerate and degenerate cases; WKB approximation, non-harmonic oscillator, etc.; collision problems. Born approximation, method of partial waves. PHYS 6624 is a PREREQ for 6625. PREREQ: PHYS 5561-5562, PHYS 6621 OR PERMISSION OF INSTRUCTOR.
PHYS 6626 Advanced Quantum Mechanics 3 credits. Elementary quantum field theory and practical applications. Emphasis upon non-relativistic and relativistic quantum electrodynamics, radiative processes, bremsstrahlung, pair-production, scattering, photo-electric effect, emission and absorption. PREREQ: PHYS 6625 OR PERMISSION OF INSTRUCTOR.
PHYS 6630 Accelerator Physics 3 credits. The physics of direct voltage accelerators, betatrons, synchrotrons, linear induction accelerators; high current accelerators; electromagnetic particle optics, free electron lasers and synchrotron light sources. PREREQ: PHYS 6612, PHYS 6624 OR EQUIVALENT.
PHYS 6631 Accelerator Technology 3 credits. Topics will include high voltage and pulsed power techniques, waveguide and R.F. structures, ion and electron beam sources and beam measurements as applied to particle beam machines. PREREQ: PHYS 6612 OR EQUIVALENT.
PHYS 6632 Particle Beam Laboratory 1-3 credits. Laboratory projects in particle beam and ion optics, radiation detectors, ion source operation, etc. May be repeated up to 3 credits. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 6640 Statistical Mechanics 3 credits. Statistical ensembles; the Maxwell-Boltzmann law; approach to equilibrium, quantum statistical mechanics; application of statistical mechanics to thermodynamic processes. PREREQ: PHYS 5515 AND PHYS 6621.
PHYS 6641 Field Theory, Particles, and Cosmology I 3 credits. Topics may include Dirac theory, group theory, Feynman diagrams, superstrings, supergravity, relativity and cosmology. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 6642 Field Theory, Particles, and Cosmology II 3 credits. A continuation of PHYS 6641. Topics may include Dirac theory, group theory, Feynman diagrams, superstrings, supergravity, relativity and cosmology. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 6643 Advanced Solid State Physics 3 credits. Electron many-body problem, crystal and reciprocal lattice, Bloch functions, pseudo potentials, semi-conductors, transition metals, crystal momentum and coordinate representations, electric and magnetic fields, impurities and defects in crystals and semi-conductors, radiation effects on solids, lattice vibrations, electron transport. PREREQ: PHYS 6624 OR PERMISSION OF INSTRUCTOR.
PHYS 6648 Special Topics in Physics 1-3 credits. Survey, seminar, or project (usually at an advanced level) in one area of physics. Content varies depending upon the desires of the students and faculty. May be repeated until 6 credits are earned. PREREQ: PERMISSION OF INSTRUCTOR.
PHYS 6649 Graduate Seminar 2 credits. Advanced seminar topics in currently active areas of applied physics. Students will be required to make presentations and will be required to submit a paper. Four credits required. May be repeated.
PHYS 6650 Thesis 1-10 credits. May be repeated. Graded S/U.
PHYS 6699 1-6 Credits. This is an experimental course. The course title and number of credits are noted by course section and announced in the class schedule by the scheduling department. Experimental courses may be offered no more than three times. May be repeated.
PHYS 8850 Doctoral Dissertation Variable credit. Research toward and completion of the dissertation. May be repeated. Graded S/U.
|
2022-09-29 23:44:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3142937123775482, "perplexity": 3752.5560422033736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00684.warc.gz"}
|
https://dmoj.ca/problem/nccc9s2
|
## Mock CCC '22 1 S2 - IU
View as PDF
Points: 5 (partial)
Time limit: 0.25s
Memory limit: 1G
Problem type
Kaitlyn is the world's biggest IU fan. One day, she is bored and buys $2N$ magnets. $N$ of them have the letter I on them and $N$ of them have the letter U on them. She arranges them on the fridge to spell IU repeated $N$ times.
Sadly, her archnemesis, Sylvia, has broken into her apartment and rearranged the magnets because she is not an IU fan.
Kaitlyn wants to fix the magnets so that it spells IU repeatedly. However, Kaitlyn is tired, so the only operation she can do is swap two adjacent magnets.
Compute the minimum number of operations Kaitlyn needs to make this happen.
#### Constraints
In tests worth 1 mark, .
In tests worth an additional 4 marks, .
#### Input Specification
The first line contains a single integer $N$.
The second line contains a string of $2N$ characters. It is guaranteed that $N$ of them are I and $N$ of them are U.
#### Output Specification
Output an integer, the minimum number of operations needed to rearrange the magnets as desired. If it is impossible to do so, output -1.
#### Sample Input
2
IUUI
#### Sample Output
1
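A minimal Python sketch of one standard approach (an assumption on our part, not the official editorial): match the $i$-th I in the given string, counting from zero, to position $2i$ in the target pattern IUIU...; for a two-letter alphabet, the minimum number of adjacent swaps equals the sum of the distances between matched positions.

```python
def min_swaps(s: str) -> int:
    # positions of the 'I' magnets in the scrambled string
    positions = [i for i, c in enumerate(s) if c == 'I']
    n = len(s) // 2
    if len(positions) != n:   # counts must match; otherwise impossible
        return -1
    # the i-th 'I' must end up at index 2*i in "IUIU...IU"
    return sum(abs(p - 2 * i) for i, p in enumerate(positions))

n = int(input())
s = input().strip()
print(min_swaps(s))           # sample: n = 2, s = "IUUI" -> 1
```

On the sample, the I's sit at indices 0 and 3 while the targets are 0 and 2, so one swap suffices.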
|
2022-09-26 16:49:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4131040871143341, "perplexity": 3388.672681636164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00028.warc.gz"}
|
https://analystprep.com/study-notes/cfa-level-2/seasonality/
|
# Seasonality
Seasonality is a time series feature in which data shows regular and predictable patterns that recur every year. For example, retail sales tend to peak for the Christmas season and then decline after the holidays.
A seasonal lag is the value of the time series one year before the current period, incorporated as an additional term in an autoregressive model. For example, with quarterly data the seasonal lag is the 4th lag; with monthly data it is the 12th lag; and so on. Seasonality is detected when the residual autocorrelation at the seasonal lag differs significantly from 0. Incorporating the seasonal lag eliminates this autocorrelation in the error terms.
Consider quarterly sales data. The 4th autocorrelation of the residuals will be statistically different from zero if there is quarterly seasonality.
Seasonality can be corrected by incorporating an additional lagged term. For quarterly data, we add a prior year quarterly seasonal lag as follows:
$$\text{x}_{\text{t}}=\text{b}_{0}+\text{b}_{1}\text{x}_{\text{t}-1}+\text{b}_{2}\text{x}_{\text{t}-4}+\epsilon_{\text{t}}$$
In the above expression, the seasonal lag is $$\text{b}_{2}\text{x}_{\text{t}-4}$$. We then run a regression analysis on the time series data to test whether the seasonal lag will eliminate statistically significant autocorrelation of error terms.
#### Example: Testing and Correcting Seasonality
Consider an AR(1) model used to forecast quarterly retail sales of a certain company based on 100 observations.
$$\text{x}_{\text{t}}=\text{b}_{0}+\text{b}_{1}\text{x}_{\text{t}-1}+\epsilon_{\text{t}}$$
The residual autocorrelations relating to a certain year are as presented in the following table:
$$\small{\begin{array}{c|c} \textbf{Lag} & \textbf{Autocorrelation}\\ \hline1 & -0.0584 \\ \hline2 & -0.0492 \\ \hline3 & 0.0625 \\ \hline4 & 0.6580 \\ \end{array}}$$
Test for seasonality in the time series and suggest how to correct it.
#### Solution
The first step is to calculate the t-statistic for each autocorrelation using the formula:
$$\text{t}_{\text{Statistic}}=\frac{\text{Residual autocorrelation}}{\frac{1}{\sqrt{\text{T}}}}$$
The t-statistic for lag one is calculated as:
$$\text{t}_{\text{Statistic}}=\frac{-0.0584}{\frac{1}{\sqrt{100}}}=-0.584$$
$$\small{\begin{array}{c|c|c} \textbf{Lag} & \textbf{Autocorrelation} & \textbf{t-statistic} \\\hline1 & -0.0584 & -0.584 \\\hline2 & -0.0492 & -0.492 \\\hline3 & 0.0625 & 0.625 \\\hline4 & 0.658 & 6.580\\ \end{array}}$$
There are 100 observations and two parameters, $$\text{b}_{0}$$ and $$\text{b}_{1}$$, to be estimated. Thus, there are 98 (100-2) degrees of freedom. The critical t-value at the 5% significance level with 98 degrees of freedom is 1.98. The table above shows that the 4th lag has both the largest autocorrelation and the largest t-statistic.
The t-statistics of the first three lagged autocorrelations are less than the critical value at the 5% significance level. We can therefore conclude that none of the first three lagged autocorrelations is significantly different from zero.
However, notice that the t-statistic for the 4th lag autocorrelation is greater than the critical value. We thus reject the null hypothesis that the 4th lag autocorrelation is zero. The conclusion is that there is seasonality in the time series. This implies that the model is misspecified and not appropriate for use.
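The t-statistics above can be reproduced in a few lines. A minimal Python sketch (the autocorrelations are taken from the table; 1.98 is the two-tailed 5% critical value with 98 degrees of freedom):

```python
import math

T = 100                                # number of observations
critical_t = 1.98                      # 5% two-tailed critical value, 98 df
autocorrelations = {1: -0.0584, 2: -0.0492, 3: 0.0625, 4: 0.6580}

for lag, rho in autocorrelations.items():
    # the standard error of a residual autocorrelation is 1/sqrt(T)
    t_stat = rho / (1 / math.sqrt(T))
    print(f"lag {lag}: t = {t_stat:.3f}, significant: {abs(t_stat) > critical_t}")

# Only lag 4 is significant, so a seasonal term x_{t-4} should be added.
```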
## Correcting Seasonality
Seasonality can be corrected by adding, as another independent variable in the original AR(1) model, a lag of the dependent variable corresponding to the same period in the previous year, which makes the model correctly specified.
In the previous example, we have seen that there is seasonality in the quarterly time series. This implies that retail sales in each quarter are related to both the previous quarter and the corresponding quarter in the previous year.
We can incorporate a seasonality lag to the original AR(1) model as follows:
$$\text{x}_{\text{t}}=\text{b}_{0}+\text{b}_{1}\text{x}_{\text{t}-1}+\text{b}_{2}\text{x}_{\text{t}-4}+\epsilon_{\text{t}}$$
We have added the seasonal lag $$\text{b}_{2}\text{x}_{\text{t}-4}$$ to eliminate the regular quarterly pattern and seasonal non-stationarity if they exist.
## Forecasting with Seasonal Lags
Assume that the regression coefficients after incorporating the seasonal lag are estimated as:
$$\text{b}_{0}=0.0080, \text{b}_{1}=-0.0650$$ and $$\text{b}_2=0.8068$$.
The estimated equation is:
$$\text{x}_{\text{t}}=0.0080-0.0650\text{x}_{\text{t}-1}+0.8068\text{x}_{\text{t}-4}+\epsilon_{\text{t}}$$
Where $$\text{x}_{\text{t}}$$ is the retail sales for quarter $$\text{t}$$.
Given the following quarterly retail sales:
$$\small{\begin{array}{c|c|c|c|c} \textbf{Quarter} & 2020.1 & 2020.2 & 2020.3 & 2020.4 \\\hline\textbf{Retail sales (USD Millions)} & 100 & 300 & 150 & 200\\ \end{array}}$$
We can forecast the retail sales for the first quarter of 2021 as follows:
\begin{align*}\text{x}_{2021.1}&=0.0080-0.0650(\text{x}_{2020.4})+0.8068(\text{x}_{2020.1})\\&=0.0080-0.0650(200)+0.8068(100)\\&=67.69\end{align*}
The forecasted value of the retail sales for the first quarter of 2021 is 67.69 million.
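The same arithmetic can be checked with a short script. A minimal sketch using the coefficients and sales figures from this example:

```python
b0, b1, b2 = 0.0080, -0.0650, 0.8068    # estimated regression coefficients

sales = {"2020.1": 100, "2020.4": 200}  # quarterly retail sales, USD millions

# x_t = b0 + b1 * x_{t-1} + b2 * x_{t-4}
forecast = b0 + b1 * sales["2020.4"] + b2 * sales["2020.1"]
print(round(forecast, 2))               # 67.69
```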
## Question
Consider the following monthly AR model with an additional lag incorporated to eliminate seasonality.
$$\text{x}_{\text{t}}=\text{b}_{0}+\text{b}_{1}\text{x}_{\text{t}-1}+\text{b}_{2}\text{x}_{\text{t}-12}+\epsilon_{\text{t}}$$
Given the following information:
$$\small{\begin{array}{l|c} {}& \text{Coefficients} \\ \hline\text{Intercept} & 0.0005 \\ \hline\text{lag 1} & –0.12 \\ \hline\text{lag 12} & 0.87\\ \end{array}}$$
$$\small{\begin{array}{c|c|c} \textbf{Year} & \textbf{Month} & \textbf{Value} \\\hline 2019 & \text{Jan} & 2.8 \\\hline 2019 & \text{Feb} & 3.0 \\\hline 2019 & \text{Mar} & 3.5 \\\hline 2019 & \text{Apr} & 4.0 \\\hline 2019 & \text{May} & 4.6 \\\hline 2019 & \text{Jun} & 5.0 \\\hline 2019 & \text{Jul} & 5.4 \\\hline 2019 & \text{Aug} & 6.0 \\\hline 2019 & \text{Sep} & 7.0 \\\hline 2019 & \text{Oct} & 5.4 \\\hline 2019 & \text{Nov} & 6.0 \\\hline 2019 & \text{Dec} & 7.0\\ \end{array}}$$
The forecasted value for January 2020 is closest to:
1. 1.6.
2. 3.3.
3. 5.8.
### Solution
$$\text{x}_{\text{t}}=\text{b}_{0}+\text{b}_{1}\text{x}_{\text{t}-1}+\text{b}_{2}\text{x}_{\text{t}-12}+\epsilon_{\text{t}}$$
\begin{align*}\text{x}_{2020.1}&=0.0005-0.12(\text{x}_{2019.12})+0.87(\text{x}_{2019.1})\\&=0.0005-0.12\times7+0.87\times2.8\\&=1.6\end{align*}
LOS 3(l) Explain how to test and correct for seasonality in a time-series model and calculate and interpret a forecasted value using an AR model with a seasonal lag.
|
2022-09-28 08:50:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6065884232521057, "perplexity": 1952.3688721395492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335190.45/warc/CC-MAIN-20220928082743-20220928112743-00115.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/on-her-birthday-seema-decided-to-donate-some-money-to-children-of-an-orphanage-home-if-there-were-8-children-less-everyone-would-have-got-10-more-however-if-there-were-16-applications-determinants-matrices_104982
|
# On Her Birthday Seema Decided to Donate Some Money to Children of an Orphanage Home. If There Were 8 Children Less, Everyone Would Have Got ₹ 10 More. However, If There Were 16 - Mathematics
Sum
On her birthday Seema decided to donate some money to children of an orphanage home. If there were 8 children less, everyone would have got ₹ 10 more. However, if there were 16 children more, everyone would have got ₹ 10 less. Using the matrix method, find the number of children and the amount distributed by Seema. What values are reflected by Seema’s decision?
#### Solution
Let the number of children be x and the amount distributed by Seema for one child be ₹ y.
So, (x - 8)(y + 10) = xy
⇒ 10x - 8y = 80, i.e., 5x - 4y = 40 ...(i)
and
(x + 16)(y - 10) = xy
⇒ -10x + 16y = 160, i.e., 5x - 8y = -80 ...(ii)
To solve (i) and (ii),
let $$\text{A}=\begin{pmatrix}5&-4\\5&-8\end{pmatrix},\quad \text{B}=\begin{pmatrix}40\\-80\end{pmatrix},\quad \text{X}=\begin{pmatrix}\text{x}\\ \text{y}\end{pmatrix}$$
∵ $$\text{AX}=\text{B} \Rightarrow \text{X}=\text{A}^{-1}\text{B}$$
Now $$\text{A}^{-1}=-\frac{1}{20}\begin{pmatrix}-8&4\\-5&5\end{pmatrix}$$
$$\Rightarrow \begin{pmatrix}\text{x}\\ \text{y}\end{pmatrix}=\text{A}^{-1}\text{B}=\begin{pmatrix}32\\30\end{pmatrix}$$
Clearly x = 32, y = 30.
Hence the number of children is 32 and the amount distributed by Seema to each child is ₹ 30.
Value reflected: Helpfulness towards the needy people.
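As a cross-check, the same matrix method can be carried out with numpy (using numpy here is an assumption for illustration; the original solution is done by hand):

```python
import numpy as np

# 5x - 4y = 40 and 5x - 8y = -80, written as AX = B
A = np.array([[5, -4],
              [5, -8]])
B = np.array([40, -80])

X = np.linalg.inv(A) @ B   # X = A^(-1) B
print(X)                   # [32. 30.] -> 32 children, Rs 30 per child
```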
Concept: Applications of Determinants and Matrices
|
2021-05-16 08:18:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35177966952323914, "perplexity": 5146.022730757354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00528.warc.gz"}
|
http://crypto.stackexchange.com/questions?page=156&sort=active
|
# All Questions
537 views
### How is BCrypt secure when it uses a static dataset for blowfish hashing?
I'm planning on using this Javascript BCrypt implementation, but as you can see in the code, it uses a 4KB precalculated dataset for the P and ...
79 views
### Are there any signature schemes that protect against collusion by multiple parties?
Say I want to verify the identity of Alice, but Alice could be colluding with Bob to fool me. Is there any way to verify Alice's identity and also be sure that Bob is not impersonating Alice, e.g. ...
618 views
### Is this fixed length MAC unforgeable?
Consider the following fixed length MAC for messages of length $\ell(n)=2n-2$ using a pseudorandom function $F$: On input of a message $m_0||m_1$ ($|m_0| = |m_1| = n-1$) and a key $k \in \{0,1\}^n$, ...
299 views
### Why does RSA give better security on longer messages?
I am trying to understand the notion of RSA security. Choosing a public exponent where $e = 3$ facilitates the calculations, considering that it is secure if the plaintext or message is long. If the ...
100 views
### Is the following scheme secure for different cipher
Let's say I have two keys K1 and K2. two messages M1 and M2 of the same length. Cipher (E,D) 3 ciphertexts: C11, C12,C22 where Cij = E(Ki, Mj) In situation ...
842 views
### How does a “Tiger Tree Hash” handle data whose size isn't a power of two?
Constructing a hash tree is simple enough if the data fits into a number of blocks that is a power of two. ...
403 views
### How to generate successive stream-cipher keys?
I've identified a weakness in a distributed simulation system I'm looking at, and I'm looking for some advice on how to fix it. Clients initially negotiate an authentication token with a login server ...
987 views
### Permutations of pseudorandom data
Assuming a bit string is deemed cryptographically secure, e.g. PRNG using AES in counter mode, can we equally assume any permutation of said bit string is also cryptographically secure? In a more ...
528 views
### Alternatives to FHE for secure function evaluation
As a followup to a previous question I asked which was more related to Fully Homomorphic Encryption (FHE), what other cryptographic methods are available for computing a private function on public ...
833 views
### Complexity of arithmetic in a finite field?
I am wondering what the complexities are of adding/subtracting and muliplying/dividing numbers in a finite field $\mathbb{F}_q$. I need it to understand an article I am reading. Thank you
12k views
### Difference between “Signature Algorithm” and “Signature Hash Algorithm” in X.509
What's the difference between the "Signature Algorithm" and the "Signature Hash Algorithm" found in an X.509 certificate? Why does it need a "Signature Hash Algorithm"? Edit: I'm creating the ...
371 views
### Do parts of a hash carry the properties of the entire hash?
When I need to generate unique id's based on some information hashing is typical choice. However, sometimes that id needs to be of a particular size. I've seen a lot of schemes (HMAC-MD5-96 in SSH, ...
473 views
### X.509 CSR: Why does CA remove signature?
I just read this article on Wikipedia: Certificate Signing Request I'm not a PKI or Crypto expert. As I understand, a CSR (certification request) is always signed by the PKCS#10-Request creator. ...
223 views
### Shannon entropy calculation: is $H(A|R·A) = H(A)$?
Suppose I generate a random $m×m$ matrix $R$, where each of its elements belongs to $\mathbb Z_n$. I ensure that $R$ is invertible in $\mathbb Z_n^{m×m}$. Now I take a non-random $m×m$ ...
365 views
### Security analysis of a matrix multiplication protocol
Suppose Alice would like to obtain the product of two m×m matrices, i.e., A and B. Alice has A, whereas Bob has B. Since Alice does not want to reveal A to Bob, she chooses an m×m random invertible ...
157 views
### Does security under ROM imply exactly what?
I'm not sure I understand really the implications of proofs of security in the random oracle model. Does a proof of security in ROM translate to a reduction of security of the crypto-system to the ...
200 views
### RSA-OAEP versus RSA with Fujisaki-Okamoto construction
I was wondering why the Fujisaki-Okamoto construction (or one of its variants) is not (at least commonly) used with RSA to achieve CCA2 security? Does anyone know of any speed comparisons between RSA ...
345 views
### advances in usability for cryptography/authentication
I'm wondering if there have been any recent advances (say, the past 5-10 years) in human usability for cryptography and/or authentication? By that I mean something that makes it easier for an ...
2k views
When a user on facebook grants an app access to their account, an API key is issued to the app. This key is app and user-specific. This process is described in Facebook's developer documentation. ...
82 views
### How to take SHA-1 safely for my particular case?
Let me ask about my toy passwords generator program X5 which I want to improve. X5 uses a secret key and a public key to generate a password, where any public key is supposed to be known to hackers in ...
116 views
### RSA security assumptions - does breaking the DLP also break RSA? [duplicate]
Possible Duplicate: Would the ability to efficiently find Discrete Logs have any impact on the security of RSA? I'm wondering if breaking the DLP, that is the basis for ElGamal and DSA, ...
470 views
### What is the computational cost of a public key certificate signature verification?
What is the computational cost of a certificate signature verification in terms of exponentiation, multiplication and other computation operations?
421 views
### Security analysis of a “one-time pad” type hill cipher
Suppose the Hill cipher were modified to something like a one-time pad cipher, where Alice wants to send a message to Bob, and she chooses a key matrix randomly every time a new message is sent (and ...
158 views
### In a lattice, how can one define a good basis and a bad basis?
When it comes to lattice based cryptographic systems, all the literature talks about, good bases and bad bases. How does one define what a good basis is and what a bad basis is?
383 views
### Is the new preprint “An Algorithm For Factoring Integers” by Yingpu Deng and Yanbin Pan worth reading?
I just discovered on the eprint server of the IACR the paper mentioned in the title. Scanning quickly over the paper I didn't find anything spectacular, so I doubt that their new(?) approach will be ...
244 views
### Signing a GCM MAC
If I encrypt a message with AES-GCM, is it safe to use the MAC as the hash in a DSA/RSA signature? That is, if someone knows the AES key and nonce, will they be able to generate a different message ...
219 views
### Is the $\ell$-Diffie Hellman Inversion easy when g is known?
From here they define the $\ell$-Diffie Hellman inversion problem as: Given $g^{a},g^{a^2}\ldots,g^{a^{\ell}} \in G$, compute $g^{a^{-1}}$ Would this problem become easy if the generator $g$ is ...
3k views
### How large should a Diffie-Hellman p be?
In a Diffie-Hellman exchange, the parties need to agree on a prime p and a base g in order to continue. Assuming some ...
538 views
### Offline anonymous electronic money systems and their cryptographical base
What anonymous offline electronic money systems exist and what are they based on? I know only one currently - eCash, based on RSA blind signatures.
300 views
### How to construct a zero-knowledge proof of a number of the form $n=p^a q^b$
Let $n = p^a q^b$ where p and q are distinct primes and a and b are positive integers. How to construct a zero-knowledge proof that n is of such form? This is actually a homework problem with a ...
1k views
### Why was ISO10126 Padding Withdrawn?
Wikipedia mentions ISO10126 Padding has been withdrawn, but doesn't say why. Also there were no news reports about this, as far as I can see. Why was it withdrawn? Are there security flaws? Is there ...
2k views
### Sending KCV (key check value) with cipher text
I was wondering why it is not more common to send the KCV of a secret key together with the cipher text. I see many systems that send cipher text and properly prepend the IV to e.g. a CBC mode ...
427 views
### secure multiparty computation for multiplication
Suppose there are $N$ parties $p_j$, each with a binary $b_j\in{\{0,1\}}$. The problem needs to compute the multiplication of number of ones times that of zeros, that is, ...
205 views
### How can I store a combination of multiple pass phrases?
Let's assume we have 2 phrases, one is the real password from a user, and the other is generated from the real password and almost impossible to guess. You would need both to authenticate a user. What ...
626 views
### Can I secure my key by XORing it with a hashed password?
I'd like to build a simple password-protected symmetric key system. The key-creation process in my system operates as follows: The system creates a 256-bit key purely at random. The user chooses a ...
304 views
### Are derived hashes weakening the root?
Given a root hash root = H(plaintext) and two (or more) derived hashes h1 = H(salt1 + root), h2 = H(salt2 + root), would the ...
150 views
### Is using a salt important when creating a hash data validator?
I am creating a service that will return an set of objects, which will be used by multiple systems. At the end of the process, one (or more) of the objects will be sent back to our system for ...
748 views
### Is Common Name encoded in the certificate?
When I make a certificate like so cd /etc/openvpn/easy-rsa/2.0/ source ./vars . /etc/openvpn/easy-rsa/2.0/build-key client1 Then ...
333 views
### Realize a MAC using a Pseudo-random function?
Given a pseudo-random function and assuming that we do not have any other tools, how can we construct a MAC? I believe this can be done. Would like to know if there is more than one way of doing ...
1k views
### How does the MOV attack work?
What exactly is the MOV attack, how does it actually work, and what is it used for? It's explained briefly here and I'd like to know what it is more / what is it fully used for.
982 views
### Passwords with same SALT. What does this mean?
If the same SALT is used for many passwords on a Linux server, in what way is that a security risk? Does this mean that a user (who can change his own password) can calculate other users' passwords? ...
2k views
### Why not use CTR with a randomized IV?
I'm currently reading the chapter of Cryptographic Engineering (Ferguson, Schneier, Kohno 2010) about block cipher modes of operation. They have recommended CBC with random IV instead of CTR due to ...
|
2015-11-28 13:06:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6488480567932129, "perplexity": 2159.01576111165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398452560.13/warc/CC-MAIN-20151124205412-00139-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://math.mercyhurst.edu/~lwilliams/math150/
|
# Math 150: Linear Algebra
## Final Exam Review
### Course Information
Instructor: Dr. Lauren Williams
Class Meeting: MWF 9:15 - 10:20 Hirt 209; T 9:50 - 11:30 Old Main Advanced Lab
My Office: Old Main 404 (Tower)
My Office Hours: Mon 2:15 - 3:30, Tues 11:45 - 1, Wed 2:15 - 3:30, Thur 11:30 - 2
### Course Description
This is a one semester course in linear algebra with computer applications. We will be covering the following topics: matrices and matrix properties, vectors and vector spaces, linear systems, and linear transformations. The class lectures will focus primarily on definitions and theory, with some simple calculations being performed without the aid of a computer. After learning the basic principles and theory of each topic, we will reinforce the material using the open source mathematics software SAGE. Through a series of lab experiments, you will also gain familiarity with the programming language Python. Many of these lab experiments will focus on applications of linear algebra to other areas of mathematics and other fields, including data science.
Topics will include vectors and vector arithmetic, solutions of linear systems, LU factorization, vector spaces and subspaces, the four fundamental subspaces, projections, determinants, eigenvalues and eigenvectors, symmetry, singular value decomposition, linear transformations, and applications.
### Course Objectives
On successful completion of the course, students should be able to:
• describe the solution(s) of a system of linear equations, or be able to decide that one does not exist.
• be able to perform arithmetic operations on vectors and matrices, where defined.
• calculate the determinant of a matrix, and understand its significance.
• define a vector space and determine whether a set is a vector space.
• find the basis and dimension of a vector space.
• define and describe the four fundamental subspaces.
• define and identify linear maps.
• define and compute eigenvalues and eigenvectors.
• explain the geometric effect of a linear transformation on 2-dimensional spaces.
• produce and utilize simple Sage programs to perform computations related to all of the above topics (a small sketch follows this list).
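The labs themselves use Sage; as a flavor of the kinds of computations involved, here is a minimal stand-in sketch in Python with numpy (the matrix and library choice are illustrative assumptions, not course material):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 3.0])

x = np.linalg.solve(A, b)        # solve the linear system Ax = b
det = np.linalg.det(A)           # determinant of A
evals, evecs = np.linalg.eig(A)  # eigenvalues and eigenvectors of A

print(x)      # [1. 1.]
print(det)    # 3.0 (up to floating-point error)
print(evals)  # eigenvalues 3 and 1 (order may vary)
```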
### Textbook and Materials
Introduction to Linear Algebra, by Gilbert Strang, 4th Edition (older editions are fine too). No other supplies are required for the course.
### Homework
You will be given take home assignments, usually every week. These assignments will include questions taken directly from the text as well as additional problems related to topics we’ll see in class. Late work will not be accepted. The assignments will be posted on the course website (not Blackboard), along with solutions after assignments are due. Your lowest homework grade will be dropped when calculating your final grade.
### Lab Assignments
In addition to the homework assignments, you will have a weekly lab assignment. These will typically be completed during the lab meetings. If you need additional time on the lab, or if you are absent, the lab work may be completed at home and turned in by Friday of the week the assignment is given. Your lowest lab assignment grade will be dropped when calculating your final grade.
Lab assignments will be completed online through Sage Cloud. You do not need to purchase any software or equipment for the labs, and you are free to use your own computer if you prefer. To work at home, you'll only need an internet connection - no software needs to be installed.
### Exams
We will have two midterm exams. You will be given an exact list of topics, along with a review sheet, approximately one week before each exam. Use of notes, textbooks, calculators, electronic devices, or other materials will not be permitted during an exam.
1. Midterm 1: Wednesday, March 9
2. Midterm 2: Wednesday, April 27
The final exam will be cumulative, and is scheduled for Friday, May 20, 8:00 - 10:00.
Your final grade will be calculated as follows (a short worked example follows the list):
• Average of midterm exams: 30%
• Average of homework assignments: 30%
• Average of lab assignments: 15%
• Final Exam: 25%
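For concreteness, the weighting can be checked in a couple of lines (the scores below are hypothetical):

```python
# Hypothetical component averages; weights from the list above.
midterms, homework, labs, final_exam = 85, 90, 95, 80

grade = 0.30 * midterms + 0.30 * homework + 0.15 * labs + 0.25 * final_exam
print(grade)  # 86.75, a B+ on the department scale
```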
Quiz and exam grades will be posted on Blackboard, so you can keep track of your progress at any time.
Your letter grade will be determined according to the department grading scale:
| F | D | D+ | C | C+ | B | B+ | A |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0-59 | 60-64 | 65-69 | 70-77 | 78-83 | 84-89 | 90-93 | 94-100 |
### Course Schedule
This schedule will be kept up to date as assignments are given, or if we get behind schedule. Exam dates will not be changed as long as the University is open on those days.
| Week | Date | Topic | Noteworthy Events |
| --- | --- | --- | --- |
| 1 | Feb 3 | Class Introduction | |
| 1 | Feb 5 | Vectors and Linear Combinations | |
| 2 | Feb 8 | Lengths and Dot Products | |
| 2 | Feb 10 | Matrices | |
| 2 | Feb 12 | Vectors and Linear Equations | |
| 3 | Feb 15 | Elimination | |
| 3 | Feb 17 | Elimination | |
| 3 | Feb 19 | Rules for Matrix Operations | |
| 4 | Feb 22 | Inverse Matrices | |
| 4 | Feb 24 | Inverse Matrices | |
| 4 | Feb 26 | Transposes & Permutations | |
| 5 | Feb 29 | Spaces of Vectors | |
| 5 | Mar 2 | Solutions of $$Ax=0$$ | |
| 5 | Mar 4 | Rank & Reduced Echelon Form | |
| 6 | Mar 7 | Review | |
| 6 | Mar 9 | Midterm I | |
| 6 | Mar 11 | Solutions of $$Ax=b$$ | |
| 7 | Mar 14 | Solutions of $$Ax=b$$ | |
| 7 | Mar 16 | Solutions of $$Ax=b$$ | |
| 7 | Mar 18 | Independence, Basis, Dimension | |
| 8 | Mar 21-25 | Easter Break | |
| 9 | Mar 28 | Easter Break | |
| 9 | Mar 30 | Independence, Basis, Dimension | |
| 9 | Apr 1 | Orthogonality & Projections | MAA Section Meeting (April 1-2, Gannon U) |
| 10 | Apr 4 | Determinants | |
| 10 | Apr 6 | Determinants | |
| 10 | Apr 8 | Cramer's Rule | |
| 11 | Apr 11 | Eigenvalues & Eigenvectors | |
| 11 | Apr 13 | Eigenvalues & Eigenvectors | |
| 11 | Apr 15 | Diagonalization | |
| 12 | Apr 18 | Diagonalization | |
| 12 | Apr 20 | Similar Matrices | |
| 12 | Apr 22 | Break | |
| 13 | Apr 25 | Review | |
| 13 | Apr 27 | Midterm II | |
| 13 | Apr 29 | SVD | |
| 14 | May 2 | Markov Matrices | |
| 14 | May 4 | Linear Transformations | |
| 14 | May 6 | Linear Transformations | |
| 15 | May 9 | Linear Transformations | |
| 15 | May 11 | Linear Transformations | |
| 15 | May 13 | Review | |
| 16 | May 16 | Reading Day | |
| 16 | May 18 | | |
| 16 | May 20 | Final Exam, 8:00 - 10:00 | |
### Learning Differences
In keeping with college policy, any student with a disability who needs academic accommodations must call the Learning Differences Program secretary at 824-3017 to arrange a confidential appointment with the director of the Learning Differences Program during the first week of classes.
### Support of the Mercy Mission
This course supports the mission of Mercyhurst University by creating students who are intellectually creative. Students will foster this creativity by: applying critical thinking and qualitative reasoning techniques to new disciplines; developing, analyzing, and synthesizing scientific ideas; and engaging in innovative problem solving strategies.
|
2017-04-25 02:50:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32509374618530273, "perplexity": 2502.9187581681203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120092.26/warc/CC-MAIN-20170423031200-00157-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-options/nologo-compiler-option
|
-nologo (C# Compiler Options)
The -nologo option suppresses display of the sign-on banner when the compiler starts up and display of informational messages during compiling.
Syntax
-nologo
Remarks
This option is not available from within the Visual Studio development environment and cannot be changed programmatically; it is only available when compiling from the command line.
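For example, from a command prompt (Program.cs is an illustrative file name):

```
csc -nologo Program.cs
```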
|
2019-09-21 22:37:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20115375518798828, "perplexity": 10474.097990627417}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574665.79/warc/CC-MAIN-20190921211246-20190921233246-00451.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/eect.2014.3.331
|
Article Contents
Article Contents
# Shape optimization for non-Newtonian fluids in time-dependent domains
• We study the model of an incompressible non-Newtonian fluid in a moving domain. The domain is defined as a tube built by the velocity field $\mathbf{V}$ and described by the family of domains $\Omega_t$ parametrized by $t\in[0,T]$. A new shape optimization problem associated with the model is defined for a family of initial domains $\Omega_0$ and admissible velocity vector fields. It is shown that such shape optimization problems are well posed under the classical conditions on compactness of the admissible shapes [18]. For the state problem, we prove the existence of weak solutions and their continuity with respect to perturbations of the time-dependent boundary, provided that the power-law index $r\ge11/5$.
Mathematics Subject Classification: Primary: 35Q30, 76D55; Secondary: 35R37.
References
[1] N. Arada, Regularity of flows and optimal control of shear-thinning fluids, Nonlinear Analysis: Theory, Methods & Applications, 89 (2013), 81-94. doi: 10.1016/j.na.2013.04.015.
[2] V. Barbu, I. Lasiecka and R. Triggiani, Tangential boundary stabilization of Navier-Stokes equations, Mem. Amer. Math. Soc., 181 (2006), x+128pp. doi: 10.1090/memo/0852.
[3] M. C. Delfour and J.-P. Zolésio, Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization, Second edition, Advances in Design and Control, 22, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011. doi: 10.1137/1.9780898719826.
[4] M. C. Delfour and J.-P. Zolésio, Oriented distance function and its evolution equation for initial sets with thin boundary, SIAM Journal on Control and Optimization, 42 (2004), 2286-2304. doi: 10.1137/S0363012902411945.
[5] L. Diening, M. Růžička and J. Wolf, Existence of weak solutions for unsteady motions of generalized Newtonian fluids, Annali della Scuola Normale Superiore di Pisa. Classe di Scienze, 9 (2010), 1-46.
[6] R. Dziri and J.-P. Zolésio, Dynamical shape control in non-cylindrical Navier-Stokes equations, Journal of Convex Analysis, 6 (1999), 293-318.
[7] E. Feireisl, O. Kreml, Š. Nečasová, J. Neustupa and J. Stebel, Weak solutions to the barotropic Navier-Stokes system with slip boundary conditions in time-dependent domains, Journal of Differential Equations, 254 (2013), 125-140. doi: 10.1016/j.jde.2012.08.019.
[8] E. Feireisl, J. Neustupa and J. Stebel, Convergence of a Brinkman-type penalization for compressible fluid flows, Journal of Differential Equations, 250 (2011), 596-606. doi: 10.1016/j.jde.2010.09.031.
[9] J. Frehse, J. Málek and M. Steinhauer, On existence results for fluids with shear dependent viscosity-unsteady flows, Partial Differential Equations, Theory and Numerical Solution, 406 (2000), 121-129.
[10] J. Frehse, J. Málek and M. Steinhauer, On analysis of steady flows of fluids with shear-dependent viscosity based on the Lipschitz truncation method, SIAM Journal on Mathematical Analysis, 34 (2003), 1064-1083. doi: 10.1137/S0036141002410988.
[11] O. A. Ladyzhenskaya, New equations for the description of the motions of viscous incompressible fluids, and global solvability for their boundary value problems, Trudy Mat. Inst. Steklov., 102 (1967), 85-104.
[12] O. A. Ladyzhenskaya, The Mathematical Theory of Viscous Incompressible Flow, Second English edition, revised and enlarged, translated from the Russian by Richard A. Silverman and John Chu, Mathematics and its Applications, Vol. 2, Gordon and Breach Science Publishers, New York, 1969.
[13] O. A. Ladyzhenskaya, Initial-boundary problem for Navier-Stokes equations in domains with time-varying boundaries, Zapiski Nauchnykh Seminarov LOMI, 11 (1968), 97-128.
[14] J.-L. Lions, Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires, Dunod; Gauthier-Villars, 1969.
[15] J. Málek and K. R. Rajagopal, Mathematical issues concerning the Navier-Stokes equations and some of its generalizations, in Evolutionary Equations, Vol. II, Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam, (2005), 371-459.
[16] M. Moubachir and J.-P. Zolésio, Moving Shape Analysis and Control, Vol. 277, Chapman & Hall/CRC, Boca Raton, FL, 2006. doi: 10.1201/9781420003246.
[17] J. Neustupa, Existence of a weak solution to the Navier-Stokes equation in a general time-varying domain by the Rothe method, Mathematical Methods in the Applied Sciences, 32 (2009), 653-683. doi: 10.1002/mma.1059.
[18] P. Plotnikov and J. Sokolowski, Compressible Navier-Stokes Equations, Theory and Shape Optimization, Springer-Verlag, Basel, 2012. doi: 10.1007/978-3-0348-0367-0.
[19] K. Rajagopal, Mechanics of non-Newtonian fluids, in Recent Developments in Theoretical Fluid Mechanics (Winter School, Paseky, 1992), Pitman Res. Notes Math. Ser., 291, Longman Sci. Tech., Harlow, 1993, 129-162.
[20] W. Schowalter, Mechanics of Non-Newtonian Fluids, Pergamon Press, 1978.
[21] T. Slawig, Distributed control for a class of non-Newtonian fluids, Journal of Differential Equations, 219 (2005), 116-143. doi: 10.1016/j.jde.2005.03.009.
[22] J. Sokołowski and J. Stebel, Shape sensitivity analysis of time-dependent flows of incompressible non-Newtonian fluids, Control and Cybernetics, 40 (2011), 1077-1097.
[23] J. Sokołowski and J. Stebel, Shape sensitivity analysis of incompressible non-Newtonian fluids, in System Modeling and Optimization, Springer, 2013, 427-436.
[24] J. Sokołowski and J.-P. Zolésio, Introduction to Shape Optimization. Shape Sensitivity Analysis, Springer Series in Computational Mathematics, 16, Springer-Verlag, Berlin, 1992.
[25] C. Truesdell, W. Noll and S. Antman, The Non-linear Field Theories of Mechanics, Springer-Verlag, 2004. doi: 10.1007/978-3-662-10388-3.
[26] D. Wachsmuth and T. Roubíček, Optimal control of planar flow of incompressible non-Newtonian fluids, Z. Anal. Anwend., 29 (2010), 351-376. doi: 10.4171/ZAA/1412.
Using HPKE to Encrypt Request Payloads
Post Syndicated from Miguel de Moura original https://blog.cloudflare.com/using-hpke-to-encrypt-request-payloads/
The Managed Rules team was recently given the task of allowing Enterprise users to debug Firewall Rules by viewing the part of a request that matched the rule. This makes it easier to determine what specific attacks a rule is stopping or why a request was a false positive, and what possible refinements of a rule could improve it.
The fundamental problem, though, was how to securely store this debugging data as it may contain sensitive data such as personally identifiable information from submissions, cookies, and other parts of the request. We needed to store this data in such a way that only the user who is allowed to access it can do so. Even Cloudflare shouldn’t be able to see the data, following our philosophy that any personally identifiable information that passes through our network is a toxic asset.
We needed to encrypt the data in such a way that we could allow the user to decrypt it, but not Cloudflare. That calls for public key encryption.
Now we needed to decide on which encryption algorithm to use. We came up with some questions to help us evaluate which one to use:
• What requirements do we have for the algorithm?
• What language do we implement it in?
• How do we make this as secure as possible for users?
Here’s how we made those decisions.
Algorithm Requirements
While we knew we needed to use public key encryption, we also needed to keep an eye on performance. This led us to select Hybrid Public Key Encryption (HPKE) early on as it has a best-of-both-worlds approach to using symmetric as well as public-key cryptography to increase performance. While these best-of-both-worlds schemes aren’t new [1][2][3], HPKE aims to provide a single, future-proof, robust, interoperable combination of a general key encapsulation mechanism and a symmetric encryption algorithm.
HPKE is an emerging standard developed by the Crypto Forum Research Group (CFRG), the research body that supports the development of Internet standards at the IETF. The CFRG produces specifications called RFCs (such as RFC 7748 for elliptic curves) that are then used in higher level protocols including two we talked about previously: ODoH and ECH. Cloudflare has long been a supporter of Internet standards, so HPKE was a natural choice to use for this feature. Additionally, HPKE was co-authored by one of our colleagues at Cloudflare.
How HPKE Works
HPKE combines an asymmetric algorithm such as elliptic curve Diffie-Hellman and a symmetric cipher such as AES. One of the upsides of HPKE is that the algorithms aren’t dictated to the implementer, but making a combination that’s provably secure and meets the developer’s intuitive notions of security is important. All too often developers reach for a scheme without carefully understanding what it does, resulting in security vulnerabilities.
HPKE solves these problems by providing a high level of security in a generic manner and providing necessary hooks to tie messages to the context in which they are generated. This is the application of decades of research into the correct security notions and schemes.
HPKE is built in stages. First it turns a Diffie-Hellman key agreement into a Key Encapsulation Mechanism. A key encapsulation mechanism has two algorithms: Encap and Decap. The Encap algorithm creates a symmetric secret and wraps it in a public key, so that only the holder of the private key can unwrap it. An attacker with the encapsulation cannot recover the random key. Decap takes the encapsulation and the private key associated to the public key, and computes the same random key. This translation gives HPKE the flexibility to work almost unchanged with any kind of public key encryption or key agreement algorithm.
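To make Encap and Decap concrete, here is a minimal sketch in Go of a Diffie-Hellman based KEM over X25519. It is illustrative only: HPKE's real DHKEM additionally binds both public keys into the key derivation, and the "sketch KEM" info label below is an arbitrary placeholder.

package kemsketch

import (
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// Encap generates a fresh shared secret for the holder of the
// recipient public key pkR, returning the encapsulation (an
// ephemeral public key) alongside the secret.
func Encap(pkR *ecdh.PublicKey) (encap, secret []byte, err error) {
	eph, err := ecdh.X25519().GenerateKey(rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	dh, err := eph.ECDH(pkR)
	if err != nil {
		return nil, nil, err
	}
	secret, err = derive(dh)
	return eph.PublicKey().Bytes(), secret, err
}

// Decap recovers the same secret with the recipient private key skR.
func Decap(skR *ecdh.PrivateKey, encap []byte) ([]byte, error) {
	ephPub, err := ecdh.X25519().NewPublicKey(encap)
	if err != nil {
		return nil, err
	}
	dh, err := skR.ECDH(ephPub)
	if err != nil {
		return nil, err
	}
	return derive(dh)
}

// derive stretches the raw DH output into the final secret.
func derive(dh []byte) ([]byte, error) {
	out := make([]byte, 32)
	r := hkdf.New(sha256.New, dh, nil, []byte("sketch KEM"))
	_, err := io.ReadFull(r, out)
	return out, err
}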
HPKE mixes this key with an optional info argument, as well as information relating to the cryptographic parameters used by each side. This ensures that attackers cannot modify messages’ meaning by taking them out of context. A postcard marked “So happy to see you again soon” is ominous from the dentist and endearing from one’s grandmother.
The specification for HPKE is open and available on the IETF website. It is on its way to becoming an RFC after passing multiple rounds of review and analysis by cryptography experts at the CFRG. HPKE is already gaining adoption in IETF protocols like ODoH, ECH, and the new Messaging Layer Security (MLS) protocol. HPKE is also designed with the post-quantum future in mind, since it is built to work with any KEM, including all the NIST finalists for post-quantum public-key encryption.
Implementation Language
Once we had an encryption scheme selected, we needed to settle on an implementation. HPKE is still fairly new, so the libraries aren’t quite mature yet. There is a reference implementation, and we’re in the process of developing an implementation in Go as part of CIRCL. However, in the absence of a clear “go to” that is widely known to be the best, we decided to go with an implementation leveraging the same language already powering much of the Firewall code running at the Cloudflare edge – Rust.
Aside from this, the language benefits from features like native primitives, and crucially the ability to easily compile to WebAssembly (WASM).
As we mentioned in a previous blog post, customers are able to generate a key pair and decrypt payloads either from the dashboard UI or from a CLI. Instead of writing and maintaining two different codebases for these, we opted to reuse the same implementation across the edge component that encrypts the payloads and the UI and CLI that decrypt them. To achieve this we compile our library to target WASM so it can be used in the dashboard UI code that runs in the browser. While this approach may yield a slightly larger JavaScript bundle size and relatively small computational overhead, we found it preferable to spending a significant amount of time securely re-implementing HPKE using JavaScript WebCrypto primitives.
The HPKE implementation we decided on comes with the caveat of not yet being formally audited, so we performed our own internal security review. We analyzed the cryptography primitives being used and the corresponding libraries. Between the composition of said primitives and secure programming practices like correctly zeroing memory and safe usage of random number generators, we found no security issues.
Making It Secure For Users
To encrypt on behalf of users, we need them to provide us with a public key. To make this as easy as possible, we built a CLI tool along with the ability to do it right in the browser. Either option allows the user to generate a public/private key pair without needing to talk to Cloudflare servers at all.
In our API, we specifically do not accept the private key of the key pair — we don’t want it! We don’t need and don’t want to be able to decrypt the data we’re storing.
For the dashboard, once the user provides the private key for decryption, the key is held in a temporary JavaScript variable and used for the in-browser decryption. This allows the user to not constantly have to provide the key while browsing the Firewall event logs. The private key is also not persisted in any way in the browser, so any action that refreshes the page such as refreshing or navigating away will require the user to provide the key again. We believe this is an acceptable usability compromise for better security.
After deciding how to encrypt the data, we just had to figure out the rest of the feature: what data to encrypt, how to store and transmit it, and how to allow users to decrypt it.
When an HTTP request reaches the L7 Firewall, it is evaluated against a set of rulesets. Each of these rulesets contains several rules written in the wirefilter syntax.
An example of one such rule would be:
http.request.version eq "HTTP/1.1"
and
(
http.request.uri.path matches "\n+."
or
http.request.uri.query matches "\x00+."
)
This expression evaluates to a boolean “true” for HTTP/1.1 requests that either contain one or more newlines followed by a character in the request path or one or more NULL bytes followed by a character in the query string.
Say we had the following request that would match the rule above:
GET /cms/%0Aadmin?action=%00post HTTP/1.1
Host: example.com
If matched data logging is enabled, the rules that match would be executed again in a special context that tags all fields that are accessed during execution. We do this second execution because this tagging adds a noticeable computational overhead, and since the vast majority of requests don’t trigger a rule at all we would be unnecessarily adding overhead to each request. Requests that do match any rules will only match a few rules as well, so we don’t need to re-execute a large portion of the ruleset.
You may notice that although http.request.uri.query matches "\x00+." evaluates to true for this request, it won’t be executed, because the expression short-circuits with the first or condition that also matches. This results in only http.request.version and http.request.uri.path being tagged as accessed:
http.request.version -> HTTP/1.1
http.request.uri.path -> /cms/%0Aadmin
Having gathered the fields that were accessed, the Firewall engine does some post-processing: removing fields that are a subset of others (e.g., the query string and the full URI), or truncating fields that are beyond a certain character length.
Finally, these get serialized as JSON, encrypted with the customer’s public key, serialized again as a set of bytes, and prefixed with a version number should we need to change/update it in the future. To simplify consumption of these blobs, our APIs display a base64-encoded version of the bytes.
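As a rough illustration, here is a sketch in Go of producing such a blob. The single version byte and the field order are assumptions made for illustration, not the production wire format.

package blobsketch

import "encoding/base64"

// EncodeBlob prefixes the encrypted payload with a one-byte version
// number and base64-encodes the result for display in the API.
// The layout is a hypothetical example, not the actual format.
func EncodeBlob(version byte, ciphertext []byte) string {
	blob := append([]byte{version}, ciphertext...)
	return base64.StdEncoding.EncodeToString(blob)
}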
Now that we have encrypted the data at the edge and persisted it in ClickHouse, we need to allow users to decrypt it. As part of the setup of turning this feature on, users generated a key-pair: the public key which was used to encrypt the payloads and a private key which is used to decrypt them. Decryption is done completely offline via either the command line using cloudflare/matched-data-cli:
$ MATCHED_DATA=AkjQDktMX4FudxeQhwa0UPNezhkgLAUbkglNQ8XVCHYqPgAAAAAAAACox6cEwqWQpFVE2gCFyOFsSdm2hCoE0/oWKXZJGa5UPd5mWSRxNctuXNtU32hcYNR/azLjsGO668Jwk+qCdFvmKjEqEMJgI+fvhwLQmm4=
$ matched-data-cli decrypt -d $MATCHED_DATA -k $PRIVATE_KEY
Or via the dashboard UI.
Since our CLI tool is open-source and HPKE is interoperable, it can also be used in other tooling as part of a user’s logging pipeline, for example in security information and event management (SIEM) software.
Conclusion
This was a team effort with help from our Research and Security teams throughout the process. We relied on them for recommendations on how best to evaluate the algorithms as well as vetting the libraries we wanted to use.
We’re very pleased with how HPKE has worked out for us from an ease-of-implementation and performance standpoint. It was also an easy choice for us to make due to its impending standardization and best-of-both-worlds approach to security.
Round 2 post-quantum TLS is now supported in AWS KMS
Post Syndicated from Alex Weibel original https://aws.amazon.com/blogs/security/round-2-post-quantum-tls-is-now-supported-in-aws-kms/
AWS Key Management Service (AWS KMS) now supports three new hybrid post-quantum key exchange algorithms for the Transport Layer Security (TLS) 1.2 encryption protocol that’s used when connecting to AWS KMS API endpoints. These new hybrid post-quantum algorithms combine the proven security of a classical key exchange with the potential quantum-safe properties of new post-quantum key exchanges undergoing evaluation for standardization. The fastest of these algorithms adds approximately 0.3 milliseconds of overhead compared to a classical TLS handshake. The new post-quantum key exchange algorithms added are Round 2 versions of Kyber, Bit Flipping Key Encapsulation (BIKE), and Supersingular Isogeny Key Encapsulation (SIKE). Each organization has submitted their algorithms to the National Institute of Standards and Technology (NIST) as part of NIST’s post-quantum cryptography standardization process. This process spans several rounds of evaluation over multiple years, and is likely to continue beyond 2021.
In our previous hybrid post-quantum TLS blog post, we announced that AWS KMS had launched hybrid post-quantum TLS 1.2 with Round 1 versions of BIKE and SIKE. The Round 1 post-quantum algorithms are still supported by AWS KMS, but at a lower priority than the Round 2 algorithms. You can choose to upgrade your client to enable negotiation of Round 2 algorithms.
Why post-quantum TLS is important
A large-scale quantum computer would be able to break the current public-key cryptography that’s used for key exchange in classical TLS connections. While a large-scale quantum computer isn’t available today, it’s still important to think about and plan for your long-term security needs. TLS traffic using classical algorithms recorded today could be decrypted by a large-scale quantum computer in the future. If you’re developing applications that rely on the long-term confidentiality of data passed over a TLS connection, you should plan your migration to post-quantum cryptography before a large-scale quantum computer could appear within the window in which your data must remain secure. As an example, this means that if you believe that a large-scale quantum computer is 25 years away, and your data must be secure for 20 years, you should migrate to post-quantum schemes within the next 5 years. AWS is working to prepare for this future, and we want you to be prepared too.
We’re offering this feature now instead of waiting for standardization efforts to be complete so you have a way to measure the potential performance impact to your applications. Offering this feature now also gives you the protection afforded by the proposed post-quantum schemes today. While we believe that the use of this feature raises the already high security bar for connecting to AWS KMS endpoints, these new cipher suites will impact bandwidth utilization and latency. Using these new algorithms could also create connection failures for intermediate systems that proxy TLS connections. We’d like to get feedback from you on the effectiveness of our implementation, or any issues found, so we can improve it over time.
Hybrid post-quantum TLS 1.2
Hybrid post-quantum TLS is a feature that provides the security protections of both the classical and post-quantum key exchange algorithms in a single TLS handshake. Figure 1 shows the differences in the connection secret derivation process between classical and hybrid post-quantum TLS 1.2. Hybrid post-quantum TLS 1.2 has three major differences from classical TLS 1.2:
• The negotiated post-quantum key is appended to the ECDHE key before being used as the hash-based message authentication code (HMAC) key.
• The text hybrid in its ASCII representation is prepended to the beginning of the HMAC message.
• The entire client key exchange message from the TLS handshake is appended to the end of the HMAC message.
Figure 1: Differences in the connection secret derivation process between classical and hybrid post-quantum TLS 1.2
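Here is a sketch in Go of that derivation; the label and framing are simplified assumptions, not the exact s2n construction.

package hybridsketch

import (
	"crypto/hmac"
	"crypto/sha256"
)

// HybridSecret combines both key exchanges as described above:
// the post-quantum key is appended to the ECDHE key to form the
// HMAC key, the text "hybrid" is prepended to the HMAC message,
// and the client key exchange message is appended to its end.
func HybridSecret(ecdheKey, pqKey, prfMessage, clientKeyExchange []byte) []byte {
	key := append(append([]byte{}, ecdheKey...), pqKey...)
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte("hybrid"))     // prepended label
	mac.Write(prfMessage)           // the classical PRF label and seed
	mac.Write(clientKeyExchange)    // appended handshake message
	return mac.Sum(nil)
}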
Some background on post-quantum TLS
Today, all requests to AWS KMS use TLS with key exchange algorithms that provide perfect forward secrecy, based on one of the classical ECDHE or FFDHE schemes.
While existing FFDHE and ECDHE schemes use perfect forward secrecy to protect against the compromise of the server’s long-term secret key, these schemes don’t protect against large-scale quantum computers. In the future, a sufficiently capable large-scale quantum computer could run Shor’s Algorithm to recover the TLS session key of a recorded classical session, and thereby gain access to the data inside. Using a post-quantum key exchange algorithm during the TLS handshake protects against attacks from a large-scale quantum computer.
The possibility of large-scale quantum computing has spurred the development of new quantum-resistant cryptographic algorithms. NIST has started the process of standardizing post-quantum key encapsulation mechanisms (KEMs). A KEM is a type of key exchange that’s used to establish a shared symmetric key. AWS has chosen three NIST KEM submissions to adopt in our post-quantum efforts: Kyber, BIKE, and SIKE.
Hybrid mode ensures that the negotiated key is as strong as the weakest key agreement scheme. If one of the schemes is broken, the communications remain confidential. The Internet Engineering Task Force (IETF) Hybrid Post-Quantum Key Encapsulation Methods for Transport Layer Security 1.2 draft describes how to combine post-quantum KEMs with ECDHE to create new cipher suites for TLS 1.2.
These cipher suites use a hybrid key exchange that performs two independent key exchanges during the TLS handshake. The key exchange then cryptographically combines the keys from each into a single TLS session key. This strategy combines the proven security of a classical key exchange with the potential quantum-safe properties of new post-quantum key exchanges being analyzed by NIST.
The effect of hybrid post-quantum TLS on performance
Post-quantum cipher suites have a different performance profile and bandwidth usage from traditional cipher suites. AWS has measured bandwidth and latency across 2,000 TLS handshakes between an Amazon Elastic Compute Cloud (Amazon EC2) C5n.4xlarge client and the public AWS KMS endpoint, which were both in the us-west-2 Region. Your own performance characteristics might differ, and will depend on your environment, including your:
• Hardware: CPU speed and number of cores.
• Existing workloads: how often you call AWS KMS and what other work your application performs.
• Network: location and capacity.
The following graphs and table show latency measurements performed by AWS for all newly supported Round 2 post-quantum algorithms, in addition to the classical ECDHE key exchange algorithm currently used by most customers.
Figure 2 shows the latency differences of all hybrid post-quantum algorithms compared with classical ECDHE alone, and shows that compared to ECDHE alone, SIKE adds approximately 101 milliseconds of overhead, BIKE adds approximately 9.5 milliseconds of overhead, and Kyber adds approximately 0.3 milliseconds of overhead.
Figure 2: TLS handshake latency at varying percentiles for four key exchange algorithms
Figure 3 shows the latency differences between ECDHE with Kyber, and ECDHE alone. The addition of Kyber adds approximately 0.3 milliseconds of overhead.
Figure 3: TLS handshake latency at varying percentiles, with only top two performing key exchange algorithms
The following table shows the total amount of data (in bytes) needed to complete the TLS handshake for each cipher suite, the average latency, and latency at varying percentiles. All measurements were gathered from 2,000 TLS handshakes. The time was measured on the client from the start of the handshake until the handshake was completed, and includes all network transfer time. All connections used RSA authentication with a 2048-bit key, and ECDHE used the secp256r1 curve. All hybrid post-quantum tests used the NIST Round 2 versions. The Kyber test used the Kyber-512 parameter, the BIKE test used the BIKE-1 Level 1 parameter, and the SIKE test used the SIKEp434 parameter.
Item               Bandwidth (bytes)   Total handshakes   Average (ms)   p0 (ms)   p50 (ms)   p90 (ms)   p99 (ms)
ECDHE (classic)    3,574               2,000              3.08           2.07      3.02       3.95       4.71
ECDHE + Kyber R2   5,898               2,000              3.36           2.38      3.17       4.28       5.35
ECDHE + BIKE R2    12,456              2,000              14.91          11.59     14.16      18.27      23.58
ECDHE + SIKE R2    4,628               2,000              112.40         103.22    108.87     126.80     146.56
By default, the AWS SDK client performs a TLS handshake once to set up a new TLS connection, and then reuses that TLS connection for multiple requests. This means that the increased cost of a hybrid post-quantum TLS handshake is amortized over multiple requests sent over the TLS connection. For example, a connection that carries 100 requests spreads Kyber’s roughly 0.3 ms of extra handshake time across all of them, about 3 microseconds per request. You should take the amortization into account when evaluating the overall additional cost of using post-quantum algorithms; otherwise performance data could be skewed.
AWS KMS has chosen Kyber Round 2 to be KMS’s highest prioritized post-quantum algorithm, with BIKE Round 2, and SIKE Round 2 next in priority order for post-quantum algorithms. This is because Kyber’s performance is closest to the classical ECDHE performance that most AWS KMS customers are using today and are accustomed to.
How to use hybrid post-quantum cipher suites
To use the post-quantum cipher suites with AWS KMS, you need the preview release of the AWS Common Runtime (CRT) HTTP client for the AWS SDK for Java 2.x. Also, you will need to configure the AWS CRT HTTP client to use the s2n post-quantum hybrid cipher suites. Post-quantum TLS for AWS KMS is available in all AWS Regions except for AWS GovCloud (US-East), AWS GovCloud (US-West), AWS China (Beijing) Region operated by Beijing Sinnet Technology Co. Ltd (“Sinnet”), and AWS China (Ningxia) Region operated by Ningxia Western Cloud Data Technology Co. Ltd. (“NWCD”). Since NIST has not yet standardized post-quantum cryptography, connections that require Federal Information Processing Standards (FIPS) compliance cannot use the hybrid key exchange. For example, kms.<region>.amazonaws.com supports the use of post-quantum cipher suites, while kms-fips.<region>.amazonaws.com does not.
1. If you’re using the AWS SDK for Java 2.x, you must add the preview release of the AWS Common Runtime client to your Maven dependencies.
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>aws-crt-client</artifactId>
<version>2.14.13-PREVIEW</version>
</dependency>
2. You then must configure the new SDK and cipher suite in the existing initialization code of your application:
// Fail fast if the local s2n build does not support the post-quantum cipher preference.
if (!TLS_CIPHER_PREF_KMS_PQ_TLSv1_0_2020_07.isSupported()) {
    throw new RuntimeException("Post Quantum Ciphers not supported on this Platform");
}
// Build an async HTTP client that negotiates the hybrid post-quantum cipher suites.
SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
        .tlsCipherPreference(TLS_CIPHER_PREF_KMS_PQ_TLSv1_0_2020_07)
        .build();
// Route all AWS KMS calls through that client.
KmsAsyncClient kms = KmsAsyncClient.builder()
        .httpClient(awsCrtHttpClient)
        .build();
ListKeysResponse response = kms.listKeys().get();
Now, all connections made to AWS KMS in supported Regions will use the new hybrid post-quantum cipher suites! To see a complete example of everything set up, check out the example application here.
Things to try
Here are some ideas about how to use this post-quantum-enabled client:
• Run load tests and benchmarks. These new cipher suites perform differently than traditional key exchange algorithms. You might need to adjust your connection timeouts to allow for the longer handshake times or, if you’re running inside an AWS Lambda function, extend the execution timeout setting.
• Try connecting from different locations. Depending on the network path your request takes, you might discover that intermediate hosts, proxies, or firewalls with deep packet inspection (DPI) block the request. This could be due to the new cipher suites in the ClientHello or the larger key exchange messages. If this is the case, you might need to work with your security team or IT administrators to update the relevant configuration to unblock the new TLS cipher suites. We’d like to hear from you about how your infrastructure interacts with this new variant of TLS traffic. If you have questions or feedback, please start a new thread on the AWS KMS discussion forum.
Conclusion
In this blog post, I announced support for Round 2 hybrid post-quantum algorithms in AWS KMS, and showed you how to begin experimenting with hybrid post-quantum key exchange algorithms for TLS when connecting to AWS KMS endpoints.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
Alex Weibel
Alex is a Senior Software Engineer on the AWS Crypto Algorithms team. He’s one of the maintainers for Amazon’s TLS Library s2n. Previously, Alex worked on TLS termination and request proxying for S3 and the Elastic Load Balancing Service developing new features for customers. Alex holds a Bachelor of Science degree in Computer Science from the University of Texas at Austin.
Fall 2020 RPKI Update
Post Syndicated from Louis Poinsignon original https://blog.cloudflare.com/rpki-2020-fall-update/
The Internet is a network of networks. In order to find the path between two points and exchange data, the network devices rely on the information from their peers. This information consists of IP addresses and Autonomous Systems (AS) which announce the addresses using Border Gateway Protocol (BGP).
One problem arises from this design: what protects against a malevolent peer who decides to announce incorrect information? The damage caused by route hijacks can be major.
Routing Public Key Infrastructure (RPKI) is a framework created in 2008. Its goal is to provide a source of truth for Internet Resources (IP addresses) and ASes in cryptographically signed records called Route Origin Objects (ROA).
Recently, we’ve seen the significant threshold of two hundred thousand ROAs being passed. This represents a big step in making the Internet more secure against accidental and deliberate BGP tampering.
We have talked about RPKI in the past but we thought it would be a good time for an update.
In a more technical context, the RPKI framework consists of two parts:
• IP addresses need to be cryptographically signed by their owners in a database managed by a Trust Anchor: Afrinic, APNIC, ARIN, LACNIC and RIPE. Those five organizations are in charge of allocating Internet resources. The ROA indicates which Network Operator is allowed to announce the addresses using BGP.
• Network operators download the list of ROAs, perform the cryptographic checks and then apply filters on the prefixes they receive: this is called BGP Origin Validation.
The “Is BGP Safe Yet” website
The launch of the website isbgpsafeyet.com to test if your ISP correctly performs BGP Origin Validation was a success. Since launch, it has been visited more than five million times from over 223 countries and 13,000 unique networks (20% of the entire Internet), generating half a million BGP Origin Validation tests.
Many providers subsequently indicated on social media (for example, here or here) that they had an RPKI deployment in the works. This increase in Origin Validation by networks is increasing the security of the Internet globally.
The site’s test for Origin Validation consists of queries toward two addresses, one of which is behind an RPKI invalid prefix and the other behind an RPKI valid prefix. If the query towards the invalid succeeds, the test fails as the ISP does not implement Origin Validation. We counted the number of queries that failed to reach invalid.cloudflare.com. This also included a few thousand RIPE Atlas tests that were started by Cloudflare and various contributors, providing coverage for smaller networks.
Every month since launch we’ve seen that around 10 to 20 networks are deploying RPKI Origin Validation. Among the major providers we can build the following table:
Month Networks
August Swisscom (Switzerland), Salt (Switzerland)
June Colocrossing (USA), Get Norway (Norway), Vocus (Australia), Hurricane Electric (Worldwide), Cogent (Worldwide)
May Sengked Fiber (Indonesia), Online.net (France), WebAfrica Networks (South Africa), CableNet (Cyprus), IDnet (Indonesia), Worldstream (Netherlands), GTT (Worldwide)
With the help of many contributors, we have compiled a list of network operators and public statements at the top of the isbgpsafeyet.com page.
We excluded providers that manually blocked the traffic towards the prefix instead of using RPKI. Among the techniques we see are firewall filtering and manual prefix rejection. The filtering is often propagated to other customer ISPs. In a unique case, an ISP generated a “more-specific” blackhole route that leaked to multiple peers over the Internet.
The deployment of RPKI by major transit providers, also known as Tier 1, such as Cogent, GTT, Hurricane Electric, NTT and Telia made many downstream networks more secure without them having to deploy validation software.
Overall, looking at the evolution of successful tests per ASN, we noticed a steady increase of 8% over recent months.
Furthermore, when we probed the entire IPv4 space this month, using a similar technique to the isbgpsafeyet.com test, many more networks were unable to reach an RPKI invalid prefix compared to the same period last year. This confirms an increase in RPKI Origin Validation deployment across all network operators. The picture below shows the IPv4 space behind a network with RPKI Origin Validation enabled in yellow and the active space in blue. It uses a Hilbert curve to efficiently plot IP addresses: for example, one /20 prefix (4,096 IPs) is a pixel, and a /16 prefix (65,536 IPs) forms a 4×4-pixel square.
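The prefix-to-pixel mapping uses the textbook Hilbert curve conversion. Here is a sketch in Go (the actual plotting code may differ):

package hilbertsketch

// D2XY converts a distance d along the Hilbert curve into (x, y)
// coordinates on an n×n grid, where n is a power of two. Mapping
// each /20 prefix to one d keeps numerically adjacent prefixes
// visually adjacent on the plot.
func D2XY(n, d int) (x, y int) {
	t := d
	for s := 1; s < n; s *= 2 {
		rx := 1 & (t / 2)
		ry := 1 & (t ^ rx)
		// Rotate the quadrant so the curve stays continuous.
		if ry == 0 {
			if rx == 1 {
				x, y = s-1-x, s-1-y
			}
			x, y = y, x
		}
		x += s * rx
		y += s * ry
		t /= 4
	}
	return
}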
The more the yellow spreads, the safer the Internet becomes.
What does it mean exactly? If you were hijacking a prefix, the users behind the yellow space would likely not be affected. This also applies if you mis-sign your prefixes: you would not be able to reach the services or users behind the yellow space. Once RPKI is enabled everywhere, there will only be yellow squares.
Progression of signed prefixes
Owners of IP addresses indicate the networks allowed to announce them. They do this by signing prefixes: they create Route Origin Objects (ROA). As of today, there are more than 200,000 ROAs. The distribution shows that the RIPE region is still leading in ROA count, then followed by the APNIC region.
2020 started with 172,000 records and the count is getting close to 200,000 at the beginning of November, approximately a quarter of all the Internet routes. Since last year, the database of ROAs grew by more than 70 percent, from 100,000 records, an average pace of 5% every month.
On the following graph of unique ROAs count per day, we can see two points that were followed by a change in ROA creation rate: 140/day, then 231/day, and since August, 351 new ROAs per day.
It is not yet clear what caused the increase in August.
Free services and software
In 2018 and 2019, Cloudflare was impacted by BGP route hijacks. Both could have been avoided with RPKI. Not long after the first incident, we started signing prefixes and developing RPKI software. It was necessary to make BGP safer and we wanted to do more than talk about it. But we also needed enough networks to be deploying RPKI as well. By making deployment easier for everyone, we hoped to increase adoption.
The following is a reminder of what we built over the years around RPKI and how it grew.
OctoRPKI is Cloudflare’s open source RPKI validation software. It periodically generates a JSON document of validated prefixes that we pass on to our routers using GoRTR. It generates most of the data behind the graphs here.
The latest version, 1.2.0, of OctoRPKI was released at the end of October. It implements important security fixes, better memory management and extended logging. This is the first validator to provide detailed information around cryptographically invalid records into Sentry and performance data in distributed tracing tools.
GoRTR remains heavily used in production, including by transit providers. It can natively connect to other validators like rpki-client.
When we released our public rpki.json endpoint in early 2019, the idea was to enable anyone to see what Cloudflare was filtering.
The file is also used as a bootstrap by GoRTR, so that users can test a deployment. The file is cached on more than 200 data centers, ensuring quick and secure delivery of a list of valid prefixes, making RPKI more accessible for smaller networks and developers.
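Here is a sketch in Go of consuming the file. The endpoint URL and the JSON field names are assumptions about the published format rather than a documented contract.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// roa mirrors one validated record; the field names are assumed.
type roa struct {
	Prefix    string `json:"prefix"`
	MaxLength int    `json:"maxLength"`
	ASN       string `json:"asn"`
	TA        string `json:"ta"` // trust anchor that issued the record
}

type export struct {
	ROAs []roa `json:"roas"`
}

func main() {
	resp, err := http.Get("https://rpki.cloudflare.com/rpki.json") // assumed endpoint
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var e export
	if err := json.NewDecoder(resp.Body).Decode(&e); err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d validated ROAs\n", len(e.ROAs))
}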
Between March 2019 and November 2020, the number of queries more than doubled and there are five times more networks querying this file.
The growth of queries follows approximately the rate of ROA creation (~5% per month).
A public RTR server is also available on rtr.rpki.cloudflare.com. It includes a plaintext endpoint on port 8282 and an SSH endpoint on port 8283. This allows us to test new versions of GoRTR before release.
Later in 2019, we also built a public dashboard where you can see in-depth RPKI validation. With a GraphQL API, you can now explore the validation data, test a list of prefixes, or see the status of the current routing table.
Currently, the API is used by BGPalerter, an open-source tool that detects routing issues (including hijacks!) from a stream of BGP updates.
Additionally, starting in November, you can access the historical data from May 2019. Data is computed daily and contains the unique records. The team behind the dashboard worked hard to provide a fast and accurate visualization of the daily ROA changes and the volumes of files changed over the day.
The future
We believe RPKI is going to continue growing, and we would like to thank the hundreds of network engineers around the world who are making the Internet routing more secure by deploying RPKI.
25% of routes are signed and 20% of the Internet is doing origin validation, and those numbers grow every day. We believe BGP will be safer before reaching 100% deployment; for instance, once the remaining transit providers enable Origin Validation, it is unlikely a BGP hijack will make it to the front page of world news outlets.
While difficult to quantify, we believe that critical mass of protected resources will be reached in late 2021.
We will keep improving the tooling; OctoRPKI and GoRTR are open-source and we welcome contributions. In the near future, we plan on releasing a packaged version of GoRTR that can be directly installed on certain routers. Stay tuned!
NTS is now an RFC
Post Syndicated from Watson Ladd original https://blog.cloudflare.com/nts-is-now-rfc/
Earlier today the document describing Network Time Security for NTP officially became RFC 8915. This means that Network Time Security (NTS) is officially part of the collection of protocols that makes the Internet work. We’ve changed our time service to use the officially assigned port of 4460 for NTS key exchange, so you can use our service with ease. This is big progress towards securing a ubiquitous Internet protocol.
Over the past months we’ve seen many users of our time service, but very few using Network Time Security. This leaves computers vulnerable to attacks that imitate the server they use to obtain time via NTP. Part of the problem was the lack of available NTP daemons that supported NTS. That problem is now solved: chrony and ntpsec both support NTS.
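For example, with chrony 4.0 or later, enabling NTS against our service takes one line in chrony.conf (the nts option turns on NTS key exchange, which defaults to the newly assigned port 4460):

server time.cloudflare.com iburst nts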
Time underlies the security of many of the protocols such as TLS that we rely on to secure our online lives. Without accurate time, there is no way to determine whether or not credentials have expired. The absence of an easily deployed secure time protocol has been a problem for Internet security.
Without NTS or symmetric key authentication there is no guarantee that your computer is actually talking NTP with the computer you think it is. Symmetric key authentication is difficult and painful to set up, but until recently has been the only secure and standardized mechanism for authenticating NTP. NTS uses the work that goes into the Web Public Key Infrastructure to authenticate NTP servers and ensure that when you set up your computer to talk to time.cloudflare.com, that’s the server your computer gets the time from.
Our involvement in developing and promoting NTS included making a specialized server and releasing the source code, participation in the standardization process, and much working with implementers to hunt down bugs. We also set up our time service with support for NTS from the beginning, and it was a useful resource for implementers to test interoperability.
When Cloudflare launched support for TLS 1.3, browsers were actively updating, and so deployment quickly took hold. However, the long tail of legacy installs and extended support releases slowed adoption. Similarly, until Let’s Encrypt made encryption easy for web servers, most web traffic was not encrypted.
By contrast, ssh quickly displaced telnet as the way to access remote systems: the security benefits were substantial, and the experience was better. Adoption of protocols is slow, but when there is a real security need it can be much faster. NTS is a real security improvement that is vital to adopt. We’re proud to continue making the Internet a better place by supporting secure protocols.
We hope that operating systems will incorporate NTS support and TLS 1.3 in their supplied NTP daemons. We also urge administrators to deploy NTS as quickly as possible, and NTP server operators to adopt NTS. With Let’s Encrypt-provided certificates this is simpler than it has been in the past.
We’re continuing our work in this area with the continued development of the Roughtime protocol for even better security as well as engagement with the standardization process to help develop the future of Internet time.
Cloudflare is willing to allow any device to point to time.cloudflare.com and supports NTS. Just as our Universal SSL made it easy for any website to get the security benefits of TLS, our time service makes it easy for any computer to get the benefits of secure time.
Internship Experience: Cryptography Engineer
Post Syndicated from Watson Ladd original https://blog.cloudflare.com/internship-experience-cryptography-engineer/
Back in the summer of 2017 I was an intern at Cloudflare. During the scholastic year I was a graduate student working on automorphic forms and computational Langlands at Berkeley: a part of number theory with deep connections to representation theory, aimed at uncovering some of the deepest facts about number fields. I had also gotten involved in Internet standardization and security research, but much more on the applied side.
While I had published papers in computer security and had coded for my dissertation, building and deploying new protocols to production systems was going to be new. Going from the academic environment of little day to day supervision to the industrial one of more direction; from greenfield code that would only ever be run by one person to large projects that had to be understandable by a team; from goals measured in years or even decades, to goals measured in days, weeks, or quarters; these transitions would present some challenges.
Cloudflare at that stage was a very different company from what it is now. Entire products and offices simply did not exist. Argo, now a mainstay of our offering for sophisticated companies, was slowly emerging. Access, which has been helping safeguard employees working from home these past weeks, was then experiencing teething issues. Workers was being extensively developed for launch that autumn. Quicksilver was still in the slow stages of replacing KyotoTycoon. Lisbon wasn’t on the map, and Austin was very new.
Day 1
My first job was to get my laptop working. Quickly I discovered that despite the promise of using either Mac or Linux, only Mac was supported as a local development environment. Most Linux users would take a good part of a month to tweak all the settings and get the local development environment up. I didn’t have months. After three days, I broke down and got a Mac.
Needless to say I asked for some help. Like a drowning man in quicksand, I managed to attract three engineers to this near-insoluble problem of the edge dev stack, and after days of hacking on it, fixing problems that had long been ignored, we got it working well enough to test a few things. That development environment is now gone, replaced with one built on Kubernetes VMs, and it works much better that way. When things work on your machine, you can now send everyone your machine.
Speeding up
With setup complete enough, it was on to the problem we needed to solve. Our goal was to implement a set of three interrelated Internet drafts, one defining secondary certificates, one defining external authentication with TLS certificates, and a third permitting servers to advertise the websites they could serve.
External authentication is a TLS feature that permits a server or a client on an already opened connection to prove its possession of the private key of another certificate. This proof of possession is tied to the TLS connection, avoiding attacks on bearer tokens caused by the lack of this binding.
Secondary certificates is an HTTP/2 feature enabling clients and servers to send certificates together with proof that they actually know the private key. This feature has many applications such as certificate-based authentication, but also enables us to prove that we are permitted to serve the websites we claim to serve.
The last draft was the HTTP/2 ORIGIN frame. The ORIGIN frame enables a website to advertise other sites that it could serve, permitting more connection reuse than allowed under the traditional rules. Connection reuse is an important part of browser performance as it avoids much of the setup of a connection.
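The frame payload, per what became RFC 8336, is simply a sequence of length-prefixed ASCII origins. Here is a sketch in Go of building one; note that it constructs only the payload, not the surrounding HTTP/2 frame header.

package originsketch

import "encoding/binary"

// Payload serializes an ORIGIN frame payload: each entry is a
// 16-bit big-endian length followed by the ASCII origin itself.
func Payload(origins []string) []byte {
	var out []byte
	for _, o := range origins {
		var l [2]byte
		binary.BigEndian.PutUint16(l[:], uint16(len(o)))
		out = append(out, l[:]...)
		out = append(out, o...)
	}
	return out
}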
These drafts solved an important problem for Cloudflare. Many resources such as JavaScript, CSS, and images hosted by one website can be used by others. Because Cloudflare proxies so many different websites, our servers have often cached these resources as well. Browsers though, do not know that these different websites are made faster by Cloudflare, and as a result they repeat all the steps to request the subresources again. This takes unnecessary time since there is an established and usable perfectly good connection already. If the browser could know this, it could use the connection again.
We could only solve this problem by getting browsers and the broader community of TLS implementers on board. Some of these drafts such as external authentication and secondary certificates had a broader set of motivations, such as getting certificate based authentication to work with HTTP/2 and TLS 1.3. All of these needs had to be addressed in the drafts, even if we were only implementing a subset of the uses.
Successful standards cover the use cases that are needed while being simple enough to implement and achieve interoperability. Implementation experience is essential to achieving this success: a standard with no implementations fails to incorporate hard won lessons. Computers are hard.
Prototype
My first goal was to set up a simple prototype to test the much more complex production implementation, as well as to share outside of Cloudflare so that others could have confidence in their implementations. But these drafts that had to be implemented in the prototype were incremental improvements to an already massive stack of TLS and HTTP standards.
I decided it would be easiest to build on top of an already existing implementation of TLS and HTTP. I picked the Go standard library as my base: it’s simple, readable, and in a language I was already familiar with. There was already a basic demo showcasing support in Firefox for the ORIGIN frame, and it would be up to me to extend it.
Using that as my starting point I was able in 3 weeks to set up a demonstration server and a client. This showed good progress, and that nothing in the specification was blocking implementation. But it wasn't enough without integrating it into our servers for further experimentation, so that we might discover rare issues that could be showstoppers. This was a bitter lesson learned from TLS 1.3, where it took months to track down a single brand of printer that was incompatible with the standard, and forced a change.
From Prototype to Production
We also wanted to understand the benefits with some real world data, to convince others that this approach was worthwhile. Our position as a provider to many websites globally gives us diverse, real world data on performance that we use to make our products better, and perhaps more important, to learn lessons that help everyone make the Internet better. As a result we had to implement this in production: the experimental framework for TLS 1.3 development had been removed and we didn’t have an environment for experimentation.
At the time everything at Cloudflare was based on variants of NGINX. We had extended it with modules to implement features like Keyless and customized certificate handling to meet our needs, but much of the business logic was and is carried out in Lua via OpenResty.
Lua has many virtues, but at the time both the TLS termination and the core business logic lived in the same repo despite being different processes at runtime. This made it very difficult to understand what code was running when, and changes to basic libraries could create problems for both. The build system for this creation had the significant disadvantage of building the same targets with different settings. Lua also is a very dynamic language, but unlike the dynamic languages I was used to, there was no way to interact with the system as it was running on requests.
The first step was implementing the ORIGIN frame. In implementing this, we had to figure out which sites hosted the subresources used by the page we were serving. Luckily, we already had this logic to enable server push support driven by Link headers. Building on this let me quickly get ORIGIN working.
This work wasn’t the only thing I was up to as an intern. I was also participating in weekly team meetings, attending our engineering presentations, and getting a sense of what life was like at Cloudflare. We had an excursion for interns to the Computer History Museum in Mountain View and Moffett Field, where we saw the base museum.
The next challenge was getting the CERTIFICATE frame to work. This was a much deeper problem. NGINX processes a request in phases, and some of the phases, like the header processing phase, do not permit network I/O without locking up the event loop. Since we are parsing the headers to determine what to send, the frame is created in the header processing phase. But finding a certificate and telling Keyless to sign it required network I/O.
The standard solution to this problem is to have Lua execute a timer callback, in which network I/O is possible. But this context doesn’t have any data from the request: some serious refactoring was needed to create a way to get the keyless module to function outside the context of a request.
Once the signature was created, the battle was half over. Formatting the CERTIFICATE frame was simple, but it had to be stuck into the connection associated with the request that had demanded it be created. And there was no reason to expect the request was still alive, and no way to know what state it was in when the request was handled by the Keyless module.
To handle this issue I made a shared btree indexed by a number containing space for the data to be passed back and forth. This enabled the request to record that it was ready to send the CERTIFICATE frame and Keyless to record that it was ready with a frame to send. Whichever of these happened second would do the work to enqueue the frame to send out.
This was not an easy solution: the Keyless module had been written years before and largely unmodified. It fundamentally assumed it could access data from the request, and changing this assumption opened the door to difficult to diagnose bugs. It integrates into BoringSSL callbacks through some pretty tricky mechanisms.
However, I was able to test it using the client from the prototype and it worked. Unfortunately when I pushed the commit in which it worked upstream, the CI system could not find the git repo where the client prototype was due to a setting I forgot to change. The CI system unfortunately didn’t associate this failure with the branch, but attempted to check it out whenever it checked out any other branch people were working on. Murphy ensured my accomplishment had happened on a Friday afternoon Pacific time, and the team that manages the SSL server was then exclusively in London…
Monday morning the issue was quickly fixed, and whatever tempers had frayed were smoothed over when we discovered the deficiency in the CI system that had enabled a single branch to break every build. It’s always tricky to work in a global team. Later Alessandro flew to San Francisco for a number of projects with the team here and we worked side by side trying to get a demonstration working on a test site. Unfortunately there was some difficulty tracking down a bug that prevented it working in production. We had run out of time, and my internship was over. Alessandro flew back to London, and I flew to Idaho to see the eclipse.
The End
Ultimately we weren’t able to integrate this feature into the software at our edge: the risks of such intrusive changes for a very experimental feature outweighed the benefits. With not much prospect of support by clients, it would be difficult to get the real savings in performance promised. There also were nontechnical issues in standardization that have made this approach more difficult to implement: any form of traffic direction that doesn’t obey DNS creates issues for network debugging, and there were concerns about the impact of certificate misissuance.
While the project was less successful than I hoped it would be, I learned a lot of important skills: collaborating on large software projects, working with git, and communicating with other implementers about issues we found. I also got a taste of what it would be like to be on the Research team at Cloudflare and turning research from idea into practical reality and this ultimately confirmed my choice to go into industrial research.
I’ve now returned to Cloudflare full-time, working on extensions for TLS as well as time synchronization. These drafts have continued to progress through the standardization process, and we’ve contributed some of the code I wrote as a starting point for other implementers to use. If we knew all our projects would work out, they wouldn’t be ambitious enough to be research worth doing.
If this sort of research experience appeals to you, we’re hiring.
Speeding up Linux disk encryption
Post Syndicated from Ignat Korchagin original https://blog.cloudflare.com/speeding-up-linux-disk-encryption/
Data encryption at rest is a must-have for any modern Internet company. Many companies, however, don’t encrypt their disks, because they fear the potential performance penalty caused by encryption overhead.
Encrypting data at rest is vital for Cloudflare with more than 200 data centres across the world. In this post, we will investigate the performance of disk encryption on Linux and explain how we made it at least two times faster for ourselves and our customers!
Encrypting data at rest
When it comes to encrypting data at rest there are several ways it can be implemented on a modern operating system (OS). Available techniques are tightly coupled with a typical OS storage stack. A simplified version of the storage stack and encryption solutions can be found on the diagram below:
At the top of the stack are applications, which read and write data in files (or streams). The file system in the OS kernel keeps track of which blocks of the underlying block device belong to which files and translates these file reads and writes into block reads and writes; the hardware specifics of the underlying storage device, however, are abstracted away from the filesystem. Finally, the block subsystem actually passes the block reads and writes to the underlying hardware using appropriate device drivers.
The concept of the storage stack is actually similar to the well-known network OSI model, where each layer has a more high-level view of the information and the implementation details of the lower layers are abstracted away from the upper layers. And, similar to the OSI model, one can apply encryption at different layers (think about TLS vs IPsec or a VPN).
For data at rest we can apply encryption either at the block layers (either in hardware or in software) or at the file level (either directly in applications or in the filesystem).
Block vs file encryption
Generally, the higher in the stack we apply encryption, the more flexibility we have. With application level encryption the application maintainers can apply any encryption code they please to any particular data they need. The downside of this approach is that they actually have to implement it themselves, and encryption in general is not very developer-friendly: one has to know the ins and outs of a specific cryptographic algorithm and properly generate keys, nonces, IVs etc. Additionally, application level encryption does not leverage OS-level caching, and the Linux page cache in particular: each time the application needs to use the data, it has to either decrypt it again, wasting CPU cycles, or implement its own decrypted “cache”, which introduces more complexity to the code.
File system level encryption makes data encryption transparent to applications, because the file system itself encrypts the data before passing it to the block subsystem, so files are encrypted regardless if the application has crypto support or not. Also, file systems can be configured to encrypt only a particular directory or have different keys for different files. This flexibility, however, comes at a cost of a more complex configuration. File system encryption is also considered less secure than block device encryption as only the contents of the files are encrypted. Files also have associated metadata, like file size, the number of files, the directory tree layout etc., which are still visible to a potential adversary.
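As an aside (an illustrative sketch of ours, not from the original post): on ext4, this kind of per-directory encryption can be enabled with the upstream fscrypt feature and Google's fscrypt userspace tool. The device and mount point below are assumptions:

$ sudo tune2fs -O encrypt /dev/sdb1    # enable the ext4 encryption feature flag
$ fscrypt setup                        # create /etc/fscrypt.conf and global metadata
$ fscrypt encrypt /mnt/data/private    # protect a single directory with its own key

This is exactly the per-directory flexibility described above, together with all the key-management configuration it entails.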
Encryption down at the block layer (often referred to as disk encryption or full disk encryption) also makes data encryption transparent to applications and even whole file systems. Unlike file system level encryption it encrypts all data on the disk including file metadata and even free space. It is less flexible though – one can only encrypt the whole disk with a single key, so there is no per-directory, per-file or per-user configuration. From the crypto perspective, not all cryptographic algorithms can be used as the block layer doesn’t have a high-level overview of the data anymore, so it needs to process each block independently. Most common algorithms require some sort of block chaining to be secure, so are not applicable to disk encryption. Instead, special modes were developed just for this specific use-case.
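As a quick illustration (ours, not the post's), the disk-oriented mode cryptsetup compiles in as its default can be read straight from its help output; the exact text varies by version:

$ cryptsetup --help | grep -A 3 'device cipher parameters'

On recent versions this reports aes-xts-plain64 with a 256-bit key for LUKS, i.e. one of the special disk-encryption modes mentioned above.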
So which layer to choose? As always, it depends… Application and file system level encryption are usually the preferred choice for client systems because of the flexibility. For example, each user on a multi-user desktop may want to encrypt their home directory with a key they own and leave some shared directories unencrypted. On the contrary, on server systems, managed by SaaS/PaaS/IaaS companies (including Cloudflare) the preferred choice is configuration simplicity and security – with full disk encryption enabled any data from any application is automatically encrypted with no exceptions or overrides. We believe that all data needs to be protected without sorting it into "important" vs "not important" buckets, so the selective flexibility the upper layers provide is not needed.
Hardware vs software disk encryption
When encrypting data at the block layer it is possible to do it directly in the storage hardware, if the hardware supports it. Doing so usually gives better read/write performance and consumes fewer resources from the host. However, since most hardware firmware is proprietary, it does not receive as much attention and review from the security community. In the past this has led to flaws in some implementations of hardware disk encryption which rendered the whole security model useless. Microsoft, for example, has since started to prefer software-based disk encryption.
We didn’t want to put our data and our customers’ data at risk by using potentially insecure solutions, and we strongly believe in open source. That’s why we rely only on software disk encryption in the Linux kernel, which is open and has been audited by many security professionals across the world.
Linux disk encryption performance
We aim not only to save bandwidth costs for our customers, but to deliver content to Internet users as fast as possible.
At one point we noticed that our disks were not as fast as we would like them to be. Some profiling as well as a quick A/B test pointed to Linux disk encryption. Because not encrypting the data (even for a supposed-to-be-public Internet cache) is not a sustainable option, we decided to take a closer look into Linux disk encryption performance.
Device mapper and dm-crypt
Linux implements transparent disk encryption via the dm-crypt module, and dm-crypt itself is part of the device mapper kernel framework. In a nutshell, the device mapper allows pre- and post-processing of IO requests as they travel between the file system and the underlying block device.
dm-crypt in particular encrypts "write" IO requests before sending them further down the stack to the actual block device and decrypts "read" IO requests before sending them up to the file system driver. Simple and easy! Or is it?
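One small sanity check before we start (our own aside, not from the original post): crypt is just one of several device mapper targets, and its presence on a machine can be confirmed by listing the registered targets:

$ sudo dmsetup targets

The output should include a crypt entry alongside the usual linear and striped targets.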
Benchmarking setup
For the record, the numbers in this post were obtained by running specified commands on an idle Cloudflare G9 server out of production. However, the setup should be easily reproducible on any modern x86 laptop.
Generally, benchmarking anything around a storage stack is hard because of the noise introduced by the storage hardware itself. Not all disks are created equal, so for the purpose of this post we will use the fastest disks available out there – that is no disks.
Instead Linux has an option to emulate a disk directly in RAM. Since RAM is much faster than any persistent storage, it should introduce little bias in our results.
The following command creates a 4GB ramdisk:
$ sudo modprobe brd rd_nr=1 rd_size=4194304
$ ls /dev/ram0
/dev/ram0
Now we can set up a dm-crypt instance on top of it thus enabling encryption for the disk. First, we need to generate the disk encryption key, "format" the disk and specify a password to unlock the newly generated key.
$ fallocate -l 2M crypthdr.img
$ sudo cryptsetup luksFormat /dev/ram0 --header crypthdr.img
WARNING!
========
This will overwrite data on crypthdr.img irrevocably.
Are you sure? (Type uppercase yes): YES
Enter passphrase:
Verify passphrase:
Those who are familiar with LUKS/dm-crypt might have noticed we used a LUKS detached header here. Normally, LUKS stores the password-encrypted disk encryption key on the same disk as the data, but since we want to compare read/write performance between encrypted and unencrypted devices, we might accidentally overwrite the encrypted key during our benchmarking later. Keeping the encrypted key in a separate file avoids this problem for the purposes of this post.
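Since the header lives in an ordinary file, it can also be inspected directly; cryptsetup treats the header file itself as a LUKS device (a handy sanity check of ours, not required for the benchmark):

$ sudo cryptsetup luksDump crypthdr.img

This prints the LUKS version, cipher spec and key slots without touching /dev/ram0.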
Now, we can actually "unlock" the encrypted device for our testing:
$ sudo cryptsetup open --header crypthdr.img /dev/ram0 encrypted-ram0
Enter passphrase for /dev/ram0:
$ ls /dev/mapper/encrypted-ram0
/dev/mapper/encrypted-ram0
At this point we can compare the performance of the encrypted vs the unencrypted ramdisk: if we read/write data to /dev/ram0, it will be stored in plaintext. Likewise, if we read/write data to /dev/mapper/encrypted-ram0, it will be decrypted/encrypted on the way by dm-crypt and stored in ciphertext.
It’s worth noting that we’re not creating any file system on top of our block devices to avoid biasing results with a file system overhead.
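Before measuring anything, a quick sanity check (illustrative, not part of the original benchmark) shows the encryption at work: write a recognizable string through the encrypted mapping, then read both devices back:

$ printf 'hello disk encryption' | sudo dd of=/dev/mapper/encrypted-ram0 bs=512 count=1 conv=sync
$ sudo dd if=/dev/mapper/encrypted-ram0 bs=512 count=1 2>/dev/null | head -c 21
$ sudo dd if=/dev/ram0 bs=512 count=1 2>/dev/null | xxd | head -2

The second command prints the string back, while the xxd dump of the raw ramdisk shows only ciphertext.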
Measuring throughput
When it comes to storage testing/benchmarking, the Flexible I/O tester (fio) is the usual go-to solution. Let’s simulate a simple sequential read/write load with 4K block size on the ramdisk without encryption:
$ sudo fio --filename=/dev/ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=plain
plain: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
READ: io=21013MB, aggrb=1126.5MB/s, minb=1126.5MB/s, maxb=1126.5MB/s, mint=18655msec, maxt=18655msec
WRITE: io=21023MB, aggrb=1126.1MB/s, minb=1126.1MB/s, maxb=1126.1MB/s, mint=18655msec, maxt=18655msec

Disk stats (read/write):
ram0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

The above command will run for a long time, so we just stop it after a while. As we can see from the stats, we’re able to read and write with roughly the same throughput, around 1126 MB/s. Let’s repeat the test with the encrypted ramdisk:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
READ: io=1693.7MB, aggrb=150874KB/s, minb=150874KB/s, maxb=150874KB/s, mint=11491msec, maxt=11491msec
WRITE: io=1696.4MB, aggrb=151170KB/s, minb=151170KB/s, maxb=151170KB/s, mint=11491msec, maxt=11491msec
Whoa, that’s a drop! We only get ~147 MB/s now, which is more than 7 times slower! And this is on a totally idle machine!
Maybe, crypto is just slow
The first thing we considered was to ensure we use the fastest crypto. cryptsetup allows us to benchmark all the available crypto implementations on the system to select the best one:
$ sudo cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 1340890 iterations per second for 256-bit key
PBKDF2-sha256 1539759 iterations per second for 256-bit key
PBKDF2-sha512 1205259 iterations per second for 256-bit key
PBKDF2-ripemd160 967321 iterations per second for 256-bit key
PBKDF2-whirlpool 720175 iterations per second for 256-bit key
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 969.7 MiB/s 3110.0 MiB/s
serpent-cbc 128b N/A N/A
twofish-cbc 128b N/A N/A
aes-cbc 256b 756.1 MiB/s 2474.7 MiB/s
serpent-cbc 256b N/A N/A
twofish-cbc 256b N/A N/A
aes-xts 256b 1823.1 MiB/s 1900.3 MiB/s
serpent-xts 256b N/A N/A
twofish-xts 256b N/A N/A
aes-xts 512b 1724.4 MiB/s 1765.8 MiB/s
serpent-xts 512b N/A N/A
twofish-xts 512b N/A N/A

It seems aes-xts with a 256-bit data encryption key is the fastest here. But which one are we actually using for our encrypted ramdisk?

$ sudo dmsetup table /dev/mapper/encrypted-ram0
0 8388608 crypt aes-xts-plain64 0000000000000000000000000000000000000000000000000000000000000000 0 1:0 0
We do use aes-xts with a 256-bit data encryption key (count all the zeroes conveniently masked by the dmsetup tool – if you want to see the actual bytes, add the --showkeys option to the above command). The numbers do not add up, however: cryptsetup benchmark tells us above not to rely on the results, as "Tests are approximate using memory only (no storage IO)", but that is exactly how we’ve set up our experiment using the ramdisk. In a somewhat worse case (assuming we’re reading all the data and then encrypting/decrypting it sequentially with no parallelism), a back-of-the-envelope calculation suggests we should be getting around (1126 * 1823) / (1126 + 1823) ≈ 696 MB/s, which is still quite far from the actual 147 * 2 = 294 MB/s (total for reads and writes).
dm-crypt performance flags
While reading the cryptsetup man page we noticed that it has two options prefixed with --perf-, which are probably related to performance tuning. The first one is --perf-same_cpu_crypt with a rather cryptic description:
Perform encryption using the same cpu that IO was submitted on. The default is to use an unbound workqueue so that encryption work is automatically balanced between available CPUs. This option is only relevant for open action.
So we enable the option:
$ sudo cryptsetup close encrypted-ram0
$ sudo cryptsetup open --header crypthdr.img --perf-same_cpu_crypt /dev/ram0 encrypted-ram0
Note: according to the latest man page there is also a cryptsetup refresh command, which can be used to enable these options live without having to "close" and "re-open" the encrypted device. Our version of cryptsetup, however, didn’t support it yet.
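On a cryptsetup new enough to have it, the equivalent would look roughly like this (a sketch based on the man page, not something we could run on our build):

$ sudo cryptsetup refresh --header crypthdr.img --perf-same_cpu_crypt encrypted-ram0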
Verifying if the option has been really enabled:
$ sudo dmsetup table encrypted-ram0
0 8388608 crypt aes-xts-plain64 0000000000000000000000000000000000000000000000000000000000000000 0 1:0 0 1 same_cpu_crypt

Yes, we can now see same_cpu_crypt in the output, which is what we wanted. Let’s rerun the benchmark:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
READ: io=1596.6MB, aggrb=139811KB/s, minb=139811KB/s, maxb=139811KB/s, mint=11693msec, maxt=11693msec
WRITE: io=1600.9MB, aggrb=140192KB/s, minb=140192KB/s, maxb=140192KB/s, mint=11693msec, maxt=11693msec
Hmm, now it is ~136 MB/s, which is slightly worse than before, so no good. What about the second option, --perf-submit_from_crypt_cpus:
Disable offloading writes to a separate thread after encryption. There are some situations where offloading write bios from the encryption threads to a single thread degrades performance significantly. The default is to offload write bios to the same thread. This option is only relevant for open action.
Maybe, we are in the "some situation" here, so let’s try it out:
$ sudo cryptsetup close encrypted-ram0
$ sudo cryptsetup open --header crypthdr.img --perf-submit_from_crypt_cpus /dev/ram0 encrypted-ram0
Enter passphrase for /dev/ram0:
$ sudo dmsetup table encrypted-ram0
0 8388608 crypt aes-xts-plain64 0000000000000000000000000000000000000000000000000000000000000000 0 1:0 0 1 submit_from_crypt_cpus

And now the benchmark:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
READ: io=2066.6MB, aggrb=169835KB/s, minb=169835KB/s, maxb=169835KB/s, mint=12457msec, maxt=12457msec
WRITE: io=2067.7MB, aggrb=169965KB/s, minb=169965KB/s, maxb=169965KB/s, mint=12457msec, maxt=12457msec
~166 MB/s, which is a bit better, but still not good…
Being desperate we decided to seek support from the Internet and posted our findings to the dm-crypt mailing list, but the response we got was not very encouraging:
If the numbers disturb you, then this is from lack of understanding on your side. You are probably unaware that encryption is a heavy-weight operation…
We decided to do some scientific research on this topic by typing "is encryption expensive" into Google Search, and one of the top results, which actually contains meaningful measurements, is… our own post about the cost of encryption, but in the context of TLS! This is a fascinating read on its own, but the gist is: modern crypto on modern hardware is very cheap, even at Cloudflare scale (doing millions of encrypted HTTP requests per second). In fact, it is so cheap that Cloudflare was the first provider to offer free SSL/TLS for everyone.
Digging into the source code
When trying to use the custom dm-crypt options described above, we were curious why they exist in the first place and what that "offloading" is all about. Originally we expected dm-crypt to be a simple "proxy", which just encrypts/decrypts data as it flows through the stack. It turns out dm-crypt does more than just encrypt memory buffers, and a (simplified) IO traversal path diagram is presented below:
When the file system issues a write request, dm-crypt does not process it immediately – instead it puts it into a workqueue named "kcryptd". In a nutshell, a kernel workqueue just schedules some work (encryption in this case) to be performed at some later time, when it is more convenient. When "the time" comes, dm-crypt sends the request to the Linux Crypto API for actual encryption. However, the modern Linux Crypto API is asynchronous as well, so depending on which particular implementation your system uses, most likely the request will not be processed immediately, but queued again for "later". When the Linux Crypto API finally does the encryption, dm-crypt may try to sort pending write requests by putting each request into a red-black tree. Then a separate kernel thread, again at "some time later", actually takes all the IO requests in the tree and sends them down the stack.
Now for read requests: this time we need to get the encrypted data from the hardware first, but dm-crypt does not just ask the driver for the data – it queues the request into a different workqueue named "kcryptd_io". At some point later, when we actually have the encrypted data, we schedule it for decryption using the now familiar "kcryptd" workqueue. "kcryptd" will send the request to the Linux Crypto API, which may decrypt the data asynchronously as well.
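Parts of this machinery are visible from userspace. For example, the dedicated write thread described above shows up in the process list, while the kcryptd workqueues run on generic kworker threads and so are not visible by name (a quick check of ours on a machine with an open dm-crypt device; exact thread naming varies by kernel version):

$ ps -eo comm | grep dmcrypt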
To be fair, the request does not always traverse all these queues, but the important part here is that write requests may be queued up to 4 times in dm-crypt and read requests up to 3 times. At this point we were wondering whether all this extra queueing could cause any performance issues. For example, there is a nice presentation from Google about the relationship between queueing and tail latency. One key takeaway from the presentation is:
A significant amount of tail latency is due to queueing effects
So, why are all these queues there and can we remove them?
Git archeology
No-one writes more complex code just for fun, especially for the OS kernel. So all these queues must have been put there for a reason. Luckily, the Linux kernel source is managed by git, so we can try to retrace the changes and the decisions around them.
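For example, in a checkout of the kernel tree, the earliest surviving commits that touch dm-crypt can be listed with:

$ git log --reverse --oneline -- drivers/md/dm-crypt.c | head -3

The quotes below come from commit messages found this way.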
The "kcryptd" workqueue was in the source since the beginning of the available history with the following comment:
Needed because it would be very unwise to do decryption in an interrupt context, so bios returning from read requests get queued here.
So it was for reads only, but even then – why do we care whether it is interrupt context or not, if the Linux Crypto API will likely use a dedicated thread/queue for encryption anyway? Well, back in 2005 the Crypto API was not asynchronous, so this made perfect sense.
In 2006 dm-crypt started to use the "kcryptd" workqueue not only for encryption, but for submitting IO requests:
This patch is designed to help dm-crypt comply with the new constraints imposed by the following patch in -mm: md-dm-reduce-stack-usage-with-stacked-block-devices.patch
It seems the goal here was not to add more concurrency, but rather reduce kernel stack usage, which makes sense again as the kernel has a common stack across all the code, so it is a quite limited resource. It is worth noting, however, that the Linux kernel stack has been expanded in 2014 for x86 platforms, so this might not be a problem anymore.
A first version of "kcryptd_io" workqueue was added in 2007 with the intent to avoid:
starvation caused by many requests waiting for memory allocation…
The request processing was bottlenecking on a single workqueue here, so the solution was to add another one. Makes sense.
We are definitely not the first ones experiencing performance degradation because of extensive queueing: in 2011 a change was introduced to conditionally revert some of the queueing for read requests:
If there is enough memory, code can directly submit bio instead queuing this operation in a separate thread.
Unfortunately, at that time Linux kernel commit messages were not as verbose as today, so there is no performance data available.
In 2015 dm-crypt started to sort writes in a separate "dmcrypt_write" thread before sending them down the stack:
On a multiprocessor machine, encryption requests finish in a different order than they were submitted. Consequently, write requests would be submitted in a different order and it could cause severe performance degradation.
It does make sense, as sequential disk access used to be much faster than random access, and dm-crypt was breaking the pattern. But this mostly applies to spinning disks, which were still dominant in 2015. It may not be as important with modern fast SSDs (including NVME SSDs).
Another part of the commit message is worth mentioning:
…in particular it enables IO schedulers like CFQ to sort more effectively…
It mentions the performance benefits for the CFQ IO scheduler, but Linux IO schedulers have improved since then to the point that the CFQ scheduler was removed from the kernel in 2018.
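As a side note, the scheduler a given disk currently uses is easy to check; the device name below is an assumption, and the bracketed entry in the output is the active one:

$ cat /sys/block/nvme0n1/queue/scheduler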
The same patchset replaces the sorting list with a red-black tree:
In theory the sorting should be performed by the underlying disk scheduler, however, in practice the disk scheduler only accepts and sorts a finite number of requests. To allow the sorting of all requests, dm-crypt needs to implement its own sorting.
The overhead associated with rbtree-based sorting is considered negligible so it is not used conditionally.
All that make sense, but it would be nice to have some backing data.
Interestingly, the same patchset also introduces our familiar submit_from_crypt_cpus option.
Overall, we can see that every change was reasonable and needed, however things have changed since then:
• hardware became faster and smarter
• Linux resource allocation was revisited
• coupled Linux subsystems were rearchitected
And many of the design choices above may not be applicable to modern Linux.
The "clean-up"
Based on the research above we decided to try to remove all the extra queueing and asynchronous behaviour and revert dm-crypt to its original purpose: simply encrypt/decrypt IO requests as they pass through. But for the sake of stability and further benchmarking we ended up not removing the actual code, but rather adding yet another dm-crypt option, which bypasses all the queues/threads, if enabled. The flag allows us to switch between the current and new behaviour at runtime under full production load, so we can easily revert our changes should we see any side-effects. The resulting patch can be found on the Cloudflare GitHub Linux repository.
Synchronous Linux Crypto API
From the diagram above we remember that not all queueing is implemented in dm-crypt. The modern Linux Crypto API may also be asynchronous, and for the sake of this experiment we want to eliminate queues there as well. What does "may be" mean, though? The OS may contain different implementations of the same algorithm (for example, hardware-accelerated AES-NI on x86 platforms and generic C-code AES implementations). By default the system chooses the "best" one based on the configured algorithm priority. dm-crypt allows overriding this behaviour and requesting a particular cipher implementation using the capi: prefix. However, there is one problem. Let us actually check the available AES-XTS (this is our disk encryption cipher, remember?) implementations on our system:
$ grep -A 11 'xts(aes)' /proc/crypto
name : xts(aes)
driver : xts(ecb(aes-generic))
module : kernel
priority : 100
refcnt : 7
selftest : passed
internal : no
type : skcipher
async : no
blocksize : 16
min keysize : 32
max keysize : 64
--
name : __xts(aes)
driver : cryptd(__xts-aes-aesni)
module : cryptd
priority : 451
refcnt : 1
selftest : passed
internal : yes
type : skcipher
async : yes
blocksize : 16
min keysize : 32
max keysize : 64
--
name : xts(aes)
driver : xts-aes-aesni
module : aesni_intel
priority : 401
refcnt : 1
selftest : passed
internal : no
type : skcipher
async : yes
blocksize : 16
min keysize : 32
max keysize : 64
--
name : __xts(aes)
driver : __xts-aes-aesni
module : aesni_intel
priority : 401
refcnt : 7
selftest : passed
internal : yes
type : skcipher
async : no
blocksize : 16
min keysize : 32
max keysize : 64

We want to explicitly select a synchronous cipher from the above list to avoid queueing effects in threads, but the only two supported are xts(ecb(aes-generic)) (the generic C implementation) and __xts-aes-aesni (the x86 hardware-accelerated implementation). We definitely want the latter as it is much faster (we’re aiming for performance here), but it is suspiciously marked as internal (see internal: yes). If we check the source code:

Mark a cipher as a service implementation only usable by another cipher and never by a normal user of the kernel crypto API

So this cipher is meant to be used only by other wrapper code in the Crypto API and not outside it. In practice this means that the caller of the Crypto API needs to explicitly specify this flag when requesting a particular cipher implementation, but dm-crypt does not do it, because by design it is not part of the Linux Crypto API, but rather an "external" user. We already patch the dm-crypt module, so we could as well just add the relevant flag.

However, there is another problem with AES-NI in particular: the x86 FPU. "Floating point" you say? Why do we need floating point math to do symmetric encryption, which should only be about bit shifts and XOR operations? We don’t need the math, but AES-NI instructions use some of the CPU registers which are dedicated to the FPU. Unfortunately the Linux kernel does not always preserve these registers in interrupt context for performance reasons (saving/restoring the FPU is expensive). But dm-crypt may execute code in interrupt context, so we risk corrupting some other process’s data and we are back to the "it would be very unwise to do decryption in an interrupt context" statement from the original code.

Our solution to address the above was to create another somewhat "smart" Crypto API module. This module is synchronous and does not roll its own crypto, but is just a "router" of encryption requests:

• if we can use the FPU (and thus AES-NI) in the current execution context, we just forward the encryption request to the faster, "internal" __xts-aes-aesni implementation (and we can use it here, because now we are part of the Crypto API)
• otherwise, we just forward the encryption request to the slower, generic C-based xts(ecb(aes-generic)) implementation

Using the whole lot

Let’s walk through the process of using it all together. The first step is to grab the patches and recompile the kernel (or just compile dm-crypt and our xtsproxy modules).

Next, let’s restart our IO workload in a separate terminal, so we can make sure we can reconfigure the kernel at runtime under load:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
In the main terminal make sure our new Crypto API module is loaded and available:
$ sudo modprobe xtsproxy
$ grep -A 11 'xtsproxy' /proc/crypto
driver : xts-aes-xtsproxy
module : xtsproxy
priority : 0
refcnt : 0
selftest : passed
internal : no
type : skcipher
async : no
blocksize : 16
min keysize : 32
max keysize : 64
ivsize : 16
chunksize : 16
Reconfigure the encrypted disk to use our newly loaded module and enable our patched dm-crypt flag (we have to use the low-level dmsetup tool, as cryptsetup is obviously not aware of our modifications):
$ sudo dmsetup table encrypted-ram0 --showkeys | sed 's/aes-xts-plain64/capi:xts-aes-xtsproxy-plain64/' | sed 's/$/ 1 force_inline/' | sudo dmsetup reload encrypted-ram0
We just "loaded" the new configuration, but for it to take effect, we need to suspend/resume the encrypted device:
$ sudo dmsetup suspend encrypted-ram0 && sudo dmsetup resume encrypted-ram0

And now observe the result. We may go back to the other terminal running the fio job and look at the output, but to make things nicer, here’s a snapshot of the observed read/write throughput in Grafana:

Wow, we have more than doubled the throughput! With the total throughput of ~640 MB/s we’re now much closer to the expected ~696 MB/s from above. What about the IO latency? (The await statistic from the iostat reporting tool):

The latency has been cut in half as well!

To production

So far we have been using a synthetic setup with some parts of the full production stack missing, like file systems, real hardware and, most importantly, production workload. To ensure we’re not optimising imaginary things, here is a snapshot of the production impact these changes bring to the caching part of our stack:

This graph represents a three-way comparison of the worst-case response times (99th percentile) for a cache hit in one of our servers. The green line is from a server with unencrypted disks, which we will use as a baseline. The red line is from a server with encrypted disks with the default Linux disk encryption implementation, and the blue line is from a server with encrypted disks and our optimisations enabled.

As we can see, the default Linux disk encryption implementation has a significant impact on our cache latency in worst-case scenarios, whereas the patched implementation is indistinguishable from not using encryption at all. In other words, the improved encryption implementation does not have any impact at all on our cache response speed, so we basically get it for free! That’s a win!

We’re just getting started

This post shows how an architecture review can double the performance of a system. Also, we reconfirmed that modern cryptography is not expensive and there is usually no excuse not to protect your data.

We are going to submit this work for inclusion in the main kernel source tree, but most likely not in its current form. Although the results look encouraging, we have to remember that Linux is a highly portable operating system: it runs on powerful servers as well as on small resource-constrained IoT devices, and on many other CPU architectures as well. The current version of the patches just optimises disk encryption for a particular workload on a particular architecture, but Linux needs a solution which runs smoothly everywhere.

That said, if you think your case is similar and you want to take advantage of the performance improvements now, you may grab the patches and hopefully provide feedback. The runtime flag makes it easy to toggle the functionality on the fly, and a simple A/B test may be performed to see if it benefits any particular case or setup. These patches have been running across our wide network of more than 200 data centres on five generations of hardware, so can be reasonably considered stable. Enjoy both performance and security from Cloudflare for all!

Going Keyless Everywhere

Post Syndicated from Nick Sullivan original https://blog.cloudflare.com/going-keyless-everywhere/

Time flies. The Heartbleed vulnerability was discovered just over five and a half years ago. Heartbleed became a household name not only because it was one of the first bugs with its own web page and logo, but because of what it revealed about the fragility of the Internet as a whole. With Heartbleed, one tiny bug in a cryptography library exposed the personal data of the users of almost every website online.
Heartbleed is an example of an underappreciated class of bugs: remote memory disclosure vulnerabilities. High profile examples other than Heartbleed include Cloudbleed and most recently NetSpectre. These vulnerabilities allow attackers to extract secrets from servers by simply sending them specially-crafted packets. Cloudflare recently completed a multi-year project to make our platform more resilient against this category of bug.

For the last five years, the industry has been dealing with the consequences of the design that led to Heartbleed being so impactful. In this blog post we’ll dig into memory safety, and how we re-designed Cloudflare’s main product to protect private keys from the next Heartbleed.

Memory Disclosure

Perfect security is not possible for businesses with an online component. History has shown us that no matter how robust their security program, an unexpected exploit can leave a company exposed. One of the more famous recent incidents of this sort is Heartbleed, a vulnerability in a commonly used cryptography library called OpenSSL that exposed the inner details of millions of web servers to anyone with a connection to the Internet. Heartbleed made international news, caused millions of dollars of damage, and still hasn’t been fully resolved.

Typical web services only return data via well-defined public-facing interfaces called APIs. Clients don’t typically get to see what’s going on under the hood inside the server; that would be a huge privacy and security risk. Heartbleed broke that paradigm: it enabled anyone on the Internet to take a peek at the operating memory used by web servers, revealing privileged data usually not exposed via the API. Heartbleed could be used to extract the result of previous data sent to the server, including passwords and credit cards. It could also reveal the inner workings and cryptographic secrets used inside the server, including TLS certificate private keys.

Heartbleed let attackers peek behind the curtain, but not too far. Sensitive data could be extracted, but not everything on the server was at risk. For example, Heartbleed did not enable attackers to steal the content of databases held on the server. You may ask: why was some data at risk but not others? The reason has to do with how modern operating systems are built.

A simplified view of process isolation

Most modern operating systems are split into multiple layers. These layers are analogous to security clearance levels. So-called user-space applications (like your browser) typically live in a low-security layer called user space. They only have access to computing resources (memory, CPU, networking) if the lower, more credentialed layers let them.

User-space applications need resources to function. For example, they need memory to store their code and working memory to do computations. However, it would be risky to give an application direct access to the physical RAM of the computer they’re running on. Instead, the raw computing elements are restricted to a lower layer called the operating system kernel. The kernel only runs specially-designed applications designed to safely manage these resources and mediate access to them for user-space applications.

When a new user-space application process is launched, the kernel gives it a virtual memory space. This virtual memory space acts like real memory to the application but is actually a safely guarded translation layer the kernel uses to protect the real memory.
Each application’s virtual memory space is like a parallel universe dedicated to that application. This makes it impossible for one process to view or modify another’s: the other applications are simply not addressable.

Heartbleed, Cloudbleed and the process boundary

Heartbleed was a vulnerability in the OpenSSL library, which was part of many web server applications. These web servers run in user space, like any common application. This vulnerability caused the web server to return up to 2 kilobytes of its memory in response to a specially-crafted inbound request.

Cloudbleed was also a memory disclosure bug, albeit one specific to Cloudflare, that got its name because it was so similar to Heartbleed. With Cloudbleed, the vulnerability was not in OpenSSL, but instead in a secondary web server application used for HTML parsing. When this code parsed a certain sequence of HTML, it ended up inserting some process memory into the web page it was serving.

It’s important to note that both of these bugs occurred in applications running in user space, not kernel space. This means that the memory exposed by the bug was necessarily part of the virtual memory of the application. Even if the bug were to expose megabytes of data, it would only expose data specific to that application, not other applications on the system.

In order for a web server to serve traffic over the encrypted HTTPS protocol, it needs access to the certificate’s private key, which is typically kept in the application’s memory. These keys were exposed to the Internet by Heartbleed. The Cloudbleed vulnerability affected a different process, the HTML parser, which doesn’t do HTTPS and therefore doesn’t keep the private key in memory. This meant that HTTPS keys were safe, even if other data in the HTML parser’s memory space wasn’t.

The fact that the HTML parser and the web server were different applications saved us from having to revoke and re-issue our customers’ TLS certificates. However, if another memory disclosure vulnerability is discovered in the web server, these keys are again at risk.

Moving keys out of Internet-facing processes

Not all web servers keep private keys in memory. In some deployments, private keys are held in a separate machine called a Hardware Security Module (HSM). HSMs are built to withstand physical intrusion and tampering and are often built to comply with stringent compliance requirements. They can often be bulky and expensive. Web servers designed to take advantage of keys in an HSM connect to them over a physical cable and communicate with a specialized protocol called PKCS#11. This allows the web server to serve encrypted content while being physically separated from the private key.

At Cloudflare, we built our own way to separate a web server from a private key: Keyless SSL. Rather than keeping the keys in a separate physical machine connected to the server with a cable, the keys are kept in a key server operated by the customer in their own infrastructure (this can also be backed by an HSM).

More recently, we launched Geo Key Manager, a service that allows users to store private keys in only select Cloudflare locations. Connections to locations that do not have access to the private key use Keyless SSL with a key server hosted in a datacenter that does have access.

In both Keyless SSL and Geo Key Manager, private keys are not only not part of the web server’s memory space, they’re often not even in the same country!
This extreme degree of separation is not necessary to protect against the next Heartbleed. All that is needed is for the web server and the key server to not be part of the same application. So that’s what we did. We call this Keyless Everywhere.

Keyless SSL is coming from inside the house

Repurposing Keyless SSL for Cloudflare-held private keys was easy to conceptualize, but the path from ideation to live in production wasn’t so straightforward. The core functionality of Keyless SSL comes from the open source gokeyless which customers run on their infrastructure, but internally we use it as a library and have replaced the main package with an implementation suited to our requirements (we’ve creatively dubbed it gokeyless-internal).

As with all major architecture changes, it’s prudent to start by testing the model on something new and low-risk. In our case, the test bed was our experimental TLS 1.3 implementation. In order to quickly iterate through draft versions of the TLS specification and push releases without affecting the majority of Cloudflare customers, we re-wrote our custom nginx web server in Go and deployed it in parallel to our existing infrastructure. This server was designed to never hold private keys from the start and only leverage gokeyless-internal. At this time there was only a small amount of TLS 1.3 traffic and it was all coming from the beta versions of browsers, which allowed us to work through the initial kinks of gokeyless-internal without exposing the majority of visitors to security risks or outages due to gokeyless-internal.

The first step towards making TLS 1.3 fully keyless was identifying and implementing the new functionality we needed to add to gokeyless-internal. Keyless SSL was designed to run on customer infrastructure, with the expectation of supporting only a handful of private keys. But our edge must simultaneously support millions of private keys, so we implemented the same lazy loading logic we use in our web server, nginx. Furthermore, a typical customer deployment would put key servers behind a network load balancer, so they could be taken out of service for upgrades or other maintenance. Contrast this with our edge, where it’s important to maximize our resources by serving traffic during software upgrades. This problem is solved by the excellent tableflip package we use elsewhere at Cloudflare.

The next project to go Keyless was Spectrum, which launched with default support for gokeyless-internal. With these small victories in hand, we had the confidence necessary to attempt the big challenge, which was porting our existing nginx infrastructure to a fully keyless model. After implementing the new functionality, and being satisfied with our integration tests, all that’s left is to turn this on in production and call it a day, right? Anyone with experience with large distributed systems knows how far "working in dev" is from "done," and this story is no different. Thankfully we were anticipating problems, and built a fallback into nginx to complete the handshake itself if any problems were encountered with the gokeyless-internal path. This allowed us to expose gokeyless-internal to production traffic without risking downtime in the event that our reimplementation of the nginx logic was not 100% bug-free.

When rolling back the code doesn’t roll back the problem

Our deployment plan was to enable Keyless Everywhere, find the most common causes of fallbacks, and then fix them.
We could then repeat this process until all sources of fallbacks had been eliminated, after which we could remove access to private keys (and therefore the fallback) from nginx.

One of the early causes of fallbacks was gokeyless-internal returning ErrKeyNotFound, indicating that it couldn’t find the requested private key in storage. This should not have been possible, since nginx only makes a request to gokeyless-internal after first finding the certificate and key pair in storage, and we always write the private key and certificate together. It turned out that, in addition to returning the error for the intended case of the key truly not being found, we were also returning it when transient errors like timeouts were encountered. To resolve this, we updated those transient error conditions to return ErrInternal, and deployed to our canary datacenters.

Strangely, we found that a handful of instances in a single datacenter started encountering high rates of fallbacks, and the logs from nginx indicated it was due to a timeout between nginx and gokeyless-internal. The timeouts didn’t occur right away, but once a system started logging some timeouts it never stopped. Even after we rolled back the release, the fallbacks continued with the old version of the software! Furthermore, while nginx was complaining about timeouts, gokeyless-internal seemed perfectly healthy and was reporting reasonable performance metrics (sub-millisecond median request latency).

To debug the issue, we added detailed logging to both nginx and gokeyless, and followed the chain of events backwards once timeouts were encountered.

➜ ~ grep 'timed out' nginx.log | grep Keyless | head -5
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015157 Keyless SSL request/response timed out while reading Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015231 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015271 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015280 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:50.000 29m41 2018/07/25 05:30:50 [error] 4525#0: *1015289 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1

You can see the first request to log a timeout had id 1015157. Also interesting is that the first log line was "timed out while reading," but all the others are "timed out while waiting," and this latter message is the one that continues forever.

Here is the matching request in the gokeyless log:

➜ ~ grep 'id=1015157 ' gokeyless.log | head -1
2018-07-25T05:30:39.000 29m41 2018/07/25 05:30:39 [DEBUG] connection 127.0.0.1:30520: worker=ecdsa-29 opcode=OpECDSASignSHA256 id=1015157 sni=announce.php?info_hash=%a8%9e%9dc%cc%3b1%c8%23%e4%93%21r%0f%92mc%0c%15%89&peer_id=-ut353s-%ce%ad%5e%b1%99%06%24e%d5d%9a%08&port=42596&uploaded=65536&downloaded=0&left=0&corrupt=0&key=04a184b7&event=started&numwant=200&compact=1&no_peer_id=1 ip=104.20.33.147

Aha! That SNI value is clearly invalid (SNIs are like Host headers, i.e. they are domains, not URL paths), and it’s also quite long.
Our storage system indexes certificates based on two indices: which SNI they correspond to, and which IP addresses they correspond to (for older clients that don’t support SNI). Our storage interface uses the memcached protocol, and the client library that gokeyless-internal uses rejects requests for keys longer than 250 characters (memcached’s maximum key length), whereas the nginx logic is to simply ignore the invalid SNI and treat the request as if it only had an IP. The change in our new release had shifted this condition from ErrKeyNotFound to ErrInternal, which triggered cascading problems in nginx. The "timeouts" it encountered were actually a result of throwing away all in-flight requests multiplexed on a connection which happened to return ErrInternal for a single request. These requests were retried, but once this condition triggered, nginx became overloaded by the number of retried requests plus the continuous stream of new requests coming in with bad SNI, and was unable to recover. This explains why rolling back gokeyless-internal didn’t fix the problem.

This discovery finally brought our attention to nginx, which thus far had escaped blame since it had been working reliably with customer key servers for years. However, communicating over localhost to a multitenant key server is fundamentally different from reaching out over the public Internet to communicate with a customer’s key server, and we had to make the following changes:

• Instead of a long connection timeout and a relatively short response timeout for customer key servers, extremely short connection timeouts and longer request timeouts are appropriate for a localhost key server.
• Similarly, it’s reasonable to retry (with backoff) if we time out waiting on a customer key server response, since we can’t trust the network. But over localhost, a timeout would only occur if gokeyless-internal were overloaded and the request were still queued for processing. In this case a retry would only lead to more total work being requested of gokeyless-internal, making the situation worse.
• Most significantly, nginx must not throw away all requests multiplexed on a connection if any single one of them encounters an error, since a single connection no longer represents a single customer.

Implementations matter

CPU at the edge is one of our most precious assets, and it’s closely guarded by our performance team (aka the CPU police). Soon after turning on Keyless Everywhere in one of our canary datacenters, they noticed gokeyless using ~50% of a core per instance. We were shifting the sign operations from nginx to gokeyless, so of course it would be using more CPU now. But nginx should have seen a commensurate reduction in CPU usage, right?

Wrong. Elliptic curve operations are very fast in Go, but it’s known that RSA operations are much slower than their BoringSSL counterparts. Although Go 1.11 includes optimizations for RSA math operations, we needed more speed. Well-tuned assembly code is required to match the performance of BoringSSL, so Armando Faz from our Crypto team helped claw back some of the lost CPU by reimplementing parts of the math/big package with platform-dependent assembly in an internal fork of Go. The recent assembly policy of Go prefers the use of portable Go code instead of assembly, so these optimizations were not upstreamed. There is still room for more optimizations, and for that reason we’re still evaluating moving to cgo + BoringSSL for sign operations, despite cgo’s many downsides.
Changing our tooling

Process isolation is a powerful tool for protecting secrets in memory. Our move to Keyless Everywhere demonstrates that this is not a simple tool to leverage. Re-architecting an existing system such as nginx to use process isolation to protect secrets was time-consuming and difficult. Another approach to memory safety is to use a memory-safe language such as Rust.

Rust was originally developed by Mozilla but is starting to be used much more widely. The main advantage that Rust has over C/C++ is that it has memory safety features without a garbage collector.

Re-writing an existing application in a new language such as Rust is a daunting task. That said, many new Cloudflare features, from the powerful Firewall Rules feature to our 1.1.1.1 with WARP app, have been written in Rust to take advantage of its powerful memory-safety properties. We’re really happy with Rust so far and plan on using it even more in the future.

Conclusion

The harrowing aftermath of Heartbleed taught the industry a lesson that should have been obvious in retrospect: keeping important secrets in applications that can be accessed remotely via the Internet is a risky security practice. In the following years, with a lot of work, we leveraged process separation and Keyless SSL to ensure that the next Heartbleed wouldn’t put customer keys at risk.

However, this is not the end of the road. Recently memory disclosure vulnerabilities such as NetSpectre have been discovered which are able to bypass application process boundaries, so we continue to actively explore new ways to keep keys secure.

Delegated Credentials for TLS

Post Syndicated from Nick Sullivan original https://blog.cloudflare.com/keyless-delegation/

Today we’re happy to announce support for a new cryptographic protocol that helps make it possible to deploy encrypted services in a global network while still maintaining fast performance and tight control of private keys: Delegated Credentials for TLS. We have been working with partners from Facebook, Mozilla, and the broader IETF community to define this emerging standard. We’re excited to share the gory details today in this blog post.

Deploying TLS globally

Many of the technical problems we face at Cloudflare are widely shared problems across the Internet industry. As gratifying as it can be to solve a problem for ourselves and our customers, it can be even more gratifying to solve a problem for the entire Internet. For the past three years, we have been working with peers in the industry to solve a specific shared problem in the TLS infrastructure space: How do you terminate TLS connections while storing keys remotely and maintaining performance and availability? Today we’re announcing that Cloudflare now supports Delegated Credentials, the result of this work.

Cloudflare’s TLS/SSL features are among the top reasons customers use our service. Configuring TLS is hard to do without internal expertise. By automating TLS, web site and web service operators gain the latest TLS features and the most secure configurations by default. It also reduces the risk of outages or bad press due to misconfigured or insecure encryption settings. Customers also gain early access to unique features like TLS 1.3, post-quantum cryptography, and OCSP stapling as they become available.

Unfortunately, for web services to authorize a service to terminate TLS for them, they have to trust the service with their private keys, which demands a high level of trust.
For services with a global footprint, there is an additional level of nuance. They may operate multiple data centers located in places with varying levels of physical security, and each of these needs to be trusted to terminate TLS.

To tackle these problems of trust, Cloudflare has invested in two technologies: Keyless SSL, which allows customers to use Cloudflare without sharing their private key with Cloudflare; and Geo Key Manager, which allows customers to choose the datacenters in which Cloudflare should keep their keys. Both of these technologies are able to be deployed without any changes to browsers or other clients. They also come with some downsides in the form of availability and performance degradation.

Keyless SSL introduces extra latency at the start of a connection. In order for a server without access to a private key to establish a connection with a client, that server needs to reach out to a key server, or a remote point of presence, and ask it to do a private key operation. This not only adds additional latency to the connection, causing the content to load slower, but it also introduces some troublesome operational constraints on the customer. Specifically, the server with access to the key needs to be highly available or the connection can fail. Sites often use Cloudflare to improve their site’s availability, so having to run a high-availability key server is an unwelcome requirement.

Turning a pull into a push

The reason services like Keyless SSL that rely on remote keys are so brittle is their architecture: they are pull-based rather than push-based. Every time a client attempts a handshake with a server that doesn’t have the key, it needs to pull the authorization from the key server. An alternative way to build this sort of system is to periodically push a short-lived authorization key to the server and use that for handshakes. Switching from a pull-based model to a push-based model eliminates the additional latency, but it comes with additional requirements, including the need to change the client.

Enter the new TLS feature of Delegated Credentials (DCs). A delegated credential is a short-lived key that the certificate’s owner has delegated for use in TLS. It works like a power of attorney: your server authorizes our server to terminate TLS for a limited time. When a browser that supports this protocol connects to our edge servers we can show it this "power of attorney", instead of needing to reach back to a customer’s server to get it to authorize the TLS connection. This reduces latency and improves performance and reliability.

A fresh delegated credential can be created and pushed out to TLS servers long before the previous credential expires. Momentary blips in availability will not lead to broken handshakes for clients that support delegated credentials. Furthermore, a Delegated Credentials-enabled TLS connection is just as fast as a standard TLS connection: there’s no need to connect to the key server for every handshake. This removes the main drawback of Keyless SSL for DC-enabled clients.

Delegated credentials are intended to be an Internet Standard RFC that anyone can implement and use, not a replacement for Keyless SSL. Since browsers will need to be updated to support the standard, proprietary mechanisms like Keyless SSL and Geo Key Manager will continue to be useful. Delegated credentials aren’t just useful in our context, which is why we’ve developed them openly and with contributions from across industry and academia.
Facebook has integrated them into their own TLS implementation, and you can read more about how they view the security benefits here. When it comes to improving the security of the Internet, we’re all on the same team.

"We believe delegated credentials provide an effective way to boost security by reducing certificate lifetimes without sacrificing reliability. This will soon become an Internet standard and we hope others in the industry adopt delegated credentials to help make the Internet ecosystem more secure."

Subodh Iyengar, software engineer at Facebook

Extensibility beyond the PKI

At Cloudflare, we’re interested in pushing the state of the art forward by experimenting with new algorithms. In TLS, there are three main areas of experimentation: ciphers, key exchange algorithms, and authentication algorithms. Ciphers and key exchange algorithms are only dependent on two parties: the client and the server. This freedom allows us to deploy exciting new choices like ChaCha20-Poly1305 or post-quantum key agreement in lockstep with browsers. On the other hand, the authentication algorithms used in TLS are dependent on certificates, which introduces certificate authorities and the entire public key infrastructure into the mix.

Unfortunately, the public key infrastructure is very conservative in its choice of algorithms, making it harder to adopt newer cryptography for authentication algorithms in TLS. For instance, EdDSA, a highly-regarded signature scheme, is not supported by certificate authorities, and root programs limit the certificates that will be signed. With the emergence of quantum computing, experimenting with new algorithms is essential to determine which solutions are deployable and functional on the Internet.

Since delegated credentials introduce the ability to use new authentication key types without requiring changes to certificates themselves, this opens up a new area of experimentation. Delegated credentials can be used to provide a level of flexibility in the transition to post-quantum cryptography, by enabling new algorithms and modes of operation to coexist with the existing PKI infrastructure. It also enables tiny victories, like the ability to use smaller, faster Ed25519 signatures in TLS.

Inside DCs

A delegated credential contains a public key and an expiry time. This bundle is then signed by a certificate along with the certificate itself, binding the delegated credential to the certificate for which it is acting as "power of attorney". A supporting client indicates its support for delegated credentials by including an extension in its Client Hello.

A server that supports delegated credentials composes the TLS Certificate Verify and Certificate messages as usual, but instead of signing with the certificate’s private key, it includes the certificate along with the DC, and signs with the DC’s private key. Therefore, the private key of the certificate only needs to be used for the signing of the DC.

Certificates used for signing delegated credentials require a special X.509 certificate extension. This requirement exists to avoid breaking assumptions people may have about the impact of temporary access to their keys on security, particularly in cases involving HSMs and the still-unfixed Bleichenbacher oracles in older TLS versions. Temporary access to a key can enable signing lots of delegated credentials which start far in the future, and as a result support was made opt-in. Early versions of QUIC had similar issues, and ended up adopting TLS to fix them.
Protocol evolution on the Internet requires working well with already existing protocols and their flaws.

Delegated Credentials at Cloudflare and Beyond

Currently we use delegated credentials as a performance optimization for Geo Key Manager and Keyless SSL. Customers can update their certificates to include the special extension for delegated credentials, and we will automatically create delegated credentials and distribute them to the edge through Keyless SSL or Geo Key Manager. For more information, see the documentation. This also enables us to be more conservative about where we keep keys for customers, improving our security posture.

Delegated credentials would be useless if they weren’t also supported by browsers and other HTTP clients. Christopher Patton, a former intern at Cloudflare, implemented support in Firefox and its underlying NSS security library. This feature is now in the Nightly versions of Firefox. You can turn it on by activating the configuration option security.tls.enable_delegated_credentials at about:config. Studies are ongoing on how effective this will be in a wider deployment. There is also support for delegated credentials in BoringSSL.

"At Mozilla we welcome ideas that help to make the Web PKI more robust. The Delegated Credentials feature can help to provide secure and performant TLS connections for our users, and we’re happy to work with Cloudflare to help validate this feature."

Thyla van der Merwe, Cryptography Engineering Manager at Mozilla

One open issue is the question of client clock accuracy. Until we have a wide-scale study we won’t know how many connections using delegated credentials will break because of the 24-hour time limit that is imposed. Some clients, in particular mobile clients, may have inaccurately set clocks, the root cause of one third of all certificate errors in Chrome. Part of the way that we’re aiming to solve this problem is through standardizing and improving Roughtime, so web browsers and other services that need to validate certificates can do so independently of the client clock.

Cloudflare’s global scale means that we see connections from every corner of the world, and from many different kinds of connection and device. That reach enables us to find rare problems with the deployability of protocols. For example, our early deployment helped inform the development of the TLS 1.3 standard. As we enable developing protocols like delegated credentials, we learn about obstacles that inform and affect their future development.

Conclusion

As new protocols emerge, we’ll continue to play a role in their development and bring their benefits to our customers. Today’s announcement of a technology that overcomes some limitations of Keyless SSL is just one example of how Cloudflare takes part in improving the Internet not just for our customers, but for everyone. During the standardization process of turning the draft into an RFC, we’ll continue to maintain our implementation and come up with new ways to apply delegated credentials.

Announcing cfnts: Cloudflare’s implementation of NTS in Rust

Post Syndicated from Watson Ladd original https://blog.cloudflare.com/announcing-cfnts/

Several months ago we announced that we were providing a new public time service. Part of what we were providing was the first major deployment of the new Network Time Security (NTS) protocol, with a newly written implementation of NTS in Rust. In the process, we received helpful advice from the NTP community, especially from the NTPsec and Chrony projects.
We’ve also participated in several interoperability events. Now we are returning something to the community: our implementation, cfnts, is now open source and we welcome your pull requests and issues. The journey from a blank source file to a working, deployed service was a lengthy one, and it involved many people across multiple teams.

"Correct time is a necessity for most security protocols in use on the Internet. Despite this, secure time transfer over the Internet has previously required complicated configuration on a case by case basis. With the introduction of NTS, secure time synchronization will finally be available for everyone. It is a small, but important, step towards increasing security in all systems that depend on accurate time. I am happy that Cloudflare are sharing their NTS implementation. A diversity of software with NTS support is important for quick adoption of the new protocol."

Marcus Dansarie, coauthor of the NTS specification

How NTS works

NTS is structured as a suite of two sub-protocols, as shown in the figure below. The first is the Network Time Security Key Exchange (NTS-KE), which is always conducted over Transport Layer Security (TLS) and handles the creation of key material and parameter negotiation for the second protocol. The second is NTPv4, the current version of the NTP protocol, which allows the client to synchronize its time from the remote server.

In order to maintain the scalability of NTPv4, it was important that the server not maintain per-client state. A very small server can serve millions of NTP clients. Maintaining this property while providing security is achieved with cookies that the server provides to the client and that contain the server state.

In the first stage, the client sends a request to the NTS-KE server and gets a response via TLS. This exchange carries out a number of functions:

• Negotiates the AEAD algorithm to be used in the second stage.
• Negotiates the second protocol. Currently, the standard only defines how NTS works with NTPv4.
• Negotiates the NTP server IP address and port.
• Creates cookies for use in the second stage.
• Creates two symmetric keys (C2S and S2C) from the TLS session via exporters.

In the second stage, the client securely synchronizes its clock with the negotiated NTP server. To synchronize securely, the client sends NTPv4 packets with four special extensions:

• Unique Identifier Extension contains a random nonce used to prevent replay attacks.
• NTS Cookie Extension contains one of the cookies that the client stores. Since currently only the client remembers the two AEAD keys (C2S and S2C), the server needs to use the cookie from this extension to extract the keys. Each cookie contains the keys encrypted under a secret key the server has.
• NTS Cookie Placeholder Extension is a signal from the client to request additional cookies from the server. This extension is needed to make sure that the response is not much longer than the request, to prevent amplification attacks.
• NTS Authenticator and Encrypted Extension Fields Extension contains a ciphertext from the AEAD algorithm with C2S as the key and with the NTP header, timestamps, and all the previously mentioned extensions as associated data. Other possible extensions can be included as encrypted data within this field. Without this extension, the timestamp could be spoofed.
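Before moving on to the server’s response, here is a rough sketch of the first stage on the wire: NTS-KE messages are a sequence of type-length-value records sent over the TLS channel. The record types and AEAD identifier below are taken from the draft as we understand it (the numbering later settled in RFC 8915), so treat the specific numbers as assumptions; the TLS-exporter step that derives C2S and S2C is omitted.

.. code:: python

    import struct

    CRITICAL = 0x8000  # high bit marks a record the peer must understand

    def record(rec_type, body=b""):
        # NTS-KE record: 2-byte type, 2-byte body length, then the body.
        return struct.pack("!HH", rec_type, len(body)) + body

    # Client request: negotiate NTPv4 as the next protocol and
    # AES-SIV-CMAC-256 as the AEAD, then end the message.
    request = (
        record(CRITICAL | 1, struct.pack("!H", 0))   # next protocol: NTPv4
        + record(4, struct.pack("!H", 15))           # AEAD: AES-SIV-CMAC-256
        + record(CRITICAL | 0)                       # end of message
    )
    # The request is written over the TLS connection to the NTS-KE server;
    # the response carries the cookies and, optionally, the NTP server
    # address and port to use in the second stage.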
After getting a request, the server sends a response back to the client echoing the Unique Identifier Extension to prevent replay attacks, the NTS Cookie Extension to provide the client with more cookies, and the NTS Authenticator and Encrypted Extension Fields Extension with an AEAD ciphertext using S2C as the key. In the server response, however, instead of sending the NTS Cookie Extension in plaintext, it needs to be encrypted with the AEAD to provide unlinkability of the NTP requests.

This second stage can be repeated many times without going back to the first stage, since each request and response gives the client a new cookie. The expensive public key operations in TLS are thus amortized over a large number of requests. Furthermore, specialized timekeeping devices like FPGA implementations only need to implement a few symmetric cryptographic functions and can delegate the complex TLS stack to a different device.

Why Rust?

While many of our services are written in Go, and we have considerable experience on the Crypto team with Go, a garbage collection pause in the middle of responding to an NTP packet would negatively impact accuracy. We picked Rust because of its lack of runtime overhead and its useful features.

• Memory safety. After Heartbleed, Cloudbleed, and the steady drip of vulnerabilities caused by C’s lack of memory safety, it’s clear that C is not a good choice for new software dealing with untrusted inputs. The obvious route to memory safety is garbage collection, but garbage collection imposes a substantial runtime overhead; Rust provides memory safety with much less.
• Non-nullability. Null pointers are an edge case that is frequently not handled properly. Rust explicitly marks optionality, so all references in Rust can be safely dereferenced. The type system ensures that option types are properly handled.
• Thread safety. Data-race prevention is another key feature of Rust. Rust’s ownership model ensures that all cross-thread accesses are synchronized by default. While not a panacea, this eliminates a major class of bugs.
• Immutability. Separating types into mutable and immutable is very important for reducing bugs. For example, in Java, when you pass an object into a function as a parameter, after the function is finished you will never know whether the object has been mutated or not. Rust allows you to pass the object reference into the function and still be assured that the object is not mutated.
• Error handling. Rust result types help ensure that operations that can produce errors are identified and a choice is made about each error, even if that choice is passing it on.

While Rust provides safety with zero overhead, coding in Rust involves understanding linear types and, for us, a new language. In this case the importance of security and performance meant we chose Rust over a potentially easier task in Go.

Dependencies we use

Because of our scale and for DDoS protection we needed a highly scalable server. For UDP protocols without the concept of a connection, the server can respond to one packet at a time easily, but for TCP this is more complex. Originally we thought about using Tokio. However, at the time Tokio suffered from scheduler problems that had caused other teams some issues. As a result we decided to use Mio directly, basing our work on the examples in Rustls. We decided to use Rustls over OpenSSL or BoringSSL because of the crate’s consistent error codes and default support for authentication that is difficult to disable accidentally.
While there are some features that are not yet supported, it got the job done for our service.

Other engineering choices

More important than our choice of programming language was our implementation strategy. A working, fully featured NTP implementation is a complicated program involving a phase-locked loop. These have a difficult reputation due to their nonlinear nature, beyond the usual complexities of closed-loop control. The response of a phase-locked loop to a disturbance can be estimated if the loop is locked and the disturbance small. However, lock acquisition, large disturbances, and the necessary filtering in NTP are all hard to analyze mathematically, since they are not captured in the linear models applied for small-scale analysis. While NTP works with the total phase, unlike the phase-locked loops of electrical engineering, there are still nonlinear elements. For NTP, testing changes to this loop requires weeks of operation to determine the performance, as the loop responds very slowly. Computer clocks are generally accurate over short periods, while networks are plagued with inconsistent delays. This demands a slow response. Changes we make to our service have taken hours to have an effect, as the clients slowly adapt to the new conditions.

While RFC 5905 provides lots of details on an algorithm to adjust the clock, later implementations such as chrony have improved upon it with much more sophisticated nonlinear filters. Rather than implement these more sophisticated algorithms, we let chrony adjust the clock of our servers, copy the state variables in the header from chrony, and adjust the dispersion and root delay according to the formulas given in the RFC. This strategy let us focus on the new protocols.

Prague

Part of what the Internet Engineering Task Force (IETF) does is organize events like hackathons where implementers of a new standard can get together and try to make their stuff work with one another. This exposes bugs and infelicities of language in the standard and the implementations. We attended the IETF 104 hackathon to develop our server and make it work with other implementations. The NTP working group members were extremely generous with their time, and during the process we uncovered a few issues relating to the exact way one has to handle ALPN with older OpenSSL versions.

At IETF 104 in Prague we had a working client and server for NTS-KE by the end of the hackathon. This was a good amount of progress considering we started with nothing. However, without implementing NTP we didn’t actually know that our server and client were computing the right thing. That would have to wait for later rounds of testing.

Crypto Week

As Crypto Week 2019 approached we were busily writing code. All of the NTP protocol had to be implemented, together with the connection between the NTP and NTS-KE parts of the server. We also had to deploy processes to synchronize the ticket-encrypting keys around the world and work on reconfiguring our own timing infrastructure to support this new service.

With a few weeks to go we had a working implementation, but we needed servers and clients out there to test with. Because we only support TLS 1.3 on the server, which had only just landed in OpenSSL, there were some compatibility problems. We ended up compiling a chrony branch with NTS support and NTPsec ourselves and testing against time.cloudflare.com.
We also tested our client against test servers set up by the chrony and NTPsec projects, in the hope that this would expose bugs and help our implementations work nicely together. After a few lengthy days of debugging, we found out that our nonce length wasn’t exactly in accordance with the spec, which was quickly fixed. The NTPsec project was extremely helpful in this effort. Of course, this was the day that our office had a blackout, so the testing happened outside in Yerba Buena Gardens.

During the deployment of time.cloudflare.com, we had to open up our firewall to incoming NTP packets. Because of NTP reflection attacks, we had kept UDP port 123 closed on the router since the start of Cloudflare’s network. Since source port 123 is also sometimes used by clients to send NTP packets, it’s impossible for NTP servers to filter reflection attacks without parsing the contents of the NTP packet, which routers have difficulty doing. In order to protect Cloudflare infrastructure we got an entire subnet just for the time service, so it could be aggressively throttled and rerouted in case of massive DDoS attacks. This is an exceptional case: most edge services at Cloudflare run on every available IP.

Bug fixes

Shortly after the public launch, we discovered that older Windows versions shipped with NTP version 3, and our server only spoke version 4. This was easy to fix since the timestamps have not moved between NTP versions: we echo the version back, and most remaining NTP version 3 clients will understand what we meant.

Also tricky was the failure of Network Time Foundation ntpd clients to expand the polling interval. It turns out that one has to echo back the client’s polling interval to have the polling interval expand. Chrony does not use the polling interval from the server, and so was not affected by this incompatibility. Both of these issues were fixed in ways suggested by other NTP implementers who had run into these problems themselves. We thank Miroslav Lichter tremendously for telling us exactly what the problem was, and the members of the Cloudflare community who posted packet captures demonstrating these issues.

Continued improvement

The original production version of cfnts was not particularly object oriented, and several contributors were just learning Rust. As a result there was quite a bit of unwrap and unnecessary mutability flying around. Much of the code lived in free functions even when it could profitably be attached to structures. All of this had to be restructured. Keep in mind that some of the best code running in the real world has been written, rewritten, and sometimes rewritten again! This is actually a good thing.

As an internal project we relied on Cloudflare’s internal tooling for building, testing, and deploying code. These were replaced with tools available to everyone, like Docker, to ensure anyone can contribute. Our repository is integrated with Circle CI, ensuring that all contributions are automatically tested. In addition to unit tests, we test the entire end-to-end functionality of getting a measurement of the time from a server.

The Future

NTPsec has already released support for NTS but we see very little usage. Please try turning on NTS if you use NTPsec and see how it works with time.cloudflare.com. As the draft advances through the standards process, the protocol will undergo an incompatible change when the identifiers are updated and assigned out of the IANA registry instead of being experimental ones, so this is very much an experiment.
Note that your daemon will need TLS 1.3 support, and so could require manually compiling OpenSSL and then linking against it. We’ve also added our time service to the public NTP pool. The NTP pool is a widely used, volunteer-maintained service that provides NTP servers geographically spread across the world. Unfortunately, NTS doesn’t currently work well with the pool model, so for the best security we recommend enabling NTS and using time.cloudflare.com and other NTS-supporting servers.

In the future, we’re hoping that more clients support NTS, and we have licensed our code liberally to enable this. We would love to hear if you incorporate it into a product and welcome contributions to make it more useful. We’re also encouraged to see that Netnod has a production NTS service at nts.ntp.se. The more time services and clients that adopt NTS, the more secure the Internet will be.

Acknowledgements

Tanya Verma and Gabbi Fisher were major contributors to the code, especially the configuration system and the client code. We’d also like to thank Gary Miller, Miroslav Lichter, and all the people at Cloudflare who set up their laptops and home machines to point to time.cloudflare.com for early feedback.

The TLS Post-Quantum Experiment

Post Syndicated from Kris Kwiatkowski original https://blog.cloudflare.com/the-tls-post-quantum-experiment/

In June, we announced a wide-scale post-quantum experiment with Google. We implemented two post-quantum (i.e., not yet known to be broken by quantum computers) key exchanges, integrated them into our TLS stack, and deployed the implementation on our edge servers and in Chrome Canary clients. The goal of the experiment was to evaluate the performance and feasibility of deployment in TLS of two post-quantum key agreement ciphers.

In our previous blog post on post-quantum cryptography, we described the differences between those two ciphers in detail. In case you didn’t have a chance to read it, we include a quick recap here. One characteristic of post-quantum key exchange algorithms is that the public keys are much larger than those used by “classical” algorithms. This will have an impact on the duration of the TLS handshake. For our experiment, we chose two algorithms: isogeny-based SIKE and lattice-based HRSS. The former has short key sizes (~330 bytes) but a high computational cost; the latter has larger key sizes (~1100 bytes), but is a few orders of magnitude faster.

During NIST’s Second PQC Standardization Conference, Nick Sullivan presented our approach to this experiment and some initial results. Quite accurately, he compared NTRU-HRSS to an ostrich and SIKE to a turkey—one is big and fast and the other is small and slow.

Setup & Execution

We based our experiment on TLS 1.3. Cloudflare operated the server-side TLS connections and Google Chrome (Canary and Dev builds) represented the client side of the experiment. We enabled both CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE/p434 + X25519) key-agreement algorithms on all TLS-terminating edge servers. Since the post-quantum algorithms are considered experimental, the X25519 key exchange serves as a fallback to ensure the classical security of the connection.

Clients participating in the experiment were split into three groups—those who initiated TLS handshakes with post-quantum CECPQ2, CECPQ2b, or non-post-quantum X25519 public keys. Each group represented approximately one third of the Chrome Canary population participating in the experiment.
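The hybrid construction itself is simple to sketch: the classical and post-quantum key agreements run independently, and their shared secrets are combined before entering the TLS key schedule, so the connection remains secure as long as either component holds. In the sketch below only the X25519 half uses a real API (pyca/cryptography); the pq_kem object is a hypothetical placeholder standing in for HRSS or SIKE.

.. code:: python

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
    )

    def client_key_share(pq_kem):
        # Classical half: X25519, the fallback guaranteeing classical security.
        x25519_priv = X25519PrivateKey.generate()
        # Post-quantum half: HRSS for CECPQ2, SIKE/p434 for CECPQ2b.
        pq_pub, pq_priv = pq_kem.keygen()  # hypothetical placeholder
        return (x25519_priv, pq_priv), (x25519_priv.public_key(), pq_pub)

    def client_shared_secret(private, server_share, pq_kem):
        x25519_priv, pq_priv = private
        server_x25519_pub, pq_ciphertext = server_share
        classical = x25519_priv.exchange(server_x25519_pub)
        post_quantum = pq_kem.decapsulate(pq_priv, pq_ciphertext)  # placeholder
        # Hash both secrets together: an attacker must break both X25519
        # and the post-quantum scheme to recover the session secret.
        h = hashes.Hash(hashes.SHA256())
        h.update(classical + post_quantum)
        return h.finalize()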
In order to distinguish between clients participating in or excluded from the experiment, we added a custom extension to the TLS handshake. It worked as a simple flag sent by clients and echoed back by Cloudflare edge servers. This allowed us to measure the duration of TLS handshakes only for clients participating in the experiment.

For each connection, we collected telemetry metrics. The most important metric was the server-side TLS handshake duration, defined as the time between receiving the Client Hello and Client Finished messages. The diagram below shows details of what was measured and how post-quantum key exchange was integrated with TLS 1.3.

The experiment ran for 53 days in total, between August and October. During this time we collected millions of data samples, representing 5% of (anonymized) TLS connections that contained the extension signaling that the client was part of the experiment. We carried out the experiment in two phases.

In the first phase of the experiment, each client was assigned to use one of the three key exchange groups, and each client offered the same key exchange group for every connection. We collected over 10 million records over 40 days.

In the second phase of the experiment, client behavior was modified so that each client randomly chose which key exchange group to offer for each new connection, allowing us to directly compare the performance of each algorithm on a per-client basis. Data collection for this phase lasted 13 days and we collected 270 thousand records.

Results

We now describe our server-side measurement results. Client-side results are described at https://www.imperialviolet.org/2019/10/30/pqsivssl.html.

What did we find?

The primary metric we collected for each connection was the server-side handshake duration. The histograms below show handshake duration timings for all client measurements gathered in the first phase of the experiment, as well as breakdowns into the top five operating systems. The operating system breakdowns shown are restricted to desktop/laptop devices, except for Android, which consists only of mobile devices.

It’s clear from these plots that for most clients, CECPQ2b performs worse than CECPQ2 and CONTROL. Thus, the small key size of CECPQ2b does not make up for its large computational cost—the ostrich outpaces the turkey.

Digging a little deeper

This means we’re done, right? Not quite. We are interested in determining whether there are any populations of TLS clients for which CECPQ2b consistently outperforms CECPQ2. This requires taking a closer look at the long tail of handshake durations. The plots below show cumulative distribution functions (CDFs) of handshake timings zoomed in on the 80th percentile (i.e., showing the slowest 20% of handshakes).

Here, we start to see something interesting. For Android, Linux, and Windows devices, there is a crossover point where CECPQ2b actually starts to outperform CECPQ2 (Android: ~94th percentile, Linux: ~92nd percentile, Windows: ~95th percentile). macOS and ChromeOS do not appear to have these crossover points.

These effects are small but statistically significant in some cases. The table below shows approximate 95% confidence intervals for the 50th (median), 95th, and 99th percentiles of handshake durations for each key exchange group and device type, calculated using Maritz-Jarrett estimators.
The numbers within square brackets give the lower and upper bounds on our estimates for each percentile of the “true” distribution of handshake durations, based on the samples collected in the experiment. For example, with a 95% confidence level we can say that the 99th percentile of handshake durations for CECPQ2 on Android devices lies between 4057 ms and 4478 ms, while the 99th percentile for CECPQ2b lies between 3276 ms and 3646 ms. Since the intervals do not overlap, we say that, with statistical significance, the experiment indicates that CECPQ2b performs better than CECPQ2 for the slowest 1% of Android connections. Configurations where CECPQ2 or CECPQ2b outperforms the other with statistical significance are marked in green in the table.

Per-client comparison

The second phase of the experiment directly examined the performance of each key exchange algorithm for individual clients, where a client is defined to be a unique (anonymized) IP address and user agent pair. Instead of choosing a single key exchange algorithm for the duration of the experiment, clients randomly selected one of the experiment configurations for each new connection. Although the duration and sample size were limited for this phase of the experiment, we collected at least three handshake measurements for each group configuration from 3900 unique clients.

The plot below shows, for each of these clients, the difference in latency between CECPQ2 and CECPQ2b, taking the minimum latency sample for each key exchange group as the representative value. The CDF plot shows that for 80% of clients, CECPQ2 outperformed or matched CECPQ2b, and for 99% of clients, the latency gap remained within 70 ms. At a high level, this indicates that very few clients performed significantly worse with CECPQ2 than with CECPQ2b.

Do other factors impact the latency gap?

We looked at a number of other factors—including session resumption, IP version, and network location—to see if they impacted the latency gap between CECPQ2 and CECPQ2b. These factors impacted the overall handshake latency, but we did not find that any of them made a significant impact on the latency gap between the post-quantum ciphers. We share some interesting observations from this analysis below.

Session resumption

Approximately 53% of all connections in the experiment were completed with TLS handshake resumption. However, the percentage of resumed connections varied significantly based on the device configuration. Connections from mobile devices were only resumed ~25% of the time, while between 40% and 70% of connections from laptop/desktop devices were resumed. Additionally, resumption provided between a 30% and 50% speedup for all device types.

IP version

We also examined the impact of IP version on handshake latency. Only 12.5% of the connections in the experiment used IPv6. These connections were 20-40% faster than IPv4 connections for desktop/laptop devices, but ~15% slower for mobile devices. This could be an artifact of IPv6 being generally deployed on newer devices with faster processors. For Android, the experiment was only run on devices with more modern processors, which perhaps eliminated the bias.

Network location

The slow connections making up the long tail of handshake durations were not isolated to a few countries, Autonomous Systems (ASes), or subnets, but originated from a globally diverse set of clients. We did not find a correlation between the relative performance of the two post-quantum key exchange algorithms and these factors.
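For readers who want to reproduce this kind of percentile analysis on their own measurements, SciPy ships a Maritz-Jarrett estimator of the standard error of a quantile. A minimal sketch, assuming handshake durations in milliseconds; whether this matches the exact procedure used in the experiment is our assumption.

.. code:: python

    import numpy as np
    from scipy.stats.mstats import mjci, mquantiles

    def percentile_ci(samples, prob=(0.5, 0.95, 0.99), z=1.96):
        # Point estimates for the selected percentiles.
        estimates = mquantiles(samples, prob=prob)
        # Maritz-Jarrett estimate of each quantile's standard error.
        std_errs = mjci(samples, prob=prob)
        return [(p, est - z * se, est + z * se)
                for p, est, se in zip(prob, estimates, std_errs)]

    # Synthetic stand-in for per-connection handshake times (ms).
    durations = np.random.lognormal(mean=5.0, sigma=0.5, size=10000)
    for p, lo, hi in percentile_ci(durations):
        print(f"{int(p * 100)}th percentile: [{lo:.0f}ms, {hi:.0f}ms]")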
Discussion

We found that CECPQ2 (the ostrich) outperformed CECPQ2b (the turkey) for the majority of connections in the experiment, indicating that fast algorithms with large keys may be more suitable for TLS than slow algorithms with small keys. However, we observed the opposite—that CECPQ2b outperformed CECPQ2—for the slowest connections on some devices, including Windows computers and Android mobile devices.

One possible explanation for this is packet fragmentation and packet loss. The maximum size of TCP packets that can be sent across a network is limited by the maximum transmission unit (MTU) of the network path, which is often ~1400 bytes. During the TLS handshake, the server responds to the client with its public key and ciphertext, whose combined size exceeds the MTU, so handshake messages likely must be split across multiple TCP packets. This increases the risk of lost packets and delays due to retransmission. A repeat of this experiment that includes collection of fine-grained TCP telemetry could confirm this hypothesis.

A somewhat surprising result of this experiment is just how fast HRSS performs for the majority of connections. Recall that the CECPQ2 cipher performs key exchange operations for both X25519 and HRSS, yet the additional overhead of HRSS is barely noticeable. Comparing benchmark results, we can see that HRSS will be faster than X25519 on the server side and slower on the client side. In our design, the client side performs two operations—key generation and KEM decapsulation. Looking at those two operations, we can see that key generation is the bottleneck here:

Key generation: 3553.5 [ops/sec]
KEM decapsulation: 17186.7 [ops/sec]

In algorithms with quotient-style keys (like NTRU), the key generation algorithm performs an inversion in the quotient ring—an operation that is quite computationally expensive. Alternatively, a TLS implementation could generate ephemeral keys ahead of time in order to speed up key exchange. There are several other lattice-based key exchange candidates that may be worth experimenting with in the context of TLS key exchange, which are based on different underlying principles than the HRSS construction. These candidates have similar key sizes and faster key generation algorithms, but come with their own drawbacks. For now, HRSS looks like the more promising algorithm for use in TLS.

In the case of SIKE, we implemented the most recent version of the algorithm and instantiated it with the most performance-efficient parameter set for our experiment. The algorithm is computationally expensive, so we had to use assembly to optimize it. In order to ensure the best performance on Intel, most performance-critical operations have two different implementations; the library detects CPU capabilities and uses faster instructions if available, but otherwise falls back to a slightly slower generic implementation. We developed our own optimizations for 64-bit ARM CPUs. Nevertheless, our results show that SIKE incurred a significant overhead for every connection, especially on devices with weaker processors. It must be noted that high-performance isogeny-based public key cryptography is arguably much less developed than its lattice-based counterparts. Some ideas to develop it are floating around, and we hope to see performance improvements in the future.

DNS Encryption Explained

Post Syndicated from Peter Wu original https://blog.cloudflare.com/dns-encryption-explained/

The Domain Name System (DNS) is the address book of the Internet.
When you visit cloudflare.com or any other site, your browser will ask a DNS resolver for the IP address where the website can be found. Unfortunately, these DNS queries and answers are typically unprotected. Encrypting DNS would improve user privacy and security. In this post, we will look at two mechanisms for encrypting DNS, known as DNS over TLS (DoT) and DNS over HTTPS (DoH), and explain how they work.

Applications that want to resolve a domain name to an IP address typically use DNS. This is usually not done explicitly by the programmer who wrote the application. Instead, the programmer writes something such as fetch("https://example.com/news") and expects a software library to handle the translation of “example.com” to an IP address. Behind the scenes, the software library is responsible for discovering and connecting to the external recursive DNS resolver and speaking the DNS protocol (see the figure below) in order to resolve the name requested by the application. The choice of the external DNS resolver, and whether any privacy and security is provided at all, is outside the control of the application. It depends on the software library in use and the policies provided by the operating system of the device that runs the software.

The external DNS resolver

The operating system usually learns the resolver address from the local network using the Dynamic Host Configuration Protocol (DHCP). In home and mobile networks, it typically ends up using the resolver from the Internet Service Provider (ISP). In corporate networks, the selected resolver is typically controlled by the network administrator. If desired, users with control over their devices can override the resolver with a specific address, such as the address of a public resolver like Google’s 8.8.8.8 or Cloudflare’s 1.1.1.1, but most users will likely not bother changing it when connecting to a public Wi-Fi hotspot at a coffee shop or airport.

The choice of external resolver has a direct impact on the end-user experience. Most users do not change their resolver settings and will likely end up using the DNS resolver from their network provider. The most obvious observable property is the speed and accuracy of name resolution. Features that improve privacy or security might not be immediately visible, but will help to prevent others from profiling or interfering with your browsing activity. This is especially important on public Wi-Fi networks where anyone in physical proximity can capture and decrypt wireless network traffic.

Unencrypted DNS

Ever since DNS was created in 1987, it has been largely unencrypted. Everyone between your device and the resolver is able to snoop on or even modify your DNS queries and responses. This includes anyone on your local Wi-Fi network, your Internet Service Provider (ISP), and transit providers. This may affect your privacy by revealing the domain names that you are visiting.

What can they see? Well, consider this network packet capture taken from a laptop connected to a home network. The following observations can be made:

• The UDP source port is 53, which is the standard port number for unencrypted DNS. The UDP payload is therefore likely to be a DNS answer.
• That suggests that the source IP address 192.168.2.254 is a DNS resolver while the destination IP 192.168.2.14 is the DNS client.
• The UDP payload could indeed be parsed as a DNS answer, and reveals that the user was trying to visit twitter.com.
• If there are any future connections to 104.244.42.129 or 104.244.42.1, then it is most likely traffic directed at “twitter.com”.
• If there is further encrypted HTTPS traffic to this IP, followed by more DNS queries, it could indicate that a web browser loaded additional resources from that page. That could potentially reveal the pages that a user was looking at while visiting twitter.com.

Since the DNS messages are unprotected, other attacks are possible:

• Queries could be directed to a resolver that performs DNS hijacking. For example, in the UK, Virgin Media and BT return a fake response for domains that do not exist, redirecting users to a search page. This redirection is possible because the computer/phone blindly trusts the DNS resolver that was advertised using DHCP by the ISP-provided gateway router.
• Firewalls can easily intercept, block, or modify any unencrypted DNS traffic based on the port number alone. It is worth noting that plaintext inspection is not a silver bullet for achieving visibility goals, because the DNS resolver can be bypassed.

Encrypting DNS

Encrypting DNS makes it much harder for snoopers to look into your DNS messages, or to corrupt them in transit. Just as the web moved from unencrypted HTTP to encrypted HTTPS, there are now upgrades to the DNS protocol that encrypt DNS itself. Encrypting the web has made it possible for private and secure communications and commerce to flourish. Encrypting DNS will further enhance user privacy.

Two standardized mechanisms exist to secure the DNS transport between you and the resolver: DNS over TLS (2016) and DNS Queries over HTTPS (2018). Both are based on Transport Layer Security (TLS), which is also used to secure communication between you and a website using HTTPS. In TLS, the server (be it a web server or DNS resolver) authenticates itself to the client (your device) using a certificate. This ensures that no other party can impersonate the server (the resolver).

With DNS over TLS (DoT), the original DNS message is directly embedded into the secure TLS channel. From the outside, one can neither learn the name that was being queried nor modify it; only the intended client application will be able to decrypt it. A packet capture looks like this:

In the packet trace for unencrypted DNS, it was clear that a DNS request can be sent directly by the client, followed by a DNS answer from the resolver. In the encrypted DoT case, however, some TLS handshake messages are exchanged prior to sending encrypted DNS messages:

• The client sends a Client Hello, advertising its supported TLS capabilities.
• The server responds with a Server Hello, agreeing on the TLS parameters that will be used to secure the connection. The Certificate message contains the identity of the server, while the Certificate Verify message contains a digital signature which can be verified by the client using the server Certificate. The client typically checks this certificate against its local list of trusted Certificate Authorities, but the DoT specification mentions alternative trust mechanisms such as public key pinning.
• Once the TLS handshake is Finished by both the client and server, they can finally start exchanging encrypted messages.
• While the above picture contains one DNS query and answer, in practice the secure TLS connection will remain open and will be reused for future DNS queries.
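As a concrete illustration of DoT, here is a minimal client using dnspython to build the query and Python’s standard ssl module for the secure channel. DoT reuses the DNS-over-TCP framing, so each message is prefixed with a 2-byte length; note that a robust client would also loop until the full response is read.

.. code:: python

    import socket
    import ssl
    import struct

    import dns.message

    def dot_query(name, server="1.1.1.1", port=853):
        query = dns.message.make_query(name, "A").to_wire()
        context = ssl.create_default_context()  # validates the certificate
        with socket.create_connection((server, port)) as sock:
            with context.wrap_socket(sock, server_hostname=server) as tls:
                # 2-byte length prefix, exactly as in DNS over TCP.
                tls.sendall(struct.pack("!H", len(query)) + query)
                (length,) = struct.unpack("!H", tls.recv(2))
                return dns.message.from_wire(tls.recv(length))

    print(dot_query("cloudflare.com"))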
Securing unencrypted protocols by slapping TLS on top of a new port has been done before:

• Web traffic: HTTP (tcp/80) -> HTTPS (tcp/443)
• Sending email: SMTP (tcp/25) -> SMTPS (tcp/465)
• Receiving email: IMAP (tcp/143) -> IMAPS (tcp/993)
• Now: DNS (tcp/53 or udp/53) -> DoT (tcp/853)

A problem with introducing a new port is that existing firewalls may block it, either because they employ an allowlist approach where new services have to be explicitly enabled, or a blocklist approach where a network administrator explicitly blocks a service. If the secure option (DoT) is less likely to be available than the insecure one, then users and applications might be tempted to fall back to unencrypted DNS. This could subsequently allow attackers to force users onto an insecure version. Such fallback attacks are not theoretical: SSL stripping has previously been used to downgrade HTTPS websites to HTTP, allowing attackers to steal passwords or hijack accounts.

Another approach, DNS Queries over HTTPS (DoH), was designed to support two primary use cases:

• Prevent the above problem where on-path devices interfere with DNS. This includes the port blocking problem above.
• Enable web applications to access DNS through existing browser APIs.

DoH is essentially HTTPS, the same encrypted standard the web uses, and reuses the same port number (tcp/443). Web browsers have already deprecated non-secure HTTP in favor of HTTPS. That makes HTTPS a great choice for securely transporting DNS messages. An example of such a DoH request can be found here.

Some users have been concerned that the use of HTTPS could weaken privacy due to the potential use of cookies for tracking purposes. The DoH protocol designers considered various privacy aspects and explicitly discouraged the use of HTTP cookies to prevent tracking, a recommendation that is widely respected. TLS session resumption improves TLS 1.2 handshake performance, but can potentially be used to correlate TLS connections. Luckily, use of TLS 1.3 obviates the need for TLS session resumption by reducing the number of round trips by default, effectively addressing its associated privacy concern.

Using HTTPS means that HTTP protocol improvements can also benefit DoH. For example, the in-development HTTP/3 protocol, built on top of QUIC, could offer additional performance improvements in the presence of packet loss due to its lack of head-of-line blocking. This means that multiple DNS queries could be sent simultaneously over the secure channel without blocking each other when one packet is lost. A draft for DNS over QUIC (DNS/QUIC) also exists and is similar to DoT, but without the head-of-line blocking problem, thanks to the use of QUIC. Both HTTP/3 and DNS/QUIC, however, require a UDP port to be accessible. In theory, both could fall back to DoH over HTTP/2 and DoT, respectively.

Deployment of DoT and DoH

As both DoT and DoH are relatively new, they are not universally deployed yet. On the server side, major public resolvers, including Cloudflare’s 1.1.1.1 and Google DNS, support them. Many ISP resolvers, however, still lack support. A small list of public resolvers supporting DoH can be found at DNS server sources; another list of public resolvers supporting DoT and DoH can be found at DNS Privacy Public Resolvers.

There are two methods to enable DoT or DoH on end-user devices:

• Add support to applications, bypassing the resolver service from the operating system.
• Add support to the operating system, transparently providing support to applications.
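Before looking at client configuration, it is worth seeing how small the difference is on the wire: a DoH query is simply an HTTPS POST (or GET, with a base64url-encoded dns parameter) carrying the same DNS wire format used above. A minimal sketch against Cloudflare’s public resolver endpoint:

.. code:: python

    import dns.message
    import requests

    def doh_query(name, url="https://cloudflare-dns.com/dns-query"):
        query = dns.message.make_query(name, "A")
        response = requests.post(
            url,
            data=query.to_wire(),
            headers={"Content-Type": "application/dns-message",
                     "Accept": "application/dns-message"},
        )
        response.raise_for_status()
        return dns.message.from_wire(response.content)

    # Travels over tcp/443 like any other HTTPS traffic, so it cannot
    # be singled out by port number alone.
    print(doh_query("cloudflare.com"))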
There are generally three configuration modes for DoT or DoH on the client side:

• Off: DNS will not be encrypted.
• Opportunistic mode: try to use a secure transport for DNS, but fall back to unencrypted DNS if the former is unavailable. This mode is vulnerable to downgrade attacks where an attacker can force a device to use unencrypted DNS. It aims to offer privacy when there are no on-path active attackers.
• Strict mode: try to use DNS over a secure transport. If unavailable, fail hard and show an error to the user.

The current state of system-wide configuration of DNS over a secure transport:

• Android 9: supports DoT through its “Private DNS” feature. Modes:
  • Opportunistic mode (“Automatic”) is used by default. The resolver from network settings (typically DHCP) will be used.
  • Strict mode can be configured by setting an explicit hostname. No IP address is allowed; the hostname is resolved using the default resolver and is also used for validating the certificate. (Relevant source code)
• iOS and Android users can also install the 1.1.1.1 app to enable either DoH or DoT support in strict mode. Internally it uses the VPN programming interfaces to enable interception of unencrypted DNS traffic before it is forwarded over a secure channel.
• Linux with systemd-resolved from systemd 239: DoT through the DNSOverTLS option.
  • Off is the default.
  • Opportunistic mode can be configured, but no certificate validation is performed.
  • Strict mode is available since systemd 243. Any certificate signed by a trusted certificate authority is accepted. However, there is no hostname validation with the GnuTLS backend, while the OpenSSL backend expects an IP address.
  • In any case, no Server Name Indication (SNI) is sent. The certificate name is not validated, making a man-in-the-middle attack rather trivial.
• Linux, macOS, and Windows can use a DoH client in strict mode. The cloudflared proxy-dns command uses the Cloudflare DNS resolver by default, but users can override it through the proxy-dns-upstream option.

Web browsers support DoH instead of DoT:

• Firefox 62 supports DoH and provides several Trusted Recursive Resolver (TRR) settings. By default DoH is disabled, but Mozilla is running an experiment to enable DoH for some users in the USA. This experiment currently uses Cloudflare’s 1.1.1.1 resolver, since we are the only provider that currently satisfies the strict resolver policy required by Mozilla. Since many DNS resolvers still do not support an encrypted DNS transport, Mozilla’s approach will ensure that more users are protected using DoH.
  • When enabled through the experiment, or through the “Enable DNS over HTTPS” option at Network Settings, Firefox uses opportunistic mode (network.trr.mode=2 at about:config).
  • Strict mode can be enabled with network.trr.mode=3, but requires an explicit resolver IP to be specified (for example, network.trr.bootstrapAddress=1.1.1.1).
  • While Firefox ignores the default resolver from the system, it can be configured with alternative resolvers. Additionally, enterprise deployments that use a resolver that does not support DoH have the option to disable DoH.
• Chrome 78 enables opportunistic DoH if the system resolver address matches one of the hard-coded DoH providers (source code change). This experiment is enabled for all platforms except Linux and iOS, and excludes enterprise deployments by default.
• Opera 65 adds an option to enable DoH through Cloudflare’s 1.1.1.1 resolver. This feature is off by default.
Once enabled, it appears to use opportunistic mode: if 1.1.1.1:443 (without SNI) is reachable, it will be used. Otherwise it falls back to the default resolver, unencrypted. The DNS over HTTPS page from the curl project has a comprehensive list of DoH providers and additional implementations.

As an alternative to encrypting the full network path between the device and the external DNS resolver, one can take a middle ground: use unencrypted DNS between devices and the gateway of the local network, but encrypt all DNS traffic between the gateway router and the external DNS resolver. Assuming a secure wired or wireless network, this would protect all devices in the local network against a snooping ISP, or other adversaries on the Internet. As public Wi-Fi hotspots are not considered secure, this approach would not be safe on open Wi-Fi networks. Even if the network is password-protected with WPA2-PSK, others will still be able to snoop on and modify unencrypted DNS.

Other security considerations

The previous sections described secure DNS transports, DoH and DoT. These will only ensure that your client receives the untampered answer from the DNS resolver. They do not, however, protect the client against the resolver returning the wrong answer (through DNS hijacking or DNS cache poisoning attacks). The “true” answer is determined by the owner of a domain or zone as reported by the authoritative name server. DNSSEC allows clients to verify the integrity of the returned DNS answer and catch any unauthorized tampering along the path between the client and the authoritative name server. However, deployment of DNSSEC is hindered by middleboxes that incorrectly forward DNS messages, and even when the information is available, stub resolvers used by applications might not even validate the results. A report from 2016 found that only 26% of users use DNSSEC-validating resolvers.

DoH and DoT protect the transport between the client and the public resolver. The public resolver may have to reach out to additional authoritative name servers in order to resolve a name. Traditionally, the path between any resolver and the authoritative name server uses unencrypted DNS. To protect these DNS messages as well, we did an experiment with Facebook, using DoT between 1.1.1.1 and Facebook’s authoritative name servers. While setting up a secure channel using TLS increases latency, it can be amortized over many queries.

Transport encryption ensures that resolver results and metadata are protected. For example, the EDNS Client Subnet (ECS) information included with DNS queries could reveal the original client address that started the DNS query. Hiding that information along the path improves privacy. It will also prevent broken middleboxes from breaking DNSSEC due to issues in forwarding DNS.

Operational issues with DNS encryption

DNS encryption may bring challenges to individuals or organizations that rely on monitoring or modifying DNS traffic. Security appliances that rely on passive monitoring watch all incoming and outgoing network traffic on a machine or on the edge of a network. Based on unencrypted DNS queries, they could potentially identify machines that are infected with malware, for example. If the DNS query is encrypted, then passive monitoring solutions will not be able to monitor domain names.

Some parties expect DNS resolvers to apply content filtering for purposes such as:

• Blocking domains used for malware distribution.
• Blocking advertisements.
• Performing parental control filtering, blocking domains associated with adult content.
• Blocking access to domains serving illegal content according to local regulations.
• Offering a split-horizon DNS to provide different answers depending on the source network.

An advantage of blocking access to domains via the DNS resolver is that it can be done centrally, without reimplementing it in every single application. Unfortunately, it is also quite coarse. Suppose that a website hosts content for multiple users at example.com/videos/for-kids/ and example.com/videos/for-adults/. The DNS resolver will only be able to see “example.com” and can either choose to block it or not. In this case, application-specific controls such as browser extensions would be more effective, since they can actually look into the URLs and selectively prevent content from being accessible.

DNS monitoring is not comprehensive. Malware could skip DNS and hardcode IP addresses, or use alternative methods to query an IP address. However, not all malware is that sophisticated, so DNS monitoring can still serve as a defence-in-depth tool.

All of these non-passive monitoring or DNS blocking use cases require support from the DNS resolver. Deployments that rely on opportunistic DoH/DoT upgrades of the current resolver will maintain the same feature set as usually provided over unencrypted DNS. Unfortunately, as mentioned before, this is vulnerable to downgrades. To solve this, system administrators can point endpoints to a DoH/DoT resolver in strict mode. Ideally this is done through secure device management solutions (MDM, group policy on Windows, etc.).

Conclusion

One of the cornerstones of the Internet is mapping names to addresses using DNS. DNS has traditionally used insecure, unencrypted transports. This has been abused by ISPs in the past for injecting advertisements, but it also causes a privacy leak. Nosy visitors in the coffee shop can use unencrypted DNS to follow your activity. All of these issues can be solved by using DNS over TLS (DoT) or DNS over HTTPS (DoH). These techniques to protect the user are relatively new and are seeing increasing adoption.

From a technical perspective, DoH is very similar to HTTPS and follows the general industry trend to deprecate non-secure options. DoT is a simpler transport mode than DoH, as the HTTP layer is removed, but that also makes it easier to block, either deliberately or by accident.

Secondary to enabling a secure transport is the choice of a DNS resolver. Some vendors will use the locally configured DNS resolver, but try to opportunistically upgrade the unencrypted transport to a more secure one (either DoT or DoH). Unfortunately, the DNS resolver usually defaults to one provided by the ISP, which may not support secure transports. Mozilla has adopted a different approach. Rather than relying on local resolvers that may not even support DoH, they allow the user to explicitly select a resolver. Resolvers recommended by Mozilla have to satisfy high standards to protect user privacy. To ensure that parental control features based on DNS remain functional, and to support the split-horizon use case, Mozilla has added a mechanism that allows private resolvers to disable DoH.

The DoT and DoH transport protocols are ready for us to move to a more secure Internet. As can be seen in the previous packet traces, these protocols are similar to existing mechanisms to secure application traffic. Once this security and privacy hole is closed, there will be many more to tackle.
Supporting the latest version of the Privacy Pass Protocol

Post Syndicated from Alex Davidson original https://blog.cloudflare.com/supporting-the-latest-version-of-the-privacy-pass-protocol/

At Cloudflare, we are committed to supporting and developing new privacy-preserving technologies that benefit all Internet users. In November 2017, we announced server-side support for the Privacy Pass protocol, a piece of work developed in collaboration with the academic community. Privacy Pass, in a nutshell, allows clients to provide proof of trust without revealing where and when the trust was provided. The aim of the protocol is to allow anyone to prove they are trusted by a server, without that server being able to track the user via the trust that was assigned.

On a technical level, Privacy Pass clients receive attestation tokens from a server that can then be redeemed in the future. These tokens are provided when a server deems the client to be trusted; for example, after they have logged into a service or if they prove certain characteristics. The redeemed tokens are cryptographically unlinkable to the attestation originally provided by the server, and so they do not reveal anything about the client.

To use Privacy Pass, clients can install an open-source browser extension available in Chrome and Firefox. There have been over 150,000 individual downloads of Privacy Pass worldwide: approximately 130,000 in Chrome and more than 20,000 in Firefox. The extension is supported by Cloudflare to make websites more accessible for users. This complements previous work, including the launch of Cloudflare onion services to help improve accessibility for users of the Tor Browser.

The initial release was almost two years ago, and it was followed up with a research publication that was presented at the Privacy Enhancing Technologies Symposium 2018 (winning a Best Student Paper award). Since then, Cloudflare has been working with the wider community to build on the initial design and improve Privacy Pass. We’ll be talking about the work that we have done to develop the existing implementations, alongside the protocol itself.

What’s new?

Support for the Privacy Pass v2.0 browser extension:

• Easier configuration of the workflow.
• Integration with a new service provider (hCaptcha).
• Compliance with the hash-to-curve draft.
• Possible to rotate keys outside of an extension release.
• Available in Chrome and Firefox (works best with up-to-date browser versions).

Rolling out a new server backend using the Cloudflare Workers platform:

• Cryptographic operations performed using the internal V8 engine.
• Provides a public redemption API for Cloudflare Privacy Pass v2.0 tokens.
• Available by making POST requests to https://privacypass.cloudflare.com/api/redeem. See the documentation for example usage.
• Only compatible with extension v2.0 (check that you have updated!).

Standardization:

• Continued development of the oblivious pseudorandom functions (OPRFs) in prime-order groups draft with the CFRG@IRTF.
• New draft specifying the Privacy Pass protocol.

Extension v2.0

In the time since the release, we’ve been working on a number of new features. Today we’re excited to announce support for version 2.0 of the extension, the first update since the original release. The extension continues to be available for Chrome and Firefox. You may need to download v2.0 manually from the store if you have auto-updates disabled in your browser. The extension remains under active development and we still regard our support as being in the beta phase.
This will continue to be the case as the draft specification of the protocol is written in collaboration with the wider community.

New integrations

The client implementation uses the WebRequest API to look for certain types of HTTP requests. When these requests are spotted, they are rewritten to include some cryptographic data required for the Privacy Pass protocol. This allows Privacy Pass providers receiving this data to authorize access for the user.

For example, a user may receive Privacy Pass tokens for completing some server security checks. These tokens are stored by the browser extension, and any future request that needs similar security clearance can be modified to add a stored token as an extra HTTP header. The server can then check the client token and verify that the client has the correct authorization to proceed.

While Cloudflare supports a particular type of request flow, it would be impossible to expect different service providers to all abide by the same exact interaction characteristics. One of the major changes in the v2.0 extension has been a technical rewrite to use a central configuration file instead. The config is specified in the source code of the extension and allows easier modification of the browsing characteristics that initiate Privacy Pass actions. This makes adding new, completely different request flows possible by simply cloning and adapting the configuration for new providers.

To demonstrate that such integrations are now possible with services beyond Cloudflare, a new version of the extension will soon be rolling out that is supported by the CAPTCHA provider hCaptcha. Users that solve ephemeral challenges provided by hCaptcha will receive privacy-preserving tokens that will be redeemable at other hCaptcha customer sites.

"hCaptcha is focused on user privacy, and supporting Privacy Pass is a natural extension of our work in this area. We look forward to working with Cloudflare and others to make this a common and widely adopted standard, and are currently exploring other applications. Implementing Privacy Pass into our globally distributed service was relatively straightforward, and we have enjoyed working with the Cloudflare team to improve the open source Chrome browser extension in order to deliver the best experience for our users."

Eli-Shaoul Khedouri, founder of hCaptcha

This hCaptcha integration with the Privacy Pass browser extension acts as a proof of concept in establishing support for new services. Any new providers that would like to integrate with the Privacy Pass browser extension can do so simply by making a PR to the open-source repository.

Improved cryptographic functionality

After the release of v1.0 of the extension, there were features that remained unimplemented. These included proper zero-knowledge proof validation for checking that the server was always using the same committed key. In v2.0 this functionality has been completed, verifiably preventing a malicious server from attempting to deanonymize users by using a different key for each request.

The cryptographic operations required for Privacy Pass are performed using elliptic curve cryptography (ECC). The extension currently uses the NIST P-256 curve, for which we have included some optimisations. Firstly, this makes it possible to store elliptic curve points in compressed and uncompressed data formats. This means that browser storage can be reduced by 50%, and that server responses can be made smaller too.
Secondly, support has been added for hashing to the P-256 curve using the "Simplified Shallue-van de Woestijne-Ulas" (SSWU) method specified in an ongoing draft (https://tools.ietf.org/html/draft-irtf-cfrg-hash-to-curve-03) for standardizing encodings for hashing to elliptic curves. The implementation is compliant with the specification of the "P256-SHA256-SSWU-" ciphersuite in this draft. These changes have a dual advantage: firstly, they ensure that the P-256 hash-to-curve implementation is compliant with the draft specification. Secondly, this ciphersuite removes the necessity for using probabilistic methods, such as hash-and-increment. The hash-and-increment method has a non-negligible chance of failure, and its running time is highly dependent on the hidden client input. While it is not clear how to abuse such timing attack vectors currently, using the SSWU method should reduce the potential for attacking the implementation and learning the client input.

Key rotation

As we mentioned above, verifying that the server is always using the same key is an important part of ensuring the client's privacy. This ensures that the server cannot segregate the user base and reduce client privacy by using different secret keys for each client that it interacts with. The server guarantees that it's always using the same key by publishing a commitment to its public key somewhere that the client can access. Every time the server issues Privacy Pass tokens to the client, it also produces a zero-knowledge proof that it has produced these tokens using the correct key. Before the extension stores any tokens, it first verifies the proof against the commitments it knows. Previously, these commitments were stored directly in the source code of the extension. This meant that if the server wanted to rotate its key, it had to release a new version of the extension, which was unnecessarily difficult. The extension has been modified so that the commitments are stored in a trusted location that the client can access when it needs to verify the server response. Currently this location is a separate Privacy Pass GitHub repository. For those that are interested, we have provided a more detailed description of the new commitment format in Appendix A at the end of this post.

Implementing server-side support in Workers

So far we have focused on client-side updates. As part of supporting v2.0 of the extension, we are rolling out some major changes to the server-side support that Cloudflare uses. For version 1.0, we used a Go implementation of the server. In v2.0 we are introducing a new server implementation that runs in the Cloudflare Workers platform. This server implementation is only compatible with v2.0 releases of Privacy Pass, so you may need to update your extension if you have auto-updates turned off in your browser. Our server will run at https://privacypass.cloudflare.com, and all Privacy Pass requests sent to the Cloudflare edge are handled by Worker scripts that run on this domain. Our implementation has been rewritten using JavaScript, with cryptographic operations running in the V8 engine that powers Cloudflare Workers. This means that we are able to run highly efficient and constant-time cryptographic operations. On top of this, we benefit from the enhanced performance provided by running our code in the Workers platform, as close to the user as possible.

WebCrypto support

Firstly, you may be asking: how do we manage to implement cryptographic operations in Cloudflare Workers?
Currently, support for performing cryptographic operations is provided in the Workers platform via the WebCrypto API. This API allows users to compute functionality such as cryptographic hashing, alongside more complicated operations like ECDSA signatures. In the Privacy Pass protocol, as we'll discuss a bit later, the main cryptographic operations are performed by a protocol known as a verifiable oblivious pseudorandom function (VOPRF). Such a protocol allows a client to learn function outputs computed by a server, without revealing to the server what their actual input was. The verifiable aspect means that the server must also prove (in a publicly verifiable way) that the evaluation they pass to the user is correct. Such a function is pseudorandom because the server output is indistinguishable from a random sequence of bytes. The VOPRF functionality requires a server to perform low-level ECC operations that are not currently exposed in the WebCrypto API. We weighed the possible ways of getting around this requirement. First we trialled using the WebCrypto API in a non-standard manner, using EC Diffie-Hellman key exchange as a method for performing the scalar multiplication that we needed. We also tried to implement all operations using pure JavaScript. Unfortunately both methods were unsatisfactory, in the sense that they would either mean integrating with large external cryptographic libraries, or they would be far too slow to be used in a performant Internet setting. In the end, we settled on a solution that adds functions necessary for Privacy Pass to the internal WebCrypto interface in the Cloudflare V8 JavaScript engine. This algorithm mimics the sign/verify interface provided by signature algorithms like ECDSA. In short, we use the sign() function to issue Privacy Pass tokens to the client, while verify() can be used by the server to verify data that is redeemed by the client. These functions are implemented directly in the V8 layer and so they are much more performant and secure (running in constant-time, for example) than pure JS alternatives. The Privacy Pass WebCrypto interface is not currently available for public usage. If it turns out there is enough interest in using this additional algorithm in the Workers platform, then we will consider making it public.

Applications

In recent times, VOPRFs have been shown to be a highly useful primitive in establishing many cryptographic tools. Aside from Privacy Pass, they are also essential for constructing password-authenticated key exchange protocols such as OPAQUE. They have also been used in designs of private set intersection, password-protected secret-sharing protocols, and privacy-preserving access control for private data storage.

Public redemption API

Writing the server in Cloudflare Workers means that we will be providing server-side support for Privacy Pass on a public domain! While we only issue tokens to clients after we are sure that we can trust them, anyone will be able to redeem the tokens using our public redemption API at https://privacypass.cloudflare.com/api/redeem. As we roll out the server-side component worldwide, you will be able to interact with this API and verify Cloudflare Privacy Pass tokens independently of the browser extension. This means that any service can accept Privacy Pass tokens from a client that were issued by Cloudflare, and then verify them with the Cloudflare redemption API.
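To make the VOPRF flow described above more concrete, here is a toy sketch of the blind, evaluate, unblind round trip of a (non-verifiable) OPRF, using a tiny multiplicative prime-order group in place of the P-256 group the real protocol uses. All of the parameters, the hash-to-group method, and the names here are illustrative stand-ins, not part of the Privacy Pass specification:

.. code:: python

    # Toy DH-style OPRF round trip in a multiplicative prime-order group.
    # All parameters are tiny and purely illustrative -- not secure.
    import hashlib

    p, q_ord = 1019, 509       # p = 2*q_ord + 1; both prime
    K = 123                    # server's secret OPRF key

    def hash_to_group(x):
        # Map an input to a square mod p, i.e. into the subgroup of order 509.
        h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
        return pow(h, 2, p)

    # Client: blind the hashed input with an exponent r (freshly random in practice).
    r = 77
    blinded = pow(hash_to_group(b"token seed"), r, p)

    # Server: evaluate on the blinded element; it never sees the raw input.
    evaluated = pow(blinded, K, p)

    # Client: strip the blind to recover H(x)^K, the PRF output.
    r_inv = pow(r, -1, q_ord)  # inverse of r modulo the group order
    output = pow(evaluated, r_inv, p)
    assert output == pow(hash_to_group(b"token seed"), K, p)

In the real protocol the same algebra is carried out over P-256, and the server additionally attaches a zero-knowledge proof that it used its publicly committed key, which is what upgrades the OPRF to a VOPRF.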
Using the result provided by the API, external services can check whether Cloudflare has authorized the user in the past. We think that this will benefit other service providers because they can use the attestation of authorization from Cloudflare in their own decision-making processes, without sacrificing the privacy of the client at any stage. We hope that this ecosystem can grow further, with potentially more services providing public redemption APIs of their own. With a more diverse set of issuers, these attestations will become more useful. By running our server on a public domain, we are effectively a customer of the Cloudflare Workers product. This means that we are also able to make use of Workers KV for protecting against malicious clients. In particular, servers must check that clients are not re-using tokens during the redemption phase. The performance of Workers KV in handling reads makes this an obvious choice for providing double-spend protection globally. If you would like to use the public redemption API, we provide documentation for using it at https://privacypass.github.io/api-redeem. We also provide some example requests and responses in Appendix B at the end of the post.

Standardization & new applications

In tandem with the recent engineering work that we have been doing on supporting Privacy Pass, we have been collaborating with the wider community in an attempt to standardize both the underlying VOPRF functionality and the protocol itself. While the process of standardization for oblivious pseudorandom functions (OPRFs) has been running for over a year, the recent efforts to standardize the Privacy Pass protocol have been driven by very recent applications that have come about in the last few months. Standardizing protocols and functionality is an important way of providing interoperable, secure, and performant interfaces for running protocols on the Internet. This makes it easier for developers to write their own implementations of this complex functionality. The process also provides helpful peer reviews from experts in the community, which can lead to better surfacing of potential security risks that should be mitigated in any implementation. Other benefits include coming to a consensus on the most reliable, scalable and performant protocol designs for all possible applications.

Oblivious pseudorandom functions

Oblivious pseudorandom functions (OPRFs) are a generalization of VOPRFs that do not require the server to prove that they have evaluated the functionality properly. Since July 2019, we have been collaborating on a draft with the Crypto Forum Research Group (CFRG) at the Internet Research Task Force (IRTF) to standardize an OPRF protocol that operates in prime-order groups. This is a generalisation of the setting that is provided by elliptic curves. This is the same VOPRF construction that was originally specified by the Privacy Pass protocol and is based heavily on the original protocol design from the paper of Jarecki, Kiayias and Krawczyk. One of the recent changes that we've made in the draft is to increase the size of the key that we consider for performing OPRF operations on the server side. Existing research suggests that it is possible to create specific queries that can lead to small amounts of the key being leaked. For keys that provide only 128 bits of security this can be a problem, as leaking too many bits would reduce security beyond currently accepted levels. To counter this, we have effectively increased the minimum key size to 196 bits.
This prevents the leakage from becoming a practical attack vector. We discuss these attacks in more detail later on when discussing our future plans for VOPRF development.

Recent applications and standardizing the protocol

The application that we demonstrated when originally supporting Privacy Pass was always intended as a proof-of-concept for the protocol. Over the past few months, a number of new possibilities have arisen in areas that go far beyond what was previously envisaged. For example, the trust token API, developed by the Web Incubator Community Group, has been proposed as an interface for using Privacy Pass. This application allows third-party vendors to check that a user has received a trust attestation from a set of central issuers. This allows the vendor to make decisions about the honesty of a client without having to associate a behaviour profile with the identity of the user. The objective is to protect against fraudulent activity from users who are not trusted by the central issuer set. Checking trust attestations with central issuers would be possible using similar redemption APIs to the one that we have introduced. A separate piece of work from Facebook details a similar application for preventing fraudulent behavior that may also be compatible with the Privacy Pass protocol. Finally, other applications have arisen in the areas of providing access to private storage and establishing security and privacy models in advertisement confirmations.

A new draft

With the applications above in mind, we have recently started collaborative work on a new IETF draft that specifically lays out the required functionality provided by the Privacy Pass protocol as a whole. Our aim is to develop, alongside wider industrial partners and the academic community, a functioning specification of the Privacy Pass protocol. We hope that by doing this we will be able to design a base-layer protocol that can then be used as a cryptographic primitive in wider applications that require some form of lightweight authorization. Our plan is to present the first version of this draft at the upcoming IETF 106 meeting in Singapore next month. The draft is still in the early stages of development and we are actively looking for people who are interested in helping to shape the protocol specification. We would be grateful for any help that contributes to this process. See the GitHub repository for the current version of the document.

Future avenues

Finally, while we are actively working on a number of different pathways in the present, the future directions for the project are still open. We believe that there are many applications out there that we have not considered yet and we are excited to see where the protocol is used in the future. Here are some other ideas we have for novel applications and security properties that we think might be worth pursuing in the future.

Publicly verifiable tokens

One of the disadvantages of using a VOPRF is that redemption tokens are only verifiable by the original issuing server. If we used an underlying primitive that allowed public verification of redemption tokens, then anyone could verify that the issuing server had issued the particular token. Such a protocol could be constructed on top of so-called blind signature schemes, such as Blind RSA. Unfortunately, there are performance and security concerns arising from the usage of blind signature schemes in a browser environment.
Existing schemes (especially RSA-based variants) require cryptographic computations that are much heavier than the construction used in our VOPRF protocol.

Post-quantum VOPRF alternatives

The only known constructions of VOPRFs exist in pre-quantum settings, usually based on the hardness of well-known problems in group settings such as the discrete-log assumption. No constructions of VOPRFs are known to provide security against adversaries that can run quantum computational algorithms. This means that the Privacy Pass protocol is only believed to be secure against adversaries running on classical hardware. Recent developments suggest that quantum computing may arrive sooner than previously thought. As such, we believe that investigating the possibility of constructing practical post-quantum alternatives for our current cryptographic toolkit is a task of great importance for ourselves and the wider community. In this case, devising performant post-quantum alternatives for VOPRF constructions would be an important theoretical advancement. Eventually this would lead to a Privacy Pass protocol that still provides privacy-preserving authorization in a post-quantum world.

VOPRF security and larger ciphersuites

We mentioned previously that VOPRFs (or simply OPRFs) are susceptible to small amounts of possible leakage in the key. Here we will give a brief description of the actual attacks themselves, along with further details on our plans for implementing higher security ciphersuites to mitigate the leakage. Specifically, malicious clients can interact with a VOPRF to create something known as a q-Strong-Diffie-Hellman (q-sDH) sample. Such samples are created in mathematical groups (usually in the elliptic curve setting). For any group there is a public element g that is central to all Diffie-Hellman type operations, along with the server key K, which is usually just interpreted as a randomly generated number from this group. A q-sDH sample takes the form ( g, g^K, g^(K^2), … , g^(K^q) ) and asks the malicious adversary to create a pair of elements satisfying (g^(1/(s+K)), s). It is possible for a client in the VOPRF protocol to create a q-sDH sample by just submitting the result of the previous VOPRF evaluation back to the server. While this problem is believed to be hard to break, there are a number of past works that show that the problem is somewhat easier than the size of the group suggests (for example, see here and here). Concretely speaking, the bit security implied by the group can be reduced by up to log2(q) bits. While this is not immediately fatal, even to groups that should provide 128 bits of security, it can lead to a loss of security that means that the setting is no longer future-proof. As a result, any group providing VOPRF functionality that is instantiated using an elliptic curve such as P-256 or Curve25519 provides weaker than advised security guarantees. With this in mind, we have taken the recent decision to upgrade the ciphersuites that we recommend for OPRF usage to only those that provide > 128 bits of security, as standard. For example, Curve448 provides 196 bits of security. To launch an attack that reduced security to an amount lower than 128 bits would require making 2^(68) client OPRF queries. This is a significant barrier to entry for any attacker, and so we regard these ciphersuites as safe for instantiating the OPRF functionality.
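To make the attack concrete, here is a toy sketch of how a malicious client assembles such a sample by replaying each OPRF response as its next query; the tiny multiplicative group and parameters are again illustrative stand-ins for an elliptic curve group:

.. code:: python

    # Toy sketch: a malicious client builds a q-sDH sample from an OPRF oracle.
    # Tiny, insecure parameters chosen purely for illustration.
    p, q_ord = 1019, 509   # p = 2*q_ord + 1; both prime
    g = 4                  # generator of the order-509 subgroup of squares mod p
    K = 123                # server's secret OPRF key

    def oprf_eval(element):
        # Server-side evaluation: raise the submitted element to the secret key.
        return pow(element, K, p)

    # Replay each response as the next query.
    sample = [g]
    for _ in range(4):
        sample.append(oprf_eval(sample[-1]))

    # sample is now (g, g^K, g^(K^2), g^(K^3), g^(K^4)): a 4-sDH sample.
    assert sample[2] == pow(g, (K * K) % q_ord, p)

Nothing in this interaction deviates from the protocol as seen by the server, which is why the mitigation is to pick group sizes where losing up to log2(q) bits of security still leaves a comfortable margin.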
In the near future, it will be necessary to upgrade the ciphersuites that are used in our support of the Privacy Pass browser extension to the recommendations made in the current VOPRF draft. In general, with a more iterative release process, we hope that the Privacy Pass implementation will be able to follow the current draft standard more closely as it evolves during the standardization process.

Get in touch!

You can now install v2.0 of the Privacy Pass extension in Chrome or Firefox. If you would like to help contribute to the development of this extension then you can do so on GitHub. Are you a service provider that would like to integrate server-side support for the extension? Then we would be very interested in hearing from you! We will continue to work with the wider community in developing the standardization of the protocol, taking our motivation from the available applications that have been developed. We are always looking for new applications that can help to expand the Privacy Pass ecosystem beyond its current boundaries.

Appendix

Here are some extra details related to the topics that we covered above.

A. Commitment format for key rotations

Key commitments are necessary for the server to prove that they're acting honestly during the Privacy Pass protocol. The commitments that Privacy Pass uses for the v2.0 release have a slightly different format from the previous release.

    "2.00": {
        "H": "BPivZ+bqrAZzBHZtROY72/E4UGVKAanNoHL1Oteg25oTPRUkrYeVcYGfkOr425NzWOTLRfmB8cgnlUfAeN2Ikmg=",
        "expiry": "2020-01-11T10:29:10.658286752Z",
        "sig": "MEUCIQDu9xeF1q89bQuIMtGm0g8KS2srOPv+4hHjMWNVzJ92kAIgYrDKNkg3GRs9Jq5bkE/4mM7/QZInAVvwmIyg6lQZGE0="
    }

First, the version of the server key is 2.00; the server must inform the client which version it intends to use in the response containing the issued tokens. This is so that the client can always use the correct commitments when verifying the zero-knowledge proof that the server sends. The value of the member H is the public key commitment to the secret key used by the server. This is a base64-encoded elliptic curve point of the form H=kG, where G is the fixed generator of the curve, and k is the secret key of the server. Since the discrete-log problem is believed to be hard to solve, deriving k from H is believed to be difficult. The value of the member expiry is an expiry date for the commitment that is used. The value of the member sig is an ECDSA signature evaluated using a long-term signing key associated with the server, and over the values of H and expiry. When a client retrieves the commitment, it checks that it hasn't expired and that the signature verifies using the corresponding verification key that is embedded into the configuration of the extension. If these checks pass, it retrieves H and verifies the issuance response sent by the server. Previous versions of these commitments did not include signatures, but these signatures will be validated from v2.0 onwards. When a server wants to rotate the key, it simply generates a new key k2 and appends a new commitment to k2 with a new identifier such as 2.01. It can then use k2 as the secret for the VOPRF operations that it needs to compute.

B. Example Redemption API request

The redemption API is available over HTTPS by sending POST requests to https://privacypass.cloudflare.com/api/redeem. Requests to this endpoint must specify Privacy Pass data using JSON-RPC 2.0 syntax in the body of the request.
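To give a sense of how a service might drive this endpoint, here is a minimal Python sketch using only the standard library, assuming the endpoint accepts a JSON body with a standard Content-Type header. The three data values are placeholders for material produced during the issuance phase; their exact semantics are explained with the example request below:

.. code:: python

    # Minimal sketch of a call to the public redemption API.
    # The data/bindings values are placeholders; real values come out of the
    # issuance phase of the protocol.
    import json
    import urllib.request

    body = {
        "jsonrpc": "2.0",
        "method": "redeem",
        "params": {
            "data": ["<client-input>", "<hmac-tag>", "<base64-h2c-params>"],
            "bindings": ["example.com", "/"],
            "compressed": "false",
        },
        "id": 1,
    }

    req = urllib.request.Request(
        "https://privacypass.cloudflare.com/api/redeem",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # expect the success or error object shown below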
Let's look at an example request:

    {
        "jsonrpc": "2.0",
        "method": "redeem",
        "params": {
            "data": [
                "lB2ZEtHOK/2auhOySKoxqiHWXYaFlAIbuoHQnlFz57A=",
                "EoSetsN0eVt6ztbLcqp4Gt634aV73SDPzezpku6ky5w=",
                "eyJjdXJ2ZSI6InAyNTYiLCJoYXNoIjoic2hhMjU2IiwibWV0aG9kIjoic3d1In0="
            ],
            "bindings": [
                "string1",
                "string2"
            ],
            "compressed": "false"
        },
        "id": 1
    }

In the above: params.data[0] is the client input data used to generate a token in the issuance phase; params.data[1] is the HMAC tag that the server uses to verify a redemption; and params.data[2] is a stringified, base64-encoded JSON object that specifies the hash-to-curve parameters used by the client. For example, the last element in the array corresponds to the object:

    {
        curve: "p256",
        hash: "sha256",
        method: "swu",
    }

This specifies that the client has used the curve P-256, with hash function SHA-256, and the SSWU method for hashing to the curve. This allows the server to verify the transaction with the correct ciphersuite. The client must bind the redemption request to some fixed information, which it stores as multiple strings in the array params.bindings. For example, it could send the Host header of the HTTP request, and the HTTP path that was used (this is what is used in the Privacy Pass browser extension). Finally, params.compressed is an optional boolean value (defaulting to false) that indicates whether the HMAC tag was computed over compressed or uncompressed point encodings. Currently the only supported ciphersuites are the example above, or the same except with method equal to increment for the hash-and-increment method of hashing to a curve. This is the original method used in v1.0 of Privacy Pass, and is supported for backwards-compatibility only. See the provided documentation for more details.

Example response

If a request is sent to the redemption API and it is successfully verified, then the following response will be returned.

    {
        "jsonrpc": "2.0",
        "result": "success",
        "id": 1
    }

When an error occurs, something similar to the following will be returned.

    {
        "jsonrpc": "2.0",
        "error": {
            "message": <error-message>,
            "code": <error-code>,
        },
        "id": 1
    }

The error codes that we provide are specified as JSON-RPC 2.0 codes; we document the types of errors that we provide in the API documentation.

Tales from the Crypt(o team)

Post Syndicated from Nick Sullivan original https://blog.cloudflare.com/tales-from-the-crypt-o-team/

Halloween season is upon us. This week we're sharing a series of blog posts about work being done at Cloudflare involving cryptography, one of the spookiest technologies around. So bookmark this page and come back every day for tricks, treats, and deep technical content.

A long-term mission

Cryptography is one of the most powerful technological tools we have, and Cloudflare has been at the forefront of using cryptography to help build a better Internet. Of course, we haven't been alone on this journey. Making meaningful changes to the way the Internet works requires time, effort, experimentation, momentum, and willing partners. Cloudflare has been involved with several multi-year efforts to leverage cryptography to help make the Internet better.
Here are some highlights to expect this week:
• We're renewing Cloudflare's commitment to privacy-enhancing technologies by sharing some of the recent work being done on Privacy Pass
• We're helping forge a path to a quantum-safe Internet by sharing some of the results of the Post-quantum Cryptography experiment
• We're sharing the Rust-based software we use to power time.cloudflare.com
• We're doing a deep dive into the technical details of Encrypted DNS
• We're announcing support for a new technique we developed with industry partners to help keep TLS private keys more secure

The milestones we're sharing this week would not be possible without partnerships with companies, universities, and individuals working in good faith to help build a better Internet together. Hopefully, this week provides a fun peek into the future of the Internet.

Introducing time.cloudflare.com

Post Syndicated from Guest Author original https://blog.cloudflare.com/secure-time/

This is a guest post by Aanchal Malhotra, a Graduate Research Assistant at Boston University and former Cloudflare intern on the Cryptography team. Cloudflare has always been a leader in deploying secure versions of insecure Internet protocols and making them available for free for anyone to use. In 2014, we launched one of the world's first free, secure HTTPS services (Universal SSL) to go along with our existing free HTTP plan. When we launched the 1.1.1.1 DNS resolver, we also supported the new secure versions of DNS (DNS over HTTPS and DNS over TLS). Today, as part of Crypto Week 2019, we are doing the same thing for the Network Time Protocol (NTP), the dominant protocol for obtaining time over the Internet. This announcement is personal for me. I've spent the last four years identifying and fixing vulnerabilities in time protocols. Today I'm proud to help introduce a service that would have made my life from 2015 through 2019 a whole lot harder: time.cloudflare.com, a free time service that supports both NTP and the emerging Network Time Security (NTS) protocol for securing NTP. Now, anyone can get time securely from all our datacenters in 180 cities around the world. You can use time.cloudflare.com as the source of time for all your devices today with NTP, while NTS clients are still under development. NTPsec includes experimental support for NTS. If you'd like to get updates about NTS client development, email us asking to join at [email protected]. To use NTS to secure time synchronization, reach out to your vendors and inquire about NTS support.

A small tale of "time" first

Back in 2015, as a fresh graduate student interested in Internet security, I came across this mostly esoteric Internet protocol called the Network Time Protocol (NTP). NTP was designed to synchronize time between computer systems communicating over unreliable and variable-latency network paths. I was actually studying Internet routing security, in particular attacks against the Resource Public Key Infrastructure (RPKI), and kept hitting a dead end because of a cache-flushing issue. As a last-ditch effort I decided to roll back the time on my computer manually, and the attack worked. I had discovered the importance of time to computer security. Most cryptography uses timestamps to limit certificate and signature validity periods. When connecting to a website, knowledge of the correct time ensures that the certificate you see is current and is not compromised by an attacker. When looking at logs, time synchronization makes sure that events on different machines can be correlated accurately.
Certificates and logging infrastructure can break with minutes, hours or months of time difference. Other applications like caching and Bitcoin are sensitive to even very small differences in time, on the order of seconds. Two-factor authentication using rolling numbers also relies on accurate clocks. This then creates the need for computer clocks to have access to reasonably accurate time that is securely delivered. NTP is the most commonly used protocol for time synchronization on the Internet. If an attacker can leverage vulnerabilities in NTP to manipulate time on computer clocks, they can undermine the security guarantees provided by these systems. Motivated by the severity of the issue, I decided to look deeper into NTP and its security. Since the need for synchronizing time across networks was visible early on, NTP is a very old protocol. The first standardized version of NTP dates back to 1985, while the latest NTP version 4 was completed in 2010 (see RFC5905). In its most common mode, NTP works by having a client send a query packet out to an NTP server that then responds with its clock time. The client then computes an estimate of the difference between its clock and the remote clock, and attempts to compensate for network delay in this estimate. An NTP client queries multiple servers and implements algorithms to select the best estimate, and rejects clearly wrong answers. Surprisingly enough, research on NTP and its security was not very active at the time. Before this, in late 2013 and early 2014, high-profile Distributed Denial of Service (DDoS) attacks were carried out by amplifying traffic from NTP servers; attackers able to spoof a victim's IP address were able to funnel copious amounts of traffic overwhelming the targeted domains. This caught the attention of some researchers. However, these attacks did not exploit flaws in the fundamental protocol design. The attackers simply used NTP as a boring bandwidth multiplier. Cloudflare wrote extensively about these attacks and you can read about it here, here, and here. I found several flaws in the core NTP protocol design and its implementation that can be exploited by network attackers to launch much more devastating attacks by shifting time or denying service to NTP clients. What is even more concerning was that these attackers do not need to be a Monster-In-The-Middle (MITM), where an attacker can modify traffic between the client and the server, to mount these attacks. A set of recent papers authored by one of us showed that an off-path attacker present anywhere on the network can shift time or deny service to NTP clients. One of the ways this is done is by abusing IP fragmentation. Fragmentation is a feature of the IP layer where a large packet is chopped into several smaller fragments so that they can pass through networks that do not support large packets. Basically, any random network element on the path between the client and the server can send a special "ICMP fragmentation needed" packet to the server telling it to fragment the packet to, say, X bytes. Since the server is not expected to know the IP addresses of all the network elements on its path, this packet can be sent from any source IP. In our attack, the attacker exploits this feature to make the NTP server fragment its NTP response packet for the victim NTP client. The attacker then spoofs carefully crafted overlapping response fragments from off-path that contain the attacker's timestamp values.
By further exploiting the reassembly policies for overlapping fragments, the attacker fools the client into assembling a packet with legitimate fragments and the attacker's insertions. This evades the authenticity checks that rely on values in the original parts of the packet.

NTP's past and future

At the time of NTP's creation back in 1985, there were two main design goals for the service provided by NTP. First, they wanted it to be robust enough to handle networking errors and equipment failures. So it was designed as a service where clients can gather timing samples from multiple peers over multiple communication paths and then average them to get a more accurate measurement. The second goal was load distribution. While every client would like to talk to time servers which are directly attached to high-precision time-keeping devices like atomic clocks, GPS, etc., and thus have more accurate time, the capacity of those devices is only so much. So, to reduce protocol load on the network, the service was designed in a hierarchical manner. At the top of the hierarchy are servers connected to non-NTP time sources, which distribute time to other servers, which further distribute time to even more servers. Most computers connect to either these second or third level servers.

The original specification (RFC 958) also states the "non-goals" of the protocol, namely peer authentication and data integrity. Security wasn't considered critical in the relatively small and trusting early Internet, and the protocols and applications that rely on time for security didn't exist then. Securing NTP came second to improving the protocol and implementation. As the Internet has grown, more and more core Internet protocols have been secured through cryptography to protect against abuse: TLS, DNSSEC, RPKI are all steps toward ensuring the security of all communications on the Internet. These protocols use "time" to provide security guarantees. Since the security of the Internet hinges on the security of NTP, it becomes even more important to secure NTP. This research clearly showed the need for securing NTP. As a result, there was more work at the standards body for Internet protocols, the Internet Engineering Task Force (IETF), towards cryptographically authenticating NTP. At the time, even though NTPv4 supported both symmetric and asymmetric cryptographic authentication, it was rarely used in practice due to limitations of both approaches. NTPv4's symmetric approach to securing synchronization doesn't scale, as the symmetric key must be pre-shared and configured manually: imagine if every client on earth needed a special secret key with the servers they wanted to get time from; the organizations that run those servers would have to do a great deal of work managing keys. This makes this solution quite cumbersome for public servers that must accept queries from arbitrary clients.
The future of the Internet is a secure Internet, which means an authenticated and encrypted Internet. But until now NTP has remained mostly insecure, despite continuing protocol development. In the meantime more and more services have come to depend on it.

Fixing the problem

Following the release of our paper, there was a lot more enthusiasm for improving the state of NTP security in the NTP community, both at the standards body for Internet protocols, the Internet Engineering Task Force (IETF), and outside it. As a short-term fix, the ntpd reference implementation software was patched for several vulnerabilities that we found. And for a long-term solution, the community realized the dire need for a secure, authenticated time synchronization protocol based on public-key cryptography, which enables encryption and authentication without requiring the sharing of key material beforehand. Today we have a Network Time Security (NTS) draft at the IETF, thanks to the work of dozens of dedicated individuals at the NTP working group. In a nutshell, the NTS protocol is divided into two phases. The first phase is the NTS key exchange, which establishes the necessary key material between the NTP client and the server. This phase uses the Transport Layer Security (TLS) handshake and relies on the same public key infrastructure as the web. Once the keys are exchanged, the TLS channel is closed and the protocol enters the second phase. In this phase the results of that TLS handshake are used to authenticate NTP time synchronization packets via extension fields. The interested reader can find more information in the Internet draft.

Cloudflare's new service

Today, Cloudflare announces its free time service to anyone on the Internet. We intend to address the limitations of the existing public time services, in particular by increasing availability, robustness and security. We use our global network to provide an advantage in latency and accuracy. Our 180 locations around the world all use anycast to automatically route your packets to our closest server. All of our servers are synchronized with stratum 1 time service providers, and then offer NTP to the general public, similar to how other public NTP providers function. The biggest source of inaccuracy for time synchronization protocols is network asymmetry, leading to a difference in travel times between the client and server and back from the server to the client. However, our servers' proximity to users means there will be less jitter — a measurement of variance in latency on the network — and less possible asymmetry in packet paths. We also hope that in regions with a dearth of NTP servers our service significantly improves the capacity and quality of the NTP ecosystem. Cloudflare servers obtain authenticated time by using a shared symmetric key with our stratum 1 upstream servers. These upstream servers are geographically spread and ensure that our servers have accurate time in our datacenters. But this approach to securing time doesn't scale. We had to exchange emails individually with the organizations that run stratum 1 servers, as well as negotiate permission to use them. While this is a solution for us, it isn't a solution for everyone on the Internet. As a secure time service provider, Cloudflare is proud to announce that we are among the first to offer a free and secure public time service based on Network Time Security. We have implemented the latest NTS IETF draft. As this draft progresses through the Internet standards process, we are committed to keeping our service current.
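If you want to poke at the service directly before configuring a proper client (the "Use it" section below has pointers), here is a minimal sketch of a plain SNTP query in Python. Note that this speaks ordinary, unauthenticated NTP, not NTS; it only illustrates the shape of the query and response:

.. code:: python

    # Minimal SNTP query: plain, unauthenticated NTP for illustration only.
    import socket
    import struct
    import time

    NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_query(server="time.cloudflare.com", port=123, timeout=5):
        # First byte 0x1b = LI 0, version 3, mode 3 (client request).
        packet = b"\x1b" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(512)
        # Transmit timestamp: seconds since 1900, big-endian, at bytes 40-43.
        seconds = struct.unpack("!I", data[40:44])[0]
        return seconds - NTP_EPOCH_DELTA

    print(time.ctime(sntp_query()))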
Most NTP implementations are currently working on NTS support, and we expect that the next few months will see broader introduction as well as advancement of the current draft protocol to an RFC. Currently we have interoperability with NTPsec, who have implemented draft 18 of NTS. We hope that our service will spur faster adoption of this important improvement to Internet security. Because this is a new service with no backwards compatibility requirements, we are requiring the use of TLS v1.3 with it to promote adoption of the most secure version of TLS.

Use it

If you have an NTS client, point it at time.cloudflare.com:1234. Otherwise point your NTP client at time.cloudflare.com. More details on configuration are available in the developer docs.

Conclusion

From our Roughtime service to Universal SSL, Cloudflare has played a role in expanding the availability and use of secure protocols. Now with our free public time service we provide a trustworthy, widely available alternative to another insecure legacy protocol. It's all a part of our mission to help make a faster, more reliable, and more secure Internet for everyone. Thanks to the many other engineers who worked on this project, including Watson Ladd, Gabbi Fisher, and Dina Kozlov.

The Quantum Menace

Post Syndicated from Armando Faz-Hernández original https://blog.cloudflare.com/the-quantum-menace/

Over the last few decades, the word 'quantum' has become increasingly popular. It is common to find articles, reports, and many people interested in quantum mechanics and the new capabilities and improvements it brings to the scientific community. This topic not only concerns physics, since the development of quantum mechanics impacts several other fields such as chemistry, economics, artificial intelligence, operations research, and undoubtedly, cryptography. This post begins a trio of blogs describing the impact of quantum computing on cryptography, and how to use stronger algorithms resistant to the power of quantum computing.
• This post introduces quantum computing and describes the main aspects of this new computing model and its devastating impact on security standards; it summarizes some approaches to securing information using quantum-resistant algorithms.
• Due to the relevance of this matter, we present our experiments on a large-scale deployment of quantum-resistant algorithms.
• Our third post introduces CIRCL, an open-source Go library featuring optimized implementations of quantum-resistant algorithms and elliptic curve-based primitives.
All of this is part of Cloudflare's Crypto Week 2019, so fasten your seatbelt and get ready to make a quantum leap.

What is Quantum Computing?

Back in 1981, Richard Feynman raised the question of what kind of computers can be used to simulate physics.
However, some physical phenomena, such as quantum mechanics, cannot be simulated using a classical computer. Feynman therefore conjectured the existence of a computer model that behaves under the rules of quantum mechanics, which opened a field of research now called quantum computing. To understand the basics of quantum computing, it is necessary to recall how classical computers work, and from there shine a spotlight on the differences between these computational models. In 1936, Alan Turing and Emil Post independently described models that gave rise to the foundation of the computing model known as the Post-Turing machine, which describes how computers work and allowed further determination of limits for solving problems. In this model, the units of information are bits, which store one of two possible values, usually denoted by 0 and 1. A computing machine contains a set of bits and performs operations that modify the values of the bits, also known as the machine's state. Thus, a machine with N bits can be in one of 2ᴺ possible states. With this in mind, the Post-Turing computing model can be abstractly described as a state machine, in which running a program is translated into machine transitions along the set of states. A paper David Deutsch published in 1985 describes a computing model that extends the capabilities of a Turing machine based on the theory of quantum mechanics. This computing model introduces several advantages over the Turing model for processing large volumes of information. It also presents unique properties that deviate from the way we understand classical computing. Most of these properties come from the nature of quantum mechanics. We're going to dive into these details before approaching the concept of quantum computing.

Superposition

One of the most exciting properties of quantum computing that provides an advantage over the classical computing model is superposition. In physics, superposition is the ability to produce valid states from the addition or superposition of several other states that are part of a system. Applied to computing information, this means that there is a system in which it is possible to generate a machine state that represents a (weighted) sum of the states 0 and 1; here, the term weighted means that the state can keep track of "the quantity of" 0 and 1 present in the state. In the classical computation model, one bit can only store either the state of 0 or 1, not both; even using two bits, they cannot represent the weighted sum of these states. Hence, to make a distinction from the basic states, quantum computing uses the concept of a quantum bit (qubit) — a unit of information to denote the superposition of two states. This is a cornerstone concept of quantum computing, as it provides a way of tracking more than a single state per unit of information, making it a powerful tool for processing information. So, a qubit represents the sum of two parts: the 0 or 1 state, plus the amount each 0/1 state contributes to produce the state of the qubit. In mathematical notation, a qubit $$| \Psi \rangle$$ is written as an explicit sum indicating that it represents the superposition of the states 0 and 1. This is the Dirac notation used to describe the value of a qubit: $$| \Psi \rangle = A | 0 \rangle + B | 1 \rangle$$, where A and B are complex numbers known as the amplitudes of the states 0 and 1, respectively.
Quantum Gates

A logic gate represents a Boolean function operating over a set of inputs (on the left) and producing an output (on the right). A logic circuit is a set of connected logic gates, a convenient way to represent bit operations. Familiar gates include AND, OR, XOR, and NAND. A set of gates is universal if it can generate all other gates; for example, the NOR and NAND gates are each universal, since any circuit can be constructed using only one of them.

Quantum computing also admits a description using circuits. Quantum gates operate over qubits, modifying the superposition of the states. For example, there is a quantum gate analogous to the NOT gate, the X gate. The X quantum gate interchanges the amplitudes of the states of the input qubit. The Z quantum gate flips the sign of the amplitude of the state 1. Another quantum gate is the Hadamard gate, which generates an equiprobable superposition of the basic states. Using our coin-tossing analogy, the Hadamard gate has the action of tossing a fair coin into the air. In quantum circuits, a triangle represents measuring a qubit, and the resulting bit is indicated by a double wire. Other gates, such as the CNOT gate, the Pauli gates, the Toffoli gate, and the Deutsch gate, are slightly more advanced. Quirk, the open-source playground, is a fun sandbox where you can construct quantum circuits using all of these gates.
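The gates just described can be viewed as small matrices acting on the amplitude vector. Continuing the same NumPy sketch (again my illustration, not anything from the post):

.. code:: python

    import numpy as np

    # Single-qubit gates as 2x2 matrices acting on the vector of amplitudes.
    X = np.array([[0, 1], [1, 0]], dtype=complex)                # swaps the amplitudes of 0 and 1
    Z = np.array([[1, 0], [0, -1]], dtype=complex)               # flips the sign of the |1> amplitude
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard: fair-coin superposition

    zero = np.array([1, 0], dtype=complex)  # the basic state |0>

    print(X @ zero)        # [0, 1]: the basic state |1>
    print(H @ zero)        # [0.707, 0.707]: the fair-coin qubit from above
    print(H @ (H @ zero))  # back to |0>; applying H twice undoes it (reversibility)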
Reversibility

An operation is reversible if there exists another operation that rolls back the output state to the initial state. For instance, a NOT gate is reversible, since applying a second NOT gate recovers the initial input. In contrast, the AND, OR, and NAND gates are not reversible. This means that some classical computations cannot be reversed by a classical circuit that uses only the output bits. However, if you insert additional bits of information, the operation can be reversed.

Quantum computing mainly focuses on reversible computations, because there is always a way to construct a reversible circuit that performs an irreversible computation. The reversible version of a circuit could require the use of ancillary qubits as auxiliary (but not temporary) variables. Due to the nature of composed systems, these ancillas (extra qubits) can become correlated with the qubits of the main computation. This correlation makes it infeasible to reuse ancillas, since any modification could have side effects on the operation of the reversible circuit. This is like memory assigned to a process by the operating system: the process cannot use memory from other processes or it could cause memory corruption, and processes cannot release their assigned memory to other processes. You could use garbage-collection mechanisms for ancillas, but performing reversible computations increases your qubit budget.

Composed Systems

In quantum mechanics, a single qubit can be described as a single closed system: a system that has no interaction with the environment or with other qubits. Letting qubits interact with others leads to a composed system where more states are represented. The state of a 2-qubit composite system is denoted as $$A_0|00\rangle+A_1|01\rangle+A_2|10\rangle+A_3|11\rangle$$, where the $$A_i$$ values correspond to the amplitudes of the four basic states 00, 01, 10, and 11. The state $$\tfrac{1}{2}|00\rangle+\tfrac{1}{2}|01\rangle+\tfrac{1}{2}|10\rangle+\tfrac{1}{2}|11\rangle$$ represents the superposition of these basic states, each of which is obtained with the same probability after measuring the two qubits. In the classical case, the state of N bits represents only one of 2ᴺ possible states, whereas a composed state of N qubits represents all the 2ᴺ states, but in superposition. This is one big difference between these computing models, as it carries two important properties: entanglement and quantum parallelism.
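A composed system is easy to sketch in the same style: the joint state of two independent qubits is the Kronecker (tensor) product of their amplitude vectors, giving one amplitude per basic state 00, 01, 10, 11. Notably, the entangled states discussed next are precisely the composed states that cannot be written as such a product.

.. code:: python

    import numpy as np

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    zero = np.array([1, 0], dtype=complex)

    # The joint state of two independent qubits is the Kronecker product of
    # their amplitude vectors: one amplitude per basic state 00, 01, 10, 11.
    pair = np.kron(H @ zero, H @ zero)

    print(pair)               # [0.5, 0.5, 0.5, 0.5]
    print(np.abs(pair) ** 2)  # each of 00, 01, 10, 11 measured with probability 1/4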
Entanglement

According to the theory behind quantum mechanics, some composed states can be described through the description of their constituents. However, there are composed states where no such description is possible, known as entangled states. The entanglement phenomenon was pointed out by Einstein, Podolsky, and Rosen in the so-called EPR paradox. Suppose there is a composed system of two entangled qubits, in which performing a measurement on one qubit causes interference in the measurement of the second. This interference occurs even when the qubits are separated by a long distance, which seems to mean that some information transfer happens faster than the speed of light. This is how quantum entanglement appears to conflict with the theory of relativity, in which information cannot travel faster than the speed of light. The EPR paradox motivated further investigation aimed at deriving new interpretations of quantum mechanics and resolving the paradox.

Quantum entanglement can help to transfer information at a distance by following a communication protocol. The following protocol examples rely on the fact that Alice and Bob separately possess one of two entangled qubits:

• The superdense coding protocol allows Alice to communicate a 2-bit message $$m_0,m_1$$ to Bob using a quantum communication channel, for example, using fiber optics to transmit photons. All Alice has to do is operate on her qubit according to the value of the message and send the resulting qubit to Bob. Once Bob receives the qubit, he measures both qubits, noting that the collapsed 2-bit state corresponds to Alice’s message. (A small simulation of this protocol follows this list.)
• The quantum teleportation protocol allows Alice to transmit a qubit to Bob without using a quantum communication channel. Alice measures the qubit to be sent to Bob together with her entangled qubit, resulting in two bits. Alice sends these bits to Bob, who operates on his entangled qubit according to the bits received and notes that the resulting state matches the original state of Alice’s qubit.
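As promised above, here is a toy simulation of superdense coding in the same NumPy style; the function name and structure are mine, and a real run would use physical entangled qubits rather than a state vector:

.. code:: python

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # entangled pair shared by Alice and Bob

    def superdense(b1, b0):
        """Alice encodes two bits on her half of the pair; Bob decodes both."""
        encode = [I2, X, Z, X @ Z][2 * b1 + b0]  # 00 -> I, 01 -> X, 10 -> Z, 11 -> XZ
        state = np.kron(encode, I2) @ bell        # Alice operates on her qubit only
        state = np.kron(H, I2) @ (CNOT @ state)   # Bob's decoding circuit
        outcome = np.argmax(np.abs(state) ** 2)   # measurement is deterministic here
        return outcome >> 1, outcome & 1

    print([superdense(b1, b0) for b1 in (0, 1) for b0 in (0, 1)])
    # [(0, 0), (0, 1), (1, 0), (1, 1)]: Bob recovers Alice's two bits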
Quantum Parallelism

Composed systems of qubits allow representation of more information per composed state. Note that operating on a composed state of N qubits is equivalent to operating over a set of 2ᴺ states in superposition. This procedure is quantum parallelism. In this setting, operating over a large volume of information gives the intuition of performing operations in parallel, as in the parallel computing paradigm; one big caveat is that superposition is not equivalent to parallelism. Remember that a composed state is a superposition of several states, so a computation that takes a composed state of inputs will result in a composed state of outputs. The main divergence between classical and quantum parallelism is that quantum parallelism can obtain only one of the processed outputs. Observe that a measurement of the output of a composed state causes the qubits to collapse to only one of the outputs, making it unattainable to calculate all computed values.

Although quantum parallelism does not match precisely with the traditional notion of parallel computing, you can still leverage this computational power to get related information.

Deutsch-Jozsa Problem: Assume $$F$$ is a function that takes as input N bits, outputs one bit, and is either constant (always outputs the same value for all inputs) or balanced (outputs 0 for half of the inputs and 1 for the other half). The problem is to determine whether $$F$$ is constant or balanced.

The quantum algorithm that solves the Deutsch-Jozsa problem uses quantum parallelism. First, N qubits are initialized in a superposition of 2ᴺ states. Then, in a single shot, it evaluates $$F$$ for all of these states. The result of applying $$F$$ appears in the exponent of the amplitude of the all-zero state. Note that only when $$F$$ is constant is this amplitude either +1 or -1. If the result of measuring the N qubits is the all-zeros bitstring, then there is 100% certainty that $$F$$ is constant; any other result indicates that $$F$$ is balanced. A deterministic classical algorithm solves this problem using $$2^{N-1}+1$$ evaluations of $$F$$ in the worst case, whereas the quantum algorithm requires only one evaluation. The Deutsch-Jozsa problem exemplifies the exponential advantage of a quantum algorithm over classical algorithms.
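The phase-oracle form of this algorithm is small enough to simulate directly. In the sketch below (NumPy again; the helper names are mine), the Hadamard gates build the uniform superposition, the oracle multiplies each amplitude by $$(-1)^{F(x)}$$, and the amplitude of the all-zero state reveals the answer. Note the classical simulation still pays the full 2ᴺ cost; only a quantum device gets the one-evaluation speedup.

.. code:: python

    import numpy as np
    from functools import reduce

    def hadamard_n(n):
        """The Hadamard gate applied to each of n qubits (a 2^n x 2^n matrix)."""
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        return reduce(np.kron, [H] * n)

    def deutsch_jozsa(f, n):
        """True if f (promised constant or balanced on n-bit inputs) is constant."""
        Hn = hadamard_n(n)
        state = np.zeros(2 ** n)
        state[0] = 1.0                                # start in |00..0>
        state = Hn @ state                            # uniform superposition
        phases = np.array([(-1) ** f(x) for x in range(2 ** n)])
        state = Hn @ (phases * state)                 # phase oracle, then H again
        return np.isclose(abs(state[0]), 1.0)         # all-zero amplitude is +/-1 iff constant

    print(deutsch_jozsa(lambda x: 1, n=3))      # constant -> True
    print(deutsch_jozsa(lambda x: x & 1, n=3))  # balanced -> False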
Quantum Computers

The theory of quantum computing is supported by investigations in the field of quantum mechanics. However, constructing a quantum machine requires a physical system that allows representing qubits and manipulating states in a reliable and precise way. The DiVincenzo criteria require that a physical implementation of a quantum computer must:

1. Be scalable and have well-defined qubits.
2. Be able to initialize qubits to a state.
3. Have long decoherence times to apply quantum error-correcting codes. Decoherence of a qubit happens when the qubit interacts with the environment, for example, when a measurement is performed.
4. Use a universal set of quantum gates.
5. Be able to measure single qubits without modifying others.

Physical implementations of quantum computers face huge engineering obstacles in satisfying these requirements. The most important challenge is to guarantee low error rates during computation and measurement. Lowering these rates requires techniques for error correction, which add a significant number of qubits specialized for this task. For this reason, the number of qubits of a quantum computer should not be read the way a bit count is for classical systems. In a classical computer, all of the bits are effective for performing a calculation, whereas the number of qubits is the sum of the effective qubits (those used to make calculations), plus the ancillas (used for reversible computations), plus the error-correction qubits.

Current implementations of quantum computers only partially satisfy the DiVincenzo criteria. Quantum adiabatic computers fit in this category, since they do not operate using quantum gates; for this reason, they are not considered to be universal quantum computers.

Quantum Adiabatic Computers

A recurrent problem in optimization is to find the global minimum of an objective function. For example, a route-traffic control system can be modeled as a function that reduces the cost of routing to a minimum. Simulated annealing is a heuristic procedure that provides a good solution to these types of problems; it finds the solution state by slowly introducing changes (the adiabatic process) in the variables that govern the system. Quantum annealing is the analogous quantum version of simulated annealing. A qubit is initialized into a superposition of states representing all possible solutions to the problem. The Hamiltonian operator, the sum of the potential and kinetic energies of the system, is used here: the objective function is encoded using this operator, which describes the evolution of the system over time. Then, if the system is allowed to evolve very slowly, it will eventually land on a final state representing the optimal value of the objective function.

Currently, there exist adiabatic computers on the market, such as the D-Wave systems, featuring hundreds of qubits; however, their capabilities are somewhat limited to problems that can be modeled as optimization problems. The limits of adiabatic computers were studied by van Dam et al., showing that despite solving local search problems and even some instances of the max-SAT problem, there exist harder search problems this computing model cannot efficiently solve.

Nuclear Magnetic Resonance

Nuclear Magnetic Resonance (NMR) is a physical phenomenon that can be used to represent qubits: the spins of the atomic nuclei of molecules are perturbed by an oscillating magnetic field. A 2001 report describes a successful implementation of Shor’s algorithm in a 7-qubit NMR quantum computer, an iconic result, since this computer was able to factor the number 15.

Superconducting Quantum Computers

One way to physically construct qubits is based on superconductors, materials that conduct electric current with zero resistance when exposed to temperatures close to absolute zero. The Josephson effect, in which current flows across the junction of two superconductors separated by a non-superconducting material, is used to physically implement a superposition of states. When a magnetic flux is applied to this junction, the current flows continuously in one direction. But, depending on the quantity of magnetic flux applied, the current can also flow in the opposite direction. There exists a quantum superposition of currents going both clockwise and counterclockwise, leading to a physical implementation of a qubit called a flux qubit. The complete device is known as a Superconducting Quantum Interference Device (SQUID) and can be easily coupled, scaling the number of qubits. Thus, SQUIDs are like the transistors of a quantum computer. Examples of superconducting computers are:

• D-Wave’s adiabatic computers, which use quantum annealing to solve diverse optimization problems.
• Google’s recently announced 72-qubit computer, which also tackles several engineering issues, such as achieving lower temperatures.
• IBM’s IBM-Q Tokyo, a 20-qubit computer, and IBM Q Experience, a cloud-based system for exploring quantum circuits.

(Image: the IBM Q System)

The Imminent Threat of Quantum Algorithms

The quantum zoo website tracks problems that can be solved using quantum algorithms. As of mid-2018, more than 60 problems appear on this list, targeting diverse applications in the areas of number theory, approximation, simulation, and searching. Alarmingly, some of the problems easily solvable by quantum computing underpin the security of information.

Grover’s Algorithm

Tales of a quantum detective (fragment). A couple of detectives have the mission of finding the one culprit in a group of suspects who always answer this question honestly: “are you guilty?”. Detective C follows a classic interrogation method, interviewing every person one at a time until finding the first one who confesses. Detective Q proceeds in a different way: first, all the suspects are gathered in a completely dark room, and after that, detective Q asks them: “are you guilty?” A steady sound comes from the room saying “No!” while, at the same time, a single voice mixed into the air responds “Yes!”. Since everybody is submerged in darkness, the detective cannot see the culprit. However, detective Q knows that, as the interrogation advances, the culprit will grow desperate and start to speak louder and louder, and so he continues asking the same question. Suddenly, detective Q turns on the lights, enters the room, and captures the culprit. How did he do it?

The task of the detective can be modeled as a search problem: given a Boolean function $$f$$ that takes N bits and produces one bit, find the unique input $$x$$ such that $$f(x)=1$$. A classical algorithm (detective C) finds $$x$$ using $$2^N-1$$ function evaluations in the worst case. However, the quantum algorithm devised by Grover, corresponding to detective Q, searches quadratically faster, using around $$2^{N/2}$$ function evaluations.

The key intuition of Grover’s algorithm is to increase the amplitude of the state that represents the solution while keeping the other states at a lower amplitude. In this way, a system of N qubits, which is a superposition of 2ᴺ possible inputs, can be continuously updated using this intuition until the solution state has an amplitude close to 1. Hence, after updating the qubits many times, there will be a high probability of measuring the solution state. Initially, a superposition of 2ᴺ states is set, each state having an amplitude close to 0. The qubits are then updated so that the amplitude of the solution state increases more than the amplitudes of the other states. By repeating the update step, the amplitude of the solution state gets closer to 1, which boosts the probability of collapsing to the solution state after measuring.

Grover’s Algorithm (pseudo-code):

1. Prepare an N-qubit register $$|x\rangle$$ as a uniform superposition of 2ᴺ states.
2. Update the qubits by performing the core operation $$|x\rangle \mapsto (-1)^{f(x)} |x\rangle$$. The result of $$f(x)$$ only flips the amplitude of the searched state.
3. Invert the amplitudes of the register about their average.
4. Repeat Steps 2 and 3 about $$(\tfrac{\pi}{4}) 2^{N/2}$$ times.
5. Measure the register and return the bits obtained.

Alternatively, the second step can be better understood as a conditional statement:

IF f(x) = 1 THEN
    Negate the amplitude of the solution state.
ELSE
    /* nothing */
ENDIF
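Both update steps are linear operations on the vector of amplitudes, so the whole loop can be simulated classically for small N (paying the full 2ᴺ cost that a quantum device avoids). A minimal sketch of that simulation, with names of my choosing:

.. code:: python

    import numpy as np

    def grover(f, n, rng=np.random.default_rng(0)):
        """Simulate Grover's search for the unique x in [0, 2^n) with f(x) == 1."""
        N = 2 ** n
        amp = np.full(N, 1 / np.sqrt(N))              # step 1: uniform superposition
        flips = np.array([(-1) ** f(x) for x in range(N)])
        for _ in range(int(np.pi / 4 * np.sqrt(N))):  # step 4: ~ (pi/4) * 2^(n/2) rounds
            amp = flips * amp                         # step 2: flip the solution's amplitude
            amp = 2 * amp.mean() - amp                # step 3: invert about the average
        probs = np.abs(amp) ** 2
        probs /= probs.sum()                          # guard against floating-point drift
        return rng.choice(N, p=probs)                 # step 5: measure

    secret = 11
    print(grover(lambda x: int(x == secret), n=5))    # prints 11 with high probability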
Grover’s algorithm treats the function $$f$$ as a black box, so, with slight modifications, it can also be used to find collisions in the function. This implies that Grover’s algorithm can find a collision using asymptotically fewer operations than a brute-force algorithm. The power of Grover’s algorithm can be turned against cryptographic hash functions. For instance, a quantum computer running Grover’s algorithm could find a preimage of a SHA256 output performing only 2¹²⁸ evaluations of a reversible circuit of SHA256. The natural protection for hash functions is to double the output size. More generally, most symmetric-key encryption algorithms will survive the power of Grover’s algorithm if their key sizes are doubled. The scenario for public-key algorithms, however, is devastating in the face of Peter Shor’s algorithm.

Shor’s Algorithm

Multiplying integers is an easy task to accomplish; however, finding the factors that compose an integer is difficult. The integer factorization problem is to decompose a given integer into its prime factors. For example, 42 has three prime factors, 2, 3, and 7, since $$2\times 3\times 7 = 42$$. As the numbers get bigger, integer factorization becomes more difficult to solve, and the hardest instances are those in which the factors are only two different large primes. Thus, given an integer number $$N$$, finding primes $$p$$ and $$q$$ such that $$N = p \times q$$ is known as integer splitting.

Factoring integers is like cutting wood, and the specific task of splitting integers is analogous to using an axe to split a log in two parts. There exist many different tools (algorithms) for accomplishing each task. For integer factorization, trial division, Pollard’s rho method, and the elliptic curve method are common algorithms. Fermat’s method and the quadratic and rational sieves lead to the (general) number field sieve (NFS) algorithm for integer splitting. The latter relies on finding a congruence of squares, that is, splitting $$N$$ as a difference of squares such that $$N = x^2 - y^2 = (x+y)\times(x-y)$$. The complexity of NFS is mainly determined by the number of pairs $$(x, y)$$ that must be examined before getting a pair that factors $$N$$. The NFS algorithm has subexponential complexity in the size of $$N$$, meaning that the time required for splitting an integer increases significantly as the size of $$N$$ grows. For large integers, the problem becomes intractable for classical computers.

The Axe of Thor Shor

The many different guesses of the NFS algorithm are analogous to hitting the log with a dulled axe; after subexponentially many tries, the log is cut in half. However, using a sharper axe allows you to split the log faster. This sharpened axe is the quantum algorithm proposed by Shor in 1994. Let $$x$$ be an integer less than $$N$$ and of order $$k$$, that is, $$k$$ is the smallest positive integer such that $$x^k \equiv 1 \bmod N$$. Then, if $$k$$ is even, there exists an integer $$q$$ such that $$qN$$ can be factored as follows: $$qN = x^k - 1 = (x^{k/2}-1)\times(x^{k/2}+1)$$. This approach has some issues: for example, the factorization could correspond to $$q$$, not $$N$$, and the order of $$x$$ is unknown. Here is where Shor’s algorithm enters the picture, finding the order of $$x$$. The internals of Shor’s algorithm rely on encoding the order $$k$$ into a periodic function, so that its period can be obtained using the quantum version of the Fourier transform (QFT). The order of $$x$$ can be found using a polynomial number of quantum evaluations of Shor’s algorithm. Therefore, splitting integers using this quantum approach has polynomial complexity in the size of $$N$$.
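The classical half of this reduction fits in a few lines. In the toy sketch below (plain Python; the brute-force order finding stands in for the quantum step, which is exactly the part Shor’s algorithm accelerates), N = 15 is split using x = 7:

.. code:: python

    from math import gcd

    def order(x, N):
        """Brute-force order finding: the smallest k with x^k = 1 (mod N).
        This is the step a quantum computer performs efficiently."""
        k, y = 1, x % N
        while y != 1:
            y = (y * x) % N
            k += 1
        return k

    def split(N, x):
        """Try to split N using an element x of even order, via
        qN = x^k - 1 = (x^(k/2) - 1) * (x^(k/2) + 1)."""
        if gcd(x, N) != 1:
            return gcd(x, N), N // gcd(x, N)  # lucky: x shares a factor with N
        k = order(x, N)
        if k % 2 == 1:
            return None                       # odd order: pick another x and retry
        # For an unlucky x, p or q can be trivial (1 or N); retry with another x.
        p = gcd(pow(x, k // 2, N) - 1, N)
        q = gcd(pow(x, k // 2, N) + 1, N)
        return p, q

    print(split(15, 7))  # (3, 5): the order of 7 mod 15 is 4, and gcd(48, 15) = 3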
Shor’s algorithm carries strong implications for the security of the RSA encryption scheme, because its security relies on integer factorization: a large-enough quantum computer can efficiently break RSA for current instances. Alternatively, one may resort to elliptic curves, used in cryptographic protocols like ECDSA or ECDH. Moreover, all TLS ciphersuites use a combination of elliptic curve groups, large prime groups, and RSA and DSA signatures. Unfortunately, these algorithms all succumb to Shor’s algorithm: it only takes a few modifications for Shor’s algorithm to solve the discrete logarithm problem in finite groups.

This sounds like a catastrophic story in which all of our encrypted data and privacy are no longer secure with the advent of a quantum computer, and in some sense this is true. On one hand, it is a fact that the quantum computers constructed as of 2019 are not large enough to run, for instance, Shor’s algorithm for the RSA key sizes used in standard protocols. For example, a 2018 report shows experiments on the factorization of a 19-bit number using 94 qubits; the authors also estimate that 147,456 qubits would be needed to factor a 768-bit number. Hence, these numbers indicate that we are still far from breaking RSA.

What if we increase RSA key sizes to be resistant to quantum algorithms, just as for symmetric algorithms? Bernstein et al. estimated that RSA public keys would have to be as large as one terabyte to keep RSA secure even in the presence of quantum factoring algorithms. So, for public-key algorithms, increasing the size of keys does not help. A recent investigation by Gidney and Ekerå shows improvements that accelerate quantum factorization. In their report, the cost of factoring 2048-bit integers is estimated to take a few hours using a quantum machine of 20 million qubits, which is far beyond any current development. Worth noting is that this number of qubits is two orders of magnitude smaller than the estimates given in previous works from this decade.

Under these estimates, current encryption algorithms will remain secure for several more years; however, consider the following not-so-unrealistic situation. Information currently encrypted with, for example, RSA can be easily decrypted with a quantum computer in the future. Now, suppose that someone records encrypted information and stores it until a quantum computer is able to decrypt the ciphertexts. Although this could be as far as 20 years from now, the forward-secrecy principle is violated. A 20-year gap into the future is sometimes difficult to imagine, so let’s think backwards: what would happen if everything you did on the Internet at the end of the 1990s could be revealed 20 years later, today? How does this impact the security of your personal information? What if the ciphertexts were company secrets or business deals? In 1999, most of us were concerned about the effects of the Y2K problem; now we’re facing Y2Q (years to quantum): the advent of quantum computers.
Post-Quantum Cryptography

Although the current physical implementations of quantum computers are far from being a real threat to secure communications, a transition to stronger problems for protecting information has already started. This wave emerged as post-quantum cryptography (PQC). The core idea of PQC is finding algorithms difficult enough that no quantum (or classical) algorithm can solve them.

A recurrent question is: what does a problem look like that not even a quantum computer can solve? These so-called quantum-resistant algorithms rely on different hard mathematical assumptions, some of them as old as RSA, others more recently proposed. For example, the McEliece cryptosystem, formulated in the late 70s, relies on the hardness of decoding a linear code (in the sense of coding theory). The practical use of this cryptosystem never became widespread, since, with the passing of time, other cryptosystems superseded it in efficiency. Fortunately, the McEliece cryptosystem remains immune to Shor’s algorithm, giving it renewed relevance in the post-quantum era.

Post-quantum cryptography presents alternatives: in 2017, NIST started an evaluation process that tracks possible alternatives for next-generation secure algorithms. From a practical perspective, all candidates present different trade-offs in implementation and usage. The time and space requirements are diverse; at this moment, it’s too early to say which will succeed RSA and elliptic curves.
An initial round collected 70 algorithms for deploying key encapsulation mechanisms and digital signatures. As of early 2019, 28 of these survive and are currently in the analysis, investigation, and experimentation phase. Cloudflare’s mission is to help build a better Internet. As a proactive action, our cryptography team is preparing experiments on the deployment of post-quantum algorithms at Cloudflare scale. Watch our blog post for more details.
Although this could be as far as 20 years from now, the forward-secrecy principle is violated. A 20-year gap into the future is sometimes difficult to imagine, so let’s think backwards: what would happen if everything you did on the Internet at the end of the 1990s could be revealed 20 years later — today? How does this impact the security of your personal information? What if the ciphertexts were company secrets or business deals? In 1999, most of us were concerned about the effects of the Y2K problem; now we’re facing Y2Q (years to quantum): the advent of quantum computers.

Post-Quantum Cryptography
Although the current capacity of the physical implementations of quantum computers is far from a real threat to secure communications, a transition to stronger problems for protecting information has already started. This wave emerged as post-quantum cryptography (PQC). The core idea of PQC is finding algorithms difficult enough that no quantum (or classical) algorithm can solve them. A recurrent question is: what does a problem that even a quantum computer cannot solve look like? These so-called quantum-resistant algorithms rely on different hard mathematical assumptions, some of them as old as RSA, others more recently proposed. For example, the McEliece cryptosystem, formulated in the late 70s, relies on the hardness of decoding a linear code (in the sense of coding theory). The practical use of this cryptosystem never became widespread, since over time other cryptosystems superseded it in efficiency. Fortunately, the McEliece cryptosystem remains immune to Shor’s algorithm, which gives it renewed relevance in the post-quantum era.

Post-quantum cryptography presents alternatives: as of 2017, NIST has been running an evaluation process that tracks possible alternatives for next-generation secure algorithms. From a practical perspective, all candidates present different trade-offs in implementation and usage. The time and space requirements are diverse; at this moment, it’s too early to say which will succeed RSA and elliptic curves. An initial round collected 70 algorithms for deploying key encapsulation mechanisms and digital signatures. As of early 2019, 28 of these survive and are currently in the analysis, investigation, and experimentation phase. Cloudflare’s mission is to help build a better Internet. As a proactive measure, our cryptography team is preparing experiments on the deployment of post-quantum algorithms at Cloudflare scale. Watch our blog for more details.

Towards Post-Quantum Cryptography in TLS
Post Syndicated from Kris Kwiatkowski original https://blog.cloudflare.com/towards-post-quantum-cryptography-in-tls/

We live in a completely connected society. A society connected by a variety of devices: laptops, mobile phones, wearables, self-driving or self-flying things. We have standards for a common language that allows these devices to communicate with each other. This is critical for wide-scale deployment – especially in cryptography, where the smallest detail has great importance. One of the most important standards-setting organizations is the National Institute of Standards and Technology (NIST), which is hugely influential in determining which standardized cryptographic systems see worldwide adoption. At the end of 2016, NIST announced it would hold a multi-year open project with the goal of standardizing new post-quantum (PQ) cryptographic algorithms secure against both quantum and classical computers.
Many of our devices have very different requirements and capabilities, so it may not be possible to select a “one-size-fits-all” algorithm during the process. NIST mathematician Dustin Moody indicated that the institute will likely select more than one algorithm: “There are several systems in use that could be broken by a quantum computer – public-key encryption and digital signatures, to take two examples – and we will need different solutions for each of those systems.” Initially, NIST selected 82 candidates for further consideration from all submitted algorithms. At the beginning of 2019, this process entered its second stage. Today, there are 26 algorithms still in contention.

Post-quantum cryptography: what is it really and why do I need it?
In 1994, Peter Shor made a significant discovery in quantum computation. He found an algorithm for integer factorization and computing discrete logarithms, both believed to be hard to solve in classical settings. Since then it has become clear that the ‘hard problems’ on which cryptosystems like RSA and elliptic curve cryptography (ECC) rely – integer factoring and computing discrete logarithms, respectively – are efficiently solvable with quantum computing. A quantum computer can help to solve some of the problems that are intractable on a classical computer. In theory, it could efficiently solve some fundamental problems in mathematics. This amazing computing power would be highly beneficial, which is why companies are actually trying to build quantum computers. At first, Shor’s algorithm was merely a theoretical result – quantum computers powerful enough to execute it did not exist – but this is quickly changing. In March 2018, Google announced a 72-qubit universal quantum computer. While this is not enough to break, say, RSA-2048 (many more qubits are needed), many fundamental engineering problems on the road to larger machines have already been solved.

In anticipation of widespread quantum computing, we must start the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives. It may be that consumers will never get to hold a quantum computer, but a few powerful attackers who do get one can still pose a serious threat. Moreover, under the assumption that current TLS handshakes and ciphertexts are being captured and stored, a future attacker could crack the stored session keys and use those results to decrypt the corresponding ciphertexts. Even strong security guarantees, like forward secrecy, do not help much there. In 2006, the academic research community launched a conference series dedicated to finding alternatives to RSA and ECC. This so-called post-quantum cryptography should run efficiently on a classical computer, but it should also be secure against attacks performed by a quantum computer. As a research field, it has grown substantially in popularity. Several companies, including Google, Microsoft, Digicert and Thales, are already testing the impact of deploying PQ cryptography. Cloudflare is involved in some of this, but we want to be a company that leads in this direction. The first thing we need to do is understand the real costs of deploying PQ cryptography, and that’s not obvious at all.

What options do we have?
Many submissions to the NIST project are still under study. Some are very new and little understood; others are more mature and already standardized as RFCs.
Some have been broken or withdrawn from the process; others are more conservative or illustrate how far classical cryptography would need to be pushed so that a quantum computer could not crack it at a reasonable cost. Some are very slow and big; others are not. Most cryptographic schemes can be categorized into these families: lattice-based, multivariate, hash-based (signatures only), code-based, and isogeny-based. For some algorithms, nevertheless, there is a fear they may be too inconvenient to use with today’s Internet. We must also be able to integrate new cryptographic schemes with existing protocols, such as SSH or TLS. To do that, designers of PQ cryptosystems must consider these characteristics:
• Latency caused by encryption and decryption on both ends of the communication channel, assuming a variety of devices from big and fast servers to slow and memory-constrained IoT (Internet of Things) devices
• Small public keys and signatures to minimize bandwidth
• Clear design that allows cryptanalysis and determining weaknesses that could be exploited
• Use of existing hardware for fast implementation
The work on post-quantum public-key cryptosystems must be done in full view of organizations, governments, cryptographers, and the public. Emerging ideas must be properly vetted by this community to ensure widespread support.

Helping Build a Better Internet
To better understand the post-quantum world, Cloudflare began experimenting with these algorithms and used them to provide confidentiality in TLS connections. With Google, we are proposing a wide-scale experiment that combines client- and server-side data collection to evaluate the performance of key-exchange algorithms on actual users’ devices. We hope that this experiment helps choose an algorithm with the best characteristics for the future of the Internet. With Cloudflare’s highly distributed network of access points and Google’s Chrome browser, both companies are in a very good position to perform this experiment. Our goal is to understand how these algorithms act when used by real clients over real networks, particularly candidate algorithms with significant differences in public-key or ciphertext sizes. Our focus is on how different key sizes affect handshake time in the context of Transport Layer Security (TLS) as used on the web over HTTPS. Our primary candidates are an NTRU-based construction called HRSS-SXY (by Hülsing – Rijneveld – Schanck – Schwabe, and Tsunekazu Saito – Keita Xagawa – Takashi Yamakawa) and the isogeny-based Supersingular Isogeny Key Encapsulation (SIKE). Both algorithms are described in more detail in the section “Dive into post-quantum cryptography” below. The following table shows a few characteristics of both algorithms; performance timings were obtained by running the BoringSSL speed test on an Intel Skylake CPU.

KEM         Public key (bytes)   Ciphertext (bytes)   Secret size (bytes)   KeyGen (op/sec)   Encaps (op/sec)   Decaps (op/sec)   NIST level
HRSS-SXY    1138                 1138                 32                    3952.3            76034.7           21905.8           1
SIKE/p434   330                  346                  16                    367.1             228.0             209.3             1

Currently, the most commonly used key-exchange algorithm (according to Cloudflare’s data) is the non-quantum X25519. Its public keys are 32 bytes, and BoringSSL can generate 49301.2 key pairs and perform 19628.6 key agreements every second on my Skylake CPU. Note that HRSS-SXY shows a significant speed advantage, while SIKE has a size advantage.
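To put the table and the X25519 baseline side by side, here is a small back-of-the-envelope script using the numbers copied from above. It assumes the client performs key generation and decapsulation while the server performs encapsulation (the KEM-as-key-exchange flow described later), approximates X25519 encapsulation and decapsulation by its key-agreement rate, and ignores network time entirely.

.. code:: python

    # Rough per-handshake costs from the table above (op/sec -> ms per op).
    kems = {
        # name:      (pk bytes, ct bytes, keygen, encaps, decaps) in op/sec
        "X25519":    (32,   32,   49301.2, 19628.6, 19628.6),
        "HRSS-SXY":  (1138, 1138, 3952.3,  76034.7, 21905.8),
        "SIKE/p434": (330,  346,  367.1,   228.0,   209.3),
    }
    for name, (pk, ct, keygen, encaps, decaps) in kems.items():
        wire = pk + ct                              # bytes added to handshake
        client_ms = 1000 / keygen + 1000 / decaps   # client: keygen + decaps
        server_ms = 1000 / encaps                   # server: encaps
        print(f"{name:10} {wire:5d} B  client {client_ms:6.2f} ms  "
              f"server {server_ms:5.2f} ms")

Running this reproduces the trade-off stated above: HRSS-SXY costs well under a millisecond of CPU but roughly 2276 extra bytes on the wire, while SIKE/p434 stays under 700 bytes but costs several milliseconds per handshake on each side.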
In our experiment, we will deploy these two algorithms on the server side using Cloudflare’s infrastructure, and on the client side using Chrome Canary; both sides will collect telemetry about TLS handshakes using these two PQ algorithms to see how they perform in practice.

What do we expect to find?
In 2018, Adam Langley conducted an experiment with the goal of evaluating the likely latency impact of a post-quantum key exchange in TLS. Chrome was augmented with the ability to include a dummy, arbitrarily-sized extension in the TLS ClientHello (a fixed number of bytes of random noise). After taking into account the performance and key sizes offered by different types of key-exchange schemes, he concluded that constructions based on structured lattices may be most suitable for future use in TLS. However, Langley also observed a peculiar phenomenon: client connections measured at the 95th percentile had much higher latency than the median. It means that in those cases, isogeny-based systems may be a better choice. In the section “Dive into post-quantum cryptography”, we describe the difference between the isogeny-based SIKE and the lattice-based NTRU cryptosystems. In our experiment, we want to more thoroughly evaluate and ascribe root causes to these unexpected latency increases. We would particularly like to learn more about the characteristics of those networks: what causes the increased latency? How does the performance cost of isogeny-based algorithms impact the TLS handshake? We want to answer key questions, like:
• What is a good ratio for speed-to-key size (or how much faster would SIKE have to get to achieve the client-perceived performance of HRSS)?
• How do network middleboxes behave when clients use new PQ algorithms, and which networks have problematic middleboxes?
• How do the different properties of client networks affect TLS performance with different PQ key exchanges? Can we identify specific autonomous systems, device configurations, or network configurations that favor one algorithm over another? How is performance affected in the long tail?

Experiment Design
Our experiment will involve both server- and client-side performance statistics collected from real users around the world (all the data is anonymized). Cloudflare is operating the server side of the TLS connections. We will enable the CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE + X25519) key-agreement algorithms on all TLS-terminating edge servers. In this experiment, the ClientHello will contain a CECPQ2 or CECPQ2b public key (but never both). Additionally, Chrome will always include X25519 for servers that do not support post-quantum key exchange. The post-quantum key exchange will only be negotiated in TLS version 1.3 when both sides support it. Since Cloudflare only measures the server side of the connection, it is impossible to determine the time it takes for a ClientHello sent from Chrome to reach Cloudflare’s edge servers; however, we can measure the time it takes for the TLS ServerHello message containing the post-quantum key exchange to reach the client and for the client to respond. On the client side, Chrome Canary will operate the TLS connection. Google will enable either CECPQ2 or CECPQ2b in Chrome for the following mix of architectures and OSes:
• x86-64: Windows, Linux, macOS, ChromeOS
• aarch64: Android
Our high-level expectation is to get results similar to Langley’s original experiment in 2018 — slightly increased latency for the 50th percentile and higher latency for the 95th.
Unfortunately, data collected purely from real users’ connections may not suffice for diagnosing the root causes of why some clients experience excessive slowdowns. To this end, we will perform follow-up experiments based on per-client information we collect server-side. Our primary hypothesis is that excessive slowdowns, like those Langley observed, are largely due to in-network events, such as middleboxes or bloated/lossy links. As a first-pass analysis, we will investigate whether the slowed-down clients share common network features, like common ASes, common transit networks, common link types, and so on. To determine this, we will run traceroutes from vantage points close to our servers back toward the clients (without overloading any particular links or hosts) and study whether some client locations are subject to slowdowns for all destinations or just for some.

Dive into post-quantum cryptography
Be warned: the details of PQ cryptography may be quite complicated. In some cases it builds on classical cryptography, and in other cases it is completely different math. It would be rather hard to describe the details in a single blog post. Instead, we are giving you an intuition for post-quantum cryptography rather than deep academic-level descriptions. We’re skipping a lot of details for the sake of brevity. Nevertheless, settle in for a bit of an epic journey, because we have a lot to cover.

Key encapsulation mechanism
NIST requires that all key-agreement algorithms have a form of key-encapsulation mechanism (KEM). A KEM is a simplified form of public-key encryption (PKE). Like PKE, it allows two parties to agree on a secret, but in a slightly different way: the session key is an output of the encryption algorithm, in contrast to public-key encryption schemes, where the session key is an input to the algorithm.
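As a concrete sketch of this interface (key generation, encapsulation, decapsulation, as described in the next paragraph), here is a toy KEM built from classical X25519 using the pyca/cryptography package. This is only a stand-in for a PQ scheme, not one of the candidates; in this DH-based construction the session key is derived from an ephemeral exchange rather than sampled and then encrypted.

.. code:: python

    # Toy KEM interface from classical X25519 (pyca/cryptography); the
    # "ciphertext" is simply an ephemeral public key.
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
        X25519PublicKey,
    )
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def kem_keygen():
        sk = X25519PrivateKey.generate()
        return sk, sk.public_key()

    def kem_encaps(pk):
        eph = X25519PrivateKey.generate()
        key = eph.exchange(pk)                 # the shared session key
        ct = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return ct, key

    def kem_decaps(sk, ct):
        return sk.exchange(X25519PublicKey.from_public_bytes(ct))

    # KEM used as a key exchange: Alice -> pk -> Bob; Bob -> ct -> Alice.
    alice_sk, alice_pk = kem_keygen()
    ct, bob_key = kem_encaps(alice_pk)
    assert kem_decaps(alice_sk, ct) == bob_key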
This key exchange uses the HRSS algorithm, which is based on the NTRU (N-Th Degree TRUncated Polynomial Ring) algorithm. Foregoing too much detail, I am going to explain how NTRU works and give simplified examples, and finally, compare it to HRSS. NTRU is a cryptosystem based on a polynomial ring. This means that we do not operate on numbers modulo a prime (like in RSA), but on polynomials of degree $$N$$ , where the degree of a polynomial is the highest exponent of its variable. For example, $$x^7 + 6x^3 + 11x^2$$ has degree of 7. One can add polynomials in the ring in the usual way, by simply adding theirs coefficients modulo some integer. In NTRU this integer is called $$q$$. Polynomials can also be multiplied, but remember, you are operating in the ring, therefore the result of a multiplication is always a polynomial of degree less than $$N$$. It basically means that exponents of the resulting polynomial are added to modulo $$N$$. In other words, polynomial ring arithmetic is very similar to modular arithmetic, but instead of working with a set of numbers less than N, you are working with a set of polynomials with a degree less than N. To instantiate the NTRU cryptosystem, three domain parameters must be chosen: • $$N$$ – degree of the polynomial ring, in NTRU the principal objects are polynomials of degree $$N-1$$. • $$p$$ – small modulus used during key generation and decryption for reducing message coefficients. • $$q$$ – large modulus used during algorithm execution for reducing coefficients of the polynomials. First, we generate a pair of public and private keys. To do that, two polynomials $$f$$ and $$g$$ are chosen from the ring in a way that their randomly generated coefficients are much smaller than $$q$$. Then key generation computes two inverses of the polynomial: $$f_p= f^{-1} \bmod{p} \\ f_q= f^{-1} \bmod{q}$$ The last step is to compute $$pk = p\cdot f_q\cdot g \bmod q$$, which we will use as public key pk. The private key consists of $$f$$ and $$f_p$$. The $$f_q$$ is not part of any key, however it must remain secret. It might be the case that after choosing $$f$$, the inverses modulo $$p$$ and $$q$$ do not exist. In this case, the algorithm has to start from the beginning and generate another $$f$$. That’s unfortunate because calculating the inverse of a polynomial is a costly operation. HRSS brings an improvement to this issue since it ensures that those inverses always exist, making key generation faster than as proposed initially in NTRU. The encryption of a message $$m$$ proceeds as follows. First, the message $$m$$ is converted to a ring element $$pt$$ (there exists an algorithm for performing this conversion in both directions). During encryption, NTRU randomly chooses one polynomial $$b$$ called blinder. The goal of the blinder is to generate different ciphertexts per encyption. Thus, the ciphetext $$ct$$ is obtained as $$ct = (b\cdot pk + pt ) \bmod q$$ Decryption looks a bit more complicated but it can also be easily understood. Decryption uses both the secret value $$f$$ and to recover the plaintext as $$v = f \cdot ct \bmod q \\ pt = v \cdot f_p \bmod p$$ This diagram demonstrates why and how decryption works. After obtaining $$pt$$, the message $$m$$ is recovered by inverting the conversion function. The underlying hard assumption is that given two polynomials: $$f$$ and $$g$$ whose coefficients are short compared to the modulus $$q$$, it is difficult to distinguish $$pk = \frac{f}{g}$$ from a random element in the ring. 
It means that it’s hard to find $$f$$ and $$g$$ given only public key pk. Lattices NTRU cryptosystem is a grandfather of lattice-based encryption schemes. The idea of using difficult problems for cryptographic purposes was due to Ajtai. His work evolved into a whole area of research with the goal of creating more practical, lattice-based cryptosystems. What is a lattice and why it can be used for post-quantum crypto? The picture below visualizes lattice as points in a two-dimensional space. A lattice is defined by the origin $$O$$ and base vectors $$\{ b_1 , b_2\}$$. Every point on the lattice is represented as a linear combination of the base vectors, for example $$V = -2b_1+b_2$$. There are two classical NP-hard problems in lattice-based cryptography: 1. Shortest Vector Problem (SVP): Given a lattice, to find the shortest non-zero vector in the lattice. In the graph, the vector $$s$$ is the shortest one. The SVP problem is NP-hard only under some assumptions. 2. Closest Vector Problem (CVP). Given a lattice and a vector $$V$$ (not necessarily in the lattice), to find the closest vector to $$V$$. For example, the closest vector to $$t$$ is $$z$$. In the graph above, it is easy for us to solve SVP and CVP by simple inspection. However, the lattices used in cryptography have higher dimensions, say above 1000, as well as highly non-orthogonal basis vectors. On these instances, the problems get extremely hard to solve. It’s even believed future quantum computers will have it tough. NTRU vs HRSS HRSS, which we use in our experiment, is based on NTRU, but a slightly better instantiation. The main improvements are: • Faster key generation algorithm. • NTRU encryption can produce ciphertexts that are impossible to decrypt (true for many lattice-based schemes). But HRSS fixes this problem. • HRSS is a key encapsulation mechanism. CECPQ2b – Isogeny-based Post-Quantum TLS Following CECPQ2, we have integrated into BoringSSL another hybrid key exchange mechanism relying on SIKE. It is called CECPQ2b and we will use it in our experimentation in TLS 1.3. SIKE is a key encapsulation method based on Supersingular Isogeny Diffie-Hellman (SIDH). Read more about SIDH in our previous post. The math behind SIDH is related to elliptic curves. A comparison between SIDH and the classical Elliptic Curve Diffie-Hellman (ECDH) is given. An elliptic curve is a set of points that satisfy a specific mathematical equation. The equation of an elliptic curve may have multiple forms, the standard form is called the Weierstrass equation $$y^2 = x^3 +ax +b$$ and its shape can look like the red curve. An interesting fact about elliptic curves is have a group structure. That is, the set of points on the curve have associated a binary operation called point addition. The set of points on the elliptic curve is closed under addition. Thus, adding two points results in another point that is also on the elliptic curve. If we can add two different points on a curve, then we can also add one point to itself. And if we do it multiple times, then the resulting operations is known as a scalar multiplication and denoted as $$Q = k\cdot P = P+P+\dots+P$$ for an integer $$k$$. Multiplication of scalars is commutative. It means that two scalar multiplications can be evaluated in any order $$\color{darkred}{k_a}\cdot\color{darkgreen}{k_b} = \color{darkgreen}{k_b}\cdot\color{darkred}{k_a}$$; this an important property that makes ECDH possible. 
It turns out that if the elliptic curve is chosen “correctly”, scalar multiplication is easy to compute but extremely hard to reverse. Meaning, given two points $$Q$$ and $$P$$ such that $$Q=k\cdot P$$, finding the integer $$k$$ is a difficult task known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem is suitable for cryptographic purposes.

Alice and Bob agree on a secret key as follows. Alice generates a private key $$k_a$$. Then, she uses some publicly known point $$P$$ and calculates her public key as $$Q_a = k_a\cdot P$$. Bob proceeds in a similar fashion and gets $$k_b$$ and $$Q_b = k_b\cdot P$$. To agree on a shared secret, each party multiplies their private key with the public key of the other party; the result of this is the shared secret. Key agreement as described above works thanks to the fact that scalars commute: $$\color{darkgreen}{k_a} \cdot Q_b = \color{darkgreen}{k_a} \cdot \color{darkred}{k_b} \cdot P \iff \color{darkred}{k_b} \cdot \color{darkgreen}{k_a} \cdot P = \color{darkred}{k_b} \cdot Q_a$$ There is a vast theory behind elliptic curves. An introduction to elliptic curve cryptography was posted before, and more details can be found in this book. Now, let’s describe SIDH and compare it with ECDH.

Isogenies on Elliptic Curves
Before explaining the details of the SIDH key exchange, I’ll explain the 3 most important concepts, namely: the j-invariant, isogenies, and their kernels. Each curve has a number that can be associated to it; let’s call this number the j-invariant. This number is not unique per curve, meaning many curves have the same value of the j-invariant, but it can be viewed as a way to group multiple elliptic curves into disjoint sets. We say that two curves are isomorphic if they are in the same set, called an isomorphism class. The j-invariant is a simple criterion for determining whether two curves are isomorphic. The j-invariant of a curve $$E$$ in Weierstrass form $$y^2 = x^3 + ax + b$$ is given as $$j(E) = 1728\frac{4a^3}{4a^3 +27b^2}$$ When it comes to an isogeny, think about it as a map between two curves. Each point on some curve $$E$$ is mapped by the isogeny to a point on the isogenous curve $$E’$$. We denote the mapping from curve $$E$$ to $$E’$$ by the isogeny $$\phi$$ as: $$\phi: E \rightarrow E’$$ Whether those two curves are isomorphic or not depends on the map. An isogeny can be visualised as in the picture below. There may exist many such mappings; each curve used in SIDH has a small number of isogenies to other curves. A natural question is how we compute such an isogeny. This is where the kernel of an isogeny comes in: the kernel uniquely determines an isogeny (up to isomorphism class). Formulas for calculating an isogeny from its kernel were initially given by J. Vélu, and the idea of calculating them efficiently was extended later. To finish, I will summarize what was said above with a picture. There are two isomorphism classes in the picture above. Curves $$E_1$$ and $$E_2$$ are isomorphic and have j-invariant = 6. As curves $$E_3$$ and $$E_4$$ have j-invariant = 13, they are in a different isomorphism class. There exists an isogeny $$\phi_2$$ between curves $$E_3$$ and $$E_2$$, so they are isogenous. Curves $$E_1$$ and $$E_2$$ are isomorphic, and there is an isogeny $$\phi_1$$ between them. Curves $$E_1$$ and $$E_4$$ are neither isomorphic nor isogenous. For brevity I’m skipping many important details, like the details of the finite field, the fact that isogenies must be separable, and that the kernel is finite.
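As a quick sanity check of the j-invariant formula given earlier, a few lines of Python suffice; this works over the rationals for illustration, while the curves used in SIDH actually live over finite fields.

.. code:: python

    from fractions import Fraction

    def j_invariant(a, b):
        # j-invariant of y^2 = x^3 + ax + b: 1728 * 4a^3 / (4a^3 + 27b^2)
        num = 4 * a**3
        return Fraction(1728 * num, num + 27 * b**2)

    print(j_invariant(-1, 0))   # 1728, the j-invariant of y^2 = x^3 - x
    print(j_invariant(0, 1))    # 0, the j-invariant of y^2 = x^3 + 1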
Curious readers can also find a number of academic research papers on the Internet.

Big picture: similarities with ECDH
Let’s generalize the ECDH algorithm described above, so that we can swap some elements and try to use Supersingular Isogeny Diffie-Hellman. Note that what actually happens during an ECDH key exchange is:
• We have a set of points on an elliptic curve, the set S
• We have a group of integers used for point multiplication, G
• We use an element from G to act on an element from S to get another element from S: $$G \cdot S \rightarrow S$$
Now the question is: what are our G and S in the SIDH setting? For SIDH to work, we need a big set of elements and something secret that will act on the elements from that set. This “group action” must also be resistant to attacks performed by quantum computers. In the SIDH setting, those two sets are defined as:
• Set S is a set (graph) of j-invariants, such that all the curves are supersingular: $$S = [j(E_1), j(E_2), j(E_3), \dots , j(E_n)]$$
• Set G is a set of isogenies acting on elliptic curves and transforming, for example, the elliptic curve $$E_1$$ into $$E_n$$.

Random walk on a supersingular graph
When we talk about isogeny-based cryptography, as a topic distinct from elliptic curve cryptography, we usually mean algorithms and protocols that rely fundamentally on the structure of isogeny graphs. An example of such a (small) graph is pictured below. Each vertex of the graph represents a different j-invariant of a set of supersingular curves. The edges between vertices represent isogenies converting one elliptic curve to another. As you can notice, the graph is strongly connected, meaning every vertex can be reached from every other vertex. In the context of isogeny-based crypto, we call such a graph a supersingular isogeny graph. I’ll skip some technical details about the construction of this graph (look for them here or here), and instead describe ideas about how it can be used.

As the graph is strongly connected, it is possible to walk the whole graph by starting from any vertex, randomly choosing an edge, following it to the next vertex, and then repeating the process from the new vertex. Such a way of visiting the edges of this graph is called a random walk. The random walk is a key concept that makes isogeny-based crypto feasible. When you look closely at the graph, you can notice that each vertex has only a small number of edges incident to it; this is why we can compute the isogenies efficiently. But it also means that for any vertex there is only a limited number of isogenies to choose from, which doesn’t look like a good basis for a cryptographic scheme. Where, then, does the security of the scheme come from? To get it, it is necessary to visit a couple hundred vertices. What this means in practice is that the secret isogeny (of large degree) is constructed as a composition of multiple isogenies (of small, prime degree): $$\phi = \phi_n \circ \phi_{n-1} \circ \dots \circ \phi_1$$ This property, together with the properties of the isogeny graph, is what makes some of us believe the scheme has a good chance of being secure. More specifically, there is no known efficient way of finding a path that connects $$E_0$$ with $$E_n$$, even with a quantum computer at hand. The security level of the system depends on the value n – the number of steps taken during the walk.

The random walk is the core process used both when generating public keys and when computing shared secrets. It starts with a party generating a random value m (see more below), a starting curve $$E_0$$, and points P and Q on this curve.
Those values are used to compute the kernel of an isogeny, $$R_1$$, in the following way: $$R_1 = P + m \cdot Q$$ Thanks to the formulas given by Vélu, we can now use the point $$R_1$$ to compute the isogeny along which the party will move from one vertex to the next. After the isogeny $$\phi_{R_1}$$ is calculated, it is applied to $$E_0$$, which results in a new curve $$E_1$$: $$\phi_{R_1}: E_0 \rightarrow E_1$$ The isogeny is also applied to the points P and Q. Once on $$E_1$$, the process is repeated. This process is applied n times, and at the end the party ends up on some curve $$E_n$$, which defines an isomorphism class and hence a j-invariant.

Supersingular Isogeny Diffie-Hellman
The core idea in SIDH is to compose two random walks on an isogeny graph of elliptic curves in such a way that the end node of both compositions is the same. To do this, the scheme sets public parameters – a starting curve $$E_0$$ and 2 pairs of base points on this curve, $$(PA,QA)$$ and $$(PB,QB)$$. Alice generates her random secret key m and calculates a secret isogeny $$\phi_a$$ by performing a random walk as described above. The walk finishes with 3 values: the elliptic curve $$E_a$$ she has ended up on, and the pair of points $$\phi_a(PB)$$ and $$\phi_a(QB)$$ obtained by pushing $$PB$$ and $$QB$$ through Alice’s secret isogeny. Bob proceeds analogously, which results in the triple $$\{E_b, \phi_b(PA), \phi_b(QA)\}$$. Each triple forms a public key, which is exchanged between the parties.

The picture below visualizes this operation. The black dots represent curves, grouped into isomorphism classes represented by light blue circles. Alice takes the orange path, ending up on a curve $$E_a$$ in a different isomorphism class than Bob, who takes his dark blue path ending on $$E_b$$. SIDH is parametrized in such a way that Alice and Bob will always end up in different isomorphism classes. Upon receipt of the triple $$\{ E_a, \phi_a(PB), \phi_a(QB) \}$$ from Alice, Bob will use his secret value m to calculate a new kernel – but instead of using the points $$PB$$ and $$QB$$ to calculate the isogeny kernel, he will now use the images $$\phi_a(PB)$$ and $$\phi_a(QB)$$ received from Alice: $$R’_1 = \phi_a(PB) + m \cdot \phi_a(QB)$$ Afterwards, he uses $$R’_1$$ to start the walk again, resulting in the isogeny $$\phi’_b: E_a \rightarrow E_{ab}$$. Alice proceeds analogously, resulting in the isogeny $$\phi’_a: E_b \rightarrow E_{ba}$$. With isogenies calculated this way, both Alice and Bob converge in the same isomorphism class.

The math may seem complicated; hopefully the picture below makes it easier to understand. Bob computes a new isogeny and starts his random walk from $$E_a$$ received from Alice, ending up on some curve $$E_{ba}$$. Similarly, Alice calculates a new isogeny, applies it to $$E_b$$ received from Bob, and her random walk ends on some curve $$E_{ab}$$. Curves $$E_{ab}$$ and $$E_{ba}$$ are not likely to be the same, but the construction guarantees that they are isomorphic. As mentioned earlier, isomorphic curves have the same value of the j-invariant; hence the shared secret is the j-invariant $$j(E_{ab})$$.

Coming back to the differences between SIDH and ECDH, we can split them into four categories: the elements of the group we are operating on, the cornerstone computation required to agree on a shared secret, the elements representing secret values, and the difficult problem on which the security relies. In ECDH the secret key is an integer scalar; in the case of SIDH it is a secret isogeny, which is also generated from an integer scalar.
In the case of ECDH one multiplies a point on a curve by a scalar; in the case of SIDH it is a random walk in an isogeny graph. In the case of ECDH the public key is a point on a curve; in the case of SIDH the public part is a curve itself plus the images of some points after applying the isogeny. The shared secret in the case of ECDH is a point on a curve; in the case of SIDH it is a j-invariant.

SIKE: Supersingular Isogeny Key Encapsulation
SIDH could potentially be used as a drop-in replacement for the ECDH protocol. We have actually implemented a proof of concept, added it to our implementation of TLS 1.3 in the tls-tris library, and (together with Mozilla) described the implementation details in this draft. Nevertheless, there is a problem with SIDH – the keys can be used only once. In 2016, a few researchers came up with an active attack on SIDH that works only when public keys are reused. In the context of TLS this is not a big problem, because a fresh key pair is generated for each session (ephemeral keys), but it may not be true for other applications. SIKE is an isogeny-based key encapsulation mechanism that solves this problem. Bob can generate SIKE keys, upload the public part somewhere on the Internet, and then anybody can use it whenever they want to communicate with Bob securely. SIKE reuses SIDH – internally, both sides of the connection always perform SIDH key generation and SIDH key agreement, and apply some other cryptographic primitives in order to convert SIDH into a KEM. SIKE is implemented in a few variants – each variant corresponds to a security level, using 128-, 192-, or 256-bit secret keys. A higher security level means a longer running time. More details about SIKE can be found here. SIKE is also one of the candidates in the NIST post-quantum “competition“. I’ve skipped many important details to give a brief description of how isogeny-based crypto works. If you’re curious and hungry for details, look at either of these Cloudflare meetups, where Deirdre Connolly talked about isogeny-based cryptography, or this talk by Chloe Martindale during the PQ Crypto School 2017. And if you would like to know more about quantum attacks on this scheme, I highly recommend this work.

Conclusion
Quantum computers that can break meaningful cryptographic parameter settings do not exist yet, and they won’t be built for at least the next few years. Nevertheless, they have already changed the way we look at current cryptographic deployments. There are at least two reasons it’s worth investing in PQ cryptography:
• It takes a lot of time to build secure cryptography, and we don’t actually know when today’s classical cryptography will be broken. There is a need for a good mathematical base: an initial idea of what may be secure against something that doesn’t exist yet. If you have an idea, you also need a good implementation: constant-time, resistant to things like timing and cache side channels, DFA, DPA, EM, and a bunch of other abbreviations indicating side-channel resistance. There is also deployment: for example, algorithms based on elliptic curves were introduced in ’85, but only started to really be used in production during the last decade, 20 or so years later. Obviously, the implementation must be blazingly fast! Last, but not least, integration: we need time to develop standards to allow the integration of PQ cryptography with protocols like TLS.
• Even though efficient quantum computers probably won’t exist for another few years, the threat is real.
Data encrypted with current cryptographic algorithms can be recorded now, in the hope of breaking it in the future. Cloudflare is motivated to help build the Internet of tomorrow with the tools at hand today. Our interest is in cryptographic techniques that can be integrated into existing protocols and widely deployed on the Internet as seamlessly as possible. PQ cryptography, like the rest of cryptography, includes many cryptosystems that can be used for communications on today’s Internet; Alice and Bob need to perform some computation, but they do not need to buy new hardware to do that. Cloudflare sees great potential in these algorithms and believes that some of them can be used as a safe replacement for classical public-key cryptosystems. Time will tell if we’re justified in this belief!
While this is not enough to break say RSA-2048 (still more is needed), many fundamental problems have already been solved. In anticipation of wide-spread quantum computing, we must start the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives. It may be that consumers will never get to hold a quantum computer, but a few powerful attackers who will get one can still pose a serious threat. Moreover, under the assumption that current TLS handshakes and ciphertexts are being captured and stored, a future attacker could crack these stored individual session keys and use those results to decrypt the corresponding individual ciphertexts. Even strong security guarantees, like forward secrecy, do not help out much there. In 2006, the academic research community launched a conference series dedicated to finding alternatives to RSA and ECC. This so-called post-quantum cryptography should run efficiently on a classical computer, but it should also be secure against attacks performed by a quantum computer. As a research field, it has grown substantially in popularity. Several companies, including Google, Microsoft, Digicert and Thales, are already testing the impact of deploying PQ cryptography. Cloudflare is involved in some of this, but we want to be a company that leads in this direction. The first thing we need to do is understand the real costs of deploying PQ cryptography, and that’s not obvious at all. What options do we have? Many submissions to the NIST project are still under study. Some are very new and little understood; others are more mature and already standardized as RFCs. Some have been broken or withdrawn from the process; others are more conservative or illustrate how far classical cryptography would need to be pushed so that a quantum computer could not crack it within a reasonable cost. Some are very slow and big; others are not. But most cryptographic schemes can be categorized into these families: lattice-based, multivariate, hash-based (signatures only), code-based and isogeny-based. For some algorithms, nevertheless, there is a fear they may be too inconvenient to use with today’s Internet. We must also be able to integrate new cryptographic schemes with existing protocols, such as SSH or TLS. To do that, designers of PQ cryptosystems must consider these characteristics: • Latency caused by encryption and decryption on both ends of the communication channel, assuming a variety of devices from big and fast servers to slow and memory constrained IoT (Internet of Things) devices • Small public keys and signatures to minimize bandwidth • Clear design that allows cryptanalysis and determining weaknesses that could be exploited • Use of existing hardware for fast implementation The work on post-quantum public key cryptosystems must be done in a full view of organizations, governments, cryptographers, and the public. Emerging ideas must be properly vetted by this community to ensure widespread support. Helping Build a Better Internet To better understand the post-quantum world, Cloudflare began experimenting with these algorithms and used them to provide confidentiality in TLS connections. With Google, we are proposing a wide-scale experiment that combines client- and server-side data collection to evaluate the performance of key-exchange algorithms on actual users’ devices. We hope that this experiment helps choose an algorithm with the best characteristics for the future of the Internet. 
With Cloudflare’s highly distributed network of access points and Google’s Chrome browser, both companies are in a very good position to perform this experiment. Our goal is to understand how these algorithms act when used by real clients over real networks, particularly candidate algorithms with significant differences in public-key or ciphertext sizes. Our focus is on how different key sizes affect handshake time in the context of Transport Layer Security (TLS) as used on the web over HTTPS. Our primary candidates are an NTRU-based construction called HRSS-SXY (by Hülsing – Rijneveld – Schanck – Schwabe, and Tsunekazu Saito – Keita Xagawa – Takashi Yamakawa) and an isogeny-based Supersingular Isogeny Key Encapsulation (SIKE). Details of both algorithms are described in more detail below in section “Dive into post-quantum cryptography”. This table shows a few characteristics for both algorithms. Performance timings were obtained by running the BoringSSL speed test on an Intel Skylake CPU. KEM Public Key size (bytes) Ciphertext (bytes) Secret size (bytes) KeyGen (op/sec) Encaps (op/sec) Decaps (op/sec) NIST level HRSS-SXY 1138 1138 32 3952.3 76034.7 21905.8 1 SIKE/p434 330 346 16 367.1 228.0 209.3 1 Currently the most commonly used key exchange algorithm (according to Cloudflare’s data) is the non-quantum X25519. Its public keys are 32 bytes and BoringSSL can generate 49301.2 key pairs, and is able to perform 19628.6 key agreements every second on my Skylake CPU. Note that HRSS-SXY shows a significant speed advantage, while SIKE has a size advantage. In our experiment, we will deploy these two algorithms on both the server side using Cloudflare’s infrastructure, and the client side using Chrome Canary; both sides will collect telemetry information about TLS handshakes using these two PQ algorithms to see how they perform in practice. What do we expect to find? In 2018, Adam Langley conducted an experiment with the goal of evaluating the likely latency impact of a post-quantum key exchange in TLS. Chrome was augmented with the ability to include a dummy, arbitrarily-sized extension in the TLS ClientHello (fixed number of bytes of random noise). After taking into account the performance and key size offered by different types key-exchange schemes, he concluded that constructs based on structured lattices may be most suitable for future use in TLS. However, Langley also observed a peculiar phenomenon; client connections measured at 95th percentile had much higher latency than the median. It means that in those cases, isogeny-based systems may be a better choice. In the “Dive into post-quantum cryptography”, we describe the difference between isogeny-based SIKE and lattice-based NTRU cryptosystems. In our experiment, we want to more thoroughly evaluate and ascribe root causes to these unexpected latency increases. We would particularly like to learn more about the characteristics of those networks: what causes increased latency? how does the performance cost of isogeny-based algorithms impact the TLS handshake? We want to answer key questions, like: • What is a good ratio for speed-to-key size (or how much faster could SIKE get to achieve the client-perceived performance of HRSS)? • How do network middleboxes behave when clients use new PQ algorithms, and which networks have problematic middleboxes? • How do the different properties of client networks affect TLS performance with different PQ key exchanges? 
Can we identify specific autonomous systems, device configurations, or network configurations that favor one algorithm over another? How is performance affected in the long tail? Experiment Design Our experiment will involve both server- and client-side performance statistics collection from real users around the world (all the data is anonymized). Cloudflare is operating the server-side TLS connections. We will enable the CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE + X25519) key-agreement algorithms on all TLS-terminating edge servers. In this experiment, the ClientHello will contain a CECPQ2 or CECPQ2b public key (but never both). Additionally, Chrome will always include X25519 for servers that do not support post-quantum key exchange. The post-quantum key exchange will only be negotiated in TLS version 1.3 when both sides support it. Since Cloudflare only measures the server side of the connection, it is impossible to determine the time it takes for a ClientHello sent from Chrome to reach Cloudflare’s edge servers; however, we can measure the time it takes for the TLS ServerHello message containing post-quantum key exchange, to reach the client and for the client to respond. On the client side, Chrome Canary will operate the TLS connection. Google will enable either CECPQ2 or CECPQ2b in Chrome for the following mix of architecture and OSes: • x86-64: Windows, Linux, macOS, ChromeOS • aarch64: Android Our high-level expectation is to get similar results as Langley’s original experiment in 2018 — slightly increased latency for the 50th percentile and higher latency for the 95th. Unfortunately, data collected purely from real users’ connections may not suffice for diagnosing the root causes of why some clients experience excessive slowdown. To this end, we will perform follow-up experiments based on per-client information we collect server-side. Our primary hypothesis is that excessive slowdowns, like those Langley observed, are largely due to in-network events, such as middleboxes or bloated/lossy links. As a first-pass analysis, we will investigate whether the slowed-down clients share common network features, like common ASes, common transit networks, common link types, and so on. To determine this, we will run a traceroute from vantage points close to our servers back toward the clients (not overloading any particular links or hosts) and study whether some client locations are subject to slowdowns for all destinations or just for some. Dive into post-quantum cryptography Be warned: the details of PQ cryptography may be quite complicated. In some cases it builds on classical cryptography, and in other cases it is completely different math. It would be rather hard to describe details in a single blog post. Instead, we are giving you an intuition of post-quantum cryptography, rather than provide deep academic-level descriptions. We’re skipping a lot of details for the sake of brevity. Nevertheless, settle in for a bit of an epic journey because we have a lot to cover. Key encapsulation mechanism NIST requires that all key-agreement algorithms have a form of key-encapsulation mechanism (KEM). The KEM is a simplified form of public key encryption (PKE). As PKE, it also allows agreement on a secret, but in a slightly different way. The idea is that the session key is an output of the encryption algorithm, conversely to public key encryption schemes where session key is an input to the algorithm. 
In a KEM, Alice generates a random key and uses the pre-generated public key from Bob to encrypt (encapsulate) it. This results in a ciphertext sent to Bob. Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the random key. The idea was initially introduced by Cramer and Shoup. Experience shows that such constructs are easier to design, analyze, and implement as the scheme is limited to communicating a fixed-size session key. Leonardo Da Vinci said, “Simplicity is the ultimate sophistication,” which is very true in cryptography. The key exchange (KEX) protocol, like Diffie-Hellman, is yet a different construct: it allows two parties to agree on a shared secret that can be used as a symmetric encryption key. For example, Alice generates a key pair and sends a public key to Bob. Bob does the same and uses his own key pair with Alice’s public key to generate the shared secret. He then sends his public key to Alice who can now generate the same shared secret. What’s worth noticing is that both Alice and Bob perform exactly the same operations. KEM construction can be converted to KEX. Alice performs key generation and sends the public key to Bob. Bob uses it to encapsulate a symmetric session key and sends it back to Alice. Alice decapsulates the ciphertext received from Bob and gets the symmetric key. This is actually what we do in our experiment to make integration with the TLS protocol less complicated. NTRU Lattice-based Encryption We will enable the CECPQ2 implemented by Adam Langley from Google on our servers. He described this implementation in detail here. This key exchange uses the HRSS algorithm, which is based on the NTRU (N-Th Degree TRUncated Polynomial Ring) algorithm. Foregoing too much detail, I am going to explain how NTRU works and give simplified examples, and finally, compare it to HRSS. NTRU is a cryptosystem based on a polynomial ring. This means that we do not operate on numbers modulo a prime (like in RSA), but on polynomials of degree $$N$$ , where the degree of a polynomial is the highest exponent of its variable. For example, $$x^7 + 6x^3 + 11x^2$$ has degree of 7. One can add polynomials in the ring in the usual way, by simply adding theirs coefficients modulo some integer. In NTRU this integer is called $$q$$. Polynomials can also be multiplied, but remember, you are operating in the ring, therefore the result of a multiplication is always a polynomial of degree less than $$N$$. It basically means that exponents of the resulting polynomial are added to modulo $$N$$. In other words, polynomial ring arithmetic is very similar to modular arithmetic, but instead of working with a set of numbers less than N, you are working with a set of polynomials with a degree less than N. To instantiate the NTRU cryptosystem, three domain parameters must be chosen: • $$N$$ – degree of the polynomial ring, in NTRU the principal objects are polynomials of degree $$N-1$$. • $$p$$ – small modulus used during key generation and decryption for reducing message coefficients. • $$q$$ – large modulus used during algorithm execution for reducing coefficients of the polynomials. First, we generate a pair of public and private keys. To do that, two polynomials $$f$$ and $$g$$ are chosen from the ring in a way that their randomly generated coefficients are much smaller than $$q$$. 
Then key generation computes two inverses of the polynomial: $$f_p= f^{-1} \bmod{p} \\ f_q= f^{-1} \bmod{q}$$ The last step is to compute $$pk = p\cdot f_q\cdot g \bmod q$$, which we will use as public key pk. The private key consists of $$f$$ and $$f_p$$. The $$f_q$$ is not part of any key, however it must remain secret. It might be the case that after choosing $$f$$, the inverses modulo $$p$$ and $$q$$ do not exist. In this case, the algorithm has to start from the beginning and generate another $$f$$. That’s unfortunate because calculating the inverse of a polynomial is a costly operation. HRSS brings an improvement to this issue since it ensures that those inverses always exist, making key generation faster than as proposed initially in NTRU. The encryption of a message $$m$$ proceeds as follows. First, the message $$m$$ is converted to a ring element $$pt$$ (there exists an algorithm for performing this conversion in both directions). During encryption, NTRU randomly chooses one polynomial $$b$$ called blinder. The goal of the blinder is to generate different ciphertexts per encyption. Thus, the ciphetext $$ct$$ is obtained as $$ct = (b\cdot pk + pt ) \bmod q$$ Decryption looks a bit more complicated but it can also be easily understood. Decryption uses both the secret value $$f$$ and to recover the plaintext as $$v = f \cdot ct \bmod q \\ pt = v \cdot f_p \bmod p$$ This diagram demonstrates why and how decryption works. After obtaining $$pt$$, the message $$m$$ is recovered by inverting the conversion function. The underlying hard assumption is that given two polynomials: $$f$$ and $$g$$ whose coefficients are short compared to the modulus $$q$$, it is difficult to distinguish $$pk = \frac{f}{g}$$ from a random element in the ring. It means that it’s hard to find $$f$$ and $$g$$ given only public key pk. Lattices NTRU cryptosystem is a grandfather of lattice-based encryption schemes. The idea of using difficult problems for cryptographic purposes was due to Ajtai. His work evolved into a whole area of research with the goal of creating more practical, lattice-based cryptosystems. What is a lattice and why it can be used for post-quantum crypto? The picture below visualizes lattice as points in a two-dimensional space. A lattice is defined by the origin $$O$$ and base vectors $$\{ b_1 , b_2\}$$. Every point on the lattice is represented as a linear combination of the base vectors, for example $$V = -2b_1+b_2$$. There are two classical NP-hard problems in lattice-based cryptography: 1. Shortest Vector Problem (SVP): Given a lattice, to find the shortest non-zero vector in the lattice. In the graph, the vector $$s$$ is the shortest one. The SVP problem is NP-hard only under some assumptions. 2. Closest Vector Problem (CVP). Given a lattice and a vector $$V$$ (not necessarily in the lattice), to find the closest vector to $$V$$. For example, the closest vector to $$t$$ is $$z$$. In the graph above, it is easy for us to solve SVP and CVP by simple inspection. However, the lattices used in cryptography have higher dimensions, say above 1000, as well as highly non-orthogonal basis vectors. On these instances, the problems get extremely hard to solve. It’s even believed future quantum computers will have it tough. NTRU vs HRSS HRSS, which we use in our experiment, is based on NTRU, but a slightly better instantiation. The main improvements are: • Faster key generation algorithm. • NTRU encryption can produce ciphertexts that are impossible to decrypt (true for many lattice-based schemes). 
HRSS fixes this problem.
• HRSS is a key encapsulation mechanism rather than a plain encryption scheme.

CECPQ2b – Isogeny-based Post-Quantum TLS

Following CECPQ2, we have integrated into BoringSSL another hybrid key exchange mechanism relying on SIKE. It is called CECPQ2b, and we will use it in our experimentation in TLS 1.3. SIKE is a key encapsulation method based on Supersingular Isogeny Diffie-Hellman (SIDH). Read more about SIDH in our previous post. The math behind SIDH is related to elliptic curves, so a comparison between SIDH and the classical Elliptic Curve Diffie-Hellman (ECDH) follows.

An elliptic curve is a set of points that satisfy a specific mathematical equation. The equation of an elliptic curve may have multiple forms; the standard form is called the Weierstrass equation

$$y^2 = x^3 +ax +b$$

and its shape can look like the red curve.

An interesting fact about elliptic curves is that they have a group structure. That is, the set of points on the curve has an associated binary operation called point addition. The set of points on the elliptic curve is closed under addition: adding two points results in another point that is also on the elliptic curve.

If we can add two different points on a curve, then we can also add one point to itself. And if we do it multiple times, the resulting operation is known as scalar multiplication, denoted as $$Q = k\cdot P = P+P+\dots+P$$ for an integer $$k$$. Multiplication of scalars is commutative: two scalar multiplications can be evaluated in any order, $$\color{darkred}{k_a}\cdot\color{darkgreen}{k_b} = \color{darkgreen}{k_b}\cdot\color{darkred}{k_a}$$; this is an important property that makes ECDH possible.

It turns out that if the elliptic curve is chosen carefully, scalar multiplication is easy to compute but extremely hard to reverse. Meaning, given two points $$Q$$ and $$P$$ such that $$Q=k\cdot P$$, finding the integer $$k$$ is a difficult task known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem is suitable for cryptographic purposes.

Alice and Bob agree on a secret key as follows. Alice generates a private key $$k_a$$. Then, she uses some publicly known point $$P$$ and calculates her public key as $$Q_a = k_a\cdot P$$. Bob proceeds in a similar fashion and gets $$k_b$$ and $$Q_b = k_b\cdot P$$. To agree on a shared secret, each party multiplies their private key with the public key of the other party; the result is the shared secret. Key agreement as described above works thanks to the fact that scalars commute:

$$\color{darkgreen}{k_a} \cdot Q_b = \color{darkgreen}{k_a} \cdot \color{darkred}{k_b} \cdot P = \color{darkred}{k_b} \cdot \color{darkgreen}{k_a} \cdot P = \color{darkred}{k_b} \cdot Q_a$$
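As a concrete illustration of this flow (not part of the CECPQ2b experiment itself), here is a minimal X25519 key agreement using Go's standard crypto/ecdh package, available since Go 1.20:

package main

import (
	"bytes"
	"crypto/ecdh"
	"crypto/rand"
	"fmt"
)

func main() {
	curve := ecdh.X25519()

	// Alice and Bob each generate an ephemeral key pair.
	alicePriv, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	bobPriv, err := curve.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Each side combines its own private key with the other's public key.
	aliceShared, _ := alicePriv.ECDH(bobPriv.PublicKey())
	bobShared, _ := bobPriv.ECDH(alicePriv.PublicKey())

	// Both arrive at the same secret because scalar multiplication
	// commutes: ka*(kb*P) == kb*(ka*P).
	fmt.Println(bytes.Equal(aliceShared, bobShared)) // true
}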
There is a vast theory behind elliptic curves. An introduction to elliptic curve cryptography was posted before, and more details can be found in this book. Now, let’s describe SIDH and compare it with ECDH.

Isogenies on Elliptic Curves

Before explaining the details of the SIDH key exchange, I’ll explain the three most important concepts, namely: the j-invariant, the isogeny, and its kernel.

Each curve has a number that can be associated to it. Let’s call this number a j-invariant. This number is not unique per curve; many curves have the same value of j-invariant, but it can be viewed as a way to group multiple elliptic curves into disjoint sets. We say that two curves are isomorphic if they are in the same set, called an isomorphism class. The j-invariant is a simple criterion to determine whether two curves are isomorphic. The j-invariant of a curve $$E$$ in Weierstrass form $$y^2 = x^3 + ax + b$$ is given as

$$j(E) = 1728\frac{4a^3}{4a^3 +27b^2}$$

When it comes to an isogeny, think about it as a map between two curves. Each point on some curve $$E$$ is mapped by the isogeny to a point on the isogenous curve $$E'$$. We denote a mapping from curve $$E$$ to $$E'$$ by an isogeny $$\phi$$ as:

$$\phi: E \rightarrow E'$$

Whether those two curves are isomorphic or not depends on the map. There may exist many such mappings; each curve used in SIDH has a small number of isogenies to other curves.

A natural question is how we compute such an isogeny. This is where the kernel of an isogeny comes in. The kernel uniquely determines an isogeny (up to isomorphism). Formulas for calculating an isogeny from its kernel were initially given by J. Vélu, and the idea of calculating them efficiently was later extended.

To finish, I will summarize what was said above with a picture. There are two isomorphism classes in the picture. Curves $$E_1$$ and $$E_2$$ are isomorphic and have j-invariant = 6. Curves $$E_3$$ and $$E_4$$ have j-invariant = 13, so they are in a different isomorphism class. There exists an isogeny $$\phi_2$$ between curves $$E_3$$ and $$E_2$$, so they are isogenous. Curves $$E_1$$ and $$E_2$$ are isomorphic, and there is an isogeny $$\phi_1$$ between them. Curves $$E_1$$ and $$E_4$$ are not isomorphic.

For brevity I’m skipping many important details, like the structure of the underlying finite field, the fact that isogenies must be separable, and that the kernel is finite. Curious readers can find a number of academic research papers available on the Internet.

Big picture: similarities with ECDH

Let’s generalize the ECDH algorithm described above, so that we can swap some elements and arrive at Supersingular Isogeny Diffie-Hellman. Note that what actually happens during an ECDH key exchange is:
• We have a set of points on an elliptic curve, the set S.
• We have a group of integers G used for scalar multiplication.
• We use an element from G to act on an element from S to get another element from S: $$G \cdot S \rightarrow S$$

Now the question is: what are our G and S in an SIDH setting? For SIDH to work, we need a big set of elements and something secret that will act on the elements from that set. This “group action” must also be resistant to attacks performed by quantum computers. In the SIDH setting, those two sets are defined as:
• The set S is a set (graph) of j-invariants, such that all the curves are supersingular: $$S = [j(E_1), j(E_2), j(E_3), \ldots , j(E_n)]$$
• The set G is a set of isogenies acting on elliptic curves and transforming, for example, the elliptic curve $$E_1$$ into $$E_n$$.

Random walk on a supersingular graph

When we talk about isogeny-based cryptography, as a topic distinct from elliptic curve cryptography, we usually mean algorithms and protocols that rely fundamentally on the structure of isogeny graphs. An example of such a (small) graph is pictured below. Each vertex of the graph represents a different j-invariant of a set of supersingular curves. The edges between vertices represent isogenies converting one elliptic curve to another. As you can notice, the graph is strongly connected, meaning every vertex can be reached from every other vertex. In the context of isogeny-based crypto, we call such a graph a supersingular isogeny graph.
I’ll skip some technical details about the construction of this graph (look for those here or here), but instead describe how it can be used. As the graph is strongly connected, it is possible to walk the whole graph by starting from any vertex, randomly choosing an edge, following it to the next vertex, and then starting the process again from the new vertex. Such a way of visiting the edges of this graph is called a random walk. The random walk is a key concept that makes isogeny-based crypto feasible.

When you look closely at the graph, you can notice that each vertex has a small number of edges incident to it; this is why we can compute the isogenies efficiently. But it also means that for any vertex there is only a limited number of isogenies to choose from, which doesn’t look like a good base for a cryptographic scheme. So where exactly does the security of the scheme come from? In order to get security, it is necessary to visit a couple hundred vertices. What this means in practice is that the secret isogeny (of large degree) is constructed as a composition of multiple isogenies (of small, prime degree); that is, the secret isogeny is

$$\phi = \phi_n \circ \dots \circ \phi_2 \circ \phi_1$$

This property, together with the properties of the isogeny graph, is what makes some of us believe that the scheme has a good chance of being secure. More specifically, there is no known efficient way of finding a path that connects $$E_0$$ with $$E_n$$, even with a quantum computer at hand. The security level of the system depends on the value $$n$$ – the number of steps taken during the walk.

The random walk is the core process used both when generating public keys and when computing shared secrets. It starts with a party generating a random value $$m$$ (see more below), a starting curve $$E_0$$, and points $$P$$ and $$Q$$ on this curve. Those values are used to compute the kernel generator $$R_1$$ of an isogeny in the following way:

$$R_1 = P + m \cdot Q$$

Thanks to the formulas given by Vélu, we can now use the point $$R_1$$ to compute the isogeny that the party uses to move from one vertex to the next. After the isogeny $$\phi_{R_1}$$ is calculated, it is applied to $$E_0$$, which results in a new curve $$E_1$$:

$$\phi_{R_1}: E_0 \rightarrow E_1$$

The isogeny is also applied to the points $$P$$ and $$Q$$. Once on $$E_1$$, the process is repeated. This process is applied $$n$$ times, and at the end the party ends up on some curve $$E_n$$, which defines an isomorphism class and hence a j-invariant.

Supersingular Isogeny Diffie-Hellman

The core idea in SIDH is to compose two random walks on an isogeny graph of elliptic curves in such a way that the end node of both compositions is the same. To do this, the scheme fixes public parameters: a starting curve $$E_0$$ and two pairs of base points on this curve, $$(PA,QA)$$ and $$(PB,QB)$$. Alice generates her random secret key $$m$$ and calculates a secret isogeny $$\phi_a$$ by performing a random walk as described above. The walk finishes with three values: the elliptic curve $$E_a$$ she ended up on, and the pair of points $$\phi_a(PB)$$ and $$\phi_a(QB)$$, the images of Bob’s base points pushed through Alice’s secret isogeny. Bob proceeds analogously, which results in the triple $$\{E_b, \phi_b(PA), \phi_b(QA)\}$$. The triple forms a public key which is exchanged between the parties.

The picture below visualizes the operation. The black dots represent curves grouped into isomorphism classes, represented by light blue circles. Alice takes the orange path, ending up on a curve $$E_a$$ in a different isomorphism class than Bob, who takes the dark blue path ending on $$E_b$$.
SIDH is parametrized in a way that Alice and Bob will always end up in different isomorphism classes. Upon receipt of the triple $$\{ E_a, \phi_a(PB), \phi_a(QB) \}$$ from Alice, Bob will use his secret value $$m$$ to calculate a new kernel generator: instead of using the points $$PB$$ and $$QB$$, he now uses their images $$\phi_a(PB)$$ and $$\phi_a(QB)$$ received from Alice:

$$R’_1 = \phi_a(PB) + m \cdot \phi_a(QB)$$

Afterwards, he uses $$R’_1$$ to start the walk again, resulting in the isogeny $$\phi’_b: E_a \rightarrow E_{ab}$$. Alice proceeds analogously, resulting in the isogeny $$\phi’_a: E_b \rightarrow E_{ba}$$. With isogenies calculated this way, both Alice and Bob converge in the same isomorphism class. The math may seem complicated; hopefully the picture below makes it easier to understand.

Bob computes a new isogeny and starts his random walk from $$E_a$$ received from Alice. He ends up on some curve $$E_{ba}$$. Similarly, Alice calculates a new isogeny, applies it to $$E_b$$ received from Bob, and her random walk ends on some curve $$E_{ab}$$. Curves $$E_{ab}$$ and $$E_{ba}$$ are not likely to be identical, but the construction guarantees that they are isomorphic. As mentioned earlier, isomorphic curves have the same value of j-invariant, hence the shared secret is the j-invariant $$j(E_{ab})$$.

Coming back to the differences between SIDH and ECDH, we can split them into four categories: the elements the parties operate on, the cornerstone computation required to agree on a shared secret, the elements representing secret values, and the difficult problem on which the security relies.
• Secret key: in ECDH it is an integer scalar; in SIDH it is a secret isogeny, itself generated from an integer scalar.
• Core computation: in ECDH one multiplies a point on a curve by a scalar; in SIDH one performs a random walk in an isogeny graph.
• Public key: in ECDH it is a point on a curve; in SIDH it is a curve itself, together with the images of some points after applying the isogeny.
• Shared secret: in ECDH it is a point on a curve; in SIDH it is a j-invariant.

SIKE: Supersingular Isogeny Key Encapsulation

SIDH could potentially be used as a drop-in replacement for the ECDH protocol. We have actually implemented a proof-of-concept, added it to our implementation of TLS 1.3 in the tls-tris library, and described the implementation details (together with Mozilla) in this draft. Nevertheless, there is a problem with SIDH: the keys can be used only once. In 2016, a few researchers came up with an active attack on SIDH which works only when public keys are reused. In the context of TLS, this is not a big problem, because a fresh key pair is generated for each session (ephemeral keys), but it may not be true for other applications.

SIKE is an isogeny-based key encapsulation mechanism which solves this problem. Bob can generate SIKE keys, upload the public part somewhere on the Internet, and then anybody can use it whenever they want to communicate with Bob securely. SIKE reuses SIDH internally: both sides of the connection always perform SIDH key generation and SIDH key agreement, and apply some additional cryptographic primitives in order to convert SIDH into a KEM. SIKE is implemented in a few variants; each variant corresponds to a security level, using 128-, 192-, or 256-bit secret keys. A higher security level means a longer running time. More details about SIKE can be found here.
SIKE is also one of the candidates in the NIST post-quantum “competition”.

I’ve skipped many important details to give a brief description of how isogeny-based crypto works. If you’re curious and hungry for details, look at either of these Cloudflare meetups, where Deirdre Connolly talked about isogeny-based cryptography, or this talk by Chloe Martindale during PQ Crypto School 2017. And if you would like to know more about quantum attacks on this scheme, I highly recommend this work.

Conclusion

Quantum computers that can break meaningful cryptographic parameter settings do not exist yet, and won’t be built for at least the next few years. Nevertheless, they have already changed the way we look at current cryptographic deployments. There are at least two reasons it’s worth investing in PQ cryptography:
• It takes a lot of time to build secure cryptography, and we don’t actually know when today’s classical cryptography will be broken. There is a need for a good mathematical base: an initial idea of what may be secure against something that doesn’t exist yet. If you have an idea, you also need a good implementation: constant-time code, and resistance to things like timing and cache side channels, DFA, DPA, EM, and a bunch of other abbreviations indicating side-channel attacks. Deployment also takes time: algorithms based on elliptic curves, for example, were introduced in 1985, but only started to be widely used in production during the last decade, some 20 years later. Obviously, the implementation must be blazingly fast! Last, but not least, integration: we need time to develop standards to allow integration of PQ cryptography with protocols like TLS.
• Even though efficient quantum computers probably won’t exist for another few years, the threat is real. Data encrypted with current cryptographic algorithms can be recorded now, with hopes of breaking it in the future.

Cloudflare is motivated to help build the Internet of tomorrow with the tools at hand today. Our interest is in cryptographic techniques that can be integrated into existing protocols and widely deployed on the Internet as seamlessly as possible. PQ cryptography, like the rest of cryptography, includes many cryptosystems that can be used for communications in today’s Internet; Alice and Bob need to perform some computation, but they do not need to buy new hardware to do that. Cloudflare sees great potential in those algorithms and believes that some of them can be used as a safe replacement for classical public-key cryptosystems. Time will tell if we’re justified in this belief!

Introducing CIRCL: An Advanced Cryptographic Library

Post Syndicated from Kris Kwiatkowski original https://blog.cloudflare.com/introducing-circl/

As part of Crypto Week 2019, today we are proud to release the source code of a cryptographic library we’ve been working on: a collection of cryptographic primitives written in Go, called CIRCL. This library includes a set of packages that target cryptographic algorithms for post-quantum (PQ), elliptic curve cryptography, and hash functions for prime groups. Our hope is that it’s useful for a broad audience.

Cryptography in Go

We use Go a lot at Cloudflare. It offers a good balance between ease of use and performance; the learning curve is very light, and after a short time, any programmer can get good at writing fast, lightweight backend services.
And thanks to the possibility of implementing performance-critical parts in Go assembly, we can try to ‘squeeze the machine’ and get every bit of performance.

Cloudflare’s cryptography team designs and maintains security-critical projects. It’s not a secret that security is hard. That’s why we are introducing the Cloudflare Interoperable Reusable Cryptographic Library – CIRCL. There are multiple goals behind CIRCL. First, we want to concentrate our efforts on implementing cryptographic primitives in a single place. This makes it easier to ensure that proper engineering processes are followed. Second, Cloudflare is an active member of the Internet community: we are trying to improve and propose standards to help make the Internet a better place. Cloudflare’s mission is to help build a better Internet. For this reason, we want CIRCL to help the cryptographic community create proofs of concept, like the post-quantum TLS experiments we are doing. Over the years, lots of ideas have been put on the table by cryptographers (for example, homomorphic encryption, multi-party computation, and privacy-preserving constructions). Recently, we’ve seen those concepts picked up and exercised in a variety of contexts. CIRCL’s implementations of cryptographic primitives create a powerful toolbox for developers wishing to use them.

The Go language provides native packages for several well-known cryptographic algorithms, such as key agreement algorithms, hash functions, and digital signatures. There are also packages maintained by the community under golang.org/x/crypto that provide a diverse set of algorithms supporting authenticated encryption, stream ciphers, key derivation functions, and bilinear pairings. CIRCL doesn’t try to compete with golang.org/x/crypto in any sense. Our goal is to provide a complementary set of implementations that are more aggressively optimized, or may be less commonly used but have a good chance at being very useful in the future.

Unboxing CIRCL

Our cryptography team worked on a fresh proposal to augment the capabilities of Go users with a new set of packages. You can get them by typing:

$ go get github.com/cloudflare/circl
The contents of CIRCL are split across different categories, summarized in this table:

| Category | Algorithms | Description | Applications |
| --- | --- | --- | --- |
| Post-Quantum Cryptography | SIDH | Isogeny-based cryptography. SIDH provides key exchange mechanisms using ephemeral keys. | |
| Post-Quantum Cryptography | SIKE | SIKE is a key encapsulation mechanism (KEM). | Key agreement protocols. |
| Key Exchange | X25519, X448 | RFC-7748 provides new key exchange mechanisms based on Montgomery elliptic curves. | TLS 1.3. Secure Shell. |
| Key Exchange | FourQ | One of the fastest elliptic curves at the 128-bit security level. | Experimental for key agreement and digital signatures. |
| Digital Signatures | Ed25519 | RFC-8032 provides new digital signature algorithms based on twisted Edwards curves. | Digital certificates and authentication methods. |
| Hash to Elliptic Curve Groups | Elligator2, Ristretto, SWU, Icart | Protocols based on elliptic curves require hash functions that map bit strings to points on an elliptic curve. | Privacy Pass. OPAQUE. PAKE. Verifiable random functions. |
| Optimization | Curve P-384 | Our optimizations reduce the burden when moving from P-256 to P-384. | ECDSA and ECDH using Suite B at top secret level. |
SIKE, a Post-Quantum Key Encapsulation Method
To better understand the post-quantum world, we started experimenting with post-quantum key exchange schemes and using them for key agreement in TLS 1.3. CIRCL contains the sidh package, an implementation of Supersingular Isogeny-based Diffie-Hellman (SIDH), as well as CCA2-secure Supersingular Isogeny-based Key Encapsulation (SIKE), which is based on SIDH.
CIRCL makes playing with PQ key agreement very easy. Below is an example of the SIKE interface that can be used to establish a shared secret between two parties for use in symmetric encryption. The example uses a key encapsulation mechanism (KEM). For our example in this scheme, Alice generates a random secret key, and then uses Bob’s pre-generated public key to encrypt (encapsulate) it. The resulting ciphertext is sent to Bob. Then, Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the secret key. See more details about SIKE in this Cloudflare blog.
Let’s see how to do this with CIRCL:
// Bob's key pair
prvB := NewPrivateKey(Fp503, KeyVariantSike)
pubB := NewPublicKey(Fp503, KeyVariantSike)
// Generate private key
prvB.Generate(rand.Reader)
// Generate public key from the private key
prvB.GeneratePublicKey(pubB)
// Export both keys to raw bytes
var publicKeyBytes = make([]byte, pubB.Size())
var privateKeyBytes = make([]byte, prvB.Size())
pubB.Export(publicKeyBytes)
prvB.Export(privateKeyBytes)
// Encode public key to JSON
// Save privateKeyBytes on disk
Bob uploads the public key to a location accessible by anybody. When Alice wants to establish a shared secret with Bob, she performs encapsulation that results in two parts: a shared secret and the result of the encapsulation, the ciphertext.
// Read publicKeyBytes, e.g. from JSON
// Alice's side: import Bob's public key
pubB := NewPublicKey(Fp503, KeyVariantSike)
pubB.Import(publicKeyBytes)
// Instantiate the KEM and size the buffers it expects
kem := sike.NewSike503(rand.Reader)
ciphertext := make([]byte, kem.CiphertextSize())
sharedSecret := make([]byte, kem.SharedSecretSize())
kem.Encapsulate(ciphertext, sharedSecret, pubB)
// send ciphertext to Bob
Bob now receives ciphertext from Alice and decapsulates the shared secret:
kem := sike.NewSike503(rand.Reader)
sharedSecret := make([]byte, kem.SharedSecretSize())
// Bob decapsulates with his own key pair
kem.Decapsulate(sharedSecret, prvB, pubB, ciphertext)
At this point, both Alice and Bob can derive a symmetric encryption key from the secret generated.
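For instance, a symmetric key could be derived from that secret with a KDF; a minimal sketch using golang.org/x/crypto/hkdf (this step and the "info" label are illustrative assumptions, not part of the CIRCL example; it also needs the crypto/sha256 and io imports):

// Derive a 256-bit symmetric key from the KEM shared secret.
key := make([]byte, 32)
kdf := hkdf.New(sha256.New, sharedSecret, nil, []byte("example session key"))
if _, err := io.ReadFull(kdf, key); err != nil {
	panic(err)
}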
The SIKE implementation contains:
• Two different field sizes: Fp503 and Fp751. The choice of the field is a trade-off between performance and security.
• Code optimized for AMD64 and ARM64 architectures, as well as generic Go code. For AMD64, we detect the micro-architecture and, if it’s recent enough (e.g., it supports the ADOX/ADCX and BMI2 instruction sets), we use different multiplication techniques to make execution even faster.
• Code implemented in constant time, that is, the execution time doesn’t depend on secret values.
We also took care of low heap-memory footprint, so that the implementation uses a minimal amount of dynamically allocated memory. In the future, we plan to provide multiple implementations of post-quantum schemes. Currently, our focus is on algorithms useful for key exchange in TLS.
SIDH/SIKE are interesting because the key sizes produced by those algorithms are relatively small (compared with other PQ schemes). Nevertheless, performance is not all that great yet, so we’ll continue looking. We plan to add lattice-based algorithms, such as NTRU-HRSS and Kyber, to CIRCL. We will also add another, more experimental algorithm called CSIDH, which we would like to try in other applications. CIRCL doesn’t currently contain any post-quantum signature algorithms, which is also on our to-do list. After our experiment with TLS key exchange completes, we’re going to look at post-quantum PKI. But that’s a topic for a future blog post, so stay tuned.
Last, we must admit that our code is largely based on the implementation from the NIST submission along with the work of former intern Henry De Valence, and we would like to thank both Henry and the SIKE team for their great work.
Elliptic Curve Cryptography
Elliptic curve cryptography brings short key sizes and faster evaluation of operations when compared to algorithms based on RSA. Elliptic curves were standardized during the early 2000s and have since gained popularity as a more efficient way of securing communications.
Elliptic curves are used in almost every project at Cloudflare, not only for establishing TLS connections, but also for certificate validation, certificate revocation (OCSP), Privacy Pass, certificate transparency, and AMP Real URL.
The Go language provides native support for NIST-standardized curves, the most popular of which is P-256. In a previous post, Vlad Krasnov described the relevance of optimizing several cryptographic algorithms, including P-256 curve. When working at Cloudflare scale, little issues around performance are significantly magnified. This is one reason why Cloudflare pushes the boundaries of efficiency.
A similar thing happened with the chained validation of certificates. For some certificates, we observed performance issues when validating a chain of certificates. Our team successfully diagnosed this issue: certificates which had signatures from the P-384 curve, the curve that corresponds to the 192-bit security level, were taking up 99% of CPU time! It is common for certificates closer to the root of the chain of trust to rely on stronger security assumptions, for example, using larger elliptic curves. Our first-aid reaction came in the form of an optimized implementation, written by Brendan McMillion, that reduced the time of performing elliptic curve operations by a factor of 10. The code for P-384 is also available in CIRCL.
The latest developments in elliptic curve cryptography have caused a shift to use elliptic curve models with faster arithmetic operations. The best example is undoubtedly Curve25519; other examples are the Goldilocks and FourQ curves. CIRCL supports all of these curves, allowing instantiation of Diffie-Hellman exchanges and Edwards digital signatures. Although it slightly overlaps the Go native libraries, CIRCL has architecture-dependent optimizations.
Hashing to Groups
Many cryptographic protocols rely on the hardness of solving the Discrete Logarithm Problem (DLP) in special groups, one of which is the integers reduced modulo a large integer. To guarantee that the DLP is hard to solve, the modulus must be a large prime number. Increasing its size boosts security, but also makes operations more expensive. A better approach is to use elliptic curve groups, since they provide faster operations.
In some cryptographic protocols, it is common to use a function with the properties of a cryptographic hash function that maps bit strings into elements of the group. This is easy to accomplish when, for example, the group is the set of integers modulo a large prime. However, it is not so clear how to perform this function using elliptic curves. In the cryptographic literature, several methods have been proposed, using the terms hashing to curves and hashing to points interchangeably.
The main issue is that there is no general method for deterministically finding points on an arbitrary elliptic curve; the closest available are methods that target special curves and parameters. This is a problem for implementers of cryptographic algorithms, who have a hard time figuring out a suitable method for hashing to points of an elliptic curve. Compounding that, the chances of doing this wrong are high. There are many different methods, elliptic curves, and security considerations to analyze. For example, a vulnerability in the WPA3 handshake protocol exploited a non-constant-time hashing method, resulting in the recovery of keys. Currently, an IETF draft tracks work in progress that provides hashing methods, unifying the requirements with curves and their parameters.
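To make the pitfall concrete, here is a sketch of the classic try-and-increment method on P-256, using only the Go standard library. It is not one of the CIRCL methods, and its variable running time (the loop count depends on the input) is precisely the kind of behavior behind the WPA3 issue mentioned above:

package main

import (
	"crypto/elliptic"
	"crypto/sha256"
	"fmt"
	"math/big"
)

// hashToPointNaive hashes msg together with a counter until the digest,
// read as an x-coordinate, lands on the curve. For illustration only;
// it is deliberately NOT constant time.
func hashToPointNaive(curve elliptic.Curve, msg []byte) (x, y *big.Int) {
	p := curve.Params()
	for ctr := byte(0); ; ctr++ {
		h := sha256.Sum256(append(append([]byte{}, msg...), ctr))
		x = new(big.Int).SetBytes(h[:])
		x.Mod(x, p.P)
		// Right-hand side of y^2 = x^3 - 3x + b (NIST short Weierstrass form).
		rhs := new(big.Int).Exp(x, big.NewInt(3), p.P)
		rhs.Sub(rhs, new(big.Int).Lsh(x, 1))
		rhs.Sub(rhs, x)
		rhs.Add(rhs, p.B)
		rhs.Mod(rhs, p.P)
		// ModSqrt returns nil when rhs is not a square; then we try again.
		if y = new(big.Int).ModSqrt(rhs, p.P); y != nil {
			return x, y
		}
	}
}

func main() {
	x, y := hashToPointNaive(elliptic.P256(), []byte("hello"))
	fmt.Println(elliptic.P256().IsOnCurve(x, y)) // true
}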
To address this problem, CIRCL will include implementations of hashing methods for elliptic curves. Our development accompanies the evolution of the IETF draft. Users of CIRCL will therefore get this added value, as the methods implement ready-to-go functionality covering the needs of several cryptographic protocols.
Update on Bilinear Pairings
Bilinear pairings are sometimes regarded as a tool for cryptanalysis; however, pairings can also be used in a constructive way, by allowing instantiation of advanced public-key algorithms, for example, identity-based encryption, attribute-based encryption, blind digital signatures, and three-party key agreement, among others.
An efficient way to instantiate a bilinear pairing is to use elliptic curves. Note that only a special class of curves can be used for this; these so-called pairing-friendly curves have specific properties that enable the efficient evaluation of a pairing.
Some families of pairing-friendly curves were introduced by Barreto-Naehrig (BN), Kachisa-Schaefer-Scott (KSS), and Barreto-Lynn-Scott (BLS). BN256 is a BN curve using a 256-bit prime and is one of the fastest options for implementing a bilinear pairing. The Go native library supports this curve in the package golang.org/x/crypto/bn256. In fact, the BN256 curve is used by Cloudflare’s Geo Key Manager, which allows distributing encrypted keys around the world. At Cloudflare, high performance is a must, and with this motivation, in 2017 we released an optimized implementation of the BN256 package that is 8x faster than Go’s native package. The success of these optimizations reached several other projects, such as the Ethereum protocol and the Randomness Beacon project.
Recent improvements in solving the DLP over extension fields, GF(pᵐ) for p prime and m>1, impacted the security of pairings, causing recalculation of the parameters used for pairing-friendly curves.
Before these discoveries, the BN256 curve provided a 128-bit security level, but now larger primes are needed to target the same security level. That does not mean that the BN256 curve has been broken: BN256 still gives a security level of around 100 bits, that is, approximately 2¹⁰⁰ operations are required to cause real danger, which is still infeasible with current computing power.
With our CIRCL announcement, we also want to announce our plans for research and development to obtain an efficient curve (or curves) to become a stronger successor of BN256. According to the estimation by Barbulescu-Duquesne, a BN curve must use primes of at least 456 bits to match a 128-bit security level. However, the recalculation of parameters brings the BLS and KSS curve families back to the main scene as efficient alternatives. To this end, a standardization effort at the IETF is in progress with the aim of defining parameters and pairing-friendly curves that match different security levels.
Note that regardless of the curve(s) chosen, there is an unavoidable performance downgrade when moving from BN256 to a stronger curve. Actual timings were presented by Aranha, who described the evolution of the race for high-performance pairing implementations. The purpose of our continuous development of CIRCL is to minimize this impact through fast implementations.
Optimizations
Go itself is very easy to learn and use for systems programming, and yet it makes it possible to use assembly so that you can stay close “to the metal”. We have blogged about improving performance in Go a few times in the past (see these posts about encryption, ciphersuites, and image encoding).
When developing CIRCL, we crafted the code to get the best possible performance from the machine. We leverage the capabilities provided by the architecture and the architecture-specific instructions. This means that in some cases we need to get our hands dirty and rewrite parts of the software in Go assembly, which is not easy, but definitely worth the effort when it comes to performance. We focused on x86-64, as this is our main target, but we also think that it’s worth looking at ARM architecture, and in some cases (like SIDH or P-384), CIRCL has optimized code for this platform.
We also try to ensure that code uses memory efficiently, crafting it so that fast allocations on the stack are preferred over expensive heap allocations. In cases where heap allocation is needed, we tried to design the APIs in a way that allows pre-allocating memory ahead of time and reusing it across multiple operations.
Security
The CIRCL library is offered as-is and without a guarantee. Therefore, it is expected that changes to the code, repository, and API will occur in the future. We recommend caution before using this library in a production application, since part of its content is experimental.
As new attacks and vulnerabilities arise over time, the security of software should be treated as a continuous process. In particular, the assessment of cryptographic software is critical; it requires expertise from several fields, not only computer science. Cryptography engineers must be aware of the latest vulnerabilities and methods of attack in order to defend against them.
The development of CIRCL follows best practices of secure development. For example, if the execution time of the code depends on secret data, an attacker could leverage those irregularities and recover secret keys. In our code, we take care to write constant-time code and hence prevent timing-based attacks.
Developers of cryptographic software must also be aware of optimizations performed by the compiler and/or the processor, since these optimizations can lead to insecure binaries in some cases. All of these issues could be exploited in real attacks aimed at compromising systems and keys. Therefore, software changes must be tracked through thorough code reviews. Static analyzers and automated testing tools also play an important role in the security of the software.
Summary
CIRCL is envisioned as an effective tool for experimenting with modern cryptographic algorithms while providing high-performance implementations. Today marks the starting point of a continuous effort of innovation and giving back to the community in the form of a cryptographic library. There are still several other applications, such as homomorphic encryption, multi-party computation, and privacy-preserving protocols, that we would like to explore.
We are a team of cryptography, security, and software engineers working to improve and augment Cloudflare products. Our team keeps the communication channels open for receiving comments, suggesting improvements, and merging contributions. We welcome opinions and contributions! If you would like to get in contact, check out our GitHub repository for CIRCL: github.com/cloudflare/circl. We want to share our work and hope it makes someone else’s job easier as well.
Finally, special thanks to all the contributors who have either directly or indirectly helped to implement the library – Ko Stoffelen, Brendan McMillion, Henry de Valence, Michael McLoughlin, and all the people who invested their time in reviewing our code.
https://math.stackexchange.com/questions/1470839/rational-numbers-and-leibniz-law
# Rational numbers and Leibniz's law
Leibniz's law says $a = b \implies f(a) = f(b)$. Unfortunately this law seems to fail for rational numbers, e.g. ${1 \over 2} = {2 \over 4}$ but $numerator({1 \over 2}) \neq numerator({2 \over 4})$. I know one can say that $1 \over 2$ is just a representation of the "true" rational number, and that the equality we use is just an equivalence relation rather than real equality, but this representation is how we think about rational numbers and how we define them in abstract algebra.
Question is: Is there some logically precise (e.g. first-order logic) and "true" definition of rational number that respects Leibniz's law and makes ${1 \over 2}$ and ${2 \over 4}$ truly equal, not just equivalent?
Edit:
To rephrase the question: how do I define rational numbers to avoid problems with representation. In other words is there some axiomatic definition of rationals. Up to this point I've only seen algebraic definitions with pairs and equivalence classes.
You are confusing the object with its various names.
As an object, $\frac12=\frac24$. But you can present it differently. More specifically, in order for the numerator function to be well-defined, you need to choose a representation for each rational first.
Similarly, $1+1=2$, but the length of these two expressions is different. Names are syntax, objects are semantics. Leibniz's law is about semantics, not syntax.
• I do not agree that $1 \over 2$ is syntax. Syntax is described by axioms, and $1 \over 2$ is a model of those axioms, so it is semantics. The question that is still open is how to define rational numbers to avoid problems with representation. In other words, is there some axiomatic definition of the rationals? Up to this point I've only seen algebraic definitions with pairs and equivalence classes. – Trismegistos Oct 9 '15 at 12:02
• Yes, you use equivalence classes. Then $\frac12$ should be seen as the appropriate equivalence class, rather than the actual ordered pair creating it. – Asaf Karagila Oct 9 '15 at 12:20
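Concretely, the equivalence-class construction referenced in this exchange is $$\mathbb{Q} = \left(\mathbb{Z} \times (\mathbb{Z}\setminus\{0\})\right)/\sim, \qquad (a,b)\sim(c,d) \iff ad=bc,$$ with $\frac{a}{b}$ denoting the class of $(a,b)$. A numerator function then becomes well-defined only after fixing a canonical representative of each class, e.g. requiring $b>0$ and $\gcd(a,b)=1$.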
Your problem is that you are separating the number $\frac12$ into a pair of numbers $(1,2)$. Your law would then still be true, since $(1,2)\neq(2,4)$ and there would be no reason to suppose that $f((1,2))=f((2,4))$.
As numbers, $\frac12$ and $\frac24$ are precisely the same number. Your function $\operatorname{numerator}(x)$ as you use it is not well-defined since it depends on the particular representation of the (rational) number $x$.
The law you're quoting holds only when $f$ is a function.
What you're calling "$\mathit{numerator}(\cdots)$" is not a function, because it depends on something external to which number its input is -- namely on how you've chosen to represent that number.
(Arguably the law is not really a deep thing, but rather part of the definition of what it means to be a function).
Axiomatically, you can define the rational numbers as a field of characteristic 0 which has no proper subfields. You can prove that any two such fields are uniquely isomorphic (so it doesn't matter which of them you choose to call $\mathbb Q$), and the equivalence-classes-of-pairs construction from textbooks shows you that believing in set theory is sufficient to know that there are fields with this property.
• So what is rational number? – Trismegistos Oct 9 '15 at 13:05
• @Trismegistos: Platonically? I'd say it's a real number with the property that you can make an integer by adding it to itself an appropriate number of times. – Henning Makholm Oct 9 '15 at 13:14
• No, I mean formally but I already see you provide answer. This is algebraic definition and I counter on seeing some other definition similar to Peano Axioms. – Trismegistos Oct 9 '15 at 17:47
https://chat.stackexchange.com/transcript/13775/2020/8/14
3:05 AM
1
When you hold a loose string by its two ends, and just let it dangle in space, it looks an awful lot like a parabola. I mean, on first sight, who wouldn't have thought that? It was only upon closer inspection that we realized in fact the shape formed is not a parabola, but a catenary (see the gra...
3 hours later…
14 hours later…
8:15 PM
6
For a rigid body rotating with a constant angular speed, the points near the axis must have lower linear velocity than the points farther away. If they have different linear velocities, they must have a non-zero relative velocity. If they have a non-zero relative velocity, the distance between th...
2 hours later…
9:53 PM
3
Q: An amusement park proprietor wishes to design a rollercoaster with a vertical circular loop in the track, of radius $R = 20\, \rm m$. Before the cars reach the loop, they descend from a maximum height h, at which they have zero velocity. Assuming the cars roll freely (no motor and no friction...
http://www.numericalmethod.com/javadoc/suanshu/com/numericalmethod/suanshu/mathstructure/BanachSpace.html
SuanShu, a Java numerical and statistical library
## com.numericalmethod.suanshu.mathstructure Interface BanachSpace<B,F extends Field<F> & java.lang.Comparable<F>>
All Superinterfaces:
AbelianGroup<B>, VectorSpace<B,F>
All Known Subinterfaces:
HilbertSpace<H,F>, Vector
All Known Implementing Classes:
Basis, CombinedVectorByRef, DenseVector, Gradient, ImmutableVector, SparseVector, SubVectorRef, SVEC
public interface BanachSpace<B,F extends Field<F> & java.lang.Comparable<F>> extends VectorSpace<B,F>
A Banach space, B, is a complete normed vector space such that every Cauchy sequence (with respect to the metric d(x, y) = |x - y|) in B has a limit in B.
Wikipedia: Banach space
Method Summary
double norm()
|⋅| : B → F
norm assigns a strictly positive length or size to all vectors in the vector space, other than the zero vector.
Methods inherited from interface com.numericalmethod.suanshu.mathstructure.VectorSpace
scaled
Methods inherited from interface com.numericalmethod.suanshu.mathstructure.AbelianGroup
add, minus, opposite, ZERO
Method Detail
### norm
double norm()
|⋅| : B → F
norm assigns a strictly positive length or size to all vectors in the vector space, other than the zero vector.
Returns:
|this|
https://byjus.com/maths/inverse-sine/
# Inverse Sine
Inverse Sine is a trigonometric function which denotes the inverse of the sine function and is represented as – Sin-1. The formula for this function is simple to derive. Every trigonometric function, whether it is Sine, Cosine, Tangent, Cotangent, Secant or Cosecant has an inverse of it, though in a restricted domain. To understand the inverse of sine out of other Inverse trigonometric functions, we need to study Sine function first.
## Sine Function
Sin (the sine function) takes an angle θ in a right-angled triangle and produces a ratio of the side opposite the angle θ to the hypotenuse.
Sin θ = Opposite / Hypotenuse
## Inverse Sine Function
• Sin-1 (the inverse of sine) takes the ratio Opposite/ Hypotenuse and produces angle θ. It is also written as arcsin or asine.
Example: In a triangle, ABC, AB= 4.9m, BC=4.0 m, CA=2.8 m and angle B = 35°.
Solution:
• Sin 35° = Opposite / Hypotenuse
• Sin 35° = 2.8 / 4.9
• Sin 35° ≈ 0.57
So, Sin-1 (Opposite / Hypotenuse) = 35°
Sin-1 (0.57) = 35°
## Inverse Sine Formula
Let us consider if we want to find the depth(d) of the seabed from the bottom of the ship and the following two parameters are given:
• The angle which the cable makes with the seabed.
• The cable’s length.
The Sine function will help to find the distance/depth d of the ship from the sea bed by the following method:
If the angle is 39° and the cable’s length is 40 m.
• Sin 39° = Opposite / Hypotenuse
• Sin 39° = d / 40
• d = Sin 39° × 40
• d = 0.6293 × 40
• d = 25.172 m
Therefore, the depth d is about 25.17 m.
Now, if the angle is not given and we want to calculate it, then we use the Inverse functions and the question will be asked in the following way:
Problem: What is the angle Sin = Opposite / Hypotenuse, has?
Sin inverse is denoted by sin-1 or arcsin.
Solution: Let’s take the measurement from above example only.
• Distance d = 25.17 m
• Cable’s length = 40 m.
We want to find angle “α ”
Step 1: Find the sin α°
• Sin α° = Opposite / Hypotenuse
• Sin α° = 25.17 / 40
• Sin α° = 0.6293
Step 2: Now, for which angle sin α° = 0.6293
Let’s find it out with Inverse sin:
α° = Sin-1 (0.6293)
α° = 38.1°
Did you know: Sin and Sin-1 undo each other.
Example: Sin 30° = 0.5 and Sin-1 0.5 = 30°
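A quick numeric check of this inverse relationship, sketched in Go with the standard math package (which works in radians rather than degrees):

package main

import (
	"fmt"
	"math"
)

func main() {
	// sin(35°) gives the opposite/hypotenuse ratio; asin recovers
	// the angle from the ratio.
	rad := 35.0 * math.Pi / 180
	ratio := math.Sin(rad)
	angle := math.Asin(ratio) * 180 / math.Pi
	fmt.Printf("sin(35°) = %.4f, sin^-1(%.4f) = %.1f°\n", ratio, ratio, angle)
}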
### Inverse Sine Graph
The arcsine function, written sin-1 a, is the inverse of the sine function; its graph is shown below.
### Inverse Sine Derivative
• Let y = Sin-1 x, so Sin y = x
• Differentiating both sides: Cos y · (dy/dx) = 1
• dy/dx = 1 / Cos y
• Since Sin y = x, Cos y = √(1 − x²)
• dy/dx = 1 / √(1 − x²)
• d/dx Sin-1 x = 1 / √(1 − x²)
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-3-solving-inequalities-3-1-inequalities-and-their-graphs-practice-and-problem-solving-exercises-page-168/11
## Algebra 1
$(k\div9)\gt\frac{1}{3}$
The word "quotient" indicates division. The symbol $\gt$ indicates the term on the left is greater than the term on the right.
https://www.physicsforums.com/threads/math-challenge-march-2019.967174/page-5
# Challenge Math Challenge - March 2019
• Featured
#### fresh_42
Mentor
2018 Award
Alright, trying my hand at High School 3, and trying to get all my "i" dotted and "t" crossed this time. Doing it the long way - no shortcuts, or I'll end up kicked from the forum and sent back to pre-school! First part, for the sigma of prime decomposition
I - Define $\sigma(x)$ = sum of divisors of $x$, including 1 and $x$, for $x \in N$
--------------------------------------------------------------------------------------------------------
II - Proof that if $p$ is prime, $p > 1$ and $n>=1$, then $\sigma(p^n) = (p^{(n+1)}-1)/(p-1)$
As $p$ is prime, $p>1$ and $n>=1$, the divisors of $p^n$ are {$1, p, p^2, ..., p^n$ }; the sum of these divisors form the sum of a geometric series:
$\sigma(p^n)$ = $1+p+p^2+...+p^n$
this series has $a=1, rate=p, num=n+1$; as $rate>1$, the series has $sum=a*(rate^{num}-1)/(rate-1) = 1*(p^{(n+1)}-1)/(p-1)$, therefore
$\sigma(p^n) = (p^{(n+1)}-1)/(p-1)$
This can also be seen by a simple long division or multiplication of polynomials: $x^{n+1}-1=(x-1)\cdot (x^n+x^{n-1}+ \ldots + x+ 1)\,.$
--------------------------------------------------------------------------------------------------------
III - Proof that if $p$ is a prime, $p>1$, $a \in N$, $a>1$, $n>=1$, and $p$ is not a factor of $a$, then $\sigma(p^n*a)=\sigma(p^n)*\sigma(a)$
(a) Define
$z = p^n*a$
(b) Define
P=set of divisors of $p^n=\{1,p,p^2,...,p^n\}$
A=set of divisors of $a = \{a_0,a_1,a_2,...,a_m\}$, with $a_0=1, a_m=a$
(c) For any $u \in N, u>1$, such that $u|z$ and $gcd(u, p^n)=1$; then $u|a$; that's due to Euclid's Lemma; as $u|a$, $u \in A$, therefore $u = a_j$ for some $j$, and $u=p^0*a_j$ for some j
(d) For any $u \in N, u>1$, such that $u|z$ and $gcd(u, p^n)=p^l$ for some $l>0$ ; then necessarily $u=p^l*v$ for an integer $v$, where $gcd(v,p^n)=1$; then for some $k_u \in N, k_u>0$:
$z=k_u*u=p^n*a$
$u=p^l*v$
$z/u=k_u=(p^n*a)/(p^l*v)$
$k_u=(p^n/p^l)*(a/v)$
as $n>l$ and $k_u$ is an integer and $v$ is not a multiple of $p$, then necessarily $a$ is a multiple of $v$, or $v$ is a divisor of $a$:
$v|a$
therefore $v \in A$, what means $v=a_j$ for some $j$, and
$u=p^i*a_j$ for some $i,j$
(e) From (c) and (d), all divisors of z have the form $p^i*a_j$ for some 0<=i<=n, 0<=j<=m.
(f) Therefore the sum of the divisors of z is
$\sigma(z)=\sum_i (\sum_j (p^i*a_j)) = \sum_i (p^i * (\sum_j a_j)) = \sum_i (p^i * \sigma(a)) = \sigma(a)*(\sum_i (p^i)) = \sigma(a)*\sigma(p^n)$
--------------------------------------------------------------------------------------------------------
IV - Proof that $\sigma ( \prod_i {p_i}^{n_i} ) = \prod_i \sigma( {p_i}^{n_i} )$ if each $p_i$ is a different prime $p_i > 1$ and $n_i>=0, 1<=i<=m$
(a) Define
$z = z_1$
$z_1 = \prod_i {p_i}^{n_i}, i>=1,$ that is $z_1 = {p_1}^{n_1} * z_2$
$z_2 = \prod_i {p_i}^{n_i}, i>=2,$ that is $z_2 = {p_2}^{n_2} * z_3$
...
$z_{(m-1)} = \prod_i {p_i}^{n_i}, i>=m-1,$ that is $z_{(m-1)} = p_{(m-1)}^{n_{(m-1)}} * z_m$
$z_m = \prod_i {p_i}^{n_i}, i>=m,$ that is $z_m = {p_m}^{n_m}$
then, as each $p_i$ is a different prime,
$\sigma(z_m) = \sigma({p_m}^{n_m})$
$\sigma(z_{(m-1)}) = \sigma( ({p_{(m-1)}}^{n_{(m-1)}} * z_m ) = \sigma(p_{(m-1)}^{n_{(m-1)}})*\sigma(p_m^{n_m})$
...
$\sigma(z_1) = \sigma(p_1^{n_1} * z_2) = \sigma({p_1}^{n_1})*\sigma(z_2) = \sigma({p_1}^{n_1})*...*\sigma({p_m}^{n_m}) = \prod_i \sigma( {p_i}^{n_i} )$
--------------------------------------------------------------------------------------------------------
V - Proof that $\sigma( \prod_i {p_i}^{n_i} ) = \prod_i ({p_i}^{({n_i}+1)}-1)/(p_i-1)$ if each $p_i$ is a different prime $p_i > 1$ and $n_i>=0$, for $1<=i<=m$
(a) Just substitute II in IV
VI - Result: if the prime factorization of $z$ is $z = \prod_i {p_i}^{n_i}$, then $\sigma(z) = \prod_i ({p_i}^{({n_i}+1)}-1)/(p_i-1)$
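A small Go sketch of result VI, computing $\sigma(z)$ by factoring $z$ into prime powers (illustrative only, naive trial division):

package main

import "fmt"

// sigmaFactored factors z into prime powers p^r and multiplies the
// per-prime sums 1 + p + ... + p^r, i.e. (p^(r+1)-1)/(p-1).
func sigmaFactored(z int) int {
	result := 1
	for p := 2; p*p <= z; p++ {
		if z%p != 0 {
			continue
		}
		term := 1
		for pow := 1; z%p == 0; z /= p {
			pow *= p
			term += pow
		}
		result *= term
	}
	if z > 1 { // leftover prime factor
		result *= 1 + z
	}
	return result
}

func main() {
	fmt.Println(sigmaFactored(12)) // divisors 1,2,3,4,6,12 sum to 28
}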
You could have saved all these detours with divisors of $z$ by using the fundamental theorem of arithmetic: we always have $z = \prod_i {p_i}^{r_i}$ for integers $z$, and so all the cases are only cases of different prime factors, resp. powers. The usual way - and I only add this for completion, not because your proof was wrong - is by induction.
The induction principle is as follows: prove that a statement $A(n)$ is true for some specific $n=n_0$. Then we assume we would have proven it for all numbers up to $n$. If we now can show, that $A(n)$ implies $A(n+1)$ then we are done, since we have shown a way how to start from $n=n_0$ and get all the way up to any $n$.
In our case $A(n) = \left[ \sigma\left( \prod_{i=1}^n p_i^{r_i}\right)= \prod_{i=1}^n \frac{p_i^{r_i +1}-1}{p_i-1}\right]$ and $n_0=1$.
The verification, that $A(1)$ is true, is your step II.
Now we assume, that $A(n-1)$ is true.
By your step III, the induction hypothesis $A(n-1)$, and again step II we know, that
\begin{align*}
\sigma\left( \prod_{i=1}^n p_i^{r_i}\right) &= \sigma\left( \prod_{i=1}^{n-1} p_i^{r_i}\right) \cdot \sigma\left( p_n^{r_n} \right)\\
&=\prod_{i=1}^{n-1} \dfrac{p_i^{r_i +1}-1}{p_i-1} \cdot \dfrac{p_n^{r_n+1}-1}{p_n-1}\\
&= \prod_{i=1}^{n} \dfrac{p_i^{r_i +1}-1}{p_i-1}
\end{align*}
and the formula is proven.
Another way is simply count the divisors. This would have shortened your step III as well:
Every prime power $p_i^{r_i}$ contributes divisors $1,p_i,\ldots ,p_i^{r_i}$ and every divisor can be combined with all the other divisors to get another one. So we have as many divisors as there are combinations of $(\,1,p_1,\ldots ,p_i^{r_1}\,) \times \ldots \times (\,1,p_n,\ldots ,p_i^{r_n}\,)$, which sum up to $\prod_{i=1}^n\ \frac{p_i^{r_i +1}-1}{p_i-1}$.
This was the main task. Now you can calculate $\sigma(a)-a$ and $\sigma(b)-b$ if $a,b$ have the given form.
#### fbs7
I see! Short and elegant!! Nicely done! I didn't think of using induction!
I didn't want to assume as fact that $\sigma(a*b) = \sigma(a) * \sigma(b)$ if $gcd(a,b)=1$ given the troubles with my previous proofs of that; that's a bit why I took the pain of explicitly avoiding using that.
As I say, if one works wrong once he has to work twice to fix it!
#### fresh_42
Mentor
2018 Award
This was your part III which is needed to do the induction step, so no senseless work. I only think it's easier to show it for prime powers rather than arbitrary factors.
#### fbs7
Hooray! Let's do it! It's like being a Padawan for Master Yoda! Let's see... uhh.. what... x, y, z??... hmm... this is difficult!!!... I guess I'm at Amoeba-level Padawan atm
Attempt at 2nd part for High School 3:
I - definitions
(a) simplify notation
$n \in \mathbb{N}$
define $c=2^n$
$s(c) = s(2^n) = 2^{(n+1)} -1= 2c-1$
(b) assume
$x=3*2^n-1 = 3c-1$ is an odd prime
$y=3*2^{(n-1)}-1 = 3c/2 - 1$ is an odd prime
$z=9*2^{(2n-1)}-1 = 9*c^2/2 - 1$ is an odd prime
II - notice
$xy = (3c-1)*(3c/2-1) = (9c^2/2-3c-3c/2+1) = (9c^2/2 - 9c/2 + 1)$
$xy = z - 9c/2 +2$
$x+y = (3c-1)+(3c/2-1) = 9c/2-2$
so, $xy+(x+y)=z$
III - if $x$ is prime, then $\sigma(x)=1+x$; this is rather obvious by now and only Pre-Padawans need to prove that!
IV - define
$a = cxy$
$b = cz$
(a) as $c$, $x$ and $y$ are pairwise co-prime, and $x$ and $y$ are primes, then
$s(a) = s(c*x*y) = s(c)*(1+x)*(1+y) = (2c-1)*(1+x)*(1+y)$
$s(a)-a = (2c-1)*(1+x+y+xy)-c*(xy) =$
$s(a)-a = (2c-1)*(1+z)-c*(xy)=(2c-1)*(1+z)-c*(z-9c/2+2)$
$= 2c+2cz-1-z-cz+9c^2/2-2c$
$= cz-1-z+9c^2/2 = cz-1-z+(z+1) = cz = b$, using $9c^2/2 = z+1$
(b) as $c$ and $z$ are co-primes, then
$s(b) = s(c*z) = s(c)*(1+z) = (2c-1)*(1+z)$
$s(b)-b = 2c+2cz-1-z-cz$
$= 2c+cz-z-1$
$= 2c+c(9c^2/2-1)-(9c^2/2-1)-1$
$= c+9c^3/2-9c^2/2$
$= c*(9c^2/2-9c/2+1)$; from II, we have
$= cxy = a$
Therefore $s(a)=a+b=s(b)$, and $a$ and $b$ are amicable!
I should get a promotion to Ant-Padawan by now! ... ya know, another step in evolution, just before Fly-Padawan, Centipede-Padawan, ... 60 steps... Monkey-Padawan, Padawan 1st Level, 2nd Level... 99th Level, then Auxiliary Assistant to Secretary of Jedi Initiate
#### fresh_42
Mentor
2018 Award
Except for a few typos this is correct, although a bit hard to read.
I insert the calculation without the many abbreviations you used:
For $n=p_1^{k_1}\cdots p_r^{k_r}$ the sum of all divisors is $$\sigma(n)=\prod_{i=1}^r \dfrac{p_i^{k_{i}+1}-1}{p_i-1}$$
\begin{align*}
\sigma(a)-a&= \sigma(2^n\cdot x\cdot y)-2^n\cdot x\cdot y\\
&=\left( 2^{n+1}-1 \right)(x+1)(y+1)- 2^nxy\\
&=\left( 2^{n+1}-1 \right)(3\cdot 2^n)(3\cdot 2^{n-1}) - 2^n(3\cdot 2^n-1)(3\cdot 2^{n-1}-1)\\
&=\left( 2^{n+1}-1 \right)\cdot 9\cdot 2^{2n-1}-2^n\left(9\cdot 2^{2n-1}-9\cdot 2^{n-1}+1 \right)\\
&=2^n\cdot\left(9\cdot 2^{2n}-9\cdot 2^{n-1}-9\cdot 2^{2n-1}+9\cdot 2^{n-1}-1\right)\\
&=2^n\cdot \left(9\cdot 2^{2n-1}-1 \right)\\
&=2^n \cdot z\\
&=b
\end{align*}
and by an analogue calculation $\sigma(b)-b=a\,.$
The statement above is known as the theorem of Thabit ibn Qurra (9th century, Mesopotamia).
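For completeness, Thabit's rule is quick to verify by brute force for the small cases; a Python sketch (s() is the aliquot sum used in the thread, the helper names are mine):

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def s(m):
    # Sum of proper divisors (aliquot sum), by trial division.
    return sum(d for d in range(1, m) if m % d == 0)

for n in range(2, 5):
    x, y, z = 3 * 2**n - 1, 3 * 2**(n - 1) - 1, 9 * 2**(2*n - 1) - 1
    if is_prime(x) and is_prime(y) and is_prime(z):
        a, b = 2**n * x * y, 2**n * z
        assert s(a) == b and s(b) == a
        print(n, a, b)   # n=2 gives (220, 284), n=4 gives (17296, 18416)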
#### fbs7
What can I do to make it easier to read? I try to keep it in high-school scope and do very small steps (also because I tend to make mistakes if I try too long a step), but I fear I already go over the top as far as high-school level as I fear 99% of the high-school students have no idea what $\prod x_n$ means (at least in my country, haha!)
At the same time the only person I ever met in life that graduated in college in math (other than my old school teachers) ended up hating math for some reason and wouldn't talk to me about it (very frustrating!). So out of reading books and internet stuff (which very very swiftly go way over my head) it's hard for me to distinguish between what's a necessary statement in a proof and what a decently-knowledged will find boring and unnecessary. For example, only now I know that n ∈ N is an important statement!
I'm kinda in-between those two levels, I guess; that's the trouble with ant-padawans!
#### fresh_42
Mentor
2018 Award
What can I do to make it easier to read? I try to keep it in high-school scope and do very small steps (also because I tend to make mistakes if I try too long a step), but I fear I already go over the top as far as high-school level as I fear 99% of the high-school students have no idea what $\prod x_n$ means (at least in my country, haha!)
It is in general easier to follow a linear argumentation. The many support variables you used ($x,y,z,c$) led to a constant need to go back and check what you already have resp. how it is defined. After you have made a proof, you can gather all parts and rewrite it. See my example, which only uses the variables defined in the question. The typos (e.g. a forgotten square) didn't make it easier.
At the same time the only person I ever met in life that graduated in college in math (other than my old school teachers) ended up hating math for some reason and wouldn't talk to me about it (very frustrating!). So out of reading books and internet stuff (which very very swiftly go way over my head) it's hard for me to distinguish between what's a necessary statement in a proof and what a decently-knowledged will find boring and unnecessary. For example, only now I know that n ∈ N is an important statement!
I'm kinda in-between those two levels, I guess; that's the trouble with ant-padawans!
The only way is practice and reading. The more proofs you have read, the more you understand their general structure. PF is a nice way to do both, although many proofs don't qualify as a good template. But you can read a book and come over and ask whenever you get stuck. I'm sometimes surprised that more students don't use this unique opportunity. On other platforms you get solutions as answers. This is a vicious advantage: it helps in the moment but worsens the overall performance. Helping to understand what's going on is far more important and advantageous than a solution could ever be. It is also helpful to be forced to explain a situation. The crucial ideas often come during the attempt of an explanation!
#### fbs7
That's true! I always find I understand something better when I have to explain something to someone else from that person's point of view!
I appreciate your help and insights. As for why not many students use the opportunity, I say: they haven't read Pauli's book "The Theory of Relativity". I got it from a used book stand when I was 16, and although I understood almost nothing of it I thought: "This dude started to write it when he was 19... just 3 years older than me... he summarized all knowledge (back then) on Relativity!". That was the best evidence ever for me of what someone can do, even if they are young!
So my advice to ambitious young people - read Pauli's book! It's flabbergasting!
#### fbs7
Hooray! Trying my hand at High School 1! Full speed ahead to Ant-Padawan Level 2!!!
(a) note: I think the sequence starts with sqrt(2), not 2
(b) simplifying notation
define $m=n-1$
define $k=2/3$
(c) rewrite product
$P = \prod_{n=1}^{\infty} 2^{2^{n-1}/3^{n-1}}$
$P = \prod_{m=0}^{\infty} 2^{2^m/3^m}$
$P = \prod_{m=0}^{\infty} 2^{(2/3)^m}$
$P = \prod_{m=0}^{\infty} 2^{k^m}$
$\ln P/\ln 2 = \sum_{m=0}^{\infty} k^m$
(d) calculating infinite series
$S = 1+k+k^2+k^3+\cdots$
$S-Sk = S(1-k) = (1+k+k^2+\cdots)-(k+k^2+k^3+\cdots) = 1$
$S = 1/(1-k)$
as $k=2/3$
$S = 1/(1-2/3) = 3$
$\ln P/\ln 2 = S$
$P = 2^S = 2^3 = 8$
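Numerically, the partial products approach this value quickly; a short Python check:

# Partial product of prod_{m>=0} 2^((2/3)^m), using the first 60 factors:
P = 2.0 ** sum((2/3) ** m for m in range(60))
print(P)   # 7.99999999... -> the infinite product equals 2^3 = 8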
#### QuantumQuest
Gold Member
(a) note: I think the sequence starts with sqrt(2), not 2
No. I was given this question many years ago and the wording is exactly what I've given, i.e. this is how it is intended to be.
(b) simplifying notation ...
Your solution is correct. You can also do it in a more straightforward manner. Here's what I did back in my high school days
$a_n = 2\cdot \sqrt[3]{2^2}\cdot \sqrt[9]{2^4}\cdot \sqrt[27]{2^8} \,\cdots \, \sqrt[3^{n-1}]{2^{2^{n-1}}} = 2 \cdot 2^{\frac{2}{3}} \cdot 2^{\frac{4}{9}} \cdot 2^{\frac{8}{27}} \cdots 2^{\frac{2^{n-1}}{3^{n-1}}} = 2^{1 + \frac{2}{3} + \frac{4}{9} + \frac{8}{27} + \cdots + \frac{2^{n-1}}{3^{n-1}}}$.
Now, $1 + \frac{2}{3} + \frac{4}{9} + \frac{8}{27} + \cdots + \frac{2^{n-1}}{3^{n-1}} = 1 + (\frac{2}{3}) + {(\frac{2}{3})}^2 + {(\frac{2}{3})}^3 + \cdots + {(\frac{2}{3})}^{n-1} = \frac{1[{(\frac{2}{3})}^n - 1]}{\frac{2}{3} - 1} = -3[{(\frac{2}{3})}^n - 1] \rightarrow -3(0 - 1) = 3$
So, $\lim a_n = 2^3$
#### fbs7
wow!! holy-choochoo!!! I didn't think of adding the exponents!!!
well done! thank you!
so the first term is really 2, not sqrt(2)... huh... I guess I'm not graduating to Ant-Padawan level 2 at all! oh no!
#### fresh_42
Mentor
2018 Award
wow!! holy-choochoo!!! I didn't think of adding the exponents!!!
This is not really true. You took the logarithm of an assumed limit and did exactly this: added the exponents - just one floor below, which made for a better typesetting.
#### fbs7
a-ha!! thank you! so I'm calling myself Ant-Padawan Level 1 1/2.. no... 1 1/3
And I used what you advised about Polynomial Division... errr... Thingie.. that was very smart!
#### fbs7
On Yoda-level Question 4:
The question is beyond my level, but I'm trying to at least understand the item (a); I'm probably not reading the notation correctly, though:
$x*(y*z) = x*(1/2y+1/2z) = 1/2(x*y)+1/2(x*z) =$
$1/2(1/2x+3/8y+1/8z) + 1/2(1/2x+1/2z) =$
$1/2x + 3/16y+5/16z$
$(x*y)*z = (1/2x+3/8y+1/8z)*z =1/2(x*z)+3/8(y*z)+1/8(z*z) =$
$1/2(1/2x+1/2z) + 3/8(1/2y+1/2z) + 1/8(z) =$
$1/4x+3/16y+9/16z$
It doesn't seem to be associative, so I'm making some mistake. Tried as I could, still can't see my mistake.
#### fresh_42
Mentor
2018 Award
On Yoda-level Question 4:
The question is beyond my level, but I'm trying to at least understand the item (a); I'm probably not reading the notation correctly, though:
$x*(y*z) = x*(1/2y+1/2z) = 1/2(x*y)+1/2(x*z) =$
$1/2(1/2x+3/8y+1/8z) + 1/2(1/2x+1/2z) =$
$1/2x + 3/16y+5/16z$
$(x*y)*z = (1/2x+3/8y+1/8z)*z =1/2(x*z)+3/8(y*z)+1/8(z*z) =$
$1/2(1/2x+1/2z) + 3/8(1/2y+1/2z) + 1/8(z) =$
$1/4x+3/16y+9/16z$
It doesn't seem to be associative, ...
Correct.
... so ...
Wrong.
... I'm making some mistake. Tried as I could, still can't see my mistake.
Me neither. Why do you expect associativity? It is an example of a non-associative multiplication, i.e. a non-associative algebra. Lie algebras or the octonions are other prominent examples of non-associative algebras. Both play important roles in physics, the former a bit more than the latter.
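For what it's worth, the two expansions above can be reproduced mechanically. A small Python sketch using exact fractions and only the basis products quoted in this thread (the dict representation is mine, and the products are assumed symmetric):

from fractions import Fraction as F

# Known basis products, each a linear combination of x, y, z:
prod = {
    frozenset(['x', 'y']): {'x': F(1, 2), 'y': F(3, 8), 'z': F(1, 8)},
    frozenset(['x', 'z']): {'x': F(1, 2), 'z': F(1, 2)},
    frozenset(['y', 'z']): {'y': F(1, 2), 'z': F(1, 2)},
    frozenset(['z']):      {'z': F(1)},          # z*z = z
}

def mul(u, v):
    # Bilinear extension of the basis products to linear combinations.
    out = {}
    for bu, cu in u.items():
        for bv, cv in v.items():
            for b, c in prod[frozenset([bu, bv])].items():
                out[b] = out.get(b, F(0)) + cu * cv * c
    return out

x, y, z = {'x': F(1)}, {'y': F(1)}, {'z': F(1)}
print(mul(x, mul(y, z)))   # x: 1/2, y: 3/16, z: 5/16
print(mul(mul(x, y), z))   # x: 1/4, y: 3/16, z: 9/16 -> not associative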
#### fbs7
Oh, good Lord!! I was trying to find a way to prove that was associative, instead of answering if that was associative or not!! I'm such an idiot.
By the way, I came up with a property of an algebra $A$ on vectors over a basis $x_i$ such that
$x_i*x_j = l_{ijk}*x_k$
that would make it associative; without taking much space, for three vectors u, v, w in Einstein's notation:
$u = u_i*x_i; v = v_i*x_i; w = w_i*x_i;$
then $u*(v*w) = (u*v)*w$ makes $u_a*x_a*(v_b*x_b*w_c*x_c) = (u_d*x_d*v_e*x_e)*w_f*x_f$
as $u_?, v_?, w_?$ are all free variables, then $d=a, e=b, f=c$, and $x_a*(x_b*x_c)=(x_a*x_b)*x_c$, that is the algebra is associative if the multiplication of the basis is associative; then, replacing with the expression for multiplication of the basis:
$x_a*(x_b*x_c) = x_a*(l_{bcg}*x_g) = l_{bcg}*x_a*x_g = l_{bcg}*l_{agh}*x_h$
$(x_a*x_b)*x_c = (l_{abi}*x_i)*x_c = l_{abi}*x_i*x_c = l_{abi}*l_{icj}*x_j$
as $x_h$ and $x_j$ are independent (that is, I assume they are independent), then $h=j$; then, eliminating $x_h$ and renaming $i=g$ and $d=h$ to make it more readable,
$l_{abi}*l_{icd} = l_{bci} * l_{aid}$ for any $a, b, c, d$
And that's where my skill ends; now, question on that... this seems like a tensor operation of some kind, is that true? That is, is $l_{ijk}$ a 3-tensor, and the expression above is a tensor operation of some kind?
#### fresh_42
Mentor
2018 Award
Every bilinear multiplication can be written as a tensor. See the example of Strassen's algorithm here:
https://www.physicsforums.com/insights/what-is-a-tensor/
which is an example of how matrix multiplication is written as a tensor.
As long as you do not put any additional constraints on $l_{ijk}$, as in the case of genetic algebras (or any other class of algebras), you have an arbitrary algebra.
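The condition $l_{abi}*l_{icd} = l_{bci}*l_{aid}$ (summation over $i$) derived above is easy to test for concrete structure constants; a numpy sketch with random, purely illustrative $l_{ijk}$:

import numpy as np

rng = np.random.default_rng(0)
l = rng.random((3, 3, 3))   # arbitrary structure constants l_{ijk}

# (x_a*x_b)*x_c has coefficients sum_i l_{abi} l_{icd};
# x_a*(x_b*x_c) has coefficients sum_i l_{bci} l_{aid}.
lhs = np.einsum('abi,icd->abcd', l, l)
rhs = np.einsum('bci,aid->abcd', l, l)
print(np.allclose(lhs, rhs))   # False: a generic (random) algebra is not associative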
#### fbs7
Thank you!
"Math Challenge - March 2019"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
2019-04-20 14:44:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8846645951271057, "perplexity": 995.4129070669314}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529839.0/warc/CC-MAIN-20190420140859-20190420162015-00035.warc.gz"}
|
https://www.dcode.fr/morbit-cipher
|
Search for a tool
Morbit Cipher
Tool for decoding / encoding with the Morbit number. The Morbit cipher is a variant of the Morse Fractioned code using a key that generates a numeric encryption alphabet.
Results
Morbit Cipher -
Tag(s) : Polygrammic Cipher
Share
dCode and you
dCode is free and its tools are a valuable help in games, maths, geocaching, puzzles and problems to solve every day!
A suggestion ? a feedback ? a bug ? an idea ? Write to dCode!
Team dCode likes feedback and relevant comments; to get an answer give an email (not published). It is thanks to you that dCode has the best Morbit Cipher tool. Thank you.
# Morbit Cipher
## Morbit Encoder
Tool for decoding / encoding with the Morbit number. The Morbit cipher is a variant of the Morse Fractioned code using a key that generates a numeric encryption alphabet.
### How to encrypt using Morbit cipher?
Morbit encryption uses a numeric index (from 1 to 9) associated with pairs of morse characters indexed like this:
| Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|-------|---|---|---|---|---|---|---|---|---|
| Bigram | .. | .- | ./ | -. | -- | -/ | /. | /- | // |
The key is used to mix the index according to the alphabetical order of its letters.
Example: The keyword MORSECODE is associated with the code 568931724 by sorting the letters alphabetically (CDEEMOORS) and matching them to 123456789 as:
| Letters | M | O | R | S | E | C | O | D | E |
|---------|---|---|---|---|---|---|---|---|---|
| Order | 5 | 6 | 8 | 9 | 3 | 1 | 7 | 2 | 4 |
| Bigrams | .. | .- | ./ | -. | -- | -/ | /. | /- | // |
The first step of encryption is to encode the original message in Morse code; the characters are separated by a slash / and the words by a double slash //.
Example: The message MORE BITS is encoded in Morse --/---/.-././/-.../../-/...
The second part of the encryption consists of splitting the Morse message into pairs of 2 characters and associating with each pair the corresponding digit from the numeric index made with the key.
Example:
| Bigrams | -- | /- | -- | /. | -. | /. | // | -. | .. | /. | ./ | -/ | .. | ./ |
|---------|----|----|----|----|----|----|----|----|----|----|----|----|----|----|
| Digits | 3 | 2 | 3 | 7 | 9 | 7 | 4 | 9 | 5 | 7 | 8 | 1 | 5 | 8 |
The encrypted message is therefore 32379749578158.
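The whole procedure fits in a few lines of Python. A sketch (the Morse table is truncated to the letters of the example, and padding an odd-length Morse string with a trailing / is inferred from the worked example above):

BIGRAMS = ['..', '.-', './', '-.', '--', '-/', '/.', '/-', '//']
MORSE = {'M': '--', 'O': '---', 'R': '.-.', 'E': '.',
         'B': '-...', 'I': '..', 'T': '-', 'S': '...'}   # extend for real use

def key_index(key):
    # Digit (1-9) for each bigram, mixed by the alphabetical order of the key.
    ranks = {pos: i + 1 for i, (_, pos)
             in enumerate(sorted((ch, p) for p, ch in enumerate(key)))}
    return {bg: ranks[p] for p, bg in enumerate(BIGRAMS)}

def encrypt(plain, key):
    idx = key_index(key)
    morse = '//'.join('/'.join(MORSE[c] for c in w) for w in plain.split())
    if len(morse) % 2:
        morse += '/'                      # pad to an even number of characters
    return ''.join(str(idx[morse[i:i+2]]) for i in range(0, len(morse), 2))

def decrypt(cipher, key):
    rev = {d: bg for bg, d in key_index(key).items()}
    return ''.join(rev[int(d)] for d in cipher)   # fractionated Morse, still to translate

print(encrypt('MORE BITS', 'MORSECODE'))   # 32379749578158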
### How to decrypt Morbit cipher?
Morbit decryption requires knowing the key in order to generate the numerical index associated with morse character pairs.
Example: The key ALPHABETS gives the following index:
| Letters | A | L | P | H | A | B | E | T | S |
|---------|---|---|---|---|---|---|---|---|---|
| Order | 1 | 6 | 7 | 5 | 2 | 3 | 4 | 9 | 8 |
| Bigrams | .. | .- | ./ | -. | -- | -/ | /. | /- | // |
The first step in decryption is to replace each digit with its morse bigram equivalent.
Example: The message 1914592729 corresponds to the morse code --/---/.-./-.../../- :
| Digits | 1 | 9 | 1 | 4 | 5 | 9 | 2 | 7 | 2 | 9 |
|--------|---|---|---|---|---|---|---|---|---|---|
| Bigrams | -- | /- | -- | /. | -. | /- | .. | ./ | .. | /- |
The morse code obtained only needs to be translated via the classic Morse code to get the plain message.
Example: -- / --- / .-. / -... / .. / - translates to MORBIT
### How to recognize a Morbit ciphertext?
A Morbit encrypted message uses only digits from 1 (one) to 9 (nine).
The Morbit message is between 50% and 100% longer (approximately) than the original message.
The presence of a 9-letter word that can serve as a key is an important clue.
### How to decipher Morbit without key?
The key is an important element because it allows $9! = 362880$ combinations of the numerical index.
A way to reduce this number of combinations is to know a part of the plain text in order to deduce the numerical index and the correspondence with the morse bigrams.
Also, several assumptions about the message can reduce the possibilities of the key:
- the appearance of 3 consecutive / is unlikely
- any sequence of more than 4 consecutive identical digits is unlikely
- any word of more than 50 Morse characters (without / spacer) is unlikely
The corresponding combinations can be reasonably eliminated.
### What are the variants of the Morbit cipher?
Morbit is closer to the Fractionated Morse Code which is a kind of over-encryption.
http://math.stackexchange.com/questions/162470/calculate-the-limit-of-the-given-function-at-x-0
# Calculate the limit of the given function at $x=0$
Let $f(x)=x{\times}(-1)^{\left \lfloor \frac1x \right \rfloor}$. Calculate its limit at $x=0$. It seems to me the limit doesn't exist, because if I take the log on both sides of the equation, I get: $$\ln f(x) = \ln x+\left \lfloor \frac1x \right \rfloor {\times} \ln(-1)$$
Here $\ln(-1)$ doesn't exist and hence no limit should exist.
You are using "laws of logarithms" where they do not apply. What is the exponent? – André Nicolas Jun 24 '12 at 16:34
@AndréNicolas:- Please elaborate on your comment. Here $[\frac1x]$ is the exponent. So, I guess,I can use logarithm here. – kusur Jun 24 '12 at 16:43
Don't use logarithms, the logarithm is often undefined, unless you go to complex numbers, and even there $\log$ behaves weirdly. Look directly at your function. The reason I think your exponent $[1/x]$ must be the greatest integer $\le 1/x$ is that for general real $y$, $(-1)^y$ is undefined. – André Nicolas Jun 24 '12 at 16:47
so what you suggested was just to avoid confusion due to the presence of a negative number, right? This means that if there was some other positive number in place of -1, then I could have used logarithm? – kusur Jun 24 '12 at 16:54
With positive numbers you can use logarithms freely. However, for limit questions, it is almost always a good idea if you look before starting to do algebraic manipulations. – André Nicolas Jun 24 '12 at 16:58
Note that $\left\lfloor \dfrac1x \right\rfloor$ denotes the greatest integer less than or equals $\dfrac1x$. Hence, $(-1)^{\left\lfloor \dfrac1x \right\rfloor}$ makes sense since the power is always an integer. All you need for this proof is that $(-1)^{\left\lfloor \dfrac1x \right\rfloor}$ is either $1$ or $-1$. Hence, we have that $$-x \leq x \times (-1)^{\left\lfloor \dfrac1x \right\rfloor} \leq x$$ Hence, as $x \to 0$, we have that $$\lim_{x \to 0}-x \leq \lim_{x \to 0} x \times (-1)^{\left\lfloor \dfrac1x \right\rfloor} \leq \lim_{x \to 0} x$$ Hence, $$\lim_{x \to 0} x \times (-1)^{\left\lfloor \dfrac1x \right\rfloor} = 0$$
EDIT
Note that $\log(a^b) = b \log(a)$ is valid only when $a>0$ and $x \in \mathbb{R}$. Hence, it is incorrect to write $\log((-1)^b) = b \log(-1)$.
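A numeric illustration of the squeeze, as a short Python check:

import math

for k in range(1, 8):
    x = 10.0 ** (-k)
    fx = x * (-1) ** math.floor(1 / x)
    print(x, fx)   # |f(x)| = |x|, so f(x) -> 0 even though the sign may flip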
When $x$ approaches 0, $\frac1x$ approaches infinity. Then what can we say about $[\frac1x]$ ? You are claiming this to be 1 or -1. Why? – kusur Jun 24 '12 at 16:46
@KunalSuri $\left\lfloor \dfrac1x \right\rfloor$ also $\to \infty$. But what we need here is that it is always an integer as it tends to $\infty$. I am claiming that $(-1)^{\left\lfloor \dfrac1x \right\rfloor}$ is $1$ or $-1$. I am not claiming $\left \lfloor \dfrac1x \right \rfloor$ to be 1 or -1. I am only claiming that $\left \lfloor \dfrac1x \right \rfloor$ is an integer. – user17762 Jun 24 '12 at 16:48
Thanks. It's just like $\cos x$ when $x\to \infty$. It oscillates between 1 and -1. Here the greatest integer function doesn't oscillate but still it can hold either of the two values - 1 or -1. right? – kusur Jun 24 '12 at 16:52
@KunalSuri Note that $\left\lfloor \dfrac1x \right\rfloor$ doesn't oscillate. what oscillates is $(-1)^{\left\lfloor \dfrac1x \right\rfloor}$. It takes only values $-1$ and $1$. – user17762 Jun 24 '12 at 16:55
Couldn't we just say that $|f(x)|=|x{\times}(-1)^{\left \lfloor \frac1x \right \rfloor}|=|x|\to 0$ then $f(x)=x{\times}(-1)^{\left \lfloor \frac1x \right \rfloor}$ also $\to 0$?. This is of course only valid when the limit is $0$. – palio Jun 24 '12 at 17:55
https://www.cut-the-knot.org/do_you_know/FunctionMain.shtml
# Functions
The concept of function is one of the most important in mathematics. However, its history is relatively short. M. Kline credits [Kline, p. 338] Galileo (1564-1642) with the first statements of dependency of one quantity on another, e.g., "The times of descent along inclined planes of the same height, but of different slopes, are to each other as the lengths of these slopes." In a 1673 manuscript Leibniz used the word "function" to mean any quantity varying from point to point of a curve, like the length of the tangent or the normal. The curve itself was said to be given by an equation. But in 1714, he already used the word "function" to mean quantities that depend on a variable. The notation f(x) was introduced by Euler in 1734. Still, in the 1930s, a well known Russian mathematician N. Luzin wrote:
The function concept is one of the most fundamental concepts of modern mathematics. It did not arise suddenly. It arose more than two hundred years ago out of the famous debate on the vibrating string and underwent profound changes in the very course of that heated polemic. From that time on this concept has deepened and evolved continuously, and this twin process continues to this very day. That is why no single formal definition can include the full content of the function concept. This content can be understood only by a study of the main lines of the development that is extremely closely linked with the development of science in general and of mathematical physics in particular.
Functions, especially of the numeric variety, are often confused with formulas by means of which they are defined. In one of the discrete mathematics textbooks, the authors fling a particularly inept remark to the effect that "Whereas classical mathematics is about formulas, discrete mathematics is as much about algorithms as about formulas." Charitably, I interpret the maxim as the authors' attempt to emphasize the importance of functions in mathematics in general and discrete mathematics in particular. In their view, I believe, the efficiency of function computations gains prominence when it comes to practical matters. In mathematics, the function of two variables $f(x, y) = x^{2} - y^{2}$ can be equally well defined as $f(x, y) = (x - y)(x + y).$ In algorithmic mathematics there is an important difference between the two definitions: one requires two multiplications and one addition (with the sign minus), the other needs one multiplication and two additions. The latter is faster! But the authors, of course, might have had their own reasons.
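The point is easy to make concrete; a small Python check that the two formulas agree while costing different operations:

# Same function, different cost per evaluation:
f1 = lambda x, y: x * x - y * y        # two multiplications, one subtraction
f2 = lambda x, y: (x - y) * (x + y)    # one multiplication, two additions
assert all(f1(x, y) == f2(x, y) for x in range(-5, 6) for y in range(-5, 6))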
It is a fact, however, that the definition we currently use was introduced by Johann Peter Gustav Lejeune Dirichlet (1805-1859). The turning point in the common perception of a function as associated with an analytic curve - the curve whose shape in any small region defines its shape everywhere else - occurred with the 1807 publication by Joseph Fourier (1768-1830) of his solution to the wave equation. Fourier represented his solution as (what is now called) a Fourier series:
$\displaystyle f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}(a_{n}\cos nx+b_{n}\sin nx),$
where $\displaystyle a_{n}=\frac{1}{\pi}\int_{0}^{2\pi}f(t)\cos nt\,dt$ and $\displaystyle b_{n}=\frac{1}{\pi}\int_{0}^{2\pi}f(t)\sin nt\,dt.$ The crucial argument for reconsidering the notion of function was the realization that Fourier series converges pointwise for a wide range of functions, not necessarily analytic, but, for example, defined piece-wise.
### References
1. M. Kline, Mathematical Thought From Ancient to Modern Times I, Oxford University Press, 1972
2. N. Luzin, Function, Mathematical Evolutions, A. Shenitzer and J. Stillwell (eds.), MAA, 2002
### Functions
• What Is Line?
• Cartesian Coordinate System
• Addition and Subtraction of Functions
• Function, Derivative and Integral
• Graph of a Polynomial of arbitrary degree
• Graph of a Polynomial Defined by Its Roots
• Inflection Points of Fourth Degree Polynomials
• Lagrange Interpolation (an Interactive Gizmo)
• Equations of a Straight Line
• Taylor Series Approximation to Cosine
• Linear Function with Coefficients in Arithmetic Progression
• Sine And Cosine Are Continuous Functions
https://stats.stackexchange.com/questions/501464/imputing-panel-data-in-the-wide-format-obtaining-pooled-standard-errors-after-u
Imputing panel data in the wide format, obtaining pooled standard errors after using lmer
I have a longitudinal data set with missing values. I want to multiply impute (let us say $$m$$ = 20 times) the missing values in the wide format using the R-package mice. Thereafter, I would like to fit a multilevel model based on the imputed data with the function lmer from the R-package lme4. The function lmer does, however, only seem to be able to fit a model based on the long format.
I therefore extracted the 20 imputed datasets with the function complete from mice and converted each data frame into the long format using the function pivot_longer from the tidyr package. Subsequently, I fitted a multilevel model on each long-format dataset using lmer, resulting in 20 regression outputs. This works.
However, I would eventually want to obtain standard errors of the estimated regression parameters that are based on the within-imputation and between-imputation variance (the whole reason one does want to use multiple imputation). This is usually easily done with the mice function pool. However, pool can only be used on an object of the class mira, which is only obtained if the regression models are directly fitted on a object of the mids class, which again is the object class that is returned by the function mice. Since I converted the multiply imputed data sets, this does not seem to be possible. I found some similar questions on stack overflow, such as these here:
Question 1
Question 2
However, all the questions/answers either do not tell how to obtained pooled standard errors or they deal with imputing the data with a multilevel model (such as mice.impute.2l), which does not work in my case (imputation fails). I simply want to use a single-level model, as outlined here: https://stefvanbuuren.name/fimd/sec-fdd.html
• I'm very happy to have found this question because it was exactly one I was going to ask. Even happier that there is this solution. I just wanted to check though. At what stage do you convert the list to long format? Regarding Erik's code, my DV is imputed in wide format e.g df <- mice(df1, method = meth, predictorMatrix = predM, m=20, maxit = 20) Where df1 contains ID DV_Group1 DV_Group2 DV_Group3 DV_Group4 DV_Group6 IV1 IV2 IV3 IV4 At what point do I convert the df to long format? – sunshinecheesesauce Mar 25 at 10:22
• Hi! I extracted the multiply imputed datasets, which I had imputed in the wide format, with the function "complete". Directly after that, I converted the data frame into the long format using the function "pivot_longer". Subsequently, I stacked the 20 long-format data sets (and the original data with missing values) into one big data frame including the column "imp", which indicates the number of the imputed data set (number 0 for the original data frame with missing values). Then I used the "as.mitml.list" function as written by Erik Ruzek. I hope I was able to answer your question – Benkyozamurai Apr 3 at 19:24
If the imputed datasets are in long form (dataset 2 stacked onto dataset 1) then you can use the mitml package to do the pooling of the estimates from your model to give you the correct standard errors. See the code below:
library(mitml)
library(lme4)   # provides lmer
### Define a list that mitml will link to the multiply imputed data.
### "imp" is the variable that identifies the imputed dataset an observation belongs to.
implist <- as.mitml.list(split(df, df$imp))
### Analyze the imputed datasets and pool the results.
m_imp <- "DV ~ IV1 + IV2 + IV3 + (1 || Group)"
analysis <- with(implist, lmer(m_imp, REML = F))
estimates <- testEstimates(analysis, var.comp = T, df.com = NULL)
estimates
• Thank you very much, it worked! – Benkyozamurai Dec 29 '20 at 0:29
https://www.physicsforums.com/threads/optics-vignetting-and-field-of-view.717528/
# (Optics) Vignetting and field of view
• Archived
Aelo
## Homework Statement
An 80 mm focal length thin lens is used to image an object with a magnification of -1/2.
The lens diameter is 25 mm and a stop of diameter 20 mm is located 40 mm in front of the lens.
How big is the unvignetted field of view [in terms of object size (in mm) and in terms of half field angle]?
How big is the fully vignetted field of view?
## Homework Equations
Magnification = l'/l (coupling this with the focal length, we can find the object and image distances)
Vignetting equations attached
## The Attempt at a Solution
I've drawn a picture and looked at http://spie.org/x32310.xml. I found that l' = -240 mm and l = 120 mm. I'm not sure where to go from here. Thanks in advance for help!
I'm not sure from the description whether we've got object-stop-lens-image or object-lens-stop-image. I'm assuming the former, but extending the argument to the latter isn't hard. In this case, it's convenient to treat the object as positioned at x=0, the stop at x=200 and the lens at x=240. For a point on the object a distance ##h## off axis, it is easy to write down the equations of the rays that pass through the top and bottom of the stop:$$\begin{eqnarray*} y_t&=&h-\frac{h-10}{200}x\\ y_b&=&h-\frac{h+10}{200}x \end{eqnarray*}$$Then all you have to do is work out the height of the rays at the position of the lens (x=240):$$\begin{eqnarray*} y_t&=&12-h/5\\ y_b&=&-12-h/5 \end{eqnarray*}$$and require that they pass through the 25mm diameter lens:$$\begin{array}{rcccl} -12.5&\leq&12-h/5&\leq&12.5 \\ -12.5&\leq&-12-h/5&\leq&12.5 \end{array}$$
Which comes down to:$$\begin{array}{rcccl} -0.5&\leq&h/5&\leq&24.5\\ -24.5&\leq&h/5&\leq&0.5 \end{array}$$We require all four of these conditions to be satisfied simultaneously, i.e. ##-0.5\leq h/5\leq 0.5##, or ##-2.5\leq h'\leq 2.5##. So a 5mm object centred in the field of view is not quite affected by vignetting.
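The bound can also be confirmed numerically. A short Python sketch of the same ray bookkeeping (geometry as above: object at x=0, 20 mm stop at x=200, 25 mm lens at x=240; names are illustrative):

def lens_heights(h):
    # Heights at the lens (x=240) of the rays from object height h that
    # graze the top (+10 mm) and bottom (-10 mm) of the stop at x=200.
    y_top = h - (h - 10) / 200 * 240
    y_bot = h - (h + 10) / 200 * 240
    return y_top, y_bot

eps = 1e-9   # tolerance for the boundary points
unvignetted = [h / 100 for h in range(-3000, 3001)
               if all(abs(y) <= 12.5 + eps for y in lens_heights(h / 100))]
print(min(unvignetted), max(unvignetted))   # -2.5 2.5 -> a 5 mm object field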
https://tex.stackexchange.com/questions/253496/pythontex-returns-nothing
pythontex returns nothing
I want to automatically trim a PDF (cut out e-stamps from a larger sheet and insert them into a letter). It should work like this: I give a number n to cut out the n-th stamp on the sheet, and I wanted to use pythontex to calculate the coordinates for that. Here is the working python code:
n = 9
spalten = 4
zeilen = 8
xOff = 27.0
yOff = 30.5
xStamp = 32.0
yStamp = 11.5
xSpacing = 38.2
ySpacing = 31.4
left = xOff + ((n-1) // spalten) * xSpacing   # integer division (Python 3)
right = left + xStamp
top = yOff + ((n-1) % spalten) * ySpacing
bottom = top + yStamp
print("trim={0}mm {1}mm {2}mm {3}mm,".format(left, right, top, bottom))
trim=103.4mm 135.4mm 30.5mm 42.0mm, would be a sample string which should be used by \includegraphics to crop the PDF. So far so good.
The problem is that the file compiles fine but the pycode environment does not give anything back. Minimal working sample:
\documentclass{article}
\usepackage{pythontex}
\begin{document}
hello\\
\begin{pycode}
print('python test')
\end{pycode}
\end{document}
I use MiKTeX-pdfTeX 2.9.5653 (1.40.16) (MiKTeX 2.9 64-bit) via: C:\Program Files\MiKTeX 2.9\miktex\bin\x64\pdflatex" ?me" -parse-first-line -shell-escape -enable-write18 -aux-directory="C:\Users\Lenny\Documents\LaTeX\tmp" -synctex=1 -interaction=nonstopmode.
Any suggestions?
• Did you try compiling in the three steps pdflatex, pythontex, pdflatex? – Andrew Swann Jul 3 '15 at 11:52
• Oh god! Totally missed that part in the manual. Thank you for the hint. Got it to work. – milkpirate Jul 3 '15 at 22:14
To compile a file using pythontex requires three steps. For example if using pdflatex on a main file one.tex you need to run at least

pdflatex one.tex
pythontex one.tex
pdflatex one.tex

Using a compilation framework such as latexmk that is aware of pythontex can simplify this process.
• I am not on a windows system so my pdflatex does not have that option. Does \setpythontexworkingdir{<outputdir>} do what you want? Otherwise you should ask a separate question. – Andrew Swann Jul 7 '15 at 6:45
https://blog.zilin.one/21-300-21-600-fall-2011/assignment-m/
# Assignment M
The first two problems are designed to review the previous concepts, while the third is an exercise dealing with prenex normal form.
Common Mistakes:
• In X2000(e), there are three occurrences of $z$ in the given formula, one of which is free. So to decide whether $gx$ is free for $z$, you are not supposed to check for those bound variables. In X2000(g), since there is no free occurrence of $x$, $u$ is free for $x$ vacuously.
• In 2200, you might want to calculate for every quantifier whether it occurs positively or negatively. Many students made a mistake on $\forall z$. Since it is under the scope of a $\sim$ and followed by an $\supset$, it occurs positively. Also, some students erased all negations together with all quantifiers, or put them in wrong places.
Highlight:
Arash gives an interesting proof for X1223. His approach is based on a different interpretation of the syntax. By evaluating every propositional variable with integer values, he defines inductively the value of a wff along the formation of the wff by letting $\mathcal{V}[\sim A]=-\mathcal{V}A$ and $\mathcal{V}[A\vee B]=-\mathcal{V}A-\mathcal{V}B$. As a corollary, $\mathcal{V}[A\supset B]=\mathcal{V}A-\mathcal{V}B$. It is easy to verify that the evaluation of each axiom scheme is always zero, and that Modus Ponens preserves this property. Thus by induction on proofs, we know the evaluation of each theorem of $\mathcal{K}$ is always zero. However, one can easily find integer evaluations for $p$ and $q$ so that the value of the given formula is not zero.
Comment:
It is easy to prove that all theorems of $\mathcal{K}$ are theorems of $\mathcal{P}$. The question arises: is it possible that a theorem of $\mathcal{P}$ is not a theorem of $\mathcal{K}$? Thanks to Arash’s method, we can answer this question immediately. One can show that $p\supset .\sim p\supset q$ is a theorem of $\mathcal{P}$ but not a theorem of $\mathcal{K}$.
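For the record, the valuation is tiny to implement; a Python sketch over formulas built from ('var', name), ('not', A), ('or', A, B) and ('imp', A, B) tuples (the encoding is mine):

def V(formula, env):
    op = formula[0]
    if op == 'var':
        return env[formula[1]]
    if op == 'not':                       # V(~A) = -V(A)
        return -V(formula[1], env)
    if op == 'or':                        # V(A v B) = -V(A) - V(B)
        return -V(formula[1], env) - V(formula[2], env)
    if op == 'imp':                       # A > B is ~A v B, so V = V(A) - V(B)
        return V(formula[1], env) - V(formula[2], env)

p, q = ('var', 'p'), ('var', 'q')
# Every theorem of K evaluates to 0 under all integer assignments, but:
print(V(('imp', p, ('imp', ('not', p), q)), {'p': 1, 'q': 1}))   # 3 != 0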
http://openfoamwiki.net/index.php/OpenFOAM_guide/H_operator
# OpenFOAM guide/H operator
The H operator is a form of shorthand notation that is not commonly used, but appears in OpenFOAM. It comprises a collection of terms from the momentum equation. A partially discretized form of the momentum equation is:
$A_p \mathbf U_p = \mathbf {S_m}_{,p} - \sum\limits_r A_r \mathbf{U}_r - \boldsymbol \nabla p^*$
Where:
• subscript p is the cell index;
• subscript r are related cells[1];
• A are the matrix coefficients;
• U is the uncorrected velocity;
• S is the discretization source term; and
• p* is the pressure from the previous timestep or initial guess.
The H operator is all terms on the right-hand side, excluding those involving pressure:
$\mathbf H_p = \mathbf {S_m}_{,p} - \sum\limits_r A_r \mathbf{U}_r$
Therefore the momentum equation becomes:
$A_p \mathbf U_p = \mathbf H_p - \boldsymbol \nabla p$
Due to its prevalence in solver algorithms, OpenFOAM implements the H operator directly in its matrix classes, e.g. as the H() member function of fvMatrix.
## Notes
1. What I'm calling related cells is conventionally called neighbours. But OpenFOAM has a different meaning for neighbours, so the term related cells is used.
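A toy illustration of the splitting, as a numpy sketch on a 1D three-cell system (all numbers are arbitrary, chosen only to exercise the formulas above):

import numpy as np

# A U = S with A split into diagonal coefficients A_p and neighbour terms A_r.
A = np.array([[ 2.0, -0.5,  0.0],
              [-0.5,  2.0, -0.5],
              [ 0.0, -0.5,  2.0]])
S = np.array([1.0, 0.0, 1.0])
U = np.array([0.4, 0.2, 0.4])          # current (uncorrected) velocity
grad_p = np.array([0.1, 0.0, -0.1])    # discrete pressure gradient

A_p = np.diag(A)                        # the A_p coefficients
H = S - (A - np.diag(A_p)) @ U          # H_p = S_p - sum_r A_r U_r
U_new = (H - grad_p) / A_p              # momentum equation: A_p U_p = H_p - grad p
print(U_new)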
https://doc.plask.app/xpl/solvers/optical/slab.BesselCyl
# BesselCyl¶
<optical solver="BesselCyl">
Corresponding Python class: optical.slab.BesselCyl.
Vectorial optical solver based on the Bessel expansion reflection transfer method.
Attributes: name (required) – Solver name.
Contents:
<geometry>
Geometry for use by this solver.
Attributes: ref (required) – Name of a Cylindrical geometry defined in the section.
<mesh>
Optional Ordered mesh used by this solver.
Attributes: ref (required) – Name of a Ordered mesh defined in the section.
<expansion>
Details on Bessel expansion used in computations
Attributes:
- lam0 – The wavelength at which the refractive index is retrieved from the structure. If this parameter is None, material parameters are recomputed each time the wavelength changes even slightly (this is most accurate, but can be very inefficient). (float)
- update-gain – If this attribute is set to 'yes', material parameters are always recomputed for layers with gain. This allows setting 'lam0' for better efficiency and still updating the gain for slight changes of wavelength. (bool, default is 'no')
- domain – Computational domain. If set to finite, the field is expanded in Fourier-Bessel series over a finite domain (geometry + PMLs). For infinite domain, the field is represented by its Hankel transform. ('finite' or 'infinite', default is 'infinite')
- size – Expansion size. (int, default 12)
- group-layers – Should similar layers be grouped for better performance. (bool, default is 'yes')
- temp-diff – Maximum temperature difference between the layers in one group. If the temperature in a single layer varies vertically by more than this value, the layer is split into two and put into separate groups. If this is empty, the temperature gradient is ignored in layer grouping. (float [K])
- temp-dist – Approximate lateral distance of the points at which the temperature is probed to decide about the temperature difference in one layer. (float [µm], default 0.5 µm)
- temp-layer – Minimum thickness of sublayers resulting from temperature-gradient division. (float [µm], default 0.05 µm)
- integrals-error – Maximum error for Bessel function integrals. (float, default 1e-06)
- integrals-points – Maximum number of points at which each element is sampled for computing Bessel function integrals. (int, default 1000)
- k-method – Method of selecting wavevectors for the numerical Hankel transform in infinite domain. ('uniform', 'nonuniform', 'laguerre', or 'manual', default is 'nonuniform')
- k-max – Maximum wavevector used in infinite domain relative to the wavelength. (float, default 5)
- k-scale – Scale factor for wavevectors used in infinite domain. (float, default 1)
- k-list – A list of wavevector ranges. If no weights are given, the actual wavevectors used in the computations are the averages of each two adjacent values specified here and the integration weights are the sizes of each interval. (list of floats)
- k-weights – Weights for manual wavevectors. (list of floats)
- rule – Expansion rule for the coefficients matrix. Can be direct or inverse. The inverse rule is proven to provide better convergence and should be used in almost every case. ('inverse', 'semi-inverse', or 'direct', default is 'inverse')
<mode>
Mode properties.
Attributes:
- lam – Light wavelength. For finding modes this parameter is ignored; however, it is important for reflection and transmission computations. (float [nm])
- emission – Direction of the useful light emission. Necessary for the over-threshold model to correctly compute the output power. In this solver only top and bottom emission is possible. ('undefined', 'top', or 'bottom', default is 'undefined')
<interface>
Matching interface position in the stack.
Attributes:
- position – The interface will be located as close as possible to the vertical coordinate specified in this attribute. (float [µm])
- object – Name of the geometry object below which the interface is located. (geometry object)
- path – Optional path name, specifying a particular instance of the object given in the object attribute. (geometry path)
<transfer>
Vertical field transfer settings.
Attributes:
- method – Layer transfer algorithm. Can be either reflection transfer, admittance/impedance transfer or automatic, in which case the reflection computations will use reflection transfer and the eigenmode search is done with admittance transfer. Reflection transfer can have the optional suffix -admittance (default) or -impedance, in which case the admittance/impedance matching is done at the interface (for eigenmode search). You should prefer admittance if the electric field is expected to have significant horizontal components (particularly at the interface), i.e. for TE-like modes, and impedance for TM-like modes. ('auto', 'admittance', 'impedance', 'reflection', 'reflection-impedance', or 'reflection-admittance', default is 'auto')
- determinant – This attribute specifies what is returned by the get_determinant method. Regardless of the determinant type, its value must be zero for any mode. Depending on the value, the computed quantity is either the characteristic-matrix eigenvalue with the smallest magnitude or the full determinant of this matrix. ('eigenvalue', 'full', or 'eigen', default is 'eigenvalue')
<vpml>
Vertical absorbing perfectly matched layer boundary conditions parameters.
Attributes:
- factor – PML scaling factor. (complex, default is '(1-2j)')
- dist – PML distance from the structure. (float [µm], default 10.0 µm)
- size – PML size. (float [µm], default 2.0 µm)
<root>
Parameters of the global root-finding algorithm.
Attributes:
- method – Root finding algorithm (Muller's method or Broyden's method). ('muller', 'broyden', or 'brent', default is 'muller')
- tolx – Maximum change of the argument which is allowed for a convergent solution. (float, default 1e-06)
- tolf-min – Minimum value of the determinant sufficient to assume convergence. (float, default 1e-07)
- tolf-max – Maximum value of the determinant required to assume convergence. (float, default 1e-05)
- maxstep – Maximum step in one iteration of root finding. Significant for the Broyden method only. (float, default 0.1)
- maxiter – Maximum number of root finding iterations. (int, default 500)
- alpha – Parameter ensuring sufficient decrease of the determinant in each step (Broyden method only). (float, default 1e-07)
- lambda – Minimum decrease ratio of one step (Broyden method only). (float, default 1e-08)
- initial-range – Initial range size (Muller method only). (complex, default 0.001)
<pml>
Side absorbing perfectly matched layer boundary conditions parameters.
Attributes:
- factor – PML scaling factor. (complex, default 1.0)
- shape – PML shape order (0 → flat, 1 → linearly increasing, 2 → quadratic, etc.). (float, default 1)
- dist – PML distance from the structure. (float [µm], default 20.0 µm)
- size – PML size. (float [µm], default 0.0 µm)
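Putting the pieces together, a configuration for this solver might look like the following sketch (the solver name BESSEL, the geometry name main, and all attribute values are illustrative; only attributes documented above are used):

<optical solver="BesselCyl" name="BESSEL">
  <geometry ref="main"/>
  <expansion lam0="980" size="20" domain="infinite" rule="inverse"/>
  <mode lam="980" emission="top"/>
  <root method="muller" tolx="1e-6"/>
  <pml factor="1.0" shape="2" dist="20." size="2."/>
</optical>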
http://www-h.eng.cam.ac.uk/help/tpl/textprocessing/latex_basic/latex_basic
# Word Processing Using LaTeX
## Using LaTeX
LaTeX is a friendly way of using the TeX text formatting system. It can be used with a front-end that makes it look much like Word, but here we'll show how to use it in a minimal environment with just a text editor (e.g. emacs or gedit) and a command line. With the text editor you can create a file containing your text, along with a few special formatting commands. It's a good idea to give the filename a .tex suffix. As an example, put this
\documentclass[12pt]{article}
\begin{document}
at the top of the file called one.tex, and
\end{document}
at the end. In the middle add some text. The text of the document is just typed in as normal except that each time you want to start a new paragraph you should leave a blank line. If you want to have numbered section headings in the text use the command
\section{This is the Text of the Heading}
If you don't want the numbering, use \section* instead of \section.
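For example, the first heading below is numbered and the second is not (an illustrative snippet, not from the original handout):
\section{Results}
\section*{Acknowledgements}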
There are a small number of characters which have special meanings in LaTeX, so if you need to use them they will need to be entered specially into your file. The characters are:
& $ # % _ { } ^ ~ \
If you really need any of the first seven of these they can be inserted by typing the two-character combinations shown below.
\& \$ \# \% \_ \{ \}
The '\' character is used in each case to tell LaTeX that the character that comes next should not have its special meaning in this case. When you are happy with the document, save it.
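For instance, this line of source (an illustrative snippet, not from the original handout):
Profits rose by 3\% (about \$2,000) --- see item \#4.
prints the percent, dollar and hash signs literally, with the three hyphens typeset as a long dash.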
Next, your document needs to be processed. Nowadays it's common to produce PDF files directly with latex. Type
pdflatex one.tex
If an error occurs, details will be given of the line in which the error was detected so you can correct your LaTeX code. Even if there aren't any errors there'll be quite a lot of messages. If it says
Output written on one.pdf
then you know you've produced a file you can view using acroread or evince.
## An Example Document
Sooner or later you may want to produce more complicated documents using LaTeX. There are many other documents available - see the LaTeX page in our help system. To introduce a few more of the more commonly used techniques we now present an example of the source file of a LaTeX document followed by what it really looks like when it has been processed by the latex program.
\documentclass[12pt]{article}
\begin{document}
\section*{Excitement and Hard Maths}
Quotation marks are inserted into text using ` for open quotes, and '
for close quotes. If double quotes are needed you just type two
single quotes --- ``This is a quotation,'' he said. Notice that you
can produce different length dashes by typing one, two and three
hyphens. Between hyphenated words use just one inter-word hyphen.
Two hyphens are often used for number ranges (23--45). Three hyphens
are used a bit like semicolons --- you know the sort of thing.
\LaTeX\ always puts extra space after a full stop like this.
To prevent the extra gap occurring in the middle of a name you insert
a tie like this (Mr.~Jones).
This is a bit of prose which is gently building up to the excitement
of an equation.
\begin{eqnarray}
y&=&ax^{2}+bx+c \nonumber\\
E&=&mc^2 \nonumber\\
{\delta y \over \delta x} &=& {{a\over b}\over c}
\end{eqnarray}
\noindent
Don't worry too much if it looks
complicated, the main purpose was to give an \emph{idea\/} of the
quality of maths which \LaTeX\ can produce. Let's look at a rather
simpler formula. Subscripts are written $x_{2y}$ and superscripts
are written $x^{2y}$. These are both in-line formulae.
\section*{Conclusions}
This example illustrates a number of \LaTeX\ features. By comparing
the original and the processed text you should be able to see
\begin{enumerate}
\item How to open and close both single and double quotes.
\item How to produce dashes and what they look like.
\item How to typeset Ms.~Smith.
\item How to produce subscripts and superscripts.
\item How to emphasize a section of text \emph{like this}.
\item How to produce a numbered list of things.
\end{enumerate}
\end{document}
produces this document
https://proofwiki.org/wiki/Definition:Periodic_Continued_Fraction
# Definition:Periodic Continued Fraction
## Definition
Let $\left[{a_1, a_2, a_3, \ldots}\right]$ be a simple infinite continued fraction.
Let the partial quotients be of the form:
$\left[{r_1, r_2, \ldots, r_m, s_1, s_2, \ldots, s_n, s_1, s_2, \ldots, s_n, s_1, s_2, \ldots, s_n, \ldots}\right]$
that is, ending in a block of partial quotients which repeats itself indefinitely.
Such a simple infinite continued fraction (SICF) is known as a periodic continued fraction.
The notation used for this is $\left[{r_1, r_2, \ldots, r_m, \left \langle{s_1, s_2, \ldots, s_n}\right \rangle}\right]$, where the repeating block is placed in angle brackets.
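For example, $\sqrt 2 = \left[{1, \left \langle{2}\right \rangle}\right]$, since its partial quotients are $1, 2, 2, 2, \ldots$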
### Purely Periodic Continued Fraction
A periodic continued fraction is a purely periodic continued fraction if its partial quotients are of the form:
$\left[{\left \langle{s_1, s_2, \ldots, s_n}\right \rangle}\right]$
That is, all of its partial quotients form a block which repeats itself indefinitely.
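For example, the golden ratio $\phi$ has the purely periodic expansion $\left[{\left \langle{1}\right \rangle}\right]$, which reflects the identity $\phi = 1 + 1/\phi$.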
### Cycle
The repeating block in a periodic (or purely periodic) continued fraction $F$ is called the cycle of $F$.
https://psychology.stackexchange.com/questions/10132/how-did-fechner-justify-the-assumption-that-the-just-noticeable-difference-in-se
# How did Fechner justify the assumption that the just-noticeable-difference in sensation is constant?
As stated here on Wikipedia: 1
Weber's law states that the just-noticeable difference (JND) of the intensity of a stimulus, divided by the intensity of that stimulus, is constant.
Mathematically: $\Delta(I)/I = \text{constant}$
... where $I$ here means the physical intensity of sound, light and so on, and $\Delta(I)$ is the just-noticeable difference.
Then came Fechner, who assumed that the just-noticeable difference "in sensation" of a stimulus is constant as well, hence: $\Delta(I)/I = \text{constant} = \Delta(S)$
$\Delta(S)$ stands for the just-noticeable difference "in sensation."
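(For context: treating the JNDs as differentials and integrating this combined assumption gives Fechner's law, $S = k \ln(I/I_0)$, with $I_0$ the threshold intensity at which $S = 0$.)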
On what basis was Fechner justified in assuming that the just-noticeable difference in sensation is constant?
https://www.physicsforums.com/threads/using-upper-and-lower-sums-to-approximate-the-area.574629/
Using upper and lower sums to approximate the area.
Never Mind
I answered my own question two minutes after posting it. I don't know how to take this question down so I just deleted it.
https://pyslim.readthedocs.io/en/latest/tutorial.html
# Tutorial¶
There are several very common uses of tree sequences in SLiM/pyslim. These are covered in this tutorial.
## Recapitation, simplification, and mutation¶
Perhaps the most common pyslim operations involve Recapitation, Simplification, and/or Adding neutral mutations to a SLiM simulation. Below we illustrate all three in the context of running a “hybrid” simulation, combining both forwards and backwards (coalescent) methods. This hybrid approach is a popular application of pyslim because coalescent algorithms, although more limited in the degree of biological realism they can attain, can be much faster than the forwards algorithms implemented in SLiM.
A typical use-case is to take an existing SLiM simulation and endow it with a history derived from a coalescent simulation: this is known as recapitation. For instance, suppose we have a SLiM simulation of a population of 100,000 individuals that we have run for 10,000 generations without neutral mutations. Now, we wish to extract whole-genome genotype data for only 1,000 individuals. Here’s one way to do it:
1. SlimTreeSequence.recapitate() : The simulation has likely not reached demographic equilibrium - it has not coalesced entirely; recapitation uses coalescent simulation to provide a “prior history” for the initial generation of the simulation.
2. SlimTreeSequence.simplify() : For efficiency, subset the tree sequence to only the information relevant for those 1,000 individuals we wish to sample. Important: this should probably come *after* recapitation (see below).
3. msprime.mutate() : Adds neutral mutations to the tree sequence.
These steps are described below. First, to get something to work with, you can run this simple SLiM script of a single population of sexual organisms, fluctuating around 1000 individuals, for 1000 generations:
initialize() {
setSeed(23);
initializeSLiMModelType("nonWF");
initializeSex("A");
initializeTreeSeq();
initializeMutationRate(0.0);
initializeMutationType("m1", 0.5, "f", 0.0);
initializeGenomicElementType("g1", m1, 1.0);
initializeGenomicElement(g1, 0, 1e8-1);
initializeRecombinationRate(1e-8);
defineConstant("K", 1000);
}
reproduction(p1, "F") {
    mate = subpop.sampleIndividuals(1, sex="M");
    subpop.addCrossed(individual, mate);
}
1 early() {
    sim.addSubpop("p1", K);
}
early() {
p1.fitnessScaling = K / p1.individualCount;
}
1000 late() {
sim.treeSeqOutput("example_sim.trees");
}
(Note: by setting the random seed in the simulation, you should get exactly the same results as the code below.)
### Recapitation¶
Although we can initialize a SLiM simulation with the results of a coalescent simulation, if during the simulation we don’t actually use the genotypes for anything, it can be much more efficient to do this afterwards, hence only doing a coalescent simulation for the portions of the first-generation ancestors that have not yet coalesced. (See the SLiM manual for more explanation.) This is depicted in the figure at the right: imagine that at some sites, some of the samples don’t share a common ancestor within the SLiMulated portion of history (shown in blue). Recapitation starts at the top of the genealogies, and runs a coalescent simulation back through time to fill out the rest of genealogical history relevant to the samples. The purple chromosomes are new ancestral nodes that have been added to the tree sequence. This is important - if we did not do this, then effectively we are assuming the initial population would be genetically homogeneous, and so our simulation would have less genetic variation than it should have (since the component of variation from the initial population would be omitted).
Doing this is as simple as:
orig_ts = pyslim.load("example_sim.trees")
rts = orig_ts.recapitate(recombination_rate = 1e-8, Ne=200, random_seed=5)
We can check that this worked as expected, by verifying that after recapitation all trees have only one root:
orig_max_roots = max(t.num_roots for t in orig_ts.trees())
recap_max_roots = max(t.num_roots for t in rts.trees())
print(f"Before recapitation, the max number of roots was {orig_max_roots}, "
f"and after recapitation, it was {recap_max_roots}.")
# Before recapitation, the max number of roots was 15, and after recapitation, it was 1.
Note that demography needs to be set up explicitly - if you have more than one population, you must set migration rates or else coalescence will never happen (see below for an example, and SlimTreeSequence.recapitate() for more).
#### Recapitation with a nonuniform recombination map¶
Above, we recapitated using a uniform genetic map. But, msprime - like SLiM - can simulate with recombination drawn from an arbitrary genetic map. Let’s say we’ve already got a recombination map as specified by SLiM, as a vector of “positions” and a vector of “rates”. msprime also needs vectors of positions and rates, but the format is slightly different. To use the SLiM values for msprime, we need to do three things:
1. Add a 0 at the beginning of the positions,
2. add a 0 at the end of the rates, and
3. add 1 to the final value in “positions”.
The reason why msprime “positions” must start with 0 (step 1) is that in SLiM, a position or “end” indicates the end of a recombination block such that its associated “rate” applies to everything to the left of that end (see initializeRecombinationRate). In msprime, the manual says:
Given an index j in these lists, the rate of recombination per base per generation is rates[j] over the interval positions[j] to positions[j + 1]. Consequently, the first position must be zero, and by convention the last rate value is also required to be zero (although it is not used).
This means that positions for msprime are both starts and ends. As a consequence, msprime needs a vector of positions that is 1 longer than what you give SLiM, and msprime also needs 1 fewer rates than it has positions, but you just add the 0.0 on at the end of the rates vector “by convention” (step 2).
The reason for step 3 is that intervals for tskit (which msprime uses) are “closed on the left and open on the right”, which means that the genomic interval from 0.0 to 100.0 includes 0.0 but does not include 100.0. If SLiM has a final genomic position of 99, then it could have mutations occurring at position 99. Such mutations would not be legal, on the other hand, if we set the tskit sequence length to 99, since the position 99 would be outside of the interval from 0 to 99. So, in SLiM when we record tree sequences, we use the last position plus one - i.e., the length of the genome - as the rightmost coordinate.
For instance, suppose that we have a recombination map file in the following (tab-separated) format:
end_position rate(cM/Mb)
150000 3.2
500000 2.5
850000 0.25
999999 2.8
This describes recombination rates across a 1Mb segment of genome with higher rates on the ends (for instance, 3.2 and 2.8 cM/Mb in the first and last 150Kb respectively) and lower rates in the middle (0.25 cM/Mb between 500Kb and 850Kb). The first column gives the ending position, in bp, of the window whose recombination rate is given in the second column. (Note: this is not a standard format for recombination maps - it is more usual for the starting position to be listed!)
Here is SLiM code to read this file and set the recombination rates:
lines = readFile("recomb_rates.tsv");
header = strsplit(lines[0], "\t");
if (header[0] != "end_position" | header[1] != "rate(cM/Mb)") {
    stop("Unexpected format!");
}
rates = NULL;
ends = NULL;
nwindows = length(lines) - 1;
for (line in lines[1:nwindows]) {
components = strsplit(line, "\t");
ends = c(ends, asInteger(components[0]));
rates = c(rates, asFloat(components[1]));
}
initializeRecombinationRate(rates * 1e-8, ends);
Now, here’s code to take the same recombination map used in SLiM, and use it for recapitation in msprime:
import msprime, pyslim
import numpy as np
positions = []
rates = []
with open('recomb_rates.tsv', 'r') as file:
for line in file:
components = line.split("\t")
positions.append(float(components[0]))
rates.append(1e-8 * float(components[1]))
# step 1
positions.insert(0, 0)
# step 2
rates.append(0.0)
# step 3
positions[-1] += 1
recomb_map = msprime.RecombinationMap(positions, rates)
ts = pyslim.load("example_sim.trees")
rts = ts.recapitate(recombination_map=recomb_map, Ne=1000)
assert(max([t.num_roots for t in rts.trees()]) == 1)
Next, one might wish to sanity check the result, for instance, by setting rates in one interval to zero and making sure that no recombinations occurred in that region.
Note
Starting from msprime 1.0, there will be a discrete argument to the RecombinationMap class; setting discrete=True will more closely match the recombination model of SLiM.
### Simplification¶
Probably, your simulations have produced many more fictitious genomes than you will be lucky enough to have in real life, so at some point you may want to reduce your dataset to a realistic sample size. We can get rid of unneeded samples and any extra information from them by using an operation called simplification (this is the same basic approach that SLiM implements under the hood when outputting a tree sequence, as described in the introduction).
Depicted in the figure at the right is the result of applying an explicit call to simplify() to our example tree sequence. In the call we asked to keep only 4 genomes (contained in 2 of the individuals in the current generation). This has substantially simplified the tree sequence, because only information relevant to the genealogies of the 4 sample nodes has been kept. (Precisely, simplification retains only nodes of the tree sequence that are branching points of some marginal genealogy – see Kelleher et al 2018 for details.) While simplification sounds very appealing - it makes things simpler after all - it is often not necessary in practice, because tree sequences are very compact, and many operations with them are quite fast. (It will, however, speed up many operations, so if you plan to do a large number of simulations, your workflow could benefit from early simplification.) So, you should probably not make simplification a standard step in your workflow, only using it if necessary.
It is important that simplification - if it happens at all - either (a) comes after recapitation, or (b) is done with the keep_input_roots=True option (see tskit.TreeSequence.simplify()). This is because simplification removes some of the ancestral genomes in the first generation, which are necessary for recapitation, unless it is asked to “keep the input roots”. If we simplify without this option before recapitating, some of the first-generation blue chromosomes in the figure on the right would not be present, so the coalescent simulation would start from a more recent point in time than it really should. As an extreme example, suppose our SLiM simulation has a single diploid who has reproduced by clonal reproduction for 1,000 generations, so that the final tree sequence is just two vertical lines of descent going back to the two chromosomes in the initial individual alive 1,000 generations ago. Recapitation would produce a shared history for these two chromosomes, that would coalesce some time longer ago than 1,000 generations. However, if we simplified first, then those two branches going back 1,000 generations would be removed, since they don’t convey any information about the shape of the tree; and so recapitation could well produce a common ancestor more recently than 1,000 generations, which is inconsistent with the SLiM simulation.
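As a minimal sketch of option (b), assuming a tskit version that supports the keep_input_roots argument (which SlimTreeSequence.simplify() passes through to tskit):
import numpy as np
import pyslim

orig_ts = pyslim.load("example_sim.trees")
keep_indivs = np.random.choice(orig_ts.individuals_alive_at(0), 100, replace=False)
keep_nodes = []
for i in keep_indivs:
    keep_nodes.extend(orig_ts.individual(i).nodes)
# keep_input_roots=True preserves the first-generation ancestors,
# so recapitation afterwards still has the original roots to extend
small_ts = orig_ts.simplify(keep_nodes, keep_input_roots=True)
rts = small_ts.recapitate(recombination_rate=1e-8, Ne=200)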
After recapitation, simplification to the history of 100 individuals alive today can be done with the SlimTreeSequence.simplify() method:
import numpy as np
np.random.seed(3)
alive_inds = rts.individuals_alive_at(0)
keep_indivs = np.random.choice(alive_inds, 100, replace=False)
keep_nodes = []
for i in keep_indivs:
keep_nodes.extend(rts.individual(i).nodes)
sts = rts.simplify(keep_nodes)
print(f"Before, there were {rts.num_samples} sample nodes (and {rts.num_individuals} individuals) "
f"in the tree sequence, and now there are {sts.num_samples} sample nodes "
f"(and {sts.num_individuals} individuals).")
# Before, there were 1930 sample nodes (and 965 individuals) in the tree sequence,
# and now there are 200 sample nodes (and 115 individuals).
Note that you must pass simplify a list of node IDs, not individual IDs. Here, we used the SlimTreeSequence.individuals_alive_at() method to obtain the list of individuals alive today. Also note that there are still more than 100 individuals remaining - 15 non-sample individuals have not been simplified away, because they have nodes that are required to describe the genealogies of the samples. (Since this is a non-Wright-Fisher simulation, parents and children can be both alive at the same time in the final generation.)
### Adding neutral mutations to a SLiM simulation¶
If you have recorded a tree sequence in SLiM, likely you have not included any neutral mutations, since it is much more efficient to simply add these on afterwards. To add these (in a completely equivalent way to having included them during the simulation), you can use the msprime.mutate() function, which returns a new tree sequence with additional mutations. Continuing with the cartoons from above, these are added to each branch of the tree sequence at the rate per unit time that you request. This works as follows:
ts = pyslim.SlimTreeSequence(msprime.mutate(sts, rate=1e-8, keep=True))
print(f"The tree sequence now has {ts.num_mutations} mutations, "
f"and mean pairwise nucleotide diversity is {ts.diversity()}.")
# The tree sequence now has 28430 mutations, and mean pairwise nucleotide diversity is 2.3319e-05.
This adds infinite-sites mutations at a rate of 1e-8 per site, making sure to keep any existing mutations. We have wrapped the call to msprime.mutate() in a call to pyslim.SlimTreeSequence, because msprime.mutate() returns an msprime tree sequence, and by converting it back into a pyslim tree sequence we can still use the methods defined by pyslim. (The conversion does not modify the tree sequence at all, it only adds the .slim_generation attribute.) The output of other msprime functions that return tree sequences may be converted back to pyslim.SlimTreeSequence in the same way.
## Obtaining and saving individuals¶
### Extracting particular SLiM individuals¶
To get another example with discrete subpopulations, let’s run another SLiM simulation, similar to the above but with two populations exchanging migrants:
initialize() {
setSeed(23);
initializeSLiMModelType("nonWF");
initializeSex("A");
initializeTreeSeq();
initializeMutationRate(0.0);
initializeMutationType("m1", 0.5, "f", 0.0);
initializeGenomicElementType("g1", m1, 1.0);
initializeGenomicElement(g1, 0, 1e8-1);
initializeRecombinationRate(1e-8);
defineConstant("K", 1000);
}
reproduction(NULL, "F") {
    mate = subpop.sampleIndividuals(1, sex="M");
    subpop.addCrossed(individual, mate);
}
1 early() {
    sim.addSubpop("p1", K);
    sim.addSubpop("p2", K);
}
early() {
num_migrants = rpois(2, 0.01 * c(p1.individualCount, p2.individualCount));
migrants1 = sample(p1.individuals, num_migrants[0]);
migrants2 = sample(p2.individuals, num_migrants[1]);
p2.takeMigrants(migrants1);
p1.takeMigrants(migrants2);
p1.fitnessScaling = K / p1.individualCount;
p2.fitnessScaling = K / p2.individualCount;
}
1000 late() {
sim.treeSeqOutput("migrants.trees");
}
The first, most common method to extract individuals is simply to get all those that were alive at a particular time, using SlimTreeSequence.individuals_alive_at(). For instance, to get the list of individual IDs of all those alive at the end of the simulation (i.e., zero time units ago), we could do:
orig_ts = pyslim.load("migrants.trees")
alive = orig_ts.individuals_alive_at(0)
print(f"There are {len(alive)} individuals alive from the final generation.")
# There are 2020 individuals alive from the final generation.
These are individual IDs, and we can use ts.individual() to get information about each of these individuals from their ID. For instance, to then count up how many of these individuals are in each population, we could do:
num_alive = [0 for _ in range(orig_ts.num_populations)]
for i in alive:
ind = orig_ts.individual(i)
num_alive[ind.population] += 1
for pop, num in enumerate(num_alive):
print(f"Number of individuals in population {pop}: {num}")
# Number of individuals in population 0: 0
# Number of individuals in population 1: 984
# Number of individuals in population 2: 1036
Our SLiM script started numbering populations at 1, while tskit starts counting at 0, so there is an empty “population 0” in a SLiM-produced tree sequence.
Now, let’s recapitate and mutate the tree sequence. Recapitation takes a bit more thought, because we have to specify a migration matrix (or else it will run forever, unable to coalesce).
pop_configs = [msprime.PopulationConfiguration(initial_size=1000)
for _ in range(orig_ts.num_populations)]
rts = orig_ts.recapitate(population_configurations=pop_configs,
migration_matrix=[[0.0, 0.0, 0.0],
[0.0, 0.0, 0.1],
[0.0, 0.1, 0.0]],
recombination_rate=1e-8,
random_seed=4)
ts = pyslim.SlimTreeSequence(
msprime.mutate(rts, rate=1e-8, random_seed=7))
Again, there are three populations because SLiM starts counting at 1; the first population is unused (no migrants can go to it). Let's compute genetic diversity within and between each of the two populations (we compute the mean density of pairwise nucleotide differences, often denoted $\pi$ and $d_{xy}$). To do this, we need to extract the node IDs from the individuals of the two populations that are alive at the end of the simulation.
pop_indivs = [[], [], []]
pop_nodes = [[], [], []]
for i in ts.individuals_alive_at(0):
ind = ts.individual(i)
pop_indivs[ind.population].append(i)
pop_nodes[ind.population].extend(ind.nodes)
diversity = ts.diversity(pop_nodes[1:])
divergence = ts.divergence(pop_nodes[1:], indexes=[(0,1)])
print(f"There are {ts.num_mutations} mutations across {ts.num_trees} distinct "
f"genealogical trees describing relationships among {ts.num_samples} "
f"sampled genomes, with a mean genetic diversity of {diversity[0]} and "
f"{diversity[1]} within the two populations, and a mean divergence of "
f"{divergence[0]} between them.")
# There are 115500 mutations across 50613 distinct genealogical trees describing relationships
# among 4040 sampled genomes, with a mean genetic diversity of 9.064e-05 and 9.054e-05 within
# the two populations, and a mean divergence of 9.135839855153494e-05 between them.
Each Mutation, Population, Node, and Individual, as well as the tree sequence as a whole, carries additional information stored by SLiM in its metadata property. A fuller description of metadata in general is given in Metadata, but as a quick introduction, here is the information available about an individual in the previous example:
print(ts.individual(0))
# {'id': 0,
#  'flags': 65536,
#  'location': array([0., 0., 0.]),
#  'metadata': {
#    'pedigree_id': 1003551,
#    'age': 1,
#    'subpopulation': 1,
#    'sex': 0,
#    'flags': 0
#  },
#  'nodes': array([4000, 4001], dtype=int32),
#  'population': 1,
#  'time': 16.0}
Some information is generic to individuals in tree sequences of any format: id (the ID internal to the tree sequence), flags (described below), location (the [x,y,z] coordinates of the individual), nodes (an array of the node IDs that represent the genomes of this individual), and time (the time, in units of “time ago” that the individual was born).
Other information, contained in the metadata field, is specific to tree sequences produced by SLiM. This is described in more detail in the SLiM manual, but briefly:
• the pedigree_id is SLiM’s internal ID for the individual,
• age and subpopulation are their age and population at death, or at the time the simulation stopped if they were still alive (NB: SLiM uses the word “subpopulation” for what is simply called a “population” in tree-sequence parlance)
• sex is their sex (as an integer, one of INDIVIDUAL_TYPE_FEMALE, INDIVIDUAL_TYPE_MALE, or INDIVIDUAL_TYPE_HERMAPHRODITE),
We can use this metadata in many ways, for example, to create an age distribution by sex:
import numpy as np
max_age = max([ind.metadata["age"] for ind in ts.individuals()])
age_table = np.zeros((max_age + 1, 2))
age_labels = {pyslim.INDIVIDUAL_TYPE_FEMALE: 'females',
              pyslim.INDIVIDUAL_TYPE_MALE: 'males'}
for i in ts.individuals_alive_at(0):
    ind = ts.individual(i)
    age_table[ind.metadata["age"], ind.metadata["sex"]] += 1
print(f"number\t{age_labels[0]}\t{age_labels[1]}")
for age, x in enumerate(age_table):
    print(f"{age}\t{x[0]}\t{x[1]}")
# number females males
# 0 327.0 343.0
# 1 213.0 226.0
# 2 165.0 144.0
# 3 99.0 112.0
# 4 79.0 68.0
# 5 48.0 37.0
# 6 31.0 38.0
# 7 16.0 13.0
# 8 10.0 8.0
# 9 7.0 10.0
# 10 4.0 3.0
# 11 3.0 4.0
# 12 4.0 1.0
# 13 2.0 1.0
# 14 1.0 0.0
# 15 0.0 1.0
# 16 0.0 0.0
# 17 1.0 0.0
# 18 0.0 0.0
# 19 0.0 0.0
# 20 0.0 0.0
# 21 1.0 0.0
We have looked up how to interpret the sex attribute by using the values of INDIVIDUAL_TYPE_FEMALE (which is 0) and INDIVIDUAL_TYPE_MALE (which is 1). In a simulation without separate sexes, all individuals would have sex equal to INDIVIDUAL_TYPE_HERMAPHRODITE (which is -1).
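As a quick sanity check of these constants (using the values just stated):
import pyslim
assert pyslim.INDIVIDUAL_TYPE_FEMALE == 0
assert pyslim.INDIVIDUAL_TYPE_MALE == 1
assert pyslim.INDIVIDUAL_TYPE_HERMAPHRODITE == -1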
Several fields associated with individuals are also available as numpy arrays, across all individuals at once: SlimTreeSequence.individual_locations, SlimTreeSequence.individual_populations, SlimTreeSequence.individual_ages, and SlimTreeSequence.individual_times (also see SlimTreeSequence.individual_ages_at()). Using these can sometimes be easier than iterating over individuals as above. For example, suppose that we want to randomly sample 10 individuals alive and older than 2 time steps from each of the populations at the end of the simulation, and simplify the tree sequence to retain only those individuals. This can be done using the numpy arrays SlimTreeSequence.individual_ages and SlimTreeSequence.individual_populations as follows:
alive = ts.individuals_alive_at(0)
adults = alive[ts.individual_ages[alive] > 2]
pops = [np.where(
    ts.individual_populations[adults] == k)[0] for k in [1, 2]]
sample_inds = [np.random.choice(pop, 10, replace=False) for pop in pops]
sample_nodes = []
for samp in sample_inds:
for i in samp:
        sample_nodes.extend(ts.individual(adults[i]).nodes)
sub_ts = ts.simplify(sample_nodes)
The resulting tree sequence does indeed have fewer individuals and fewer trees:
print(f"There are {sub_ts.num_mutations} mutations across {sub_ts.num_trees} distinct "
f"genealogical trees describing relationships among {sub_ts.num_samples} "
f"sampled genomes, with a mean overall genetic diversity of {sub_ts.diversity()}.")
# There are 44576 mutations across 25154 distinct genealogical trees describing relationships
# among 40 sampled genomes, with a mean overall genetic diversity of 9.087e-05.
### Historical individuals¶
As we’ve seen, a basic tree sequence output by SLiM only contains the currently alive individuals and the ancestral nodes (genomes) required to reconstruct their genetic relationships. But you might want more than that. For example, there may be individuals who are not alive any more, but whose complete ancestry you would like to know. Or perhaps you’d like to know how the final generation relates to particular individuals in the past. Or it may be that you want to access the spatial location of historical genomes (which, for technical reasons is linked to individuals, not to genomes). The solution is to remember an individual during the simulation, using the SLiM function treeSeqRememberIndividuals(). Individuals can be Remembered in two ways, as described below.
#### Permanently remembering individuals¶
By default, a call to treeSeqRememberIndividuals() will permanently remember one or more individuals, by marking their nodes as actual samples: the simulated equivalent of ancient DNA dug out of permafrost, or stored in an old collecting tube. This means any tree sequence subsequently recorded will always contain this individual, its nodes (now marked as samples), and its full ancestry. As with any other sample nodes, any permanently remembered individuals can be removed from the tree sequence by Simplification. The result of remembering an individual in the introductory example is pictured on the right.
#### Retaining individuals¶
Alternatively, you may want to avoid treating historical individuals and their genomes as actual samples, but temporarily retain them as long as they are still relevant to reconstructing the genetic ancestry of the sample nodes. This can save some computational burden, as not only will nodes and individuals be removed once they are no longer ancestral, but also the full ancestry of the retained individuals does not need to be kept. You can retain individuals in this way by using treeSeqRememberIndividuals(..., permanent=F).
Since a retained individual’s nodes are not marked as samples, they are subject to the normal removal process, and it is possible to end up with an individual containing only one genome, as shown in the diagram. However, as soon as both nodes of a retained individual have been lost, the individual itself is deleted too.
Note that by default, nodes are only kept if they mark a coalescent point (MRCA or branch point) in one or more of the trees in a tree sequence. This can be changed by initialising tree sequence recording in SLiM using treeSeqInitialize(retainCoalescentOnly=F). SLiM will then preserve all retained individuals while they remain in the genealogy, even if their nodes are not coalescent points in a tree (so-called “unary nodes”). Similarly, if you later decide to reduce the number of samples via Simplification, retained individuals will be kept only if they are still MRCAs in the ancestry of the selected samples. To preserve them even if their nodes are not coalescent points, you can specify ts.simplify(selected_samples, keep_unary_in_individuals=True).
#### Remembering everyone¶
Although not needed to reconstruct full genomic history, it is perfectly possible to apply treeSeqRememberIndividuals() to every individual in every generation of a simulation (i.e. everyone who has ever lived). If you simply mark everyone for temporary retention, it should not increase the memory burden of your simulation much: most individuals will be removed as the simulation progresses, since they will not contain coalescent nodes. However, if you use treeSeqInitialize(retainCoalescentOnly=F), the number of individuals in the resulting tree sequence is likely to become very large, and the efficiencies provided by tree sequence recording will be substantially reduced. Indeed in this case, retaining will be much the same as permanently remembering everyone who has ever lived. Nevertheless, if you are willing to sacrifice enough computer memory, either of these is (perhaps surprisingly) possible, even for medium-sized simulations.
#### Individual flags¶
We have seen that an individual can appear in the tree sequence because it was Remembered, Retained, or alive at the end of the simulation (note these are not mutually exclusive). The Individual.flags value stores this information. For example, to count up the different individual types, we could do this:
indiv_types = {"remembered" : 0,
"retained" : 0,
"alive" : 0}
for ind in ts.individuals():
if ind.flags & pyslim.INDIVIDUAL_REMEMBERED:
indiv_types['remembered'] += 1
if ind.flags & pyslim.INDIVIDUAL_RETAINED:
indiv_types['retained'] += 1
if ind.flags & pyslim.INDIVIDUAL_ALIVE:
indiv_types['alive'] += 1
for k in indiv_types:
print(f"Number of individuals that are {k}: {indiv_types[k]}")
# Number of individuals that are remembered: 0
# Number of individuals that are retained: 0
# Number of individuals that are alive: 2012
Note
In previous versions of SLiM/pyslim, the first generation of individuals were kept in the tree sequence, to allow Recapitation. With the addition of the keep_input_roots=True option to the Simplification process, this is no longer necessary, so these are no longer present, unless you specifically Remember them.
## Coalescent simulation for SLiM¶
Sometimes you may want to initialize a SLiM simulation with the result of a coalescent simulation. The annotate_defaults() command helps make this easy, by adding default SLiM information to a tree sequence, allowing it to be read in by SLiM. The following will simulate a tree sequence with msprime, add SLiM information, and write it out to a .trees file:
import msprime
import pyslim
# simulate a tree sequence of 12 sample genomes
ts = msprime.simulate(12, mutation_rate=0.0, recombination_rate=1e-8, length=1e6)
new_ts = pyslim.annotate_defaults(ts, model_type="nonWF", slim_generation=1)
new_ts.dump("initialize_nonWF.trees")
Note that we have set the mutation rate to 0.0: this is because any mutations that are produced will be read in by SLiM… which could be a very useful thing, if you want to generate mutations with msprime that provide standing variation for selection within SLiM… but, the last msprime release only produces mutations with an infinite-sites model, while SLiM requires mutation positions to be at integer positions. This will change in msprime v1.0, and is already implemented in the development version, so if you’d really like to do this, get in touch. When this is released we plan to write a vignette of how to do it. However, if you intend the pre-existing mutations to be neutral, then there is no need to add them at this point; you can add them after the fact, as discussed above. Also note that we have set slim_generation to 1; this means that as soon as we load the tree sequence into SLiM, SLiM will set the current time counter to 1. (If we set slim_generation to 100, then any script blocks scheduled to happen before 100 would not execute after loading the tree sequence.)
The resulting file initialize_nonWF.trees can be read into SLiM to be used as a starting state, as illustrated in this minimal example:
initialize()
{
initializeSLiMModelType("nonWF");
initializeTreeSeq();
initializeMutationRate(1e-2);
initializeMutationType("m1", 0.5, "f", -0.1);
initializeGenomicElementType("g1", m1, 1.0);
initializeGenomicElement(g1, 0, 1e6-1);
initializeRecombinationRate(1e-8);
}
1 early() {
    sim.readFromPopulationFile("initialize_nonWF.trees");
}
10 {
sim.treeSeqOutput("nonWF_restart.trees");
catn("Done.");
sim.simulationFinished();
}
## Extracting information about selected mutations¶
Here is a simple SLiM simulation with two types of mutation: m1 are deleterious, and m2 are beneficial. Let’s see how to extract information about these mutations.
initialize()
{
initializeSLiMModelType("WF");
initializeTreeSeq();
initializeMutationRate(1e-6);
initializeMutationType("m1", 0.5, "e", -0.1);
initializeMutationType("m2", 0.5, "e", 0.5);
initializeGenomicElementType("g1", c(m1, m2), c(0.9, 0.1));
initializeGenomicElement(g1, 0, 1e6-1);
initializeRecombinationRate(1e-8);
}
1 early() {
    sim.addSubpop("p1", 1000);
}
1000 {
sim.treeSeqOutput("selection.trees");
}
If you want to follow along exactly with the below, set the seed to 23. First, let’s see how many mutations there are:
ts = pyslim.load("selection.trees")
ts.num_mutations
# 5961
ts.num_sites
# 5941
Note that there are more mutations than sites; that’s because some sites (looks like 20 of them) have multiple mutations. The information about the mutation is put in the mutation’s metadata (formatted by hand for clarity):
m = ts.mutation(0)
print(m)
# {'id': 0,
#  'site': 0,
#  'node': 4425,
#  'time': 1.0,
#  'derived_state': '1997240',
#  'parent': -1,
#  'metadata': {
#    'mutation_list': [
#      {'mutation_type': 2,
#       'selection_coeff': 1.4618088006973267,
#       'subpopulation': 1,
#       'slim_time': 992,
#       'nucleotide': -1}
#    ]
#  }
# }
print(ts.site(m.site))
# {'id': 0,
#  'position': 126.0,
#  'ancestral_state': '',
#  'mutations': [...],
#  ...}
Here, m.site tells us the ID of the site on the genome that the mutation occurred at, and we can pull up information about that site with the ts.site() method. This mutation occurred at position 126 along the genome (from site.position), which previously had no mutations (since site.ancestral_state is the empty string, ''), and was given SLiM mutation ID 1997240 (m.derived_state). The metadata (m.metadata, a dict) tells us that the mutation is of type m2, has selection coefficient 1.46, and occurred in population 1 at SLiM generation 992. This is not a nucleotide model, so the nucleotide entry is -1. Note that m.time and m.metadata['mutation_list'][0]['slim_time'] record the same event: the first is in tskit time (i.e., number of steps before the tree sequence was written out) and the second uses SLiM's internal "generation" counter.
Also note that the mutation's metadata contains a list of metadata entries. That's because of SLiM's mutation stacking feature. In the example above, m.metadata['mutation_list'] was a list of length one, so that mutation was not stacked on top of previous ones. We know that some sites have more than one mutation, so to get a stacked example let's pull out the last mutation from one of those sites.
for s in ts.sites():
if len(s.mutations) > 1:
m = s.mutations[-1]
break
print(m)
# {'id': 193,
#  'site': 192,
#  'node': 767,
#  'time': 0.0,
#  'derived_state': '1998266,1293043',
#  'parent': 192,
#  'metadata': {
#    'mutation_list': [
#      {'mutation_type': 1,
#       'selection_coeff': -0.08409399539232254,
#       'subpopulation': 1,
#       'slim_time': 999,
#       'nucleotide': -1},
#      {'mutation_type': 1,
#       'selection_coeff': -0.013351504690945148,
#       'subpopulation': 1,
#       'slim_time': 646,
#       'nucleotide': -1}
#    ]
#  }
# }
print(ts.mutation(m.parent))
# {'id': 192,
#  'site': 192,
#  'node': 2940,
#  'time': 353.0,
#  'derived_state': '1293043',
#  'parent': -1,
#  'metadata': {
#    'mutation_list': [
#      {'mutation_type': 1,
#       'selection_coeff': -0.013351504690945148,
#       'subpopulation': 1,
#       'slim_time': 646,
#       'nucleotide': -1}
#    ]
#  }
# }
This mutation (which is ts.mutation(193) in the tree sequence) was the result of SLiM adding a new mutation of type m1 and selection coefficient -0.084 on top of an existing mutation, also of type m1 and with selection coefficient -0.013. This happened at generation 999 (i.e., at tskit time 0.0 time units ago), and the older mutation occurred at generation 646 (at tskit time 353 time units ago). The older mutation has SLiM mutation ID 1293043, and the newer mutation had SLiM mutation ID 1998266, so the resulting “derived state” is ‘1998266,1293043’.
Now that we understand how SLiM mutations are stored in a tree sequence, let’s look at the allele frequencies. The allele frequency spectrum for all mutations can be obtained using the ts.allele_frequency_spectrum method, shown here for a sample of size 10 to make the output easy to see:
samps = np.random.choice(ts.samples(), 10, replace=False)
ts.allele_frequency_spectrum([samps], span_normalise=False, polarised=True)
# [3898, 63, 9, 2, 415, 630, 465, 0, 0, 0, 0]
(The span_normalise=False argument gives us counts rather than a density per unit length.) This shows us that there are 3898 alleles that are found among the tree sequence’s samples that are not present in any of our 10 samples, 63 that are present in just one, etcetera. The surprisingly large number that are near 50% frequency are perhaps positively selected and on their way to fixation: we can check if that’s true next. You may have noticed that the sum of the allele frequency spectrum is 5482, which is not obviously related to the number of mutations (5961) or the number of sites (5941). That’s because every derived allele seen in the samples counts once in the polarised allele frequency spectrum, and some sites have more than two alleles. Here’s how we can check this:
sum([len(set(v.genotypes)) - 1 for v in ts.variants()])
# 5482
At time of writing, we don’t have a built-in allele_frequency method, so we’ll use the following snippet:
def allele_counts(ts, sample_sets=None):
if sample_sets is None:
sample_sets = [ts.samples()]
def f(x):
return x
return ts.sample_count_stat(sample_sets, f, len(sample_sets),
span_normalise=False, windows='sites',
polarised=True, mode='site', strict=False)
This will return an array of counts, one for each site in the tree sequence, giving the number of all nonancestral alleles at that site found in the sample set (so, lumping together any of the various derived alleles we were looking at above). Then, we’ll separate out the counts in this array to get the derived frequency spectra separately for sites with (a) only m1 mutations, (b) only m2 mutations, and (c) both (for completeness, if there are any). First, we need to know which site has which of these three mutation types (m1, m2, or both):
mut_type = np.zeros(ts.num_sites)
for j, s in enumerate(ts.sites()):
    mt = []
    for m in s.mutations:
        for md in m.metadata["mutation_list"]:
            mt.append(md["mutation_type"])
    if len(set(mt)) > 1:
        mut_type[j] = 3
    else:
        mut_type[j] = mt[0]
Now, we compute the frequency spectrum, and aggregate it. We’ll use the function np.bincount to do this efficiently:
freqs = allele_counts(ts, [samps])
# convert the n x 1 array of floats to a vector of integers
freqs = freqs.flatten().astype(int)
mut_afs = np.zeros((len(samps)+1, 3), dtype='int64')
for k in range(3):
mut_afs[:, k] = np.bincount(freqs[mut_type == k+1], minlength=len(samps) + 1)
print(mut_afs)
# array([[3428, 448, 3],
# [ 50, 13, 0],
# [ 6, 3, 0],
# [ 1, 1, 0],
# [ 237, 177, 1],
# [ 367, 263, 0],
# [ 275, 188, 2],
# [ 0, 0, 0],
# [ 0, 0, 0],
# [ 0, 0, 0],
# [ 226, 252, 0]])
The first column is the deleterious alleles, and the second is the beneficial ones; the third column describes the six sites that had both types of mutation. Interestingly, there are similar numbers of both types of mutation at intermediate frequency: perhaps because beneficial mutations are sweeping linked deleterious alleles along with them. Many fewer beneficial alleles are at low frequency: 3,428 deleterious alleles are not found in our sample of 10 genomes, while only 448 beneficial alleles are.
Finally, let’s pull out information on the allele with the largest selection coefficient.
sel_coeffs = np.array([m.metadata["mutation_list"][0]["selection_coeff"] for m in ts.mutations()])
which_max = np.argmax(sel_coeffs)
m = ts.mutation(which_max)
print(m)
# {'id': 256,
#  'site': 255,
#  'node': 2941,
#  'time': 594.0,
#  'derived_state': '809254',
#  'parent': -1,
#  'metadata': {
#    'mutation_list': [
#      {'mutation_type': 2,
#       'selection_coeff': 5.109511852264404,
#       'subpopulation': 1,
#       'slim_time': 405,
#       'nucleotide': -1}
#    ]
#  }
# }
print(ts.site(m.site))
# {'id': 255,
#  'position': 44430.0,
#  'ancestral_state': '',
#  'mutations': [...],
#  ...}
This allele had a whopping selection coefficient of 5.1, and appeared about halfway through the simulation. Let’s find its frequency in the full population:
full_freqs = allele_counts(ts)
print(f"Allele is found in {full_freqs[m.site][0]} copies,"
f" and has selection coefficient {m.metadata[0].selection_coeff}.")
# Allele is found in 1004.0 copies, and has selection coefficient 5.109511852264404.
The allele is at about 50% in the population, so it is probably on its way to fixation. Using its SLiM ID (which is shown in its derived state, 809254), we could reload the tree sequence into SLiM, restart the simulation, and use its ID to track its subsequent progression.
## Possibly important technical notes¶
Also known as “gotchas”.
1. If you use msprime to simulate a tree sequence, and then use that to initialize a SLiM simulation, you have to specify the same sequence length in both: as in the examples above, the length argument to msprime.simulate() should be equal to the last base position used in SLiM plus 1.0 (e.g., if the base positions in SLiM are 0 to 99, then there are 100 bases in all, so the sequence length should be 100). See the sketch after this list.
2. Make sure to distinguish individuals and nodes! tskit “nodes” correspond to SLiM “genomes”. Individuals in SLiM are diploid, so normally, each has two nodes (but retained individuals may have nodes removed by simplification: see below).
3. As described above, the Individual table contains entries for
1. the currently alive individuals,
2. any individuals that have been permanently remembered with treeSeqRememberIndividuals()
3. any individuals that have been temporarily retained with treeSeqRememberIndividuals(permanent=F). Importantly, the nodes in these individuals are not marked as sample nodes, so they can be lost during simplification. This means that a retained individual may only have one node (but if both nodes are lost due to simplification, the individual is removed too, and will not appear in the Individual table).
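As a minimal sketch of gotchas 1 and 2 (purely illustrative; it reuses the functions from the examples above):
import msprime
import pyslim

# Gotcha 1: SLiM base positions 0..99 correspond to sequence length 100 in msprime.
ts = msprime.simulate(10, length=100, recombination_rate=1e-8)
assert ts.sequence_length == 100.0

# Gotcha 2: tskit nodes are genomes; a diploid individual normally has two of them.
sts = pyslim.annotate_defaults(ts, model_type="nonWF", slim_generation=1)
for ind in sts.individuals():
    print(ind.id, ind.nodes)  # two node IDs per individual, barring simplification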
https://socratic.org/questions/old-questions-quite-often-i-ll-work-hard-to-answer-a-question-only-to-find-it-s-
Old questions: quite often I'll work hard to answer a question, only to find it's from 2 years ago. I imagine that questioner isn't still waiting and checking for the answer! So, to maximise my time, how can I answer new questions over old ones?
Maybe there's not really a question here, and I just need to check when a question was asked before answering it, but presenting 2 year old questions that are unanswered on the front page of a topic may not be the most efficient practice. I know that the answers are not just for the original questioner and still have value, but still, prioritising new questions over old makes sense to me.
Apr 15, 2018
You are right, you...
Explanation:
... have to check when a question was asked before answering it.
For starters, new questions are prioritized in the feeds, i.e. at the top of the subject pages, but old questions do make their way in there as well.
Now, the old questions that are being bumped back at the top of the feeds are prioritized based on when they were first asked.
• asked days ago $\to$ highest priority among old questions
• asked years ago $\to$ lowest priority among old questions
At any given time, the feeds contain a mixture of recent questions, which hold the absolute highest priority, and old questions as mentioned above.
The system is set up this way because, as you've mentioned, the answers are not just for the students who asked the questions. Yes, we want to help everyone get answers as soon as possible, but that's not really possible because of the high number of questions coming in, both directly on the site and on the app.
Moreover, the questions that go unanswered can still be useful if answered days, weeks, months, or even years later because other students will most likely look for the answers to these questions using Google.
In fact, the vast majority of students who use Socratic never ask a single question on the site. The ratio of students who ask questions to students who use the answers that are already available is something along the lines of $1 : 500$.
These students land on Socratic because they're looking for answers via Google. That is why every question, regardless of how old it is, gets bumped back into the feeds at some point or another.
So if you want to sort through these questions and focus on recent questions, that's perfectly fine, but
1. you're going to have to do so by checking when the question was asked.
2. answering old questions is not a waste of your time, not by a long shot!
http://mathoverflow.net/questions/96572/automorphism-groups-of-fields
# Automorphism groups of fields
Hi there,
is there a classification/characterization of fields K for which the automorphism group Aut(K) has the property that |Aut(K)| < |K| (e.g. finite fields, the rationals and the reals)? What about the same question for real-closed fields K? Many thanks ...
What automorphisms? Do you mean $|\cdot|$ = cardinality? – Marc Palm May 10 '12 at 14:51
General field automorphisms. And indeed, |.| = cardinality. – THC May 10 '12 at 15:25
@Gerhard: Unfortunately, the reals have lots of vector-space automorphisms over the prime field $\mathbb Q$ but no nontrivial field automorphisms. (I like to see things turn into set theory, but this one will need more work.) – Andreas Blass May 10 '12 at 16:03
The fields ${\mathbb{Q}}_p$ have only the identity automorphism, like the reals. – Lubin May 10 '12 at 16:31
This question is relevant: mathoverflow.net/questions/22897/… – Kevin Ventullo May 10 '12 at 17:56
https://stats.stackexchange.com/questions/421305/decision-trees-how-does-split-for-categorical-features-happen
|
Decision Trees - how does split for categorical features happen?
A decision tree, while performing recursive binary splitting, selects an independent variable (say $X_j$) and a threshold (say $t$) such that the predictor space is split into regions $\{X \mid X_j < t\}$ and $\{X \mid X_j \ge t\}$, and which leads to the greatest reduction in the cost function.
Now let us suppose that one of the variables in $X$ is categorical. Suppose we have label-encoded it and its values are in the range 0 to 9 (10 categories).
1. If the DT splits a node with the above algorithm and treats those 10 values as true numeric values, will it not lead to wrong/misinterpreted splits?
2. Should it rather perform the split based on == and != for this variable? But then, how will the algorithm know that it is a categorical feature?
3. Also, will one-hot encoded values make more sense in this case?
The fundamental question is: what is the nature of the variable? There are two options:
1. Variable is numeric, i.e., labeled as 0 to 9 where the magnitude of the number matters. An example is a rating.
2. Variable is categorical. i.e., labeled as 0 to 9 where the magnitude of the number does not matter. An example is color (where 0 can stand for green, 1 can stand for blue, etc.)
Based on the wording of your question, I assume the nature of the variable is truly categorical (e.g., a color). With that assumption, here are the answers to your three questions:
1. The decision tree may yield misinterpreted splits. Let's imagine a split is made at 3.5, then all colors labeled as 0, 1, 2, and 3 will be placed on one side of the tree and all the other colors are placed on the other side of the tree. This is not desirable.
2. In a programming language like R, you can force a variable with numbers to be categorical. If you do that, then you won't run into the issues you are worried about. Here's an example using the mtcars dataset to convert a numeric variable to categorical:
data(mtcars)
mtcars
#carb variable starts out as numeric
typeof(mtcars$carb)
#convert carb variable to categorical
mtcars$carb_categorical <- as.factor(mtcars$carb)
typeof(mtcars$carb_categorical)
1. Yes, one-hot encoded values will make more sense.
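For comparison outside R, here is a minimal Python/scikit-learn sketch of the one-hot approach (not from the original answer; the toy data and column names are mine). One-hot encoding means the tree can only ask "is it this color or not?", never an ordering-based question like color <= 3.5.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({
    "color": ["green", "blue", "red", "green", "blue", "red"],  # toy data
    "label": [0, 1, 0, 0, 1, 0],
})

# one indicator column per category
X = pd.get_dummies(df["color"])
clf = DecisionTreeClassifier(random_state=0).fit(X, df["label"])

# align new data to the training columns, filling missing categories with 0
new = pd.get_dummies(pd.Series(["blue", "red"])).reindex(columns=X.columns, fill_value=0)
print(clf.predict(new))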
• Thank you for the detailed response! With respect to #2, does the R implementation of DT check the values of categorical variables for equality rather than with less-than and greater-than comparisons? (I have no experience in R, hence asking.) Aug 9 '19 at 7:40
• It does; the decision tree will treat the categorical variable as one-hot encoded values implicitly. Note that in R, factor variables are limited to a maximum number of levels.
– mnmn
Aug 9 '19 at 14:30
|
2021-10-18 00:41:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39551684260368347, "perplexity": 513.9179439814369}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585186.33/warc/CC-MAIN-20211018000838-20211018030838-00152.warc.gz"}
|
http://crypto.stackexchange.com/tags/sha1/new
|
# Tag Info
Evaluating, we have that SHA-1(38607310235) = 6502c8f9f5c222b9598d4e074fd3431f506948bc. So, I'm guessing the question you're actually asking is: given an 11 digit number $x$, find $y$ such that $L[H(y)]=x$, where $L(\cdot)$ takes the last 11 hexadecimal characters, and $H(\cdot)$ is the SHA-1 hash function. This problem is believed to be hard to do, so ...
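Not from the answer: a small Python illustration of the maps $H(\cdot)$ and $L(\cdot)$ described above, assuming the number is hashed as its decimal ASCII string.
import hashlib

def H(y: bytes) -> str:
    # full SHA-1 digest of y, as a hex string
    return hashlib.sha1(y).hexdigest()

def L(hex_digest: str) -> str:
    # the last 11 hexadecimal characters of the digest
    return hex_digest[-11:]

digest = H(b"38607310235")
print(digest)     # the digest quoted above
print(L(digest))  # its last 11 hex characters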
|
2013-12-13 19:36:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6696274280548096, "perplexity": 332.69859398728636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164987957/warc/CC-MAIN-20131204134947-00071-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://pyproveit.github.io/Prove-It/packages/proveit/numbers/number_sets/natural_numbers/_theory_nbs_/common.html
|
# Common expressions for the theory of proveit.numbers.number_sets.natural_numbers¶
In [1]:
import proveit
# Prepare this notebook for defining the common expressions of a theory:
%common_expressions_notebook # Keep this at the top following 'import proveit'.
from proveit.numbers.number_sets.natural_numbers.natural import NaturalSet, NaturalPosSet
In [2]:
%begin common
Defining common sub-expressions for theory 'proveit.numbers.number_sets.natural_numbers'
Subsequent end-of-cell assignments will define common sub-expressions
%end_common will finalize the definitions
In [3]:
Natural, NaturalPos = NaturalSet(), NaturalPosSet()
Out[3]:
Natural:
NaturalPos:
In [4]:
%end common
These common expressions may now be imported from the theory package: proveit.numbers.number_sets.natural_numbers
|
2021-03-02 20:45:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2642004191875458, "perplexity": 14366.671172063205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00311.warc.gz"}
|
https://brilliant.org/discussions/thread/vieta-root-jumping/
|
# Vieta Root Jumping
This week, we learn about Vieta Root Jumping, a descent method which uses Vieta's formula to find additional solutions.
You should first read up on Vieta’s Formula.
How would you use Vieta Root Jumping to solve the following?
1. [IMO 2007/5] Let $a$ and $b$ be positive integers. Show that if $4ab-1 \mid (4a^2-1)^2$, then $a=b$.
2. Share a problem which uses the Vieta root jumping technique.
Note by Calvin Lin
6 years, 9 months ago
Essentially, Mysterious 100 Degree Monic Polynomial shows we can even Vieta Jump for polynomials.
Another Exercise: Assume that $m$ and $n$ are odd integers such that
$m^{2} - n^{2} + 1 \mid n^{2} - 1$.
Prove that $m^{2} - n^{2} + 1$ is a perfect square.
Yet Another Exercise: Determine all pairs of positive integers $(a,b)$ such that
$\dfrac{a^2}{2ab^2-b^3+1}$
is a positive integer.
- 6 years, 9 months ago
Indeed. This post only begins to scratch the surface of ideas like this, which in itself is a special case of Fermat's method of Infinite Descent.
The Markov Spectrum, in which we approximate real numbers with rationals, is a non-trivial application of VTJ.
VTJ can also be applied to equations in 3 variables, like $x^2 + y^2 + z^2 = 3xyz+2$.
Staff - 6 years, 9 months ago
This one is one of my favorites!
Start out with noting that because $gcd(b, 4ab-1)=1$, we have: $4ab-1|(4a^2-1)^2$ $\iff 4ab-1|b^2(4a^2-1)^2$ $\implies 4ab-1 | 16a^4b^2-8a^2b^2+b^2$ $\implies 4ab-1 | (16a^2b^2)(a^2)-(4ab)(2ab)+b^2$ $\implies 4ab-1 | (1)(a^2)-(1)(2ab)+b^2$ $\implies 4ab-1|(a-b)^2$ The last step follows from $16a^2b^2\equiv (4ab)^2\equiv 1\pmod{4ab-1}$ and $4ab\equiv 1\pmod{4ab-1}$.
Let $(a,b)=(a_1, b_1)$ be a solution to $4ab-1 \mid (a-b)^2$ with $a_1>b_1$ (contradicting $a=b$), where $a_1$ and $b_1$ are both positive integers. Assume $a_1+b_1$ is the smallest sum among all pairs $(a,b)$ with $a>b$; I will prove this is absurd by producing another solution $(a,b)=(a_2, b_1)$ with a smaller sum.
Set $k=\frac{(a-b_1)^2}{4ab_1-1}$, regarded as an equation in $a$. Expanding, we arrive at $4ab_1k-k=a^2-2ab_1+b_1^2 \implies a^2-a(2b_1+4b_1k)+b_1^2+k = 0$. This equation has roots $a=a_1, a_2$, so we can now use Vieta's formulas on the equation to prove that $a_1>a_2$.
First, we must prove $a_2$ is a positive integer. From $a_1+a_2=2b_1+4b_1k$ via Vieta's, $a_2$ is an integer. If $a_2$ were zero or negative, we would have $a_2^2-a_2(2b_1+4b_1k)+b_1^2+k \ge b_1^2+k > 0$, contradicting that $a_2$ is a root. Therefore $a_2$ is a positive integer and $(a_2, b_1)$ is another pair that contradicts $a=b$.
Now, $a_1a_2=b_1^2+k$ from Vieta's, so $a_2=\frac{b_1^2+k}{a_1}$. We desire to show that $a_2 < a_1$: $a_2< a_1 \iff \frac{b_1^2+k}{a_1} < a_1 \iff b_1^2+\frac{(a_1-b_1)^2}{4a_1b_1-1} < a_1^2 \iff \frac{(a_1-b_1)^2}{4a_1b_1-1} < (a_1-b_1)(a_1+b_1) \iff \frac{a_1-b_1}{4a_1b_1-1} < a_1+b_1$, where we can cancel $a_1-b_1$ from both sides because we assumed that $a_1>b_1$. The last inequality is true because $4a_1b_1-1 \ge 3$ and $a_1-b_1 < a_1+b_1$. Hence we have arrived at the contradiction $a_1+b_1>a_2+b_1$. It is therefore impossible to have $a>b$ (our original assumption), and by similar logic it is impossible to have $b>a$, forcing $a=b$ $\Box$
- 6 years, 9 months ago
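Not from the thread: a quick brute-force sanity check of the original IMO statement, confirming that for small positive integers $4ab-1 \mid (4a^2-1)^2$ forces $a=b$.
# search for counterexamples with a, b below 200
counterexamples = [
    (a, b)
    for a in range(1, 200)
    for b in range(1, 200)
    if (4 * a**2 - 1) ** 2 % (4 * a * b - 1) == 0 and a != b
]
print(counterexamples)  # expected: [] -- no counterexamples in this range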
We want to force out a nice factorisation from $4ab - 1 \mid (4a^2-1)^2$. A few hit and trials lead me to consider:
$b^2(4a^2-1)^2 - (4ab-1)(4a^3b - 2ab + a^2) = (a-b)^2$.
So now, assume that there indeed exists distinct $(a,b)$ that satisfy $4ab-1 \mid (a-b)^2$.
Lemma: Suppose $(a,b)$ is a distinct positive integer solution to $\frac{(a-b)^2}{4ab-1} = k$ where WLOG $a > b$. Then $(b, \frac{b^2+k}{a})$ is also a solution.
Proof: Remark first that $(a-b)^{2}=(4ab-1)k \iff a^{2}-(2b+4kb)a+b^{2}+k=0$. By Vieta's formulas, the other root of this quadratic is $\frac{b^2+k}{a} = 2b + 4kb - a \in \mathbb{Z}$, and it is positive since $b^2+k>0$, whence $(b, \frac{b^2+k}{a} )$ is also a solution. □
However, we easily see that $\frac{b^2+k}{a} < b$ when $a > b$: this reduces to $k < b(a-b)$, i.e. $a-b < b(4ab-1)$, i.e. $a < 4ab^2$, which holds since $4b^2 > 1$. This means that given a solution $(a,b)$ where $b$ is taken to be minimal, we can always vieta jump to a solution with a smaller minimum entry, a clear contradiction. ■
- 6 years, 9 months ago
Problem: Show that if $x, y, z$ are positive integers, then $(xy+1)(yz+1)(zx+1)$ is a perfect square if and only if $xy+1, yz+1, zx+1$ are all perfect squares.
- 6 years, 9 months ago
So, here's my proposed solution.
Solution: It is trivial how to prove the 'if' part. Let us proceed to prove the 'only if' part. Suppose $x,y,z$ are such integers with $xy+1, yz+1, zx+1$ not all squares. Let us also assume that $x,y,z$ is the smallest counterexample, that is, $x+y+z$ is minimum.
WLOG, $xy+1$ is not a perfect square. As Dinesh Chavan wrote below, let $s$ be the smallest positive root of $s^2+x^2+y^2+z^2-2(xy+yz+zs+sx+zx+sy)-4xyzs-4 = 0$. This can be, as he said, written equivalently as:
$\\ (x+y-z-s)^2 = 4(xy+1)(zs+1),\\ (x+z-y-s)^2 = 4(xz+1)(ys+1),\\ (x+s-y-z)^2 = 4(xs+1)(yz+1).$
But then, our quadratic formula has root $s = x+y+z+2xyz\pm 2\sqrt{(xy+1)(yz+1)(zx+1)}$ which was given to be an integer, so we have that RHS is a square. This implies that $(xs+1)(ys+1)(zs+1)$ is a square.
Since, $xs+1\ge0,ys+1\ge0,zs+1\ge0$, we have that by verifying that $x=y=z=1$ is not a solution, $s\ge-\frac{1}{\max(x,y,z)}>-1$.
If $s = 0$, by crunching out the algebra one gets that $(x+y-z)^2 = 4(xy+1)$ which implies that $xy+1$ is a square, a clear contradiction.
So we have $s > 0$ and by the assumed minimality of $x+y+z$ we can safely conclude that $s≥ z$. Let the $2$ roots be $s, s_1$. By Vieta, we have:
$ss_1 = x^2+y^2+z^2-2xy-2yz-2zx-4 < z^2-x(2z-x)-y(2z-y) < z^2$
Now I hope that Ram Sharma sees the connection to Vieta Jumping.
- 6 years, 9 months ago
@Anqi Li Can you add this to the Vieta Root Jumping Wiki? Thanks!
Staff - 5 years, 11 months ago
Hi, this is a nice question. First We state a lemma
Lemma: If $(x,y,z)$ is a $P$ set, then so is $(x,y,z,s)$ for $s=x+y+z+2xyz \pm2\sqrt{(xy+1)(yz+1)(xz+1)}$ as long as s is positive.
Proof: Note that the value of s mentioned is the root of the equation $x^2+y^2+z^2+s^2-2(xy+yz+zx+xs+ys+zs)-4xyzs-4=0$ and now the above quadratic can be written as follows;
$(x+y-z-s)^2=4(xy+1)(zs+1)$. It can also be written as $(x+z-y-s)^2=4(xz+1)(ys+1)$, and again as $(x+s-y-z)^2=4(xs+1)(yz+1)$. Since $xy+1$ is an integer which is the quotient of two perfect squares, it is also a square. Similarly we can show this for $yz+1$ and $zx+1$, which shows that they all must be perfect squares.
- 6 years, 9 months ago
I don't know what the above question has to do with Vieta root jumping. However, it is very well explained; +1 for the solution. Is there any other method to prove the result?
- 6 years, 9 months ago
Can you define your terms? What is a P set? Does it use 3 variables or 4 variables?
How is $xy+1$ a quotient of 2 perfect squares? Is $zs+1$ a perfect square? If so, why?
Staff - 6 years, 9 months ago
|
2020-09-25 23:09:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 121, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786113500595093, "perplexity": 1540.4269762252973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00041.warc.gz"}
|
https://en.wikiversity.org/wiki/Polynomial_ring/Field/Replacement/Structure/Exercise
|
# Polynomial ring/Field/Replacement/Structure/Exercise
Let ${\displaystyle {}K}$ be a field and let ${\displaystyle {}K[X]}$ be the polynomial ring over ${\displaystyle {}K}$. Let ${\displaystyle {}a\in K}$. Prove that the evaluating function
${\displaystyle \psi \colon K[X]\longrightarrow K,P\longmapsto P(a),}$
satisfies the following properties (here let ${\displaystyle {}P,Q\in K[X]}$).
1. ${\displaystyle {}(P+Q)(a)=P(a)+Q(a)\,.}$
2. ${\displaystyle {}(P\cdot Q)(a)=P(a)\cdot Q(a)\,.}$
3. ${\displaystyle {}1(a)=1\,.}$
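Not part of the exercise, but a quick numeric sanity check of the first two properties over the rationals using sympy (the sample polynomials and the point $a$ are mine); property 3 is immediate, since the constant polynomial 1 evaluates to 1.
import sympy as sp

X = sp.symbols('X')
P = 3*X**2 - X + 5     # sample polynomials in Q[X]
Q = 2*X + 7
a = sp.Rational(4, 3)  # sample evaluation point in Q

# (P+Q)(a) = P(a) + Q(a)
assert (P + Q).subs(X, a) == P.subs(X, a) + Q.subs(X, a)
# (P*Q)(a) = P(a) * Q(a)
assert sp.expand(P * Q).subs(X, a) == P.subs(X, a) * Q.subs(X, a)
print("evaluation at a =", a, "respects + and *")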
|
2023-03-21 00:46:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.982444703578949, "perplexity": 259.7385885136791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00569.warc.gz"}
|
https://math.stackexchange.com/questions/1878376/paley-weiner-theorem-and-the-fourier-transform-of-a-non-analytic-smooth-function
|
# Paley-Wiener theorem and the Fourier transform of a non-analytic smooth function
Many Paley-Wiener theorems are variations on the theme "the faster a function $f(x)$ falls off as $x \rightarrow \infty$, the smoother its Fourier transform $\tilde{f}(k)$ is as $k \rightarrow 0$." In particular, we know that if $f(x)$ decays exponentially at large $x$ then $\tilde{f}(k)$ is analytic, and if $f(x) = 1/x^n$ (with $n \in \mathbb{Z}$) then $\tilde{f}(k) \propto k^{n-1} \mathrm{sgn}( k)$. But I'm wondering about the (inverse) FT of $\tilde{f}(k) = 1 - e^{-1/k^2}$. This function is smooth and falls off as $1/k^2$ at large $k$, so its inverse FT $f(x)$ should exist and not be too pathological. It should fall off faster than any power-law at large $x$, or else some finite derivative of $\tilde{f}(k)$ would be discontinuous, but $\tilde{f}(k)$ is smooth. But it cannot fall off as fast as an exponential, or else $\tilde{f}(k)$ would be analytic, which it isn't. So its large-$x$ falloff must lie somewhere in between power-law and exponential. But what is it? I have no idea how to evaluate the Fourier transform because of the essential singularity at $k = 0$.
Edit: Follow-up question: Is it true that any smooth function that falls off faster than any power-law but slower than any exponential has a non-analytic smooth FT?
• A charming question, although I might disagree with the appraisal of Paley-Wiener theorems... but just for quibbly technical reasons. If no one else has much to say on this, I will respond tomorrow. A fun question, for sure! :) – paul garrett Aug 1 '16 at 23:03
• Essentially, the answer is yes. If your function is holomorphic on a strip $|\textrm{Im}\, z |<a$, then the FT will have exponential decay at rate $e^{-(a-\epsilon)|x|}$ (and conversely). – user138530 Aug 1 '16 at 23:47
• @ChristianRemling If it's holomorphic in that strip and satisfies some sort of boundedness on horizontal lines. I imagine that's what you meant - regardless, now we need a counterexample to what you actually said. Heh... – David C. Ullrich Aug 1 '16 at 23:58
• @ChristianRemling Heh. Say $f(z)=\exp(ie^z)$. Then $f$ is bounded on the real axis but blows up exponentially on the line $y=-\epsilon$. (Then, say, $(f(z)-(zf'(0)+f(0)))/z^2$ is $L^1$ on the real axis but still very bad on the line $y=-\epsilon$...) – David C. Ullrich Aug 2 '16 at 0:20
• @paulgarrett It seems that no one else has much to say on this ... ;) – tparker Aug 4 '16 at 1:31
I found a paper that uses the saddle-point approximation to derive the asymptotic form for large $k$ of the Fourier transform of another non-analytic smooth function: http://arxiv.org/abs/1508.04376. It falls off like $k^{-3/4} e^{-\sqrt{k}}$ - indeed faster than any power-law but slower than any exponential, as I expected.
If we use their method to compute $f(x) = \int_{-\infty}^\infty dk\, e^{i k x - 1/k^2} = 2\, \text{Re} \left[ \int_0^\infty dk\, e^{i k x - 1/k^2} \right]$, we get that the saddle point is $k_0 = \left( \frac{2 i}{x} \right)^{1/3}$. Taylor expanding the exponent about the saddle point, $$i k x - \frac{1}{k^2} = i k_0 x - \frac{1}{k_0^2} + \frac{1}{2}\left( -\frac{6}{k_0^4} \right) (k - k_0)^2 + o \left( (k-k_0)^3 \right).$$ For large $x$, the integral oscillates wildly and mostly interferes destructively unless $k \ll 1$, and in this regime we also see that $k_0 \ll 1$, so the higher terms in the expansion are negligible for large $x$, and $$f(x) \sim 2\, \text{Re} \left[ \exp \left( i k_0 x - \frac{1}{k_0^2} \right) \int_0^\infty dk\, \exp \left(-\frac{3}{k_0^4}(k - k_0)^2 \right) \right].$$ We now need to deform the contour so that it runs through the saddle point $k_0 = (2/x)^{1/3} i^{1/3}$, so we make the change of variable $k = u\, i^{1/3},\ dk = du\, i^{1/3}$, which rotates the contour by $-\pi/6$ in the complex plane: $$f(x) \sim 2\, \text{Re} \left[ \exp \left( i k_0 x - \frac{1}{k_0^2} \right) \int_0^{i^{-1/3} \infty} du\, i^{1/3} \exp \left[ -\frac{3}{k_0^4} i^{2/3} \left( u - \left( \frac{2}{x} \right)^{1/3} \right)^2 \right] \right].$$ The only pole is at the origin, so we can deform the contour back onto the positive real line without affecting the integral. For large $x$, the integral from $-\infty$ to $0$ is negligible so we can extend the range of integration over the whole real line. (One might have expected that since the Gaussian is peaked near zero, the integral over the negative ray would contribute equally to the integral over the positive ray so we'd need a factor of 1/2 to compensate; see the paper for why this is not the case.) Since $\text{Re}\left( 3 i^{2/3}/k_0^4 \right) = \text{Re}\left( (2/x)^{1/3} e^{-i \pi/3} \right) = (4x)^{-1/3} > 0$, we can use the Gaussian integral identity $\int_{-\infty}^\infty dk\, e^{-a k^2} = \sqrt{\pi/a}$ to get $$f(x) \sim 2\, \text{Re} \left[ \exp \left( i k_0 x - \frac{1}{k_0^2} \right) i^{1/3} \sqrt{\frac{\pi k_0^4}{3 i^{2/3}}} \right] = 2 \sqrt{\frac{\pi}{3}} \text{Re} \left[ k_0^2 \exp \left( i k_0 x - \frac{1}{k_0^2} \right) \right].$$ Plugging in $k_0 = \left( \frac{2 i}{x} \right)^{1/3}$, we finally get after some algebra $$f(x) \sim 2^{5/2} \sqrt{\frac{\pi}{3}} x^{-2/3} \exp \left( -\frac{3}{2^{5/3}} x^{2/3} \right) \cos \left( \frac{\pi}{3} + \frac{3 \sqrt{3}}{2^{5/3}} x^{2/3} \right).$$ We do indeed find that the Fourier transform falls off faster than any power-law but slower than any exponential. But I'm not sure how to prove this result for an arbitrary nonanalytic smooth function.
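Not from the answer: a rough numerical cross-check of the final asymptotic formula. For $x \neq 0$ the distributional part $\int e^{ikx}\,dk$ contributes nothing, so $f(x) = 2\int_0^\infty \cos(kx)\,(e^{-1/k^2}-1)\,dk$, whose integrand decays like $1/k^2$ and can be truncated safely.
import numpy as np
from scipy.integrate import trapezoid

def f_numeric(x, kmax=200.0, n=2_000_000):
    # direct quadrature of the absolutely convergent part of the integral
    k = np.linspace(1e-9, kmax, n)
    return 2.0 * trapezoid(np.cos(k * x) * np.expm1(-1.0 / k**2), k)

def f_asymptotic(x):
    # the saddle-point formula derived above
    amp = 2**2.5 * np.sqrt(np.pi / 3.0) * x**(-2.0 / 3.0)
    decay = np.exp(-3.0 / 2**(5.0 / 3.0) * x**(2.0 / 3.0))
    phase = np.pi / 3.0 + 3.0 * np.sqrt(3.0) / 2**(5.0 / 3.0) * x**(2.0 / 3.0)
    return amp * decay * np.cos(phase)

for x in (5.0, 10.0, 20.0):
    print(x, f_numeric(x), f_asymptotic(x))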
|
2019-10-18 03:31:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9165914058685303, "perplexity": 196.14319034807585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677884.28/warc/CC-MAIN-20191018032611-20191018060111-00231.warc.gz"}
|
https://projecteuclid.org/euclid.aaa/1050426052
|
## Abstract and Applied Analysis
### Local properties of maps of the ball
Yakar Kannai
#### Abstract
Let $f$ be an essential map of $S^{n-1}$ into itself (i.e., $f$ is not homotopic to a constant mapping) admitting an extension mapping the closed unit ball $\overline B^n$ into $\mathbb{R}^n$. Then, for every interior point $y$ of $B^n$, there exists a point $x$ in $f^{-1}(y)$ such that the image of no neighborhood of $x$ is contained in a coordinate half space with $y$ on its boundary. Under additional conditions, the image of a neighborhood of $x$ covers a neighborhood of $y$. Differential versions are valid for quasianalytic functions. These results are motivated by game-theoretic considerations.
#### Article information
Source
Abstr. Appl. Anal., Volume 2003, Number 2 (2003), 75-81.
Dates
First available in Project Euclid: 15 April 2003
https://projecteuclid.org/euclid.aaa/1050426052
Digital Object Identifier
doi:10.1155/S1085337503204012
Mathematical Reviews number (MathSciNet)
MR1960138
Zentralblatt MATH identifier
1017.26020
Subjects
Primary: 26E10: $C^\infty$-functions, quasi-analytic functions [See also 58C25] 58K05
Secondary: 55M25 47H10 47H11 57N75 57Q65
#### Citation
Kannai, Yakar. Local properties of maps of the ball. Abstr. Appl. Anal. 2003 (2003), no. 2, 75--81. doi:10.1155/S1085337503204012. https://projecteuclid.org/euclid.aaa/1050426052
|
2019-07-19 02:04:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 13, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30039510130882263, "perplexity": 1475.681312383128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525973.56/warc/CC-MAIN-20190719012046-20190719034046-00539.warc.gz"}
|
http://www.encyclopediaofmath.org/index.php/Integral_sine
|
# Integral sine
The special function defined for real $x$ by
$$\operatorname{Si}(x)=\int\limits_0^x\frac{\sin t}{t}dt.$$
For $x>0$ one has
$$\operatorname{Si}(x)=\frac\pi2-\int\limits_x^\infty\frac{\sin t}{t}dt.$$
One sometimes uses the notation
$$\operatorname{si}(x)=-\int\limits_x^\infty\frac{\sin t}{t}dt\equiv\operatorname{Si}(x)-\frac\pi2.$$
Some particular values are:
$$\operatorname{Si}(0)=0,\quad\operatorname{Si}(\infty)=\frac\pi2,\quad\operatorname{si}(\infty)=0.$$
Some special relations:
$$\operatorname{Si}(-x)=-\operatorname{Si}(x);\quad\operatorname{si}(x)+\operatorname{si}(-x)=-\pi;$$
$$\int\limits_0^\infty\operatorname{si}^2(t)dt=\frac\pi2;\quad\int\limits_0^\infty e^{-pt}\operatorname{si}(qt)dt=-\frac1p\arctan\frac pq;$$
$$\int\limits_0^\infty\sin t\operatorname{si}(t)dt=-\frac\pi4;\quad\int\limits_0^\infty\operatorname{Ci}(t)\operatorname{si}(t)dt=-\ln2,$$
where $\operatorname{Ci}(t)$ is the integral cosine. For $x$ small,
$$\operatorname{Si}(x)\approx x.$$
The asymptotic representation for large $x$ is
$$\operatorname{Si}(x)=\frac\pi2-\frac{\cos x}{x}P(x)-\frac{\sin x}{x}Q(x),$$
where
$$P(x)\sim\sum_{k=0}^\infty\frac{(-1)^k(2k)!}{x^{2k}},$$
$$Q(x)\sim\sum_{k=0}^\infty\frac{(-1)^k(2k+1)!}{x^{2k+1}}.$$
The integral sine has the series representation
$$\operatorname{Si}(x)=x-\frac{x^3}{3!3}+\ldots+(-1)^k\frac{x^{2k+1}}{(2k+1)!(2k+1)}+\ldots.\tag{*}$$
As a function of the complex variable $z$, $\operatorname{Si}(z)$, defined by (*), is an entire function of $z$ in the $z$-plane.
The integral sine is related to the integral exponential function $\operatorname{Ei}(z)$ by
$$\operatorname{si}(z)=\frac{1}{2i}[\operatorname{Ei}(iz)-\operatorname{Ei}(-iz)].$$
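Not part of the article: a quick check of the series (*) against SciPy's implementation (scipy.special.sici returns the pair $(\operatorname{Si}(x),\operatorname{Ci}(x))$).
from math import factorial
from scipy.special import sici  # returns (Si(x), Ci(x))

def si_series(x, terms=20):
    # partial sum of the series (*)
    return sum((-1)**k * x**(2*k + 1) / (factorial(2*k + 1) * (2*k + 1))
               for k in range(terms))

for x in (0.5, 1.0, 2.0, 5.0):
    si_x, _ = sici(x)
    print(x, si_x, si_series(x))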
|
2014-08-30 16:17:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901416301727295, "perplexity": 1168.4362051450657}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835505.87/warc/CC-MAIN-20140820021355-00167-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://jasmcole.com/2018/06/10/the-2018-monte-carlo-world-cup/?replytocom=9201
|
# The 2018 Monte Carlo World Cup
The world cup is almost once more amongst us, which means interminable weeks of breathless coverage, punditry, and heartfelt professions that each match will be played at 110%. In an effort to inject some more quantitative rigour into a field which, apparently, could do with some, let's try and predict how the whole thing will play out.
The world cup consists of a group stage, where 8 groups of 4 countries play a mini-league, followed by the knockout rounds where the top 2 countries in each group play up to 4 rounds to win the cup. Each country can enter the knockout stage in one of two places, depending on the group they are in, and the other countries in each group can strongly influence the likelihood of making it out of the group stage – being stuck with Germany and Brazil might mean an early flight home from Russia, for example.
Simulating a football match is no simple thing: there are an incredible number of variables to take into account for any given match – the current state of the team, friendliness of the crowd, weather conditions, historical levels of rivalry, injuries... I'm sure some very sophisticated analysis is done by betting firms, but I'm going to skip all of that and look at one number.
There is a very useful website here which has helpfully calculated the up-to-date Elo rating of each national football team. This is a single number which indicates the expected performance of each team. To calculate the probability that one team beats another, simply calculate
$P(A \text{ beats } B) = \left(1 + 10^{(E_B - E_A)/400}\right)^{-1}$
where $E_{A/B}$ are the Elo ratings of the two teams. Simple!
A difference in Elo score of a few hundred or more means that one team has a significantly higher probability of winning than the other.
Simulations
With this overly simplified view in hand, I then simulated a million world cups. I ignored the possibility of draws for now, and for each game just flipped a coin weighted by the Elo scores of the two teams. It was therefore possible in this setup for Japan to beat Brazil, but not very likely! (6%).
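Not from the post: a minimal Python sketch of this match model. The team names and Elo numbers below are hypothetical stand-ins; the post's actual ratings come from the Elo website linked above.
import random

def p_win(elo_a, elo_b):
    # Elo win probability from the formula above
    return 1.0 / (1.0 + 10.0 ** ((elo_b - elo_a) / 400.0))

def play_match(team_a, team_b, elo):
    # flip a coin weighted by the Elo scores; draws are ignored, as in the post
    return team_a if random.random() < p_win(elo[team_a], elo[team_b]) else team_b

elo = {"Brazil": 2131, "Japan": 1650}  # hypothetical ratings
wins = sum(play_match("Japan", "Brazil", elo) == "Japan" for _ in range(100_000))
print(wins / 100_000)  # roughly 0.06, the "not very likely" 6% quoted above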
Without further ado then, let’s look at the odds this system comes up with for overall winner:
1. Brazil – 26%
2. Germany – 19%
3. Spain – 15%
4. France – 7%
5. Argentina – 6%
6. Portugal – 5%
7. England – 5%
8. Belgium – 3%
9. Colombia – 3%
10. Peru – 2%
and compare with current odds offered by online betting companies:
1. Brazil – 22-25%
2. Germany – 20-22%
3. Spain – 16-20%
4. France – 14-16%
5. Argentina – 10-13%
6. Belgium – 9-10%
7. England – 5-7%
8. Portugal – 4-5%
9. Uruguay – 3-4%
10. Croatia – 2.5-4%
The bolded countries are those where I was within a percent or so of the official odds – not too shabby given the simplicity of the model! I think it’s safe to say that we expect a Germany-Brazil match at some point…
Of course, I have the entire world cup simulated so there is lots more detail to extract. In the following large image, I have plotted the probabilities that the given countries will participate in each match (click to enlarge):
It is interesting that some matches almost certainly have their entrants pre-determined, e.g. the winner of group E will very likely be Brazil, so the Round of 16 match containing the winner of E will feature Brazil with 65% probability:
On the other hand, some matches are much less well determined, like the first quarter final:
For a given country, we can plot their likely route through the entire proceedings.
Let’s look at how the world cup will play out for England:
It looks like England should get through the groups, with less than a 10% chance of flunking out early, but they’ll probably lose their first or second knockout match (as usual).
Why might this be?
Ah yes, that’s right, Brazil. Brazil and England can both take 2 routes through the cup, but if they meet it will definitely be at the quarter finals. And as you can clearly see above by the mass of orange, Brazil will probably steamroller right through. It is also more likely for Brazil to win the final, than to lose at any previous stage.
Plotting Germany as well though, there is a different option for the world cup:
It will probably be the case that Brazil takes the top route, and Germany the bottom. It is therefore Germany which demolish England at the quarters, and then don’t meet Brazil until a thrilling final.
Whatever happens, I’m now confident in the knowledge that I can play along with any football chat I might be dragged into over the summer, with the requisite stats to back it up.
## 7 thoughts on “The 2018 Monte Carlo World Cup”
1. Matthew says:
Intuitively I would expect the average of many simulations to simply order the teams following their Elo scores – is this the case? If not, why – e.g. does the order of teams within the initial groups prevent it?
1. And you’d be absolutely correct for the top ten here! It’s a good question that I haven’t spent any time thinking about to be honest. One thing I wonder is whether ‘groups of death’ make it less likely than you’d expect that a team leaves the group stages. Over many simulations perhaps that effect averages out.
1. Hi Jason!
I love your blog – just the sort of content I love to dive into myself.
I was wondering what you used to plot the diagrams – it looks like pyplot, but how did you get the lines in multiple different places? Was it a custom plotting function?
Cheers
Dev
2. Hey, thanks, I’m glad you like my blog!
It was indeed pyplot. To plot the fuzzy lines I generated the coordinates for a given line, then plotted 50 lines at low opacity with the line vertices randomly offset by a small amount.
2. Piero says:
Hi Jason, did you update the Elo scores after each game?
1. I didn’t, though I do wonder what Mexico’s is right now!
|
2020-11-30 04:41:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35202088952064514, "perplexity": 1765.0442749443932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141205147.57/warc/CC-MAIN-20201130035203-20201130065203-00511.warc.gz"}
|
http://jonathanzong.com/blog/2013/01/15/visualizing-elliptical-eccentricity
|
# Visualizing Elliptical Eccentricity
January 15, 2013 at 6:04pm
I wrote a quick javascript visualization to better understand the eccentricity concept applied to ellipses. Eccentricity for ellipses is the ratio of the distance between the center and a focus to the distance between a covertex and a focus, or alternatively, of the focal width to the major axis length. For ellipses that aren't circles, eccentricity takes values in the interval (0, 1).
For the implementation, I chose an arbitrary major axis size, w, and iterated the focal width from 0 to w-1. A focal width of w would be undefined because the foci cannot lie on the ellipse itself (i.e. be vertices). The eccentricity is then calculated as the focal width over the major axis, and the ellipse is drawn by solving for the minor axis and then iterating over [0, 2π) and graphing points in parametric form.
//in this instance, canvas1 is an 800x800 canvas element
var context = document.getElementById('canvas1').getContext('2d');
context.font = "25px Arial";

// major axis length; the focal width sweeps from 0 up to width - 1
var width = 200;
var focal = 0;
var timer = setInterval(function(){ drawEllipse(400, 400); }, 200);

function drawEllipse(centerX, centerY) {
    context.clearRect(0, 0, 800, 800);
    // minor axis follows from (major)^2 = (minor)^2 + (focal width)^2
    var height = Math.sqrt((width * width) - (focal * focal));
    var eccen = focal / width;
    context.fillText("Eccentricity = " + eccen, 10, 40);
    context.fillText("Focal Width = " + focal, 10, 70);
    context.fillText("Major Axis = " + width, 10, 100);
    context.fillText("Minor Axis = " + height, 10, 130);
    // draw the two foci on the major axis
    context.fillRect(centerX - (focal / 2.0), centerY, 2, 2);
    context.fillRect(centerX + (focal / 2.0), centerY, 2, 2);
    // semi-axes for the parametric form (a cos t, b sin t)
    var a = width / 2.0;
    var b = height / 2.0;
    for (var t = 0; t < 360; t++) {
        var theta = t * Math.PI / 180;
        context.fillRect(centerX + a * Math.cos(theta), centerY + b * Math.sin(theta), 2, 2);
    }
    focal++;
    if (focal >= 200)
        clearInterval(timer);
}
Of course, eccentricity works differently for other conics, but relating the focal width of ellipses to their eccentricity makes it easy to see that as the focal width approaches 0, i.e. as the foci get closer together, the eccentricity approaches 0 as well. Indeed, when the focal width is 0, forming a circle, eccentricity should be 0, as implied by the meaning of the word itself: to deviate from centricity, or circle-ness. As the focal width nears the major axis width, approaching non-ellipsehood, the eccentricity approaches 1, at which point this particular representation is no longer fruitful and the structure becomes parabolic.
|
2017-09-20 16:29:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8299339413642883, "perplexity": 2189.6275630944288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687333.74/warc/CC-MAIN-20170920161029-20170920181029-00585.warc.gz"}
|
http://www.ck12.org/book/CK-12-Foundation-and-Leadership-Public-Schools%2C-College-Access-Reader%3A-Geometry/r1/section/5.4/
|
# 5.4: Sine Ratio
Difficulty Level: At Grade Created by: CK-12
## Learning Objectives
• Review the different parts of right triangles.
• Identify and use the sine ratio in a right triangle.
## Review: Parts of a Triangle
The sine and cosine ratios relate opposite and adjacent sides to the hypotenuse. You already learned these terms in the previous lesson, but they are important to review and commit to memory.
The hypotenuse of a triangle is always opposite the right angle and is the longest side of a right triangle.
The terms adjacent and opposite depend on which angle you are referencing:
A side adjacent to an angle is the leg of the triangle that helps form the angle.
A side opposite to an angle is the leg of the triangle that does not help form the angle.
The hypotenuse is _____________________________________________________.
The opposite side is ____________________________________________________.
Example 1
Examine the triangle in the diagram below.
Identify which leg is adjacent to angle $N$, which leg is opposite angle $N$, and which segment is the hypotenuse.
The first part of the question asks you to identify the leg adjacent to $\angle{N}$. Since an adjacent leg is the one that helps to form the angle and is not the hypotenuse, it must be $\overline{MN}$.
The next part of the question asks you to identify the leg opposite $\angle{N}$. Since an opposite leg is the leg that does not help to form the angle, it must be $\overline{LM}$.
The hypotenuse is always opposite the right angle, so in this triangle it is segment $\overline{LN}$.
1. Which side of a right triangle is the longest side? _____________________________
2. Describe where the side you answered in #1 above is in relation to the right angle:
3. Which side of a right triangle does not help to make the right angle? _____________________________
4. Which side of a right triangle helps to make the right angle and is NOT the hypotenuse? _____________________________
## The Sine Ratio
Another important trigonometric ratio is sine. A sine ratio must always refer to a particular angle in a right triangle. The sine of an angle is the ratio of the length of the leg opposite the angle to the length of the hypotenuse.
This means that the sine ratio is: the ____________________ side divided by the _______________________.
Remember that in a ratio, you list the first item on top of the fraction and the second item on the bottom. So, the ratio of the sine will be:
$\sin \theta = \frac{opposite}{hypotenuse}$
Example 2
What are $\sin A$ and $\sin B$ in the triangle below?
To find the solutions, you must identify the sides you need and build the ratios carefully. In the sine ratio, we will need the opposite side and the hypotenuse.
Remember, the hypotenuse of a right triangle is across from the right angle. The opposite side depends on which angle we are using.
The hypotenuse is the segment ___________, which is _______ cm long.
For angle $A$:
The side opposite angle $A$ is the segment ___________, which is _______ cm long.
For angle $B$:
The side opposite angle $B$ is the segment ___________, which is _______ cm long.
$\sin A = \frac{opposite}{hypotenuse} = \frac{3}{5} \qquad \sin B = \frac{opposite}{hypotenuse} = \frac{4}{5}$
So, $\sin A = \frac{3}{5}$ and $\sin B = \frac{4}{5}$.
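A quick numeric illustration (mine, not part of the worksheet) of these two ratios in the 3-4-5 triangle from Example 2:
import math

opposite_A, opposite_B, hypotenuse = 3, 4, 5
sin_A = opposite_A / hypotenuse  # 3/5 = 0.6
sin_B = opposite_B / hypotenuse  # 4/5 = 0.8

# cross-check: recover angle A from its sine, then take the sine again
angle_A = math.asin(sin_A)
print(sin_A, sin_B, math.sin(angle_A))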
Your friend did the following problem and asked you if it was correct:
Find $\sin X$ using the triangle below.
$\sin X = \frac{opposite}{hypotenuse} = \frac{5}{12}$
1. Is your friend correct? _____________________________
2. If not, where is the mistake in the problem? Describe the mistake in words and explain to your friend how she should have done the problem correctly.
|
2017-03-26 23:17:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 29, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8493251800537109, "perplexity": 1156.412584979456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189313.82/warc/CC-MAIN-20170322212949-00299-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://ciafreewhimten-blog.logdown.com/posts/7087195
|
## How To Finish Homework When You Are Tired
How To Finish Homework When You Are Tired
how to finish homework when tired
how to finish homework when your tired
Finish Line NASDAQ: FINL is an American retail chain that sells athletic shoes and related apparel and accessories.. When you have a lot to do, you have to finish your homework fast. But if you have more time on your hands, you may feel less inclined to finish quickly.. I am doing my homework now. Now watch me, .. Polar is a book with images from the north and south poles which comes to life like a movie when you open the pages! How they do it is a bit of a mystery to us but we understand the makers. beginning homework. No point if you are really tired or . something to drink or a ten minute break if needed before starting. When you finish your homework, .. How to finish homework when you are tired . Prepare yourself, when they address that get your homework. 4Th time to spark a and the news drowsy driving in this .. (2016) 510 A: Hey, Peter! You look tired.. Quit bitching about how tired you are. . if you have homework to do, . I would block the time in my diary that I will need to finish the task at least a week in .. XI. Homework Write about what you have gotten in the junior high school and your plan in the senior high school. Section B 2 (3aSelf Check) .. How To Focus On Homework: The Ultimate . how to focus on homework. It leaves you feeling tired, . you have to figure out how to finish homework late .. Tired of looking at yet another dead stick in a pot? Indoor plants like succulents, cacti, tropical and air plants are handsome, hardy and perfect for urban living - and this book is a. Best Answer: Do a tiny piece now on each homework, so worst case scenario you just hand in homework which you didn't put a lot of effort into. Once it's ok .. 28. he (finish) his homework last night? 29. I(be) tired yesterday. 31. What you (do) last night? 32. My 33. What 34. Last .. If you feel tired, you must have a rest. 1if . Dont go and play football if you dont finish your homework. .. I just (finish) my homework. 9. He (go) to school on foot every day. 10. you (find) your science book yet? 11. If it (be) fine .. Parents are exhausted from working and running around all day and kids are tired . When is the Best Time to Do Homework? . homework. It may be hard for you or .. So much homework and I'm so tired . How long would it take you to finish? . Instead of Yahoo Answering what to do if your too tired to do homework, you .. six.You make me tired. seven.I felt tired after work. eight.Tired as he was, Peter tried to finish all the homework that day. , .. If you don't finish your homework at school, think about how much you have left and what else is going on that day. . Later, when you're more tired, .. Why is the sentence 'I was too tired to do my homework' considered incorrect in . homework. You are tired . my homework considered incorrect in English? .. I'm always tired but i need to finish homework or study, . and make sure you have a full . I'm really tired but I still need to finish homework and .. If you are tired and cannot concentrate, . How do I continue doing my homework when I'm tired and just want to sleep? . and finish it then.. Things To Do When You Are Tired, But Still . Lets start with the things to do when you are tired and . is not what i mean like if i am doing my homework and it .. XI. Homework Write about what you have gotten in the junior high school and your plan in the senior high school. Section B 2 (3aSelf Check) .. im too tired to finish my homework? . I Need To Do My Homework But Im Tired - essay-boy.online i need to do my homework but im tired When you need help .. 
This is a very frustrating situation. Im unclear if you are up late at night because of homework or because you cant fall asleep.. Once you are in a subshell, if you type the TAB key, the how to finish homework when your tired completion displays the commands of the current subshell:. If you can, pick a .. How to Get Work Done While Sick. . If you absolutely have to get work done while sick, you can alleviate your symptoms and break tasks down into simpler .. Assume that of your $10,000 portfolio, you invest$9,000 in Stock X and \$1,000 in Stock Y. . To finish the homework, I went to the school library to borrow any booksI I .. One of my students then suggested "I was tired to do my homework." . I was happy to finish my homework which you have . your homework, because you were tired .. 10)2016AHey,Peter!You look tired.71BI didnt get enough sleep last night.AWere you just doing your homework last night B:72 .. Why are you always tired,do you know? First,if we can't (1) well,we will be tired.But we are often too (2).We can't finish our homework in the day (3) we have to do .. Tired while doing homework. Those demands on homework provides an hour. But scientists worry! . Very hard it you finish homework, .. tired [5taIEd] adj. (1) I felt tired after work. Tired as he was, Peter tried to finish all the homework that day. , .. Quit bitching about how tired you are. . if you have homework to do, . I would block the time in my diary that I will need to finish the task at least a week in .. Here we provide you answers about : how to get homework done efficiently, how to do homework fast and fun, how to finish a lot of homework in one hour, how to get homework done at the last. Teachers never wanted to hear excuses on why you couldnt finish homework so no matter how tired I was I pushed myself to finish whatever I had to do for the next . cd4164fbe1
|
2018-05-24 13:33:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5991368293762207, "perplexity": 2038.2315206839066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866326.60/warc/CC-MAIN-20180524131721-20180524151721-00146.warc.gz"}
|
https://linnaxu.com/rcfnlg/kuta-multiplying-and-dividing-fractions-d00aff
|
kuta multiplying and dividing fractions
A page built around Kuta Software's "Multiplying/Dividing Fractions and Mixed Numbers" worksheets (Infinite Pre-Algebra). The legible content:

To multiply fractions, multiply the top numbers and multiply the bottom numbers, then reduce: $$\frac{2}{5} \times \frac{3}{4} = \frac{2 \times 3}{5 \times 4} = \frac{6}{20} = \frac{3}{10}$$

Dividing fractions works the same way, with one change: take the reciprocal of the divisor. Keep the first fraction, change the division sign to multiplication, and flip the numerator and denominator of the second fraction: $$\frac{1}{2} \div \frac{3}{5} = \frac{1}{2} \times \frac{5}{3} = \frac{1 \times 5}{2 \times 3} = \frac{5}{6}$$

There is no direct method for multiplying and dividing mixed numbers: convert them to improper fractions first, then multiply or divide as usual. For example, to multiply 1 3/5 by 2 1/3, rewrite the factors as 8/5 and 7/3.

Sample exercises with answers in lowest terms (rational-expression variants from the same worksheet series):

1) $$\frac{59n}{99} \cdot \frac{80}{33n} = \frac{4720}{3267}$$

2) $$\frac{53}{43} \cdot \frac{46n^{2}}{31} = \frac{2438n^{2}}{1333}$$

3) $$\frac{93}{21n} \cdot \frac{34n}{51} = \frac{62}{21}$$
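A quick way to check these results is Python's standard-library fractions module (an illustrative sketch, not part of the worksheet):

from fractions import Fraction

# Multiply: top times top, bottom times bottom; Fraction reduces automatically.
print(Fraction(2, 5) * Fraction(3, 4))   # 3/10

# Divide: multiply by the reciprocal of the divisor.
print(Fraction(1, 2) / Fraction(3, 5))   # 5/6

# Mixed numbers: convert to improper fractions first (1 3/5 = 8/5, 2 1/3 = 7/3).
print(Fraction(8, 5) * Fraction(7, 3))   # 56/15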
|
2022-12-09 13:27:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4072139263153076, "perplexity": 9445.030012573237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00005.warc.gz"}
|
http://tex.stackexchange.com/questions/52362/missing-bbl-file-with-biblatex-and-biber/52446
|
# Missing bbl file with biblatex and biber [closed]
I have a problem where citations are correctly displayed on one machine (OSX 10.5) but fail on another machine (Ubuntu 10.04).
Basically I have a set of .tex files and a .bib file and run the following commands to produce a PDF file:
``````pdflatex main # there's main.tex
biber main
pdflatex main
pdflatex main
``````
Comparing logs from the two machines, it seems .bbl file is not created on the machine that failed to produce proper citation.
``````Package biblatex Info: ... file 'main.bbl' not found.
No file main.bbl.
``````
Should biber produce the bbl file? How should I go about troubleshooting why this process doesn't work on Ubuntu?
Are you telling `biblatex` to use Biber (with `\usepackage[backend=biber]{biblatex}`)? If so, you should get a `.blg` file containing any errors/warnings, once you've run Biber. By the way, with `biblatex` you only normally need to run `pdflatex` once after running Biber. – Joseph Wright Apr 18 '12 at 6:34
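A minimal test document for that setup (illustrative only; `references.bib` and `someKey` are placeholders, not from the question):

``````\documentclass{article}
% biblatex must be pointed at the biber backend, otherwise it expects BibTeX
\usepackage[backend=biber]{biblatex}
\addbibresource{references.bib}
\begin{document}
\cite{someKey} % placeholder key defined in references.bib
\printbibliography
\end{document}
``````

Running the pdflatex/biber/pdflatex sequence from the question on such a file should produce the .bbl; if it does not, the .blg file is the first place to look.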
Try running `biber` manually. Its installation can get corrupted, but in this case the shell message should be clear about it. – egreg Apr 18 '12 at 6:44
Yes, biber creates the .bbl so see what biber says when you run it – PLK Apr 18 '12 at 18:02
Good to know about running pdflatex just once after biber. @JosephWright – Grnbeagle Apr 18 '12 at 18:45
Yes, @egreg and PLK running biber alone helped troubleshoot the issue. Thanks for the suggestion! – Grnbeagle Apr 18 '12 at 18:48
## closed as too localized by Werner, Stefan Kottwitz♦ Apr 18 '12 at 20:00
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, see the FAQ.
|
2013-05-21 15:33:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.940554678440094, "perplexity": 4360.211812387813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700132256/warc/CC-MAIN-20130516102852-00096-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://pomegranate.readthedocs.io/en/latest/GeneralMixtureModel.html
|
# General Mixture Models
IPython Notebook Tutorial
General Mixture models (GMMs) are an unsupervised probabilistic model composed of multiple distributions (commonly referred to as components) and corresponding weights. This allows you to model more complex distributions corresponding to a single underlying phenomenon. For a full tutorial on what a mixture model is and how to use one, see the above tutorial.
## Initialization
General Mixture Models can be initialized in two ways, depending on whether you know the initial parameters of the model: (1) passing in a list of pre-initialized distributions, or (2) running the from_samples class method on data. The initial parameters can be either a pre-specified model that is ready to be used for prediction, or the initialization for expectation-maximization. If the second initialization option is chosen, then k-means is used to initialize the distributions. The distributions passed for each component don't have to be the same type, and if an IndependentComponentsDistribution object is passed in, then the dimensions don't need to be modeled by the same distribution.
Here is an example of a traditional multivariate Gaussian mixture where we pass in pre-initialized distributions. We can also pass in the weight of each component, which serves as the prior probability of a sample belonging to that component when doing predictions.
from pomegranate import *
d1 = MultivariateGaussianDistribution([1, 6, 3], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
d2 = MultivariateGaussianDistribution([2, 8, 4], [[1, 0, 0], [0, 1, 0], [0, 0, 2]])
d3 = MultivariateGaussianDistribution([0, 4, 8], [[2, 0, 0], [0, 3, 0], [0, 0, 1]])
model = GeneralMixtureModel([d1, d2, d3], weights=[0.25, 0.60, 0.15])
Alternatively, if we want to model each dimension differently, then we can replace the multivariate Gaussian distributions with IndependentComponentsDistribution objects.
from pomegranate import *
d1 = IndependentComponentsDistribution([NormalDistribution(5, 2), ExponentialDistribution(1), LogNormalDistribution(0.4, 0.1)])
d2 = IndependentComponentsDistribution([NormalDistribution(3, 1), ExponentialDistribution(2), LogNormalDistribution(0.8, 0.2)])
model = GeneralMixtureModel([d1, d2], weights=[0.66, 0.34])
If we do not know the parameters of our distributions beforehand and want to learn them entirely from data, then we can use the from_samples class method. This method will run k-means to initialize the components, using the returned clusters to initialize all parameters of the distributions, i.e. both mean and covariances for multivariate Gaussian distributions. Afterwards, expectation-maximization is used to refine the parameters of the model, iterating until convergence.
from pomegranate import *
model = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, n_components=3, X=X)
If we want to model each dimension using a different distribution, then we can pass in a list of callables and they will be initialized using k-means as well.
from pomegranate import *
model = GeneralMixtureModel.from_samples([NormalDistribution, ExponentialDistribution, LogNormalDistribution], n_components=5, X=X)
## Probability
The probability of a point is the sum of its probability under each of the components, multiplied by the weight of each component: $$P = \sum\limits_{i \in M} P(D|M_{i})P(M_{i})$$. The probability method returns the probability of each sample under the entire mixture, and the log_probability method returns the log of that value.
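As a minimal sketch (reusing the univariate mixture from the API example below; the input values are arbitrary):

from pomegranate import *

d1 = NormalDistribution(5, 2)
d2 = NormalDistribution(1, 1)
model = GeneralMixtureModel([d1, d2])

# probability() evaluates P(D) under the full mixture;
# log_probability() returns the log of the same quantity.
print(model.probability([[5], [7], [1]]))
print(model.log_probability([[5], [7], [1]]))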
## Prediction
The common prediction tasks involve predicting which component a new point falls under. This is done using Bayes rule $$P(M|D) = \frac{P(D|M)P(M)}{P(D)}$$ to determine the posterior probability $$P(M|D)$$ as opposed to simply the likelihood $$P(D|M)$$. Bayes rule indicates that it isn't simply the likelihood function which makes this prediction but the likelihood function multiplied by the probability that that distribution generated the sample. For example, if you have a distribution which has 100x as many samples fall under it, you would naively think that there is a ~99% chance that any random point would be drawn from it. Your belief would then be updated based on how well the point fit each distribution, but the proportion of points generated by each component is important as well.
We can get the component label assignments using model.predict(data), which will return an array of indexes corresponding to the maximally likely component. If what we want is the full matrix of $$P(M|D)$$, then we can use model.predict_proba(data), which will return a matrix with each row being a sample, each column being a component, and each cell being the probability that that component generated that sample. If we want log probabilities, we can use model.predict_log_proba(data) instead.
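For example (the same sketch model as above; the outputs depend on the component parameters):

from pomegranate import *

model = GeneralMixtureModel([NormalDistribution(5, 2), NormalDistribution(1, 1)])

labels = model.predict([[5], [7], [1]])             # index of the most likely component
posteriors = model.predict_proba([[5], [7], [1]])   # P(M|D); each row sums to 1
log_posteriors = model.predict_log_proba([[5], [7], [1]])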
## Fitting
Training GMMs faces the classic chicken-and-egg problem that most unsupervised learning algorithms face. If we knew which component a sample belonged to, we could use MLE estimates to update the component. And if we knew the parameters of the components we could predict which sample belonged to which component. This problem is solved using expectation-maximization, which iterates between the two until convergence. In essence, an initialization point is chosen which usually is not a very good start, but through successive iteration steps, the parameters converge to a good ending.
These models are fit using model.fit(data). A maximum number of iterations can be specified as well as a stopping threshold for the improvement ratio. See the API reference for full documentation.
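An end-to-end sketch (the synthetic data, seed, and parameter values are illustrative, not from the documentation):

from pomegranate import *
import numpy

numpy.random.seed(0)
# Two overlapping 1-dimensional clusters of synthetic data.
X = numpy.concatenate([numpy.random.normal(1, 1, (100, 1)),
                       numpy.random.normal(6, 2, (100, 1))])

model = GeneralMixtureModel.from_samples(NormalDistribution, n_components=2, X=X)
# Further EM refinement: stop when the log-likelihood improvement falls
# below stop_threshold, or after max_iterations EM steps.
model.fit(X, stop_threshold=0.01, max_iterations=100)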
## API Reference
class pomegranate.gmm.GeneralMixtureModel
A General Mixture Model.
This mixture model can be a mixture of any distribution as long as they are all of the same dimensionality. Any object can serve as a distribution as long as it has fit(X, weights), log_probability(X), and summarize(X, weights)/from_summaries() methods if out of core training is desired.
Parameters:
    distributions : array-like, shape (n_components,)
        The components of the model as initialized distributions.
    weights : array-like, optional, shape (n_components,)
        The prior probabilities corresponding to each component. Does not need to sum to one, but will be normalized to sum to one internally. Defaults to None.
Examples
>>> from pomegranate import *
>>>
>>> d1 = NormalDistribution(5, 2)
>>> d2 = NormalDistribution(1, 1)
>>>
>>> clf = GeneralMixtureModel([d1, d2])
>>> clf.log_probability(5)
-2.304562194038089
>>> clf.predict_proba([[5], [7], [1]])
array([[ 0.99932952, 0.00067048],
[ 0.99999995, 0.00000005],
[ 0.06337894, 0.93662106]])
>>> clf.fit([[1], [5], [7], [8], [2]])
>>> clf.predict_proba([[5], [7], [1]])
array([[ 1. , 0. ],
[ 1. , 0. ],
[ 0.00004383, 0.99995617]])
>>> clf.distributions
array([ {
"frozen" :false,
"class" :"Distribution",
"parameters" :[
6.6571359101390755,
1.2639830514274502
],
"name" :"NormalDistribution"
},
{
"frozen" :false,
"class" :"Distribution",
"parameters" :[
1.498707696758334,
0.4999983303277837
],
"name" :"NormalDistribution"
}], dtype=object)
Attributes:
    distributions : array-like, shape (n_components,)
        The component distribution objects.
    weights : array-like, shape (n_components,)
        The learned prior weight of each object.
clear_summaries()
Remove the stored sufficient statistics.
Parameters: None
Returns: None
copy()
Return a deep copy of this distribution object.
This object will not be tied to any other distribution or connected in any form.
Parameters: None
Returns:
    distribution : Distribution
        A copy of the distribution with the same parameters.
fit()
Fit the model to new data using EM.
This method fits the components of the model to new data using the EM method. It will iterate until either max iterations has been reached, or the stop threshold has been passed.
Parameters:
    X : array-like, shape (n_samples, n_dimensions)
        This is the data to train on. Each row is a sample, and each column is a dimension to train on.
    weights : array-like, shape (n_samples,), optional
        The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None.
    inertia : double, optional
        The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0.
    pseudocount : double, optional, positive
        A pseudocount to add to the emission of each distribution. This effectively smooths the states to prevent 0-probability symbols if they don't happen to occur in the data. Only affects mixture models defined over discrete distributions. Default is 0.
    stop_threshold : double, optional, positive
        The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Default is 0.1.
    max_iterations : int, optional, positive
        The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Default is 1e8.
    batch_size : int or None, optional
        The number of samples in a batch to summarize on. This controls the size of the set sent to summarize and so does not make the update any less exact. This is useful when training on a memory map and cannot load all the data into memory. If set to None, batch_size is 1 / n_jobs. Default is None.
    batches_per_epoch : int or None, optional
        The number of batches in an epoch. This is the number of batches to summarize before calling from_summaries and updating the model parameters. This allows one to do minibatch updates by updating the model parameters before seeing the full dataset. If set to None, uses the full dataset. Default is None.
    lr_decay : double, optional, positive
        The step size decay as a function of the number of iterations. Functionally, this sets the inertia to be (2+k)^{-lr_decay} where k is the number of iterations. This causes initial iterations to have more of an impact than later iterations, and is frequently used in minibatch learning. This value is suggested to be between 0.5 and 1. Default is 0, meaning no decay.
    callbacks : list, optional
        A list of callback objects that describe functionality that should be undertaken over the course of training.
    return_history : bool, optional
        Whether to return the history during training as well as the model.
    verbose : bool, optional
        Whether or not to print out improvement information over iterations. Default is False.
    n_jobs : int, optional
        The number of threads to use when parallelizing the job. This parameter is passed directly into joblib. Default is 1, indicating no parallelism.
Returns:
    self : GeneralMixtureModel
        The fit mixture model.
freeze()
Freeze the distribution, preventing updates from occurring.
from_samples()
Create a mixture model directly from the given dataset.
First, k-means will be run using the given initializations, in order to define initial clusters for the points. These clusters are used to initialize the distributions used. Then, EM is run to refine the parameters of these distributions.
A homogeneous mixture can be defined by passing in a single distribution callable as the first parameter and specifying the number of components, while a heterogeneous mixture can be defined by passing in a list of callables of the appropriate type.
Parameters:
    distributions : array-like, shape (n_components,) or callable
        The components of the model. If array, corresponds to the initial distributions of the components. If callable, must also pass in the number of components and kmeans++ will be used to initialize them.
    n_components : int
        If a callable is passed into distributions then this is the number of components to initialize using the kmeans++ algorithm.
    X : array-like, shape (n_samples, n_dimensions)
        This is the data to train on. Each row is a sample, and each column is a dimension to train on.
    weights : array-like, shape (n_samples,), optional
        The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None.
    n_init : int, optional
        The number of initializations of k-means to do before choosing the best. Default is 1.
    init : str, optional
        The initialization algorithm to use for the initial k-means clustering. Must be one of 'first-k', 'random', 'kmeans++', or 'kmeans||'. Default is 'kmeans++'.
    max_kmeans_iterations : int, optional
        The maximum number of iterations to run k-means for in the initialization step. Default is 1.
    inertia : double, optional
        The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0.
    pseudocount : double, optional, positive
        A pseudocount to add to the emission of each distribution. This effectively smooths the states to prevent 0-probability symbols if they don't happen to occur in the data. Only affects mixture models defined over discrete distributions. Default is 0.
    stop_threshold : double, optional, positive
        The threshold at which EM will terminate for the improvement of the model. If the model does not improve its fit of the data by a log probability of 0.1 then terminate. Default is 0.1.
    max_iterations : int, optional, positive
        The maximum number of iterations to run EM for. If this limit is hit then it will terminate training, regardless of how well the model is improving per iteration. Default is 1e8.
    batch_size : int or None, optional
        The number of samples in a batch to summarize on. This controls the size of the set sent to summarize and so does not make the update any less exact. This is useful when training on a memory map and cannot load all the data into memory. If set to None, batch_size is 1 / n_jobs. Default is None.
    batches_per_epoch : int or None, optional
        The number of batches in an epoch. This is the number of batches to summarize before calling from_summaries and updating the model parameters. This allows one to do minibatch updates by updating the model parameters before seeing the full dataset. If set to None, uses the full dataset. Default is None.
    lr_decay : double, optional, positive
        The step size decay as a function of the number of iterations. Functionally, this sets the inertia to be (2+k)^{-lr_decay} where k is the number of iterations. This causes initial iterations to have more of an impact than later iterations, and is frequently used in minibatch learning. This value is suggested to be between 0.5 and 1. Default is 0, meaning no decay.
    callbacks : list, optional
        A list of callback objects that describe functionality that should be undertaken over the course of training.
    return_history : bool, optional
        Whether to return the history during training as well as the model.
    verbose : bool, optional
        Whether or not to print out improvement information over iterations. Default is False.
    n_jobs : int, optional
        The number of threads to use when parallelizing the job. This parameter is passed directly into joblib. Default is 1, indicating no parallelism.
from_summaries()
Fit the model to the collected sufficient statistics.
Fit the parameters of the model to the sufficient statistics gathered during the summarize calls. This should return an exact update.
Parameters:
    inertia : double, optional
        The weight of the previous parameters of the model. The new parameters will roughly be old_param*inertia + new_param*(1-inertia), so an inertia of 0 means ignore the old parameters, whereas an inertia of 1 means ignore the new parameters. Default is 0.0.
    pseudocount : double, optional
        A pseudocount to add to the emission of each distribution. This effectively smooths the states to prevent 0-probability symbols if they don't happen to occur in the data. If discrete data, will smooth both the prior probabilities of each component and the emissions of each component. Otherwise, will only smooth the prior probabilities of each component. Default is 0.
Returns: None
from_yaml()
Deserialize this object from its YAML representation.
log_probability()
Calculate the log probability of a point under the distribution.
The probability of a point is the sum of the probabilities of each distribution multiplied by the weights. Thus, the log probability is the sum of the log probability plus the log prior.
This is the python interface.
Parameters:
    X : numpy.ndarray, shape=(n, d) or (n, m, d)
        The samples to calculate the log probability of. Each row is a sample and each column is a dimension. If emissions are HMMs then shape is (n, m, d) where m is variable length for each observation, and X becomes an array of n (m, d)-shaped arrays.
    n_jobs : int
        The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1.
Returns:
    log_probability : double
        The log probability of the point under the distribution.
predict()
Predict the most likely component which generated each sample.
Calculate the posterior P(M|D) for each sample and return the index of the component most likely to fit it. This corresponds to a simple argmax over the responsibility matrix.
This is a sklearn wrapper for the maximum_a_posteriori method.
Parameters:
    X : array-like, shape (n_samples, n_dimensions)
        The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in.
    n_jobs : int
        The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1.
Returns:
    y : array-like, shape (n_samples,)
        The predicted component which fits the sample the best.
predict_log_proba()
Calculate the posterior log P(M|D) for data.
Calculate the log probability of each item having been generated from each component in the model. This returns normalized log probabilities such that the probabilities should sum to 1.
This is a sklearn wrapper for the original posterior function.
Parameters:
    X : array-like, shape (n_samples, n_dimensions)
        The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in.
    n_jobs : int
        The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1.
Returns:
    y : array-like, shape (n_samples, n_components)
        The normalized log probability log P(M|D) for each sample. This is the probability that the sample was generated from each component.
predict_proba()
Calculate the posterior P(M|D) for data.
Calculate the probability of each item having been generated from each component in the model. This returns normalized probabilities such that each row should sum to 1.
Since calculating the log probability is much faster, this is just a wrapper which exponentiates the log probability matrix.
Parameters:
    X : array-like, shape (n_samples, n_dimensions)
        The samples to do the prediction on. Each sample is a row and each column corresponds to a dimension in that sample. For univariate distributions, a single array may be passed in.
    n_jobs : int
        The number of jobs to use to parallelize, either the number of threads or the number of processes to use. -1 means use all available resources. Default is 1.
Returns:
    probability : array-like, shape (n_samples, n_components)
        The normalized probability P(M|D) for each sample. This is the probability that the sample was generated from each component.
probability()
Return the probability of the given symbol under this distribution.
Parameters:
    symbol : object
        The symbol to calculate the probability of.
Returns:
    probability : double
        The probability of that point under the distribution.
sample()
Generate a sample from the model.
First, randomly select a component, weighted by the prior probability. Then, use the sample method from that component to generate a sample.
Parameters:
    n : int, optional
        The number of samples to generate. Defaults to 1.
    random_state : int, numpy.random.RandomState, or None
        The random state used for generating samples. If set to None, a random seed will be used. If set to either an integer or a random seed, will produce deterministic outputs.
Returns:
    sample : array-like or object
        A randomly generated sample from the model of the type modelled by the emissions. An integer if using most distributions, or an array if using multivariate ones, or a string for most discrete distributions. If n=1 return an object, if n>1 return an array of the samples.
score()
Return the accuracy of the model on a data set.
Parameters:
    X : numpy.ndarray, shape=(n, d)
        The values of the data set.
    y : numpy.ndarray, shape=(n,)
        The labels of each value.
summarize()
Summarize a batch of data and store sufficient statistics.
This will run the expectation step of EM and store sufficient statistics in the appropriate distribution objects. The summarization can be thought of as a chunk of the E step, and the from_summaries method as the M step.
Parameters:
    X : array-like, shape (n_samples, n_dimensions)
        This is the data to train on. Each row is a sample, and each column is a dimension to train on.
    weights : array-like, shape (n_samples,), optional
        The initial weights of each sample in the matrix. If nothing is passed in then each sample is assumed to be the same weight. Default is None.
Returns:
    logp : double
        The log probability of the data given the current model. This is used to speed up EM.
thaw()
Thaw the distribution, re-allowing updates to occur.
to_json()
Serialize the model to JSON.
Parameters:
    separators : tuple, optional
        The two separators to pass to the json.dumps function for formatting. Default is (',', ' : ').
    indent : int, optional
        The indentation to use at each level. Passed to json.dumps for formatting. Default is 4.
Returns:
    json : str
        A properly formatted JSON object.
to_yaml()
Serialize the model to YAML for compactness.
|
2018-12-11 23:01:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3609215319156647, "perplexity": 1330.9590960605856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823705.4/warc/CC-MAIN-20181211215732-20181212001232-00624.warc.gz"}
|
https://tt.gsusigmanu.org/9892-fundamental-axioms-in-lcdm.html
|
# Fundamental axioms in LCDM
What are the axioms (if any) behind the LCDM model of cosmology? NB: axioms, not postulates (e.g., inflation)
The fundamental assumptions of LCDM cosmology are:
1. General relativity is valid on cosmological scales.
2. The universe is dominated by "cold" dark matter (origin and composition unknown).
3. The metric of the universe is given by the Friedmann-Lemaître-Robertson-Walker metric.
Reference: https://en.wikipedia.org/wiki/Lambda-CDM_model
A combination of 1 and 2 helps explain large-scale structure formation, in which smaller masses merge to become bigger systems, while 3 describes the expansion of space as a function of the matter-energy density.
The lambda parameter comes from invoking general relativity. Although Einstein added it to support his own view of a static universe (disproved a decade later by Hubble's observations), it was later re-interpreted as the energy density of the vacuum, a component with negative pressure (see Harvey, Alex 2012 https://arxiv.org/pdf/1211.6338.pdf and/or Carroll, Sean 2001 https://arxiv.org/abs/astro-ph/0004075).
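For reference (standard textbook form, not part of the original answer), the cosmological constant enters the first Friedmann equation for an FLRW metric as

$$\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}$$

where $$a$$ is the scale factor, $$\rho$$ the total matter-energy density, and $$k$$ the spatial curvature parameter.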
Hope this helped.
## Colossus Documentation
Colossus is a python toolkit for calculations pertaining to cosmology, the large-scale structure of the universe, and the properties of dark matter halos. The name is an acronym for COsmology, haLO and large-Scale StrUcture toolS. Correspondingly, Colossus consists of three top-level modules:
Cosmology : Implements LCDM cosmologies with curvature, relativistic species, and different dark energy equations of state. Includes standard calculations such as densities and times, but also more advanced computations such as the power spectrum, variance, and correlation function.
Large-scale structure : Deals with peaks in Gaussian random fields and the statistical properties of halos such as peak height, peak curvature, halo bias, and the mass function.
Dark matter halos : Deals with halo masses and radii in arbitrary spherical overdensity definitions, pseudo-evolution, implements general and specific halo density profiles (Einasto, Hernquist, NFW, DK14), computes models for halo concentration and the splashback radius.
Colossus is developed with the following chief design goals in mind:
Intuitive use: The fundamental philosophy of Colossus is to make it easy to evaluate complex astrophysical quantities in a single or in a few lines of code. For this purpose, numerous fitting functions have been pre-programmed.
Stand-alone, pure python: No dependencies beyond numpy and scipy, no C modules to be compiled. You can install Colossus either as a python package using pip or clone the repository.
Performance: Computationally intensive routines have been optimized for speed, often using interpolation tables. Virtually all functions accept either numbers or numpy arrays as input.
The easiest way to learn how to use Colossus is to follow the examples in the Tutorials. The Search Page is useful when looking for specific functions. While Colossus has been tested extensively, there is no guarantee that it is free of bugs. Use it at your own risk, and please report any errors, inconveniences and unclear documentation to the author.
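For instance, setting a cosmology and evaluating a couple of basic quantities takes only a few lines. A sketch following the names used in the Colossus tutorials (verify the exact signatures against the current documentation):

from colossus.cosmology import cosmology

# Select one of the predefined parameter sets by name.
cosmo = cosmology.setCosmology('planck18')

print(cosmo.age(0.0))   # age of the universe at z = 0, in Gyr
print(cosmo.Hz(1.0))    # Hubble parameter at z = 1, in km/s/Mpc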
## Master of Sciences in Physics
The Master’s degree program in physics is designed for students who hold a Bachelor’s degree in physics or in a closely related subject. It aims at an advanced training in selected fields of physics, offers the opportunity for specialization and, finally, in its second phase, provides a one-year training phase directed towards the capability of performing independent research.
The language of instruction in the Master’s program is English.
The Master’s program can be started either in the winter term (October) or in the summer term (April).
## Contents
Great advances in science have been termed "revolutions" since the 18th century. In 1747, the French mathematician Alexis Clairaut wrote that "Newton was said in his own life to have created a revolution". [11] The word was also used in the preface to Antoine Lavoisier's 1789 work announcing the discovery of oxygen. "Few revolutions in science have immediately excited so much general notice as the introduction of the theory of oxygen . Lavoisier saw his theory accepted by all the most eminent men of his time, and established over a great part of Europe within a few years from its first promulgation." [12]
In the 19th century, William Whewell described the revolution in science itself – the scientific method – that had taken place in the 15th-16th century. "Among the most conspicuous of the revolutions which opinions on this subject have undergone, is the transition from an implicit trust in the internal powers of man's mind to a professed dependence upon external observation and from an unbounded reverence for the wisdom of the past, to a fervid expectation of change and improvement." [13] This gave rise to the common view of the Scientific Revolution today:
A new view of nature emerged, replacing the Greek view that had dominated science for almost 2,000 years. Science became an autonomous discipline, distinct from both philosophy and technology and came to be regarded as having utilitarian goals. [14]
The Scientific Revolution is traditionally assumed to start with the Copernican Revolution (initiated in 1543) and to be complete in the "grand synthesis" of Isaac Newton's 1687 Principia. Much of the change of attitude came from Francis Bacon, whose "confident and emphatic announcement" of the modern progress of science inspired the creation of scientific societies such as the Royal Society, and Galileo, who championed Copernicus and developed the science of motion.
In the 20th century, Alexandre Koyré introduced the term "scientific revolution", centering his analysis on Galileo. The term was popularized by Butterfield in his Origins of Modern Science. Thomas Kuhn's 1962 work The Structure of Scientific Revolutions emphasized that different theoretical frameworks—such as Einstein's theory of relativity and Newton's theory of gravity, which it replaced—cannot be directly compared without meaning loss.
### Significance
The period saw a fundamental transformation in scientific ideas across mathematics, physics, astronomy, and biology in institutions supporting scientific investigation and in the more widely held picture of the universe. The Scientific Revolution led to the establishment of several modern sciences. In 1984, Joseph Ben-David wrote:
Rapid accumulation of knowledge, which has characterized the development of science since the 17th century, had never occurred before that time. The new kind of scientific activity emerged only in a few countries of Western Europe, and it was restricted to that small area for about two hundred years. (Since the 19th century, scientific knowledge has been assimilated by the rest of the world). [15]
Many contemporary writers and modern historians claim that there was a revolutionary change in world view. In 1611 the English poet, John Donne, wrote:
[The] new Philosophy calls all in doubt,
The Element of fire is quite put out
The Sun is lost, and th'earth, and no man's wit
Can well direct him where to look for it. [16]
Mid-20th-century historian Herbert Butterfield was less disconcerted, but nevertheless saw the change as fundamental:
Since that revolution overturned the authority in science not only of the Middle Ages but of the ancient world—since it ended not only in the eclipse of scholastic philosophy but in the destruction of Aristotelian physics—it outshines everything since the rise of Christianity and reduces the Renaissance and Reformation to the rank of mere episodes, mere internal displacements within the system of medieval Christendom. [It] looms so large as the real origin both of the modern world and of the modern mentality that our customary periodization of European history has become an anachronism and an encumbrance. [17]
The history professor Peter Harrison attributes the rise of the Scientific Revolution in part to Christianity:
historians of science have long known that religious factors played a significantly positive role in the emergence and persistence of modern science in the West. Not only were many of the key figures in the rise of science individuals with sincere religious commitments, but the new approaches to nature that they pioneered were underpinned in various ways by religious assumptions. ... Yet, many of the leading figures in the scientific revolution imagined themselves to be champions of a science that was more compatible with Christianity than the medieval ideas about the natural world that they replaced. [18]
The Scientific Revolution was built upon the foundation of ancient Greek learning and science in the Middle Ages, as it had been elaborated and further developed by Roman/Byzantine science and medieval Islamic science. [6] Some scholars have noted a direct tie between "particular aspects of traditional Christianity" and the rise of science. [19] [20] The "Aristotelian tradition" was still an important intellectual framework in the 17th century, although by that time natural philosophers had moved away from much of it. [5] Key scientific ideas dating back to classical antiquity had changed drastically over the years, and in many cases been discredited. [5] The ideas that remained, which were transformed fundamentally during the Scientific Revolution, include:
• Aristotle's cosmology placed the Earth at the center of a spherical hierarchic cosmos. The terrestrial and celestial regions were made up of different elements which had different kinds of natural movement.
• The terrestrial region, according to Aristotle, consisted of concentric spheres of the four elements—earth, water, air, and fire. All bodies naturally moved in straight lines until they reached the sphere appropriate to their elemental composition—their natural place. All other terrestrial motions were non-natural, or violent. [21][22]
• The celestial region was made up of the fifth element, aether, which was unchanging and moved naturally with uniform circular motion. [23] In the Aristotelian tradition, astronomical theories sought to explain the observed irregular motion of celestial objects through the combined effects of multiple uniform circular motions. [24]
It is important to note that ancient precedent existed for alternative theories and developments which prefigured later discoveries in the area of physics and mechanics. However, given the limited number of works to survive translation, in a period when many books were lost to warfare, such developments remained obscure for centuries and are traditionally held to have had little effect on the re-discovery of such phenomena; the invention of the printing press, by contrast, made the wide dissemination of such incremental advances of knowledge commonplace. Meanwhile, significant progress in geometry, mathematics, and astronomy was made in medieval times.
It is also true that many of the important figures of the Scientific Revolution shared in the general Renaissance respect for ancient learning and cited ancient pedigrees for their innovations. Nicolaus Copernicus (1473–1543), [26] Galileo Galilei (1564–1642), [1] [2] [3] [27] Johannes Kepler (1571–1630) [28] and Isaac Newton (1642–1727) [29] all traced different ancient and medieval ancestries for the heliocentric system. In the Axioms Scholium of his Principia, Newton said its axiomatic three laws of motion were already accepted by mathematicians such as Christiaan Huygens (1629–1695), John Wallis, Christopher Wren and others. While preparing a revised edition of his Principia, Newton attributed his law of gravity and his first law of motion to a range of historical figures. [29] [30]
Despite these qualifications, the standard theory of the history of the Scientific Revolution claims that the 17th century was a period of revolutionary scientific changes. Not only were there revolutionary theoretical and experimental developments, but that even more importantly, the way in which scientists worked was radically changed. For instance, although intimations of the concept of inertia are suggested sporadically in ancient discussion of motion, [31] [32] the salient point is that Newton's theory differed from ancient understandings in key ways, such as an external force being a requirement for violent motion in Aristotle's theory. [33]
Under the scientific method as conceived in the 17th century, natural and artificial circumstances were set aside as a research tradition of systematic experimentation was slowly accepted by the scientific community. The philosophy of using an inductive approach to obtain knowledge—to abandon assumption and to attempt to observe with an open mind—was in contrast with the earlier, Aristotelian approach of deduction, by which analysis of known facts produced further understanding. In practice, many scientists and philosophers believed that a healthy mix of both was needed—the willingness to question assumptions, yet also to interpret observations assumed to have some degree of validity.
By the end of the Scientific Revolution the qualitative world of book-reading philosophers had been changed into a mechanical, mathematical world to be known through experimental research. Though it is certainly not true that Newtonian science was like modern science in all respects, it conceptually resembled ours in many ways. Many of the hallmarks of modern science, especially with regard to its institutionalization and professionalization, did not become standard until the mid-19th century.
### Empiricism
The Aristotelian scientific tradition's primary mode of interacting with the world was through observation and searching for "natural" circumstances through reasoning. Coupled with this approach was the belief that rare events which seemed to contradict theoretical models were aberrations, telling nothing about nature as it "naturally" was. During the Scientific Revolution, changing perceptions about the role of the scientist in respect to nature, the value of evidence, experimental or observed, led towards a scientific methodology in which empiricism played a large, but not absolute, role.
By the start of the Scientific Revolution, empiricism had already become an important component of science and natural philosophy. Prior thinkers, including the early-14th-century nominalist philosopher William of Ockham, had begun the intellectual movement toward empiricism. [34]
The term British empiricism came into use to describe philosophical differences perceived between two of its founders: Francis Bacon, described as an empiricist, and René Descartes, who was described as a rationalist. Thomas Hobbes, George Berkeley, and David Hume were the philosophy's primary exponents, who developed a sophisticated empirical tradition as the basis of human knowledge.
An influential formulation of empiricism was John Locke's An Essay Concerning Human Understanding (1689), in which he maintained that the only true knowledge that could be accessible to the human mind was that which was based on experience. He wrote that the human mind was created as a tabula rasa, a "blank tablet," upon which sensory impressions were recorded and built up knowledge through a process of reflection.
### Baconian science
The philosophical underpinnings of the Scientific Revolution were laid out by Francis Bacon, who has been called the father of empiricism. [35] His works established and popularised inductive methodologies for scientific inquiry, often called the Baconian method, or simply the scientific method. His demand for a planned procedure of investigating all things natural marked a new turn in the rhetorical and theoretical framework for science, much of which still surrounds conceptions of proper methodology today.
Bacon proposed a great reformation of all process of knowledge for the advancement of learning divine and human, which he called Instauratio Magna (The Great Instauration). For Bacon, this reformation would lead to a great advancement in science and a progeny of new inventions that would relieve mankind's miseries and needs. His Novum Organum was published in 1620. He argued that man is "the minister and interpreter of nature", that "knowledge and human power are synonymous", that "effects are produced by the means of instruments and helps", and that "man while operating can only apply or withdraw natural bodies; nature internally performs the rest", and later that "nature can only be commanded by obeying her". [36] The philosophy of this work, in brief, is that by the knowledge of nature and the use of instruments, man can govern or direct the natural work of nature to produce definite results. Therefore, by seeking knowledge of nature, man can reach power over it—and thus reestablish the "Empire of Man over creation", which had been lost by the Fall together with man's original purity. In this way, he believed, mankind would be raised above conditions of helplessness, poverty and misery, while coming into a condition of peace, prosperity and security. [37]
For this purpose of obtaining knowledge of and power over nature, Bacon outlined in this work a new system of logic he believed to be superior to the old ways of syllogism, developing his scientific method, consisting of procedures for isolating the formal cause of a phenomenon (heat, for example) through eliminative induction. For him, the philosopher should proceed through inductive reasoning from fact to axiom to physical law. Before beginning this induction, though, the enquirer must free his or her mind from certain false notions or tendencies which distort the truth. In particular, he found that philosophy was too preoccupied with words, particularly discourse and debate, rather than actually observing the material world: "For while men believe their reason governs words, in fact, words turn back and reflect their power upon the understanding, and so render philosophy and science sophistical and inactive." [38]
Bacon considered it of the greatest importance to science not to keep conducting intellectual discussions or pursuing merely contemplative aims, but to work for the bettering of mankind's life by bringing forth new inventions; he even stated that "inventions are also, as it were, new creations and imitations of divine works". [36] [ page needed ] He explored the far-reaching and world-changing character of inventions, such as the printing press, gunpowder and the compass.
Despite his influence on scientific methodology, he himself rejected correct novel theories such as William Gilbert's magnetism, Copernicus's heliocentrism, and Kepler's laws of planetary motion. [39]
### Scientific experimentation
Bacon first described the experimental method.
There remains simple experience, which, if taken as it comes, is called accident; if sought for, experiment. The true method of experience first lights the candle [hypothesis], and then by means of the candle shows the way [arranges and delimits the experiment]; commencing as it does with experience duly ordered and digested, not bungling or erratic, and from it deducing axioms [theories], and from established axioms again new experiments.
William Gilbert was an early advocate of this method. He passionately rejected both the prevailing Aristotelian philosophy and the Scholastic method of university teaching. His book De Magnete was published in 1600, and he is regarded by some as the father of electricity and magnetism. [41] In this work, he describes many of his experiments with his model Earth called the terrella. From these experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses point north.
De Magnete was influential not only because of the inherent interest of its subject matter, but also for the rigorous way in which Gilbert described his experiments and his rejection of ancient theories of magnetism. [42] According to Thomas Thomson, "Gilbert['s] book on magnetism, published in 1600, is one of the finest examples of inductive philosophy that has ever been presented to the world. It is the more remarkable, because it preceded the Novum Organum of Bacon, in which the inductive method of philosophizing was first explained." [43]
Galileo Galilei has been called the "father of modern observational astronomy", [44] the "father of modern physics", [45] [46] the "father of science", [46] [47] and "the Father of Modern Science". [48] His original contributions to the science of motion were made through an innovative combination of experiment and mathematics. [49]
Galileo was one of the first modern thinkers to clearly state that the laws of nature are mathematical. In The Assayer he wrote: "Philosophy is written in this grand book, the universe ... It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures. ..." [50] His mathematical analyses are a further development of a tradition employed by late scholastic natural philosophers, which Galileo learned when he studied philosophy. [51] He largely set Aristotelianism aside. In broader terms, his work marked another step towards the eventual separation of science from both philosophy and religion, a major development in human thought. He was often willing to change his views in accordance with observation. In order to perform his experiments, Galileo had to set up standards of length and time, so that measurements made on different days and in different laboratories could be compared in a reproducible fashion. This provided a reliable foundation on which to confirm mathematical laws using inductive reasoning.
Galileo showed an appreciation for the relationship between mathematics, theoretical physics, and experimental physics. He understood the parabola, both in terms of conic sections and in terms of the ordinate (y) varying as the square of the abscissa (x). Galileo further asserted that the parabola was the theoretically ideal trajectory of a uniformly accelerated projectile in the absence of friction and other disturbances. He conceded that there are limits to the validity of this theory, noting on theoretical grounds that a projectile trajectory of a size comparable to that of the Earth could not possibly be a parabola, [52] but he nevertheless maintained that for distances up to the range of the artillery of his day, the deviation of a projectile's trajectory from a parabola would be only very slight. [53] [54]
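In modern algebraic notation, which is a restatement rather than Galileo's own geometric language, the ordinate-abscissa relation he described follows directly from composing uniform horizontal motion with uniformly accelerated fall:

$$x = vt, \qquad y = \tfrac{1}{2}gt^{2} \quad\Longrightarrow\quad y = \frac{g}{2v^{2}}\,x^{2},$$

so the vertical drop $y$ varies as the square of the horizontal distance $x$: precisely the defining property of the parabola.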
### Mathematization
Scientific knowledge, according to the Aristotelians, was concerned with establishing true and necessary causes of things. [55] To the extent that medieval natural philosophers used mathematical techniques, they largely confined themselves to theoretical analyses of local motion and other aspects of nature, without measuring them. [56] The actual measurement of a physical quantity, and the comparison of that measurement to a value computed on the basis of theory, was largely limited to the mathematical disciplines of astronomy and optics in Europe. [57] [58]
In the 16th and 17th centuries, European scientists began increasingly to apply quantitative measurement to physical phenomena on the Earth. Galileo maintained strongly that mathematics provided a kind of necessary certainty that could be compared to God's: "... with regard to those few [mathematical propositions] which the human intellect does understand, I believe its knowledge equals the Divine in objective certainty ..." [59]
Galileo anticipates the concept of a systematic mathematical interpretation of the world in his book Il Saggiatore:
Philosophy [i.e., physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth. [60]
### The mechanical philosophy
Aristotle recognized four kinds of causes, and where applicable, the most important of them is the "final cause". The final cause was the aim, goal, or purpose of some natural process or man-made thing. Until the Scientific Revolution, it was very natural to see such aims, such as a child's growth leading to a mature adult. Intelligence was assumed only in the purpose of man-made artifacts; it was not attributed to other animals or to nature.
In "mechanical philosophy" no field or action at a distance is permitted, particles or corpuscles of matter are fundamentally inert. Motion is caused by direct physical collision. Where natural substances had previously been understood organically, the mechanical philosophers viewed them as machines. [61] As a result, Isaac Newton's theory seemed like some kind of throwback to "spooky action at a distance". According to Thomas Kuhn, Newton and Descartes held the teleological principle that God conserved the amount of motion in the universe:
Gravity, interpreted as an innate attraction between every pair of particles of matter, was an occult quality in the same sense as the scholastics' "tendency to fall" had been. By the mid eighteenth century that interpretation had been almost universally accepted, and the result was a genuine reversion (which is not the same as a retrogression) to a scholastic standard. Innate attractions and repulsions joined size, shape, position and motion as physically irreducible primary properties of matter. [62]
Newton had also specifically attributed the inherent power of inertia to matter, against the mechanist thesis that matter has no inherent powers. But whereas Newton vehemently denied that gravity was an inherent power of matter, his collaborator Roger Cotes made gravity an inherent power of matter as well, as set out in his famous preface to the Principia's second edition of 1713, which he edited, contradicting Newton himself. And it was Cotes's interpretation of gravity, rather than Newton's, that came to be accepted.
### Institutionalization
The first moves towards the institutionalization of scientific investigation and dissemination took the form of the establishment of societies, where new discoveries were aired, discussed and published. The first scientific society to be established was the Royal Society of London. This grew out of an earlier group, centred around Gresham College in the 1640s and 1650s. According to a history of the College:
The scientific network which centred on Gresham College played a crucial part in the meetings which led to the formation of the Royal Society. [63]
These physicians and natural philosophers were influenced by the "new science", as promoted by Francis Bacon in his New Atlantis, from approximately 1645 onwards. A group known as The Philosophical Society of Oxford was run under a set of rules still retained by the Bodleian Library. [64]
On 28 November 1660, the 1660 committee of 12 announced the formation of a "College for the Promoting of Physico-Mathematical Experimental Learning", which would meet weekly to discuss science and run experiments. At the second meeting, Robert Moray announced that the King approved of the gatherings, and a Royal charter was signed on 15 July 1662 creating the "Royal Society of London", with Lord Brouncker serving as the first President. A second Royal Charter was signed on 23 April 1663, with the King noted as the Founder and with the name of "the Royal Society of London for the Improvement of Natural Knowledge"; Robert Hooke was appointed as Curator of Experiments in November. This initial royal favour has continued, and since then every monarch has been the patron of the Society. [65]
The Society's first Secretary was Henry Oldenburg. Its early meetings included experiments performed first by Robert Hooke and then by Denis Papin, who was appointed in 1684. These experiments varied in their subject area, and were important in some cases and trivial in others. [66] The society began publication of Philosophical Transactions in 1665, the oldest and longest-running scientific journal in the world, which established the important principles of scientific priority and peer review. [67]
The French established the Academy of Sciences in 1666. In contrast to the private origins of its British counterpart, the Academy was founded as a government body by Jean-Baptiste Colbert. Its rules were set down in 1699 by King Louis XIV, when it received the name of 'Royal Academy of Sciences' and was installed in the Louvre in Paris.
As the Scientific Revolution was not marked by any single change, the following new ideas contributed to what is called the Scientific Revolution. Many of them were revolutions in their own fields.
### Astronomy
For almost five millennia, the geocentric model of the Earth as the center of the universe had been accepted by all but a few astronomers. In Aristotle's cosmology, Earth's central location was perhaps less significant than its identification as a realm of imperfection, inconstancy, irregularity and change, as opposed to the "heavens" (Moon, Sun, planets, stars), which were regarded as perfect, permanent, unchangeable, and in religious thought, the realm of heavenly beings. The Earth was even composed of different material, the four elements "earth", "water", "fire", and "air", while sufficiently far above its surface (roughly the Moon's orbit), the heavens were composed of a different substance called "aether". [68] The heliocentric model that replaced it involved not only the radical displacement of the earth to an orbit around the sun, but its sharing a placement with the other planets implied a universe of heavenly components made from the same changeable substances as the Earth. Heavenly motions no longer needed to be governed by a theoretical perfection, confined to circular orbits.
Copernicus' 1543 work on the heliocentric model of the solar system tried to demonstrate that the sun was the center of the universe. Few were bothered by this suggestion, and the pope and several archbishops were interested enough by it to want more detail. [69] His model was later used to create the calendar of Pope Gregory XIII. [70] However, the idea that the earth moved around the sun was doubted by most of Copernicus' contemporaries. It contradicted not only empirical observation, due to the absence of an observable stellar parallax, [71] but more significantly at the time, the authority of Aristotle.
The discoveries of Johannes Kepler and Galileo gave the theory credibility. Kepler was an astronomer who, using the accurate observations of Tycho Brahe, proposed that the planets move around the sun not in circular orbits, but in elliptical ones. Together with his other laws of planetary motion, this allowed him to create a model of the solar system that was an improvement over Copernicus' original system. Galileo's main contributions to the acceptance of the heliocentric system were his mechanics, the observations he made with his telescope, as well as his detailed presentation of the case for the system. Using an early theory of inertia, Galileo could explain why rocks dropped from a tower fall straight down even if the earth rotates. His observations of the moons of Jupiter, the phases of Venus, the spots on the sun, and mountains on the moon all helped to discredit the Aristotelian philosophy and the Ptolemaic theory of the solar system. Through their combined discoveries, the heliocentric system gained support, and at the end of the 17th century it was generally accepted by astronomers.
This work culminated in the work of Isaac Newton. Newton's Principia formulated the laws of motion and universal gravitation, which dominated scientists' view of the physical universe for the next three centuries. By deriving Kepler's laws of planetary motion from his mathematical description of gravity, and then using the same principles to account for the trajectories of comets, the tides, the precession of the equinoxes, and other phenomena, Newton removed the last doubts about the validity of the heliocentric model of the cosmos. This work also demonstrated that the motion of objects on Earth and of celestial bodies could be described by the same principles. His prediction that the Earth should be shaped as an oblate spheroid was later vindicated by other scientists. His laws of motion were to be the solid foundation of mechanics; his law of universal gravitation combined terrestrial and celestial mechanics into one great system that seemed to be able to describe the whole world in mathematical formulae.
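To see in modern notation how Kepler's third law follows from an inverse-square force, one can check the special case of a circular orbit, a far simpler calculation than the general elliptical treatment of the Principia. Equating the gravitational pull on a planet of mass $m$ to the centripetal force required for a circle of radius $r$:

$$\frac{GMm}{r^{2}} = \frac{mv^{2}}{r}, \qquad v = \frac{2\pi r}{T} \quad\Longrightarrow\quad T^{2} = \frac{4\pi^{2}}{GM}\,r^{3},$$

so the square of the orbital period is proportional to the cube of the orbital radius, which is Kepler's third law.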
Besides consolidating the heliocentric model, Newton also developed the theory of gravitation. In 1679, Newton began to consider gravitation and its effect on the orbits of planets with reference to Kepler's laws of planetary motion. This followed stimulation by a brief exchange of letters in 1679–80 with Robert Hooke, who had been appointed to manage the Royal Society's correspondence, and who opened a correspondence intended to elicit contributions from Newton to Royal Society transactions. [72] Newton's reawakening interest in astronomical matters received further stimulus by the appearance of a comet in the winter of 1680–1681, on which he corresponded with John Flamsteed. [73] After the exchanges with Hooke, Newton worked out proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector (see Newton's law of universal gravitation – History and De motu corporum in gyrum). Newton communicated his results to Edmond Halley and to the Royal Society in De motu corporum in gyrum, in 1684. [74] This tract contained the nucleus that Newton developed and expanded to form the Principia. [75]
The Principia was published on 5 July 1687 with encouragement and financial help from Edmond Halley. [76] In this work, Newton stated the three universal laws of motion, which contributed to many advances during the Industrial Revolution that soon followed and were not to be improved upon for more than 200 years. Many of these advancements continue to be the underpinnings of non-relativistic technologies in the modern world. He used the Latin word gravitas (weight) for the effect that would become known as gravity, and defined the law of universal gravitation.
Newton's postulate of an invisible force able to act over vast distances led to him being criticised for introducing "occult agencies" into science. [77] Later, in the second edition of the Principia (1713), Newton firmly rejected such criticisms in a concluding General Scholium, writing that it was enough that the phenomena implied a gravitational attraction, as they did; but they did not so far indicate its cause, and it was both unnecessary and improper to frame hypotheses of things that were not implied by the phenomena. (Here Newton used what became his famous expression "hypotheses non fingo". [78])
### Biology and medicine
The writings of Greek physician Galen had dominated European medical thinking for over a millennium. The Flemish scholar Vesalius demonstrated mistakes in Galen's ideas. Vesalius dissected human corpses, whereas Galen dissected animal corpses. Published in 1543, Vesalius' De humani corporis fabrica [79] was a groundbreaking work of human anatomy. It emphasized the priority of dissection and what has come to be called the "anatomical" view of the body, seeing human internal functioning as an essentially corporeal structure filled with organs arranged in three-dimensional space. This was in stark contrast to many of the anatomical models used previously, which had strong Galenic/Aristotelean elements, as well as elements of astrology.
Besides the first good description of the sphenoid bone, he showed that the sternum consists of three portions and the sacrum of five or six, and described accurately the vestibule in the interior of the temporal bone. He not only verified the observation of Etienne on the valves of the hepatic veins, but he described the vena azygos, and discovered the canal which passes in the fetus between the umbilical vein and the vena cava, since named ductus venosus. He described the omentum and its connections with the stomach, the spleen and the colon; gave the first correct views of the structure of the pylorus; observed the small size of the caecal appendix in man; gave the first good account of the mediastinum and pleura; and gave the fullest description of the anatomy of the brain yet advanced. He did not understand the inferior recesses, and his account of the nerves is confused by regarding the optic as the first pair, the third as the fifth and the fifth as the seventh.
Before Vesalius, the anatomical notes by Alessandro Achillini provide a detailed description of the human body and compare what he found during his dissections to what others like Galen and Avicenna had found, noting their similarities and differences. [80] Niccolò Massa was an Italian anatomist who wrote an early anatomy text, Anatomiae Libri Introductorius, in 1536, described the cerebrospinal fluid, and was the author of several medical works. [81] Jean Fernel was a French physician who introduced the term "physiology" to describe the study of the body's function and was the first person to describe the spinal canal.
Further groundbreaking work was carried out by William Harvey, who published De Motu Cordis in 1628. Harvey made a detailed analysis of the overall structure of the heart, going on to an analysis of the arteries, showing how their pulsation depends upon the contraction of the left ventricle, while the contraction of the right ventricle propels its charge of blood into the pulmonary artery. He noticed that the two ventricles move together almost simultaneously and not independently, as his predecessors had thought. [82]
In the eighth chapter, Harvey estimated the capacity of the heart, how much blood is expelled with each beat, and the number of times the heart beats in half an hour. From these estimates, he demonstrated that, according to Galen's theory that blood was continually produced in the liver, the absurdly large figure of 540 pounds of blood would have to be produced every day. Having this simple mathematical proportion at hand, which would imply a seemingly impossible role for the liver, Harvey went on to demonstrate how the blood circulated in a circle by means of countless experiments, initially done on serpents and fish: tying their veins and arteries in separate periods of time, Harvey noticed the modifications which occurred; indeed, as he tied the veins, the heart would become empty, while as he did the same to the arteries, the organ would swell up.
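The force of Harvey's argument is simple proportional arithmetic. As a rough sketch with illustrative round figures (Harvey's own estimates differ in detail), the daily output implied by Galen's view is

$$\text{blood per day} = (\text{blood expelled per beat}) \times (\text{beats per half hour}) \times 48.$$

Even a deliberately low half ounce per beat at 1,000 beats per half hour gives $0.5 \times 1000 \times 48 = 24{,}000$ ounces, roughly 1,500 pounds per day; Harvey's still more conservative figures yield the 540 pounds cited above. Either way the total vastly exceeds anything the liver could plausibly manufacture, so the blood must circulate rather than be continually produced.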
This process was later demonstrated on the human body: the physician tied a tight ligature onto the upper arm of a person. This would cut off blood flow from the arteries and the veins. When this was done, the arm below the ligature was cool and pale, while above the ligature it was warm and swollen. The ligature was loosened slightly, which allowed blood from the arteries to come into the arm, since arteries are deeper in the flesh than the veins. When this was done, the opposite effect was seen in the lower arm: it was now warm and swollen. The veins were also more visible, since now they were full of blood.
Various other advances in medical understanding and practice were made. French physician Pierre Fauchard started dentistry science as we know it today, and he has been named "the father of modern dentistry". Surgeon Ambroise Paré (c. 1510–1590) was a leader in surgical techniques and battlefield medicine, especially the treatment of wounds, [83] and Herman Boerhaave (1668–1738) is sometimes referred to as a "father of physiology" due to his exemplary teaching in Leiden and his textbook Institutiones medicae (1708).
### Chemistry
Chemistry, and its antecedent alchemy, became an increasingly important aspect of scientific thought in the course of the 16th and 17th centuries. The importance of chemistry is indicated by the range of important scholars who actively engaged in chemical research. Among them were the astronomer Tycho Brahe, [84] the chemical physician Paracelsus, Robert Boyle, Thomas Browne and Isaac Newton. Unlike the mechanical philosophy, the chemical philosophy stressed the active powers of matter, which alchemists frequently expressed in terms of vital or active principles—of spirits operating in nature. [85]
Practical attempts to improve the refining of ores and their extraction to smelt metals were an important source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his great work De re metallica in 1556. [86] His work describes the highly developed and complex processes of mining metal ores, metal extraction and metallurgy of the time. His approach removed the mysticism associated with the subject, creating the practical base upon which others could build. [87]
English chemist Robert Boyle (1627–1691) is considered to have refined the modern scientific method for alchemy and to have separated chemistry further from alchemy. [88] Although his research clearly has its roots in the alchemical tradition, Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of modern experimental scientific method. Although Boyle was not the original discoverer, he is best known for Boyle's law, which he presented in 1662: [89] the law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system. [90]
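In modern symbols, Boyle's law states that for a fixed quantity of gas at constant temperature the product of absolute pressure and volume is unchanged:

$$P_{1}V_{1} = P_{2}V_{2} \quad (T \text{ constant}),$$

so, for example, compressing a sealed sample from 2 litres at 1 atm down to 1 litre raises its pressure to 2 atm.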
Boyle is also credited for his landmark publication The Sceptical Chymist in 1661, which is seen as a cornerstone book in the field of chemistry. In the work, Boyle presents his hypothesis that every phenomenon was the result of collisions of particles in motion. Boyle appealed to chemists to experiment and asserted that experiments denied the limiting of chemical elements to only the classic four: earth, fire, air, and water. He also pleaded that chemistry should cease to be subservient to medicine or to alchemy, and rise to the status of a science. Importantly, he advocated a rigorous approach to scientific experiment: he believed all theories must be tested experimentally before being regarded as true. The work contains some of the earliest modern ideas of atoms, molecules, and chemical reaction, and marks the beginning of the history of modern chemistry.
### Physics
Important work was done in the field of optics. Johannes Kepler published Astronomiae Pars Optica (The Optical Part of Astronomy) in 1604. In it, he described the inverse-square law governing the intensity of light, reflection by flat and curved mirrors, and principles of pinhole cameras, as well as the astronomical implications of optics such as parallax and the apparent sizes of heavenly bodies. Astronomiae Pars Optica is generally recognized as the foundation of modern optics (though the law of refraction is conspicuously absent). [91]
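In modern notation (Kepler reasoned geometrically rather than algebraically), the inverse-square law expresses the fact that a fixed emitted power $P$ spreads over spheres whose area grows as the square of the distance $d$:

$$I(d) = \frac{P}{4\pi d^{2}},$$

so doubling the distance from a point source reduces the received intensity to a quarter.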
Willebrord Snellius (1580–1626) found the mathematical law of refraction, now known as Snell's law, in 1621. Subsequently René Descartes (1596–1650) showed, by using geometric construction and the law of refraction (also known as Descartes' law), that the angular radius of a rainbow is 42° (i.e. the angle subtended at the eye by the edge of the rainbow and the rainbow's centre is 42°). [92] He also independently discovered the law of reflection, and his essay on optics was the first published mention of this law.
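Descartes' 42° result can be reproduced numerically from Snell's law alone. The sketch below is not Descartes' original geometric construction: it follows a ray that refracts into a spherical water droplet, reflects once internally, and refracts back out, then finds the angle of minimum deviation where the emerging rays bunch up (the refractive index of water, n = 1.333, is the only assumed input).

```python
import numpy as np

n = 1.333                                 # refractive index of water (assumed)
i = np.linspace(0.0, np.pi / 2, 100_000)  # angles of incidence on the droplet
r = np.arcsin(np.sin(i) / n)              # Snell's law: sin(i) = n * sin(r)

# Total deviation for one internal reflection: two refractions, each bending
# the ray by (i - r), plus a reflection turning it through (pi - 2r).
D = 2 * (i - r) + (np.pi - 2 * r)

# The rainbow sits at minimum deviation; its angular radius measured from
# the antisolar point is pi minus that minimum deviation.
print(np.degrees(np.pi - D.min()))        # ~42.0 degrees
```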
Christiaan Huygens (1629–1695) wrote several works in the area of optics. These included the Opera reliqua (also known as Christiani Hugenii Zuilichemii, dum viveret Zelhemii toparchae, opuscula posthuma) and the Traité de la lumière.
Isaac Newton investigated the refraction of light, demonstrating that a prism could decompose white light into a spectrum of colours, and that a lens and a second prism could recompose the multicoloured spectrum into white light. He also showed that coloured light does not change its properties, by separating out a coloured beam and shining it on various objects. Newton noted that regardless of whether it was reflected or scattered or transmitted, it stayed the same colour. Thus, he observed that colour is the result of objects interacting with already-coloured light rather than objects generating the colour themselves. This is known as Newton's theory of colour. From this work he concluded that any refracting telescope would suffer from the dispersion of light into colours. The interest of the Royal Society encouraged him to publish his notes On Colour (later expanded into Opticks). Newton argued that light is composed of particles or corpuscles, which were refracted by accelerating toward the denser medium, but he had to associate them with waves to explain the diffraction of light.
In his Hypothesis of Light of 1675, Newton posited the existence of the ether to transmit forces between particles. In 1704, Newton published Opticks, in which he expounded his corpuscular theory of light. He considered light to be made up of extremely subtle corpuscles, that ordinary matter was made of grosser corpuscles, and speculated that through a kind of alchemical transmutation "Are not gross Bodies and Light convertible into one another, ... and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition?" [93]
Dr. William Gilbert, in De Magnete, invented the New Latin word electricus from ἤλεκτρον (elektron), the Greek word for "amber". Gilbert undertook a number of careful electrical experiments, in the course of which he discovered that many substances other than amber, such as sulphur, wax, glass, etc., [94] were capable of manifesting electrical properties. Gilbert also discovered that a heated body lost its electricity and that moisture prevented the electrification of all bodies, due to the now well-known fact that moisture impaired the insulation of such bodies. He also noticed that electrified substances attracted all other substances indiscriminately, whereas a magnet only attracted iron. The many discoveries of this nature earned for Gilbert the title of founder of the electrical science. [95] By investigating the forces on a light metallic needle, balanced on a point, he extended the list of electric bodies, and found also that many substances, including metals and natural magnets, showed no attractive forces when rubbed. He noticed that dry weather with north or east wind was the most favourable atmospheric condition for exhibiting electric phenomena—an observation liable to misconception until the difference between conductor and insulator was understood. [96]
Robert Boyle also worked frequently at the new science of electricity, and added several substances to Gilbert's list of electrics. He left a detailed account of his researches under the title of Experiments on the Origin of Electricity. [96] Boyle, in 1675, stated that electric attraction and repulsion can act across a vacuum. One of his important discoveries was that electrified bodies in a vacuum would attract light substances, indicating that the electrical effect did not depend upon the air as a medium. He also added resin to the then-known list of electrics. [94] [95] [97] [98] [99]
This was followed in 1660 by Otto von Guericke, who invented an early electrostatic generator. By the end of the 17th century, researchers had developed practical means of generating electricity by friction with an electrostatic generator, but the development of electrostatic machines did not begin in earnest until the 18th century, when they became fundamental instruments in the studies about the new science of electricity. The first usage of the word electricity is ascribed to Sir Thomas Browne in his 1646 work, Pseudodoxia Epidemica. In 1729 Stephen Gray (1666–1736) demonstrated that electricity could be "transmitted" through metal filaments. [100]
As an aid to scientific investigation, various tools, measuring aids and calculating devices were developed in this period.
### Calculating devices
John Napier introduced logarithms as a powerful mathematical tool. With the help of the prominent mathematician Henry Briggs, their logarithmic tables embodied a computational advance that made calculations by hand much quicker. [101] His Napier's bones used a set of numbered rods as a multiplication tool based on the system of lattice multiplication. The way was opened to later scientific advances, particularly in astronomy and dynamics.
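The computational advance is that logarithms convert multiplication into addition: two table look-ups, one sum, and one reverse look-up. A minimal sketch, using base-10 logarithms in the spirit of Briggs' tables (the numbers are arbitrary):

```python
import math

a, b = 3.57, 48.2
log_sum = math.log10(a) + math.log10(b)   # two "table look-ups" plus one addition
product = 10 ** log_sum                   # one antilog look-up recovers the product
print(product, a * b)                     # both ~172.07
```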
At Oxford University, Edmund Gunter built the first analog device to aid computation. The 'Gunter's scale' was a large plane scale, engraved with various scales, or lines. Natural lines, such as the line of chords and the lines of sines and tangents, were placed on one side of the scale, and the corresponding artificial or logarithmic ones were on the other side. This calculating aid was a predecessor of the slide rule. It was William Oughtred (1575–1660) who first used two such scales sliding by one another to perform direct multiplication and division, and he is thus credited as the inventor of the slide rule in 1622.
Blaise Pascal (1623–1662) invented the mechanical calculator in 1642. [102] The introduction of his Pascaline in 1645 launched the development of mechanical calculators, first in Europe and then all over the world. [103] [104] Gottfried Leibniz (1646–1716), building on Pascal's work, became one of the most prolific inventors in the field of mechanical calculators; he was the first to describe a pinwheel calculator, in 1685, [105] and invented the Leibniz wheel, used in the arithmometer, the first mass-produced mechanical calculator. He also refined the binary number system, the foundation of virtually all modern computer architectures. [106]
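The binary system writes every whole number with the digits 0 and 1 alone, each position weighted by a power of two; for example:

$$13_{10} = 1\cdot 2^{3} + 1\cdot 2^{2} + 0\cdot 2^{1} + 1\cdot 2^{0} = 1101_{2}.$$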
John Hadley (1682–1744) was the inventor of the octant, the precursor to the sextant (invented by John Bird), which greatly improved the science of navigation.
### Industrial machines
Denis Papin (1647–c. 1712) was best known for his pioneering invention of the steam digester, the forerunner of the steam engine. [107] [108] The first working steam engine was patented in 1698 by the English inventor Thomas Savery, as a ". new invention for raising of water and occasioning motion to all sorts of mill work by the impellent force of fire, which will be of great use and advantage for drayning mines, serveing townes with water, and for the working of all sorts of mills where they have not the benefitt of water nor constant windes." [sic] [109] The invention was demonstrated to the Royal Society on 14 June 1699 and the machine was described by Savery in his book The Miner's Friend or, An Engine to Raise Water by Fire (1702), [110] in which he claimed that it could pump water out of mines. Thomas Newcomen (1664–1729) perfected the practical steam engine for pumping water, the Newcomen steam engine. Consequently, Thomas Newcomen can be regarded as a forefather of the Industrial Revolution. [111]
Abraham Darby I (1678–1717) was the first, and most famous, of three generations of the Darby family who played an important role in the Industrial Revolution. He developed a method of producing high-grade iron in a blast furnace fueled by coke rather than charcoal. This was a major step forward in the production of iron as a raw material for the Industrial Revolution.
### Telescopes
Refracting telescopes first appeared in the Netherlands in 1608, apparently the product of spectacle makers experimenting with lenses. The inventor is unknown but Hans Lippershey applied for the first patent, followed by Jacob Metius of Alkmaar. [112] Galileo was one of the first scientists to use this new tool for his astronomical observations in 1609. [113]
The reflecting telescope was described by James Gregory in his book Optica Promota (1663). He argued that a mirror shaped like part of a conic section would correct the spherical aberration that flawed the accuracy of refracting telescopes. His design, the "Gregorian telescope", however, remained unbuilt.
In 1666, Isaac Newton argued that the faults of the refracting telescope were fundamental because the lens refracted light of different colors differently. He concluded that light could not be refracted through a lens without causing chromatic aberrations. [114] From these experiments Newton concluded that no improvement could be made in the refracting telescope. [115] However, he was able to demonstrate that the angle of reflection remained the same for all colors, so he decided to build a reflecting telescope. [116] It was completed in 1668 and is the earliest known functional reflecting telescope. [117]
50 years later, John Hadley developed ways to make precision aspheric and parabolic objective mirrors for reflecting telescopes, building the first parabolic Newtonian telescope and a Gregorian telescope with accurately shaped mirrors. [118] [119] These were successfully demonstrated to the Royal Society. [120]
### Other devices
The invention of the vacuum pump paved the way for the experiments of Robert Boyle and Robert Hooke into the nature of vacuum and atmospheric pressure. The first such device was made by Otto von Guericke in 1654. It consisted of a piston and an air gun cylinder with flaps that could suck the air from any vessel that it was connected to. In 1657, he pumped the air out of two conjoined hemispheres and demonstrated that a team of sixteen horses were incapable of pulling it apart. [121] The air pump construction was greatly improved by Robert Hooke in 1658. [122]
Evangelista Torricelli (1607–1647) was best known for his invention of the mercury barometer. The motivation for the invention was to improve on the suction pumps that were used to raise water out of the mines. Torricelli constructed a sealed tube filled with mercury, set vertically into a basin of the same substance. The column of mercury fell downwards, leaving a Torricellian vacuum above. [123]
### Materials, construction, and aesthetics
Surviving instruments from this period [124] [125] [126] [127] tend to be made of durable metals such as brass, gold, or steel, although examples such as telescopes [128] made of wood, pasteboard, or with leather components exist. [129] Those instruments that exist in collections today tend to be robust examples, made by skilled craftspeople for and at the expense of wealthy patrons. [130] These may have been commissioned as displays of wealth. In addition, the instruments preserved in collections may not have received heavy use in scientific work; instruments that had visibly received heavy use were typically destroyed, deemed unfit for display, or excluded from collections altogether. [131] It is also postulated that the scientific instruments preserved in many collections were chosen because they were more appealing to collectors, by virtue of being more ornate, more portable, or made with higher-grade materials. [132]
Intact air pumps are particularly rare. [133] One surviving pump included a glass sphere to permit demonstrations inside the vacuum chamber, a common use. The base was wooden, and the cylindrical pump was brass. [134] Other vacuum chambers that survived were made of brass hemispheres. [135]
Instrument makers of the late seventeenth and early eighteenth century were commissioned by organizations seeking help with navigation, surveying, warfare, and astronomical observation. [133] The increase in uses for such instruments, and their widespread use in global exploration and conflict, created a need for new methods of manufacture and repair, which would be met by the Industrial Revolution. [131]
People and key ideas that emerged from the 16th and 17th centuries:
• First printed edition of Euclid's Elements in 1482.
• Nicolaus Copernicus (1473–1543) published On the Revolutions of the Heavenly Spheres in 1543, which advanced the heliocentric theory of cosmology.
• Andreas Vesalius (1514–1564) published De Humani Corporis Fabrica (On the Structure of the Human Body) (1543), which discredited Galen's views. He found that the circulation of blood resolved from the pumping of the heart. He also assembled the first human skeleton from cutting open cadavers.
• The French mathematician François Viète (1540–1603) published In Artem Analyticem Isagoge (1591), which gave the first symbolic notation of parameters in literal algebra.
• William Gilbert (1544–1603) published On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth in 1600, which laid the foundations of a theory of magnetism and electricity.
• Tycho Brahe (1546–1601) made extensive and more accurate naked eye observations of the planets in the late 16th century. These became the basic data for Kepler's studies.
• Sir Francis Bacon (1561–1626) published Novum Organum in 1620, which outlined a new system of logic based on eliminative induction, which he offered as an improvement over Aristotle's philosophical process of syllogism. This contributed to the development of what became known as the scientific method.
• Galileo Galilei (1564–1642) improved the telescope, with which he made several important astronomical observations, including the four largest moons of Jupiter (1610), the phases of Venus (1610 – proving Copernicus correct), the rings of Saturn (1610), and made detailed observations of sunspots. He developed the laws for falling bodies based on pioneering quantitative experiments which he analyzed mathematically.
• Johannes Kepler (1571–1630) published the first two of his three laws of planetary motion in 1609.
• William Harvey (1578–1657) demonstrated that blood circulates, using dissections and other experimental techniques.
• René Descartes (1596–1650) published his Discourse on the Method in 1637, which helped to establish the scientific method.
• Antonie van Leeuwenhoek (1632–1723) constructed powerful single-lens microscopes and made extensive observations that he published around 1660, opening up the micro-world of biology.
• Christiaan Huygens (1629–1695) published major studies of mechanics (he was the first one to correctly formulate laws concerning centrifugal force and discovered the theory of the pendulum) and optics (being one of the most influential proponents of the wave theory of light).
• Isaac Newton (1643–1727) built upon the work of Kepler, Galileo and Huygens. He showed that an inverse square law for gravity explained the elliptical orbits of the planets, and advanced the law of universal gravitation. His development of infinitesimal calculus (along with Leibniz) opened up new applications of the methods of mathematics to science. Newton taught that scientific theory should be coupled with rigorous experimentation, which became the keystone of modern science.
The idea that modern science took place as a kind of revolution has been debated among historians. One weakness of the idea of a scientific revolution is the lack of a systematic approach to the question of knowledge in the period between the 14th and 17th centuries, leading to misunderstandings about the value and role of modern authors. From this standpoint, the continuity thesis is the hypothesis that there was no radical discontinuity between the intellectual development of the Middle Ages and the developments in the Renaissance and early modern period; it has been deeply and widely documented by the works of scholars like Pierre Duhem, John Hermann Randall, Alistair Crombie and William A. Wallace, who demonstrated the preexistence of a wide range of ideas used by the followers of the Scientific Revolution thesis to substantiate their claims. Thus, the idea of a scientific revolution following the Renaissance is, according to the continuity thesis, a myth. Some continuity theorists point to earlier intellectual revolutions occurring in the Middle Ages, usually referring to either a European Renaissance of the 12th century [136] [137] or a medieval Muslim scientific revolution, [138] [139] [140] as a sign of continuity. [141]
Another contrary view has been recently proposed by Arun Bala in his dialogical history of the birth of modern science. Bala proposes that the changes involved in the Scientific Revolution—the mathematical realist turn, the mechanical philosophy, the atomism, the central role assigned to the Sun in Copernican heliocentrism—have to be seen as rooted in multicultural influences on Europe. He sees specific influences in Alhazen's physical optical theory, Chinese mechanical technologies leading to the perception of the world as a machine, the Hindu-Arabic numeral system, which carried implicitly a new mode of mathematical atomic thinking, and the heliocentrism rooted in ancient Egyptian religious ideas associated with Hermeticism. [142]
Bala argues that by ignoring such multicultural impacts we have been led to a Eurocentric conception of the Scientific Revolution. [143] However, he clearly states: "The makers of the revolution—Copernicus, Kepler, Galileo, Descartes, Newton, and many others—had to selectively appropriate relevant ideas, transform them, and create new auxiliary concepts in order to complete their task. In the ultimate analysis, even if the revolution was rooted upon a multicultural base it is the accomplishment of Europeans in Europe." [144] Critics note that lacking documentary evidence of transmission of specific scientific ideas, Bala's model will remain "a working hypothesis, not a conclusion". [145]
A third approach takes the term "Renaissance" literally as a "rebirth". A closer study of Greek philosophy and Greek mathematics demonstrates that nearly all of the so-called revolutionary results of the so-called scientific revolution were in actuality restatements of ideas that were in many cases older than those of Aristotle and in nearly all cases at least as old as Archimedes. Aristotle even explicitly argues against some of the ideas that were espoused during the Scientific Revolution, such as heliocentrism. The basic ideas of the scientific method were well known to Archimedes and his contemporaries, as demonstrated in the well-known discovery of buoyancy. Atomism was first thought of by Leucippus and Democritus. Lucio Russo claims that science as a unique approach to objective knowledge was born in the Hellenistic period (c. 300 BC), but was extinguished with the advent of the Roman Empire. [146] This approach to the Scientific Revolution reduces it to a period of relearning classical ideas that is very much an extension of the Renaissance. This view does not deny that a change occurred but argues that it was a reassertion of previous knowledge (a renaissance) and not the creation of new knowledge. It cites statements from Newton, Copernicus and others in favour of the Pythagorean worldview as evidence. [147] [148]
In more recent analysis of the Scientific Revolution, there has been criticism not only of the Eurocentric ideologies it spread, but also of the dominance of male scientists of the time. [149] Female scholars were not always given the opportunities that a male scholar would have had, and the incorporation of women's work in the sciences during this time tends to be obscured. Scholars have tried to look into the participation of women in the sciences of the 17th century, and even in fields as ordinary as domestic knowledge, women were making advances. [150] With the limited history provided by texts of the period, we are not completely aware whether women were helping these scientists develop the ideas they did. Another idea to consider is the way this period influenced even the women scientists of the periods that followed. Annie Jump Cannon was an astronomer who benefitted from the laws and theories developed in this period; she made several advances in the century following the Scientific Revolution. It was an important period for the future of science, including the incorporation of women into fields using the developments made. [151]
## Astrolunch Seminar: Benjamin L'Huillier (Yonsei University)
Constraining the Concordance Model of Cosmology with the Large-Scale Structures
Despite great predictive power and its successes in the last decades, the concordance LCDM cosmological model suffers from both observational (H0 tension, ...) and theoretical issues (nature of dark energy, dark matter, inflation, ...). Therefore, it is important to further test the model and its underlying hypotheses. In this talk, I will discuss how the study of the large-scale structures can help shed light on some fundamental questions such as the nature of dark energy, gravity, or the early Universe, in the context of a new generation of surveys such as Euclid, DESI, or LSST. I will focus on two different aspects: (i) modeling the nonlinear regime of structure formation through N-body simulations, in particular beyond LCDM, and (ii) applying advanced statistics, in particular model-independent methods, to state-of-the-art cosmological data to test different aspects of the concordance model such as the metric, gravity, or the nature of dark energy.
## How safe is the LCDM model
I noticed this in the arXiv listings; I thought the LCDM model was irrefutable, but it seems some are trying to better it.
arXiv:1602.02103 [pdf, ps, other]
First evidence of running cosmic vacuum: challenging the concordance model
Joan Sola, Adria Gomez-Valent, Javier de Cruz Perez
Comments: LaTeX, 6 pages, 2 tables and 3 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
Despite the fact that a rigid $\Lambda$-term is a fundamental building block of the concordance $\Lambda$CDM model, we show that a large class of cosmological scenarios with dynamical vacuum energy density $\rho_\Lambda$ and/or gravitational coupling $G$, together with a possible non-conservation of matter, are capable of seriously challenging the traditional phenomenological success of the $\Lambda$CDM. In this Letter, we discuss these "running vacuum models" (RVM's), in which $\rho_\Lambda = \rho_\Lambda(H)$ consists of a nonvanishing constant term and a series of powers of the Hubble rate. Such generic structure is potentially linked to the quantum field theoretical description of the expanding Universe. By performing an overall fit to the cosmological observables $SNIa+BAO+H(z)+LSS+BBN+CMB$ (in which the WMAP9, Planck 2013 and Planck 2015 data are taken into account), we find that the RVM's appear definitely more favored than the $\Lambda$CDM, namely at an unprecedented level of $\sim 4\sigma$, implying that the $\Lambda$CDM is excluded at $\sim 99.99\%$ c.l. Furthermore, the Akaike and Bayesian information criteria confirm that the dynamical RVM's are strongly preferred as compared to the conventional rigid $\Lambda$-picture of the cosmic evolution.
## TSOR: The Spectrum Of Relativity
TSOR, The Spectrum Of Riemannium, is by now also The Spectrum Of Relativity (indeed, I was considering that name as well for my blog at the beginning). This is an adventure in how we can go beyond the quantum and relativity theories we all know, towards the most general and generalized theory and the field-theory concept itself: an incredible journey through different types of knowledge and ideas that were written down before but never completely realized. If the quantum theory and the relativistic theory of fields we know are not yet complete, they must be completed or extended (enlarged).
## Fundamental axioms in LCDM - Astronomy
We use standard general relativity to clarify common misconceptions about fundamental aspects of the expansion of the Universe. In the context of the new standard LCDM cosmology, we resolve conflicts in the literature regarding cosmic horizons and the Hubble sphere (the distance at which recession velocity = c) and we link these concepts to observational tests. We derive the dynamics of a non-comoving galaxy and generalize previous analyses to arbitrary FRW universes. We also derive the counter-intuitive result that objects at constant proper distance have a non-zero redshift. Receding galaxies can be blueshifted and approaching galaxies can be redshifted, even in an empty universe for which one might expect special relativity to apply. Using the empty universe model we demonstrate the relationship between special relativity and Friedmann-Robertson-Walker cosmology.
We test the generalized second law of thermodynamics (GSL) and its extension to incorporate cosmological event horizons. In spite of the fact that cosmological horizons do not generally have well-defined thermal properties, we find that the GSL is satisfied for a wide range of models. We explore in particular the relative entropic 'worth' of black hole versus cosmological horizon area. An intriguing set of models show an apparent entropy decrease but we anticipate this apparent violation of the GSL will disappear when solutions are available for black holes embedded in arbitrary backgrounds.
Recent evidence suggests a slow increase in the fine structure constant $\alpha = e^2/\hbar c$ over cosmological time scales. This raises the question of which fundamental quantities are truly constant and which might vary. We show that black hole thermodynamics may provide a means to discriminate between alternative theories invoking varying constants, because some variations in the fundamental 'constants' could lead to a violation of the generalized second law of thermodynamics.
## Through a Smoother Lens: An expected absence of LCDM substructure detections from hydrodynamic and dark matter only simulations
A fundamental prediction of the cold dark matter cosmology is the existence of a large number of dark subhalos around galaxies, most of which should be entirely devoid of stars. Confirming the existence of dark substructures stands among the most important empirical challenges in modern cosmology: if they are found and quantified with the mass spectrum expected, then this would close the door on a vast array of competing theories. But in order for observational programs of this kind to reach fruition, we need robust predictions. Here we explore substructure predictions for lensing using galaxy lens-like hosts at z=0.2 from the Illustris simulations, both in full hydrodynamics and dark matter only. We quantify substructures more massive than $10^9 M_\odot$, comparable to current lensing detections derived from HST, Keck, and ALMA. The addition of full hydrodynamics reduces the overall subhalo mass function by about a factor of two. Even for the dark matter only runs, most (~85 per cent) of projections through the halo of size close to an Einstein radius contain no substructures larger than $10^9 M_\odot$. The fraction of empty projections through the halo rises to ~95 per cent in full physics simulations. This suggests we will likely need hundreds of strong lensing systems suitable for substructure studies, as well as predictions that include the effects of baryon physics on substructure, to properly constrain cosmological models. Fortunately, the field is poised to fulfill these requirements.
Keywords: cosmology: theory – galaxies: dwarf – galaxies: high-redshift
## 5 replies on "Have scientists found evidence of a parallel universe?"
Intellect is hard to find and I find your claims plausible but incomplete like everything else in physics. I will continue reading your work. I’ve been reading Hawking’s research and have found myself disappointed. I strongly believe there is a Divine power that influences outcomes in experiments just as a scientist influences an experiment by the mere fact that he/she is involved.
The CMB data and their interpretation within the context of the standard model have become the cornerstone of modern cosmology. The lack of shadows you refer to is a serious challenge to this model, but there is other evidence which, on the face of it, is convincing. From my layman's understanding, the following are strong arguments in favour of a primordial, fire-ball beginning to the Universe:
1. Extreme uniformity of the CMB. In the words of one document I read: "The temperature is uniform to better than one part in a thousand! This uniformity is one compelling reason to interpret the radiation as remnant heat from the Big Bang; it would be very difficult to imagine a local source of radiation that was this uniform. In fact, many scientists have tried to devise alternative explanations for the source of this radiation, but none have succeeded."
2. The blackbody spectrum of the CMB. Quoting from the same source: “According to the Big Bang theory, the frequency spectrum of the CMB should have this blackbody form. This was indeed measured with tremendous accuracy by the FIRAS experiment on NASA’s COBE satellite. … There is no alternative theory yet proposed that predicts this energy spectrum. The accurate measurement of its shape [is] another important test of the Big Bang theory.”
3. The CMB power spectrum. The excellent fit of the observational data to the theory is impressive notwithstanding that the theory requires the assumptions of dark matter and inflation.
Creationist models need to not only explain the light-travel time problem and the apparent great age of the Universe, but also these kinds of phenomena.
Cosmology is not science and as such you cannot prove any one model to be true. You may be able to rule out models but where a ‘degeneracy of explanations’ exists you always will have a problem.
There is no doubt that the CMB radiation is approximated extremely well by a blackbody spectrum. I agree with you on that point, but it is, in and of itself, not definitive, because it could also be consistent with a different origin. I have suggested it is the redshifted radiation from the initial creation of Day 1 of Creation week. At that time God said: "Let there be light", but there were no stars created until Day 4. My model there involves an expanding universe, which I am now inclined to believe is not the case. Russ Humphreys has developed a new model wherein the universe is static and the redshift of galaxies is derived from the tension (not extension) of space. His model explains the CMB radiation via the Unruh effect.
It is disingenuous to say no other theory predicts this blackbody spectrum. No new model can ever do that because it is known already to have such a spectrum. As already intimated, the proposed mechanism, adiabatic expansion of the universe, could be wrong. The source may not even be cosmological, so to say it is a successful prediction of the big bang theory, may also be wrong.
You quite rightly point out that the power spectrum of the CMB anisotropies needs dark matter and a universe expanding under an acceleration driven by dark energy. Without these fudge factors any claim of successful prediction is moot.
Yes, creationists do need to explain what we observe in the universe, but we don't observe dark matter, dark energy, dark radiation, dark flows, dark photons, cosmic inflation and other made-up stuff.
1. I think that cosmology is science, it's just not empirical science. We cannot conduct repeatable experiments on the Universe (thankfully!), but God has provided us with information about the Universe and the ability to analyse that information and draw conclusions from it. That process is science because it is a process of gaining knowledge.
Also, your argument is a double-edged sword. If you are suggesting that we should dismiss naturalistic theories of the Creation because they are not empirical science, then we should also dismiss creationist theories for the same reason. In effect, this is to cast cosmology into a kind of intellectual dark age which, in my opinion, does not glorify God.
2. DM, DE, etc., are not “fudge factors”, they are reasonable and rational scientific hypotheses. Calling them fudge factors implies a degree of dishonesty and deliberate deception which I think is disingenuous.
3. You imply that DM, DE, etc., should be dismissed because we can’t observe them directly. But we do observe effects which support the hypotheses. We can’t directly observe the wind either, but we know it exists by the effects it produces.
I agree cosmology is not empirical science. It is not subject to what we call operational science criteria; it is really historical science, or more correctly philosophy. See OPERATIONAL AND HISTORICAL SCIENCE: WHAT ARE THEY? and COSMIC MYTHOLOGY: EXPOSING THE BIG BANG AS PHILOSOPHY NOT SCIENCE. Historical science is still a process of gaining knowledge, but it is very weak, and certain aspects must be accepted on faith as a given, a presupposition; they cannot be experimentally or observationally determined.
I am suggesting we build our biblical cosmogonies on the Bible’s account of creation. The facts of the Bible are taken as axioms, presuppositions, and we move forward from there. That is a very different approach to modern cosmology. It has a presuppositional starting assumption of there being no Creator. Besides the secular world will never accept a cosmology that is biblically based. The scientific naturalist has no need for a creator. He lifts up the creation, in the form of the laws of physics, as the source of everything in the Universe. Paul Davies wrote,
“So science has done away with the need for a button-pushing creator who lives for eternity before making a Universe at a certain moment in time.”
“Yet the laws [of physics] that permit a Universe to create itself are even more impressive than a cosmic magician. If there is a meaning or purpose beneath physical existence, then it is to those laws rather than to the big bang that we should direct our attention.”
As for the status of dark entities being called “fudge factors,” I disagree with you, but I do not claim scientists are deliberately being dishonest. They are being dishonest by excluding the Creator, but they are operating within their worldview, which is to exclude the Creator. Thus they have no alternative but to believe in stuff that cannot be detected by any form of electromagnetic radiation.
Your comparison with observing wind does not follow. For a proper analogy to apply it would have to be a substance that cannot be detected by any form of radiation. Air can be detected by various methods, which include optical spectroscopy. The putative dark entities cannot be detected by any means whatsoever except their alleged effect on gravity. If such a situation developed in a local lab experiment the underlying hypothesis would be rejected. Why isn’t LCDM cosmology rejected? Because that is the ‘best they have’. There is no God in their worldview thus they have no alternative.
|
2023-01-27 15:11:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5106422901153564, "perplexity": 1819.5281571709288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00086.warc.gz"}
|
https://codereview.stackexchange.com/questions/263809/codechef-the-chefora-spell
|
# Codechef: The Chefora Spell
This is a question I am currently trying to solve on CodeChef. I can get the given test cases to pass, but I get Time Limit Exceeded when I try to submit. Please let me know what I can do in my code below to make it more optimized.
Chef and his friend Bharat have decided to play the game "The Chefora Spell".
In the game, a positive integer $N$ (in the decimal system) is considered a "Chefora" if the number of digits $d$ is odd and it satisfies the equation
$$N = \sum_{i=0}^{d-1} N_i \cdot 10^i,$$
where $N_i$ is the $i$-th digit of $N$ from the left in 0-based indexing.
Let $A_i$ denote the $i$-th smallest Chefora number.
They'll ask each other $Q$ questions, where each question contains two integers $L$ and $R$. The opponent then has to answer with
$$\left(\prod_{i=L+1}^{R} (A_L)^{A_i}\right) \bmod (10^9+7).$$
Bharat has answered all the questions right, and now it is Chef's turn. But since Chef fears that he could get some questions wrong, you have come to his rescue!
## Input
The first line contains an integer $Q$ - the number of questions Bharat asks. Each of the next $Q$ lines contains two integers $L$ and $R$.
## Output
Print $Q$ integers - the answers to the questions on separate lines.
## Constraints
$1 \le Q \le 10^5$
$1 \le L$
$1 \le Q \le 5 \cdot 10^3$
$1 \le L$
Original constraints
## Sample Input
2
1 2
9 11
## Sample Output
1
541416750
## Code
import java.util.*;
import java.lang.*;
import java.io.*;
class Codechef
{
    static class FastReader
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        StringTokenizer st;

        String next(){
            while(st == null || !st.hasMoreElements()){
                try{
                    // read and tokenize the next input line (this assignment was lost in extraction)
                    st = new StringTokenizer(br.readLine());
                }catch(Exception e){
                    System.out.println(e);
                }
            }
            return st.nextToken();
        }

        public long nextLong(){
            return Long.parseLong(next());
        }
    }

    // reader instance referenced by main() (declaration restored)
    static FastReader fr = new FastReader();
public static void main (String[] args) throws java.lang.Exception
{
long Q = fr.nextLong();
while(Q-->0){
long num = 0;long sol = 0;
long L = fr.nextLong();
long R = fr.nextLong();
long temp = 0;
int numDigits = countDigit(L);
if((numDigits &1) != 0){
num = calChefora(L);
for(long i = L+1 ; i <= R ; i++){
temp = temp + calChefora(i);
}
sol = modPow(num , temp) ;
System.out.println(sol);
}
}
}
static int countDigit(long n)
{
return (int)Math.floor(Math.log10(n) + 1);
}
static long calChefora(long num){
String temp = Long.toString(num);
if(num%10 == num)return num;
num = num / 10;
long reversed = 0;
while(num!=0){
reversed = reversed * 10 + num % 10;
num /= 10;
}
temp = temp + reversed;
long sol = Long.parseLong(temp);
return sol;
}
static long modPow(long var, long num) {
long m = 1;long M = 1000000007;
while (num > 0) {
m = (m * var) % M;
--num;
}
return m;
}
}
## Modified Code
import java.util.*;
import java.io.*;

class CHEFORA
{
    static class FastReader
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        StringTokenizer st;

        String next(){
            while(st == null || !st.hasMoreElements()){
                try{
                    st = new StringTokenizer(br.readLine());
                }catch(Exception e){
                    System.out.println(e);
                }
            }
            return st.nextToken();
        }

        public long nextLong(){
            return Long.parseLong(next());
        }

        public int nextInt(){
            return Integer.parseInt(next());
        }
    }

    static FastReader fr = new FastReader();
public static void main (String[] args) throws java.lang.Exception
{
int Q = fr.nextInt();
ArrayList<Long> arrL = new ArrayList<>();
ArrayList<Long> arrR = new ArrayList<>();
ArrayList<Long> chefora = new ArrayList<>();
for(int i = 0 ; i < Q ; i++){
    long L = fr.nextLong();
    long R = fr.nextLong();
    arrL.add(L); // store each query (these adds were evidently lost in extraction)
    arrR.add(R);
}
// precompute every Chefora number in the range covered by the queries once
for(long i = Collections.min(arrL) ; i <= Collections.max(arrR) ; i++){
    chefora.add(calChefora(i));
}
for(int i = 0 ; i < Q ; i++){
long num = 0;long sol = 0;
int numDigits = countDigit(arrL.get(i));
if((numDigits &1) != 0) {
long temp = 0;
num = calChefora(arrL.get(i));
int indexL = chefora.indexOf(num);
indexL +=1;
long diff = arrR.get(i) - arrL.get(i);
while(diff-->0){
temp = temp + chefora.get(indexL);
indexL++;
}
sol = modPow(num, temp);
}
System.out.println(sol);
}
}
static int countDigit(long n)
{
return (int)Math.floor(Math.log10(n) + 1);
}
static long calChefora(long num){
if(num%10 == num)return num;
String input = String.valueOf(num);
StringBuilder input1 = new StringBuilder();
input1.append(input);
input1.reverse();
String tsol = input;
for(int i = 1 ; i < input1.length() ;i++ ){
tsol = tsol + input1.charAt(i);
}
long sol = Long.parseLong(tsol);
return sol;
}
static long modPow(long x, long y)
{
long M = 1000000007;
long res = 1;
x = x % M;
if (x == 0)
return 0;
while (y > 0)
{
if ((y & 1) != 0)
res = (res * x) % M;
y = y >> 1; // y = y/2
x = (x * x) % M;
}
return res;
}
}
• Check this: geeksforgeeks.org/… Jul 6 at 15:02
• I tried this, but now I am getting WA (Wrong Answer) when submitting, even though it shows the correct answer for the given test cases. I have added the modified code to the question. Jul 8 at 5:44
First of all, kudos for figuring out that $\prod_{i=L+1}^{R} (A_L)^{A_i} = A_L^{\sum_{i=L+1}^R A_i}$.
But - you've stopped too early. The next step is to realize that
$$\sum_{i=L+1}^R A_i = \sum_{i=0}^R A_i - \sum_{i=0}^L A_i$$
which hints that you need to deal with partial sums of $A_i$. This way you don't have to recompute the same Chefora numbers over and over again (which you do).
That said, calChefora seems suboptimal. A simple reversal of temp avoids all those modulos, divisions and multiplications.
As noted in the comments, exponentiation by squaring is much faster than a naive one. Also, exponentiating modulo a prime hints that Fermat's little theorem may help.
Finally, I failed to understand the (numDigits & 1) != 0 test. Why is the parity of the number of digits of L important?
• In the question it's stated that the number of digits should be odd; that's why (numDigits & 1) != 0. Also, I have tried changing my code: now I first take all the values of L and R and then calculate the Chefora numbers from the least L to the max R, so that I don't have to calculate the same Chefora number again and again, thus reducing my time complexity, but I am still getting TLE (Time Limit Exceeded) when submitting. Also, for calculating modPow I have used bit manipulation instead of the regular calculation, but still nothing. Jul 8 at 5:24
• I have added the same in the question under Modified Code. Jul 8 at 5:29
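Putting the answer's suggestions together (precompute the Chefora numbers once, keep prefix sums of them, and use fast modular exponentiation), here is a minimal sketch in Python; the 10^5 index bound and all names are illustrative assumptions, not the original submission:

```python
# A sketch of the reviewer's approach: prefix sums + fast mod-pow.
# Assumption: query indices go up to 10**5 (the constraints above were truncated).
MOD = 10**9 + 7
MAX_N = 10**5

def chefora(i: int) -> int:
    """The i-th Chefora number: mirror i's digits (minus the last) onto its end."""
    s = str(i)
    return int(s + s[-2::-1])  # e.g. 12 -> '12' + '1' -> 121

# prefix[i] = A_1 + ... + A_i, so each query reuses work instead of re-summing.
prefix = [0] * (MAX_N + 1)
for i in range(1, MAX_N + 1):
    prefix[i] = prefix[i - 1] + chefora(i)

def answer(L: int, R: int) -> int:
    exponent = prefix[R] - prefix[L]       # sum of A_i for i = L+1 .. R
    return pow(chefora(L), exponent, MOD)  # exponentiation by squaring

print(answer(1, 2))   # 1 (matches the sample)
print(answer(9, 11))  # 541416750 (matches the sample)
```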
|
2021-10-19 02:42:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4164871275424957, "perplexity": 5340.830768371299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585231.62/warc/CC-MAIN-20211019012407-20211019042407-00318.warc.gz"}
|
https://itectec.com/spec/d-2-normal-test-environment/
|
## D.2 Normal test environment
3GPP TS 36.141, Base Station (BS) conformance testing, Evolved Universal Terrestrial Radio Access (E-UTRA), Release 17
When a normal test environment is specified for a test, the test should be performed within the minimum and maximum limits of the conditions stated in Table D.1.
Table D.1: Limits of conditions for Normal Test Environment
| Condition | Minimum | Maximum |
| --- | --- | --- |
| Barometric pressure | 86 kPa | 106 kPa |
| Temperature | 15°C | 30°C |
| Relative humidity | 20 % | 85 % |
| Power supply | Nominal, as declared by the manufacturer | |
| Vibration | Negligible | |
The ranges of barometric pressure, temperature and humidity represent the maximum variation expected in the uncontrolled environment of a test laboratory. If it is not possible to maintain these parameters within the specified limits, the actual values shall be recorded in the test report.
NOTE: This may, for instance, be the case for measurements of radiated emissions performed on an open field test site.
|
2022-08-16 08:04:56
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8279377818107605, "perplexity": 2146.858683693287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00358.warc.gz"}
|
https://www.traficonbusinesscenter.cz/data1/1596710180-stationary-time-principle/2177/
|
### Stationary time principle Article about stationary time
Encyclopedia article about stationary time principle by The Free Dictionary
### 8.1 Stationarity and differencing Forecasting
A stationary time series is one whose properties do not depend on the time at which the series is observed. 14 Thus, time series with trends, or with seasonality, are not stationary — the trend and seasonality will affect the value of the time series at different times.
### stationary time principle
Stationary time principle Article about stationary . Disclaimer. All content on this website, including dictionary, thesaurus, literature, geography, and other reference data is for informational purposes only. Fermat's principle Wikipedia 2019-10-16 Fermat's principle, also known as the principle of least time, is the link between ray optics and wave optics. In its original "strong" form
### Principle Of Stationary Time stuletnidom.pl
Principle Of Stationary Time. As a leading global manufacturer of crushing equipment, milling equipment,dressing equipment,drying equipment and briquette equipment etc. we offer advanced, rational solutions for any size-reduction requirements, including quarry, aggregate, grinding production and complete plant plan. If you are interested in these product, please contact us. NOTE: You can
### Introduction to Stationary and Non-Stationary Processes
26.04.2020· Non-stationary data, as a rule, are unpredictable and cannot be modeled or forecasted. The results obtained by using non-stationary time series may be spurious in that they may indicate a
### The slowness principle: SFA can detect different slow
The slowness principle: SFA can detect different slow components in non-stationary time series Wolfgang Konen* and Patrick Koch Institute for Informatics, Cologne University of Applied Sciences, Steinmüllerallee 1, D-51643 Gummersbach, Germany E-mail: [email protected] E-mail: [email protected] *Corresponding author Abstract: Slow feature analysis (SFA) is a
### stationary time principle : définition de stationary time
However, this version of the principle is not general; a more modern statement of the principle is that rays of light traverse the path of stationary, not minimal, time. Fermat's principle can be used to describe the properties of light rays reflected off mirrors, refracted through different media, or undergoing total internal reflection.
### Detecting stationarity in time series data
As such, the ability to determine if a time series is stationary is important. Rather than deciding between two strict options, this usually means being able to ascertain, with high probability, that a series is generated by a stationary process. In this brief post, I will cover several ways to do just that. Visualizations. The most basic methods for stationarity detection rely on plotting the
### Chang,Guo,Yao : Principal component analysis for
Project Euclid mathematics and statistics online. Chang, J., Guo, B. and Yao, Q. (2018). Supplement to “Principal component analysis for second-order stationary vector time series.”
### Nonstationary Time Series, Cointegration, and the
Elliot Sober ([2001]) forcefully restates his well-known counterexample to Reichenbach's principle of the common cause: bread prices in Britain and sea levels in Venice both rise over time and are
### Non-Stationary Time Series andUnitRootTests
Econometrics 2 — Fall 2005 Non-Stationary Time Series andUnitRootTests Heino Bohn Nielsen 1of25 Introduction • Many economic time series are trending.
### Basic Principles to Create a Time Series Forecast by
Basic Principles to Create a Time Series Forecast. Explaining the basics steps to create time series forecasts. Leandro Rabelo. Follow. May 28, 2019 · 21 min read. Photo by Adrian Schwarz on Unsplash. We are surrounded by patterns that can be found everywhere, one can notice patterns with the four season in relation to the weather; patterns on peak hour when it refers to the volume of traffic
### Lecture 1: Stationary Time Series
The autocorrelation function of a stationary time series $\{X_t\}$ is defined to be $\rho_X(h) = \gamma_X(h)/\gamma_X(0)$. Example 1 (continued): In example 1, we see that $E(X_t) = 0$, $E(X_t^2) = 1.25$, and the autocovariance function does not depend on $s$ or $t$. Actually we have $\gamma_X(0) = 1.25$, $\gamma_X(1) = 0.5$, and $\gamma_X(h) = 0$ for $h > 1$. Therefore, $\{X_t\}$ is a stationary process. Example 2 (Random walk): Let $S_t$ be a random walk $S_t = \sum_{s=0}^{t} X_s$
### Nonstationary Time Series, Cointegration, and the
principle of the common cause: bread prices in Britain and sea levels in Venice both rise over time and are, therefore, correlated; yet they are ex hypothesi not causally connected, which violates the principle of the common cause. The counterexample employs nonstationary data—i.e., data with time-dependent population moments. Common measures of statistical association do not generally
### 6.4.4.2. Stationarity NIST
If the time series is not stationary, we can often transform it to stationarity with one of the following techniques. We can difference the data. That is, given the series $Z_t$, we create the new series $Y_i = Z_i - Z_{i-1}$. The differenced data will contain one less point than the original data. Although you can difference the data more than once, one difference is usually sufficient.
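As a quick illustration of that transformation, a one-line difference in pandas (the toy series below is an assumption for demonstration only):

```python
# First differencing of a linearly trending (hence non-stationary) toy series.
import pandas as pd

z = pd.Series([10, 12, 14, 16, 18])  # mean grows with time
y = z.diff().dropna()                # Y_i = Z_i - Z_{i-1}
print(y.tolist())                    # [2.0, 2.0, 2.0, 2.0] -- constant now
```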
### 9.1 Stationarity and differencing Forecasting
9.1 Stationarity and differencing. A stationary time series is one whose properties do not depend on the time at which the series is observed. 14 Thus, time series with trends, or with seasonality, are not stationary — the trend and seasonality will affect the value of the time series at different times. On the other hand, a white noise series is stationary — it does not matter when you
### Does a seasonal time series imply a stationary or a non
A seasonal pattern that remains stable over time does not make the series non-stationary. A non-stable seasonal pattern, for example a seasonal random walk, will make the data non-stationary. Edit (after new answer and comments) A stable seasonal pattern is not stationary in the sense that the mean of the series will vary across seasons and, hence, depends on time; but it is stationary in the
### An invariance principle for sums and record times of
An invariance principle for sums and record times of regularly varying stationary sequences. Bojan Basrak, Hrvoje Planinić, Philippe Soulier. December 5, 2017. Abstract: We prove a sequence of limiting results about weakly dependent stationary and regularly varying stochastic processes in discrete time. After deducing the limiting distribution for individual clusters of extremes, we present a
### Introduction to Time Series Analysis. Lecture 4.
For the autocovariance function $\gamma$ of a stationary time series $\{X_t\}$: 1. $\gamma(0) \ge 0$; 2. $|\gamma(h)| \le \gamma(0)$; 3. $\gamma(h) = \gamma(-h)$; 4. $\gamma$ is positive semidefinite. Furthermore, any function $\gamma: \mathbb{Z} \to \mathbb{R}$ that satisfies (3) and (4) is the autocovariance of some stationary (Gaussian) time series.
### The Calculus of Variations University of California, Davis
The principle of stationary action (also called Hamilton's principle or, somewhat incorrectly, the principle of least action) states that, for fixed initial and final positions $\vec{x}(a)$ and $\vec{x}(b)$, the trajectory of the particle $\vec{x}(t)$ is a stationary point of the action. To explain what this means in
### Time machines and the Principle of Self-Consistency as a
We consider the action principle to derive the classical, relativistic motion of a self-interacting particle in a 4-D Lorentzian spacetime containing a wormhole and which allows t
### lagrangian formalism When is the principle of stationary
I've only had a very brief introduction to Lagrangian mechanics. In a physics course I took last year, we briefly covered the principle of stationary action --- we looked at it, derived some equations of motion with it, and moved on. While the lecturer often referred to it as the principle of least action, he always reminded us that it wasn't actually least
### self study Proof of variance of stationary time series
Variance of Weakly Stationary Time Series.
### Using R for Time Series Analysis — Time Series 0.2
If you need to difference your original time series data d times in order to obtain a stationary time series, this means that you can use an ARIMA(p,d,q) model for your time series, where d is the order of differencing used. For example, for the time series of the diameter of women’s skirts, we had to difference the time series twice, and so the order of differencing (d) is 2. This means
### 26 Optics: The Principle of Least Time
But the principle of least time is a completely different philosophical principle about the way nature works. Instead of saying it is a causal thing, that when we do one thing, something else happens, and so on, it says this: we set up the situation, and light decides which is the shortest time, or the extreme one, and chooses that path.
### 3. Fermat’s Principle of Least Time University of Virginia
Fermat’s Principle of Least Time. Michael Fowler . Another Minimization Problem Here's another minimization problem from the 1600's, even earlier than the brachistochrone. Fermat famously stated in the 1630’s that a ray of light going from point A to point B always takes the route of least time -- OK, it's trivially trivially true in a single medium, light rays go in a straight line
### Principles of chromatography Stationary phase (article
Principles of chromatography. Basics of chromatography. Column chromatography. Thin layer chromatography (TLC). Calculating retention factors for TLC. Gas chromatography. Simple and fractional distillations.
### 19 The Principle of Least Action The Feynman Lectures
Every time the subject comes up, I work on it. In fact, when I began to prepare this lecture I found myself making more analyses on the thing. Instead of worrying about the lecture, I got involved in a new problem. The subject is this—the principle of least action.
### An Introductory Study on Time Series Modeling and arXiv
Time series forecasting thus can be termed as the act of predicting the future by understanding the past [31]. Due to the indispensable importance of time series forecasting in numerous practical fields such as business, economics, finance, science and engineering, etc. [7, 8, 10], proper care should be taken to fit an adequate model to the underlying time series. It is obvious that a
|
2021-06-13 09:13:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5990028977394104, "perplexity": 1747.4157778472527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487607143.30/warc/CC-MAIN-20210613071347-20210613101347-00071.warc.gz"}
|
https://gateoverflow.in/865/gate2002-12
|
+13 votes
Fill in the blanks in the following template of an algorithm to compute all pairs shortest path lengths in a directed graph $G$ with an $n \times n$ adjacency matrix $A$. $A[i,j]$ equals $1$ if there is an edge in $G$ from $i$ to $j$, and $0$ otherwise. Your aim in filling in the blanks is to ensure that the algorithm is correct.
INITIALIZATION: For i = 1 ... n
{For j = 1 ... n
{ if a[i,j] = 0 then P[i,j] =_______ else P[i,j] =_______;}
}
ALGORITHM: For i = 1 ... n
{For j = 1 ... n
{For k = 1 ... n
{P[__,__] = min{_______,______}; }
}
}
1. Copy the complete line containing the blanks in the Initialization step and fill in the blanks.
2. Copy the complete line containing the blanks in the Algorithm step and fill in the blanks.
3. Fill in the blank: The running time of the Algorithm is $O$(___).
0
Here we are not given any information about the weights of the edges, so how can we solve it? The only way I think it can be solved is if A[i,j] = Weight[i,j] whenever there is an edge in G from i to j.
Please help!
## 2 Answers
+12 votes
INITIALIZATION: For i = 1 ... n
{For j = 1 ... n
{ if a[i,j] = 0 then P[i,j] =infinite // i.e. if there is no direct path then put infinite
else P[i,j] =a[i,j];
}
}
ALGORITHM:
For i = 1 ... n
{For j = 1 ... n
{For k = 1 ... n
{
P[i, j] = min( p[i,j] , p[i,k] + p[k,j])
};
}
}
Time complexity: $O(n^3)$
This algorithm is for weighted graphs, but it will work for unweighted graphs too, because if $p[i,j]=1$, $p[i,k]=1$ and $p[k,j]=1$, then according to the algorithm $p[i,j] = \min(p[i,j], p[i,k] + p[k,j]) = \min(1,2) = 1$.
All the other cases are also satisfied (for example, when $p[i,j]$ was $0$ in the last iteration and there exists a path via $k$).
answered by Active (2.2k points)
edited by
+5
ALGORITHM:
For i = 1 ... n
{For j = 1 ... n
{For k = 1 ... n
{ P[ j , k ] = min( p[ j , k ] , p[ j , i ] + p[ i , k ] ) };
}
}
+1
@prashant, edit the answer
0
What should I edit?
Should the last line be changed to
P[ j , k ] = min( p[ j , k ] , p[ j , i ] + p[ i , k ] )
But why do we take the first index as j?
0
It has been edited; it is correct now.
0
The comment by Himanshu is the standard way of implementing all-pairs shortest paths: the first loop ranges over the intermediate vertex.
Although the given answer also seems correct.
0
It's wrong. This will give incorrect answers. @Himanshu1 is correct.
+3 votes
It's the Floyd–Warshall algorithm (a dynamic programming approach):
for i = 1 to N
for j = 1 to N
if there is an edge from i to j
dist[0][i][j] = the length of the edge from i to j
else dist[0][i][j] = INFINITY
for k = 1 to N
for i = 1 to N
for j = 1 to N
dist[k][i][j] = min(dist[k-1][i][j], dist[k-1][i][k] + dist[k-1][k][j])
Time Complexity: O(n^3)
answered by Active (4.6k points)
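For concreteness, here is a runnable sketch of the filled-in template in Python; the 0/1 adjacency matrix and the three-vertex example are illustrative assumptions:

```python
# Floyd-Warshall over the question's 0/1 adjacency matrix.
INF = float('inf')

def all_pairs_shortest_paths(A):
    n = len(A)
    # INITIALIZATION: no edge -> infinity, edge -> its length (1 here)
    P = [[A[i][j] if A[i][j] else INF for j in range(n)] for i in range(n)]
    # ALGORITHM: the OUTER loop must range over the intermediate vertex
    for k in range(n):
        for i in range(n):
            for j in range(n):
                P[i][j] = min(P[i][j], P[i][k] + P[k][j])
    return P  # O(n^3) time

A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(all_pairs_shortest_paths(A)[0][2])  # 2: path 0 -> 1 -> 2
```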
|
2018-04-19 19:36:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8120768070220947, "perplexity": 2386.2428849222783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937016.16/warc/CC-MAIN-20180419184909-20180419204909-00018.warc.gz"}
|
http://openstudy.com/updates/50410797e4b0d97e4dc59718
|
## lindseyharrison: $\frac{x^2 - 1}{9x}$ all over $\frac{x^2 + 2x + 1}{3x^2}$
1. lindseyharrison
[drawing]
2. moser90
start by factoring
3. lindseyharrison
I"m running out of time and stressing out!
4. ParthKohli
Then you should use WolframAlpha. http://wolframalpha.com
5. moser90
can't just give you the answer sorry you have to at least try
6. CliffSedge
I'd actually recommend using $\frac{\frac{a}{b}}{\frac{c}{d}} = \frac{ad}{bc}$ first, and then factoring.
7. moser90
I forgot that step thanks now I know an easier why
8. CliffSedge
But it doesn't really matter, commutative property and whatnot..
[drawing]
Start factoring and canceling.
If you will do the numerator, I will do the denominator.
12. CliffSedge
I got a numerator of higher degree than denominator, so I converted it to a mixed number. I don't think that's necessary though; usually with rational expressions, leaving everything in factored form as improper fractions is more convenient.
@lindseyharrison , the ball is in your court.
14. lindseyharrison
5x^4/4y^3
[drawing]
[drawing]
[drawing] What do you have left?
[drawing]
19. lindseyharrison
a) 4y^3/5x^4 b) 4y^4/5x^3 c) 5x^4/4y^3 d) 5x^3/4y^4
20. lindseyharrison
those are my options
You are looking at the wrong answers. There is not a y in your problem. Please look at the problem you have entered in the problem box!
22. lindseyharrison
well that explains a lot lol
23. lindseyharrison
a) x(x+1)/3(x-1) b) 3x(x+1)/(x-1) c) x(x-1)/x(x+1) d) (x-1)/3x(x+1)
24. lindseyharrison
So it's C!
I don't agree with C, but then again I don't agree with a lot of things. You will notice that C can be further simplified by dividing both the numerator and the denominator by x.
Are you sure that you copied D. correctly?
27. CliffSedge
Yeah, there's a typo in there.
I think you are right @CliffSedge
I really don't see a correct answer lol.
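For what it's worth, a quick symbolic check with sympy (assuming the expression in the question is $\frac{(x^2-1)/(9x)}{(x^2+2x+1)/(3x^2)}$) shows why none of the options as typed match exactly:

```python
# Symbolic simplification of the question's compound fraction.
from sympy import symbols, cancel, factor

x = symbols('x')
expr = ((x**2 - 1) / (9*x)) / ((x**2 + 2*x + 1) / (3*x**2))
print(factor(cancel(expr)))  # x*(x - 1)/(3*(x + 1))
```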
|
2014-10-24 08:05:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6408007144927979, "perplexity": 2682.6372705029144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645432.37/warc/CC-MAIN-20141024030045-00008-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/469219/normal-force-on-a-fluid-container/469231
|
# Normal force on a fluid container [closed]
I was solving some problems and came across the following situation.
Consider a container with one cylinder on top of the other. The cross-sectional area is $2A$ for the bottom cylinder and $A$ for the top one. The height of each cylinder is $H$. Something like,
Now calculating the normal force by the surface by two different methods two different results appear.
Method 1:
$$N = mg$$ $$N = \bigl(\rho(A H + 2A H)\bigr)g$$ $$N = 3AH\rho g$$
Method 2:
N = (area of the bottom) × (pressure at the bottom)
$$N = 2A \cdot \rho g(2H)$$ $$N = 4AH\rho g$$
which is the same as for a cylinder of height $2H$ and base area $2A$. Where did I go wrong?
• The second approach seems to be the incorrect one; @Aaron Stevens is right here ... – Aditya Garg Mar 28 at 19:05
• @AaronStevens ... That's definitely correct. Pressure at the bottom will be due to the total height of fluid above (2 H in this case, as each shape has a height H, and they are stacked). There's a different issue with the approach. That's how hydrostatics work. – JMac Mar 28 at 19:13
• @AaronStevens It's a fluid container with a shape of a rectangular prism with a cylinder on top. The question didn't do a great job explaining that; but that clearly fits with "Normal force on a fluid container" and all the calculations done by OP. – JMac Mar 28 at 19:16
• @JMac Wow I completely misread everything. I definitely thought this was just two objects stacked on top of each other and the OP was looking at the force pushing on the ground – Aaron Stevens Mar 28 at 19:18
• @JMac I still believe that the second method has an issue. If we made the radius of the upper section smaller and smaller the final answer wouldn't change, which is odd to me – Aaron Stevens Mar 28 at 19:24
They are looking for the normal force at the bottom. Your calculation for the pressure at the bottom is correct, and that face of the container would indeed feel a $4A\rho g H$ force, but only on the bottom face. What you've missed is that it is not the only force acting on this container.
This force will be equal to the area being pushed up, multiplied by the pressure. Since the area of the rectangular prism is $2A$ and that of the cylinder is $1A$, the upward force on the prism acts on an area of $1A$. At this location, the pressure will be $\rho g H$, because the height of the liquid column above is $H$. That leaves a force of $F = 1A \rho g H$ acting in the upward direction on that face.
You can then look at the force balance $$F_{\text{net weight}} = F_{\text{down}} - F_{\text{up}} = 4A \rho g H - 1A \rho g H = 3A \rho g H$$
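A quick numeric sanity check of this balance (the specific values of A, H, rho and g are illustrative assumptions):

```python
# Both methods must agree once the upward force on the shoulder is included.
A, H, rho, g = 1.0, 1.0, 1000.0, 9.81  # m^2, m, kg/m^3, m/s^2 (assumed values)

weight = rho * g * (2*A*H + A*H)  # Method 1: weight of both fluid sections
f_down = (2*A) * rho * g * (2*H)  # pressure at depth 2H acting on the bottom face
f_up   = (1*A) * rho * g * H      # pressure at depth H pushing up on the shoulder
print(weight, f_down - f_up)      # 29430.0 29430.0 -> identical
```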
|
2019-12-14 07:25:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6312425136566162, "perplexity": 340.40496955538777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540585566.60/warc/CC-MAIN-20191214070158-20191214094158-00222.warc.gz"}
|
http://rationalwiki.org/wiki/RationalWiki
|
# RationalWiki
A brain in square brackets!
RationalWiki (RW) is a community working together to explore and provide information about a range of topics centered around science, skepticism, and critical thinking. RW currently has 6,472 mainspace articles. RW is owned by the RationalMedia Foundation (RMF), an incorporated 501(c)(3) nonprofit. The RMF operates the infrastructure that keeps RW running and holds its associated trademarks and copyrights, but it does not govern the community or any content the community produces.
Our purpose here at RationalWiki includes:
1. Analyzing and refuting pseudoscience and the anti-science movement.
2. Documenting the full range of crank ideas.
3. Explorations of authoritarianism and fundamentalism.
4. Analysis and criticism of how these subjects are handled in the media.
We welcome contributors, and encourage those who disagree with us to register and engage in constructive dialogue.
## History
See the main article on this topic: RationalWiki:History
RationalWiki 2.0 was created as an open editing wiki on May 22, 2007. Lulz ensued.
## Scope and statistics
RW stats on active editors and edits, as of July 2016. Click to expand.
RW is a fairly popular skeptic site. According to peer-reviewed[1] research, as of July 2015 RW is the most visible anti-conspiracy website on Google/Bing English search engine results.[2] By November 2012, RW's traffic had reached about 32,000 unique visitors per day[3] and since 2013, RW has had 700-1,000 monthly editors and 15,000-30,000 monthly edits.[4]
Also since 2013, RW's Alexa rank (a measure accurate only in a broad sense) has hovered between 15,000th and 25,000th most popular website on the entire Internet, which translates to about 4 million-ish unique monthly visitors.[5][6][7] This puts RW above other skeptical sites like Quackwatch,[8] Skeptoid,[9] and Freethought Blogs,[10] though still below big players like PolitiFact[11] and Snopes.[12]
However, RW's objective isn't to collect views in and of itself (the truth is not a popularity contest), especially since we don't sell anything, nor run any type of ads in order to monetize hits. Instead, our intention is for every individual view to provide a chance for us to disseminate accurate information against the flood of pseudoscience and anti-intellectualism that permeates much of the public discourse today.
For us, more viewers simply translates into more chances to help dissuade pseudoscientific and fundamentalist thinking, and to rally the interest of still more dedicated editors to the cause of scientific skepticism (thus resulting in the proliferation of more and better skeptical content — here, as well as in society generally).
### So, do any "real" news outlets cite RationalWiki?
See the main article on this topic: RationalWiki:Mentions
Yes, certainly. Since RationalWiki's inception, we've featured in articles, op-ed pieces and news reports (excluding blog posts and mention by commenters) — either being quoted verbatim, or outright referenced as one of the sources for the article's claims — by a slew of mainstream media outlets worldwide, including:
## What is a RationalWiki article?
See the main article on this topic: RationalWiki:What is a RationalWiki article?
While RW uses software originally developed for a well-known online encyclopedia, it is important to realize that RW is not trying to be an encyclopedia. While many of RW's articles may look like encyclopedia entries, RW goes much further – it encourages original research and opinion formation.
• The community has embraced the concept of wikis by creating an information source out of the collaborative editing of thousands of people.
• By encouraging original research and essays, RW has also incorporated many aspects of the blogging community.
• Discussion among members is facilitated on many levels such as debate articles, specific discussions on talk pages, and just coming together to talk about whatever is on our minds at the Saloon Bar. This focus on discussion captures the essence of Internet forums.
• RW has a serious mission, but its users are here because it's fun, and they stick around only while it's still fun.
Who runs this place? Ultimately, nobody. It's a wiki.
Decisions are made by the will of whoever shows up and does stuff. Mobocracy and do-ocracy rule the day. Most users who've been around a while and aren't utterly incompetent are sysops. The most effective place to be outraged is on the talk page of the article you are outraged about. People may well engage with you in a relevant fashion.
A few users are moderators, elected by the community. They don't like work, so will do anything other than be your go-to parent.
In one extreme case the board acted to ban someone for libel of a level that could have attracted lawsuits to the RMF itself. They don't like work either, so don't expect this to happen again any time soon.
## Criticism
See the main article on this topic: RationalWiki:Pissed at us
RW has numerous critics, roughly divided into two (overlapping) groups: those who take issue with the content and those who take issue with the style. Both tend to quickly degenerate into "so why do they call it RationalWiki, then?" This may be based on a slight confusion between rationalisation and rationalism – as no one ever thinks they're being irrational, they're likely to accuse anyone who disagrees with them of being irrational. In principle, this point extends to RW itself, which declares everyone who doesn't follow its POV irrational – making the choice of title somewhat unfortunate and ironic.
The content critics are typically the fans of people or subjects that RW doesn't speak favorably of. Supporters of noted politician Ron Paul certainly aren't fans,[36] angry that someone, somewhere dares not declare Ron Paul to be the Second Coming. Ayn Rand fans do much the same. Other criticism of content is often directed at shorter and less complete articles.[37] RW's rating system goes part way to rectifying the issue of lower quality articles but is implemented in a completely ad hoc wikilike fashion.
RW's style is frequently criticized, with some objecting to the odd sense of humor and getting upset that people aren't taking their idea of rationalism seriously. LessWrong bloggers and commentators in particular find it annoyingly irrational (with prior probability $-e^{i \pi}$). LessWrong's founder, Eliezer Yudkowsky, once defended RW as a potential recruiting ground for hardcore rationalists but mostly as "clueless."[38] Other issues with style include the running debate over whether RW's self-touted viewpoint, "SPOV," means "Scientific Point of View (plus snark)" or "Snarky Point of View (plus science)."
## Language sections
See the main article on this topic: RationalWiki:Languages
While the overwhelming majority of RW's editors are English-speaking, RW also has articles in other languages, namely:
For those of you in the mood, RationalWiki has a fun article about RationalWiki.
If you want this article in French, it can be found at RationalWiki (français).
There is a broader, perhaps slightly less biased, article on Wikipedia about RationalWiki
|
2016-09-27 15:32:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20340456068515778, "perplexity": 6022.468559678769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661123.53/warc/CC-MAIN-20160924173741-00003-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://en.wikipedia.org/wiki/Maki-Nakagawa-Sakata_matrix
|
Pontecorvo–Maki–Nakagawa–Sakata matrix
(Redirected from Maki-Nakagawa-Sakata matrix)
In particle physics, the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix), Maki–Nakagawa–Sakata matrix (MNS matrix), lepton mixing matrix, or neutrino mixing matrix is a unitary[note 1] mixing matrix which contains information on the mismatch of quantum states of neutrinos when they propagate freely and when they take part in the weak interactions. It is a model of neutrino oscillation. This matrix was introduced in 1962 by Ziro Maki, Masami Nakagawa and Shoichi Sakata,[1] to explain the neutrino oscillations predicted by Bruno Pontecorvo.[2]
The PMNS matrix
The Standard Model of particle physics contains three generations or "flavors" of neutrinos, νe, νμ, and ντ labeled according to the charged leptons with which they partner in the charged-current weak interaction. These three eigenstates of the weak interaction form a complete, orthonormal basis for the Standard Model neutrino. Similarly, one can construct an eigenbasis out of three neutrino states of definite mass, ν1, ν2, and ν3, which diagonalize the neutrino's free-particle Hamiltonian. Observations of neutrino oscillation have experimentally determined that for neutrinos, like the quarks, these two eigenbases are not the same - they are "rotated" relative to each other. Each flavor state can thus be written as a superposition of mass eigenstates, and vice versa. The PMNS matrix, with components Uai corresponding to the amplitude of mass eigenstate i in flavor a, parameterizes the unitary transformation between the two bases:
$$\begin{bmatrix}\nu_{e}\\\nu_{\mu}\\\nu_{\tau}\end{bmatrix} = \begin{bmatrix}U_{e1}&U_{e2}&U_{e3}\\U_{\mu 1}&U_{\mu 2}&U_{\mu 3}\\U_{\tau 1}&U_{\tau 2}&U_{\tau 3}\end{bmatrix} \begin{bmatrix}\nu_{1}\\\nu_{2}\\\nu_{3}\end{bmatrix}.$$
The vector on the left represents a generic neutrino state expressed in the flavor basis, and on the right is the PMNS matrix multiplied by a vector representing the same neutrino state in the mass basis. A neutrino of a given flavor α is thus a "mixed" state of neutrinos with different mass: if one could measure directly that neutrino's mass, it would be found to have mass mi with probability |Uαi|2.
The PMNS matrix for antineutrinos is identical to the matrix for neutrinos under CPT symmetry.
Due to the difficulties of detecting neutrinos, it is much more difficult to determine the individual coefficients than in the equivalent matrix for the quarks (the CKM matrix).
Assumptions
Standard Model
As noted above, the PMNS matrix is unitary. That is, the sum of the squares of the values in each row and in each column, which represent the probabilities of different possible events given the same starting point, adds up to 100%.
In the simplest case, the Standard Model posits three generations of neutrinos with Dirac mass that oscillate between three neutrino mass eigenvalues, an assumption that is made when best fit values for its parameters are calculated.
Other models
The PMNS matrix is not necessarily unitary and additional parameters are necessary to describe all possible neutrino mixing parameters, in other models of neutrino oscillation and mass generation, such as the see-saw model, and in general, in the case of neutrinos that have Majorana mass rather than Dirac mass.
There are also additional mass parameters and mixing angles in a simple extension of the PMNS matrix in which there are more than three flavors of neutrinos, regardless of the character of neutrino mass. As of July 2014, scientists studying neutrino oscillation are actively considering fits of the experimental neutrino oscillation data to an extended PMNS matrix with a fourth, light "sterile" neutrino and four mass eigenvalues, although the current experimental data tends to disfavor that possibility.[3][4][5]
Parameterization
In general, there are nine degrees of freedom in any three by three unitary matrix. In the PMNS matrix, the directly physically observable values (the squares of the moduli of the respective entries) are real numbers between zero and 1, and the remaining unphysical phases can be absorbed into the lepton fields, so the matrix can be fully described by four free parameters from which all physically observable properties of the matrix can be discerned.[6] The PMNS matrix is most commonly parameterized by three mixing angles (θ12, θ23 and θ13) and a single phase called δCP related to charge-parity violations (i.e. differences in the rates of oscillation between two states with opposite starting points, which makes the order in time in which events take place necessary to predict their oscillation rates), in which case the matrix can be written as:
$$\begin{aligned}&\begin{bmatrix}1&0&0\\0&c_{23}&s_{23}\\0&-s_{23}&c_{23}\end{bmatrix}\begin{bmatrix}c_{13}&0&s_{13}e^{-i\delta_{CP}}\\0&1&0\\-s_{13}e^{i\delta_{CP}}&0&c_{13}\end{bmatrix}\begin{bmatrix}c_{12}&s_{12}&0\\-s_{12}&c_{12}&0\\0&0&1\end{bmatrix}\\&=\begin{bmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta_{CP}}\\-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta_{CP}}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta_{CP}}&s_{23}c_{13}\\s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta_{CP}}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta_{CP}}&c_{23}c_{13}\end{bmatrix}.\end{aligned}$$
where $s_{ij}$ and $c_{ij}$ are used to denote $\sin\theta_{ij}$ and $\cos\theta_{ij}$ respectively. In the case of Majorana neutrinos, two extra complex phases are needed, as the phase of Majorana fields cannot be freely redefined due to the condition $\nu = \nu^c$. An infinite number of possible parameterizations exist; one other common example is the Wolfenstein parameterization.
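As a concrete illustration of this parameterization, here is a small numpy sketch (the helper name is illustrative; the angles fed in are the NuFit best-fit values quoted in the next subsection):

```python
# Build U from the three rotations above and check that it is unitary.
import numpy as np

def pmns(theta12, theta23, theta13, delta_cp):
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    p = np.exp(1j * delta_cp)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    U13 = np.array([[c13, 0, s13 / p], [0, 1, 0], [-s13 * p, 0, c13]])
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    return R23 @ U13 @ R12

U = pmns(*np.deg2rad([33.36, 40.0, 8.66, 300.0]))
print(np.allclose(U @ U.conj().T, np.eye(3)))  # True: unitarity
print(np.round(np.abs(U), 2))  # moduli roughly match the matrix quoted below
```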
The mixing angles have been measured by a variety of experiments (see neutrino mixing for a description). The CP-violating phase δCP has not been measured directly, but estimates can be obtained by fits using the other measurements.
Experimentally measured parameter values
As of July 2014, the current best directly measured values are:[7][8]
$$\begin{aligned}\sin^2 2\theta_{12}&=0.857\pm 0.024\\\sin^2 2\theta_{23}&>0.95\\\sin^2 2\theta_{13}&=0.095\pm 0.010\end{aligned}$$
while the current best-fit values, using direct and indirect measurements, from NuFit are:[9][10]
$$\begin{aligned}\theta_{12}[^{\circ}]&=33.36_{-0.78}^{+0.81}\\\theta_{23}[^{\circ}]&=40.0_{-1.5}^{+2.1}~\text{or}~50.4_{-1.3}^{+1.3}\\\theta_{13}[^{\circ}]&=8.66_{-0.46}^{+0.44}\\\delta_{\text{CP}}[^{\circ}]&=300_{-138}^{+66}\end{aligned}$$
So the current matrix will be:
$$U = \begin{bmatrix}U_{e1}&U_{e2}&U_{e3}\\U_{\mu 1}&U_{\mu 2}&U_{\mu 3}\\U_{\tau 1}&U_{\tau 2}&U_{\tau 3}\end{bmatrix} = \begin{bmatrix}0.82\pm 0.01&0.54\pm 0.02&-0.15\pm 0.03\\-0.35\pm 0.06&0.70\pm 0.06&0.62\pm 0.06\\0.44\pm 0.06&-0.45\pm 0.06&0.77\pm 0.06\end{bmatrix}$$
Notes regarding the best fit parameter values
• These best fit values imply that there is much more neutrino mixing than there is mixing between the quark flavors in the CKM matrix (in the CKM matrix, the corresponding mixing angles are θ12 = 13.04°±0.05°, θ23 = 2.38°±0.06°, θ13 = 0.201°±0.011°).
• These values are inconsistent with tribimaximal neutrino mixing (i.e. θ12 = θ23 = 45°, θ13 = 0°) at a statistical significance of more than five standard deviations. Tribimaximal neutrino mixing was a common assumption in theoretical physics papers analyzing neutrino oscillation before more precise measurements were available.
• A value of θ23 equal to exactly 45 degrees, which would imply maximal mixing between the second and third neutrino mass eigenstates, is ruled out with a statistical significance in excess of 2 standard deviations.[10]
• The alternative choices for θ23 are referred to as "first quadrant" and "second quadrant" values. The data favor the first quadrant value over the second quadrant value with a statistical significance of 1.5 standard deviations in a "normal mass hierarchy" context (i.e. where the second neutrino mass eigenstate is lighter than the third neutrino mass eigenstate), but there is not a statistically significant preference between the two values in the case of an "inverted mass hierarchy" (i.e. where the second neutrino mass eigenstate is heavier than the third neutrino mass eigenstate).[10] This is the only PMNS matrix parameter which is strongly sensitive to the mass hierarchy of the neutrino masses given the currently available experimental data.[10]
• The extent to which the best fit value for δCP is meaningful should not be overstated. The best fit value for δCP is consistent with zero at the 0.9 standard deviation level, since in circular coordinates 0 degrees and 360 degrees are equivalent. Generally speaking, in particle physics, experimental results that are within 2 standard deviations of each other are called "consistent" with each other. Currently, all possible values for δCP are within 1.8 standard deviations of the best fit value, so all possible values of δCP are "consistent" with the experimental data, even though values closer to the best fit value are somewhat more likely to be correct.
Notes
1. ^ The PMNS matrix is not unitary in the seesaw model.
References
1. ^ Maki, Z; Nakagawa, M.; Sakata, S. (1962). "Remarks on the Unified Model of Elementary Particles". Progress of Theoretical Physics. 28: 870. Bibcode:1962PThPh..28..870M. doi:10.1143/PTP.28.870.
2. ^ Pontecorvo, B. (1957). "Inverse beta processes and nonconservation of lepton charge". Zhurnal Éksperimental’noĭ i Teoreticheskoĭ Fiziki. 34: 247. reproduced and translated in Soviet Physics JETP. 7: 172. 1958.
3. ^ Kayser, Boris (February 13, 2014). "Are There Sterile Neutrinos?". arXiv: [hep-ph].
4. ^ Esmaili, Arman; Kemp, Ernesto; Peres, O. L. G.; Tabrizi, Zahra (30 Oct 2013). "Probing light sterile neutrinos in medium baseline reactor experiments". arXiv: [hep-ph].
5. ^ F.P. An, et al.(Daya Bay collaboration) (July 27, 2014). "Search for a Light Sterile Neutrino at Daya Bay". arXiv: [hep-ex].
6. ^ Valle, J. W. F. (2006). "Neutrino physics overview". Journal of Physics: Conference Series. 53: 473. Bibcode:2006JPhCS..53..473V. arXiv:. doi:10.1088/1742-6596/53/1/031.
7. ^ J. Beringer et al. (Particle Data Group) (2012, and 2013 partial update for the 2014 edition). "PDGLive: Neutrino Mixing". Particle Data Group. Retrieved 2014-08-21.
8. ^ J. Beringer et al. (Particle Data Group) (2012). "Review of Particle Physics". Physical Review D. 86: 010001. Bibcode:2012PhRvD..86a0001B. doi:10.1103/PhysRevD.86.010001.
9. ^ Gonzalez-Garcia, M. C.; Maltoni, M.; Salvado, J.; Schwetz, T. (June 2014). "NuFit 1.3". Retrieved 2014-07-09.
10. ^ a b c d Gonzalez-Garcia, M. C.; Maltoni, Michele; Salvado, Jordi; Schwetz, Thomas (21 December 2012). "Global fit to three neutrino mixing: Critical look at present precision". Journal of High Energy Physics. 2012 (12): 123. Bibcode:2012JHEP...12..123G. arXiv:. doi:10.1007/JHEP12(2012)123.
https://www.assignmentninjas.com/profitability-analysis-homework-paper/
Profitability Analysis Homework Paper
Every organization aims to maximize its profits. Profitability analysis is performed to determine whether the organization is earning the required profit; it is carried out by analyzing the organization's output, which includes products, locations, customers, channels, and transactions.
Organizations record all their transactions related to the cost incurred in production, along with the revenue earned, during an accounting period. The statement that reveals all the costs or expenses and the income or revenue earned is known as the income statement. It can be used by employees, managers, investors, and other stakeholders to determine the company's profitability.
Ways to do Profitability Analysis:

Profitability ratios, break-even analysis, and return on investment are the common ways to determine whether the current business is profitable. Various profitability ratios can be computed to assess the company's ability to generate profits. Some of them are as follows:

Gross Profit Margin Ratio: The gross profit margin ratio is based on two variables: sales and the cost of goods sold. It is the ratio of gross profit to net sales, and it is a good measure of profitability when the company's gross profit margin is compared over a period of time. It can be computed with a very simple formula:

\text{Gross profit margin ratio}=\frac{\text{Gross profit}}{\text{Sales}}\times 100

Gross profit is computed using the following formula:

\text{Gross profit}=\text{Sales}-\text{Cost of goods sold}

If the gross profit margin ratio is high, the company is earning a high profit over the cost of production, and it should try to maintain stability in earning such a profit margin.

Net Profit Margin Ratio: This ratio reveals the true profitability of an organization after meeting its tax obligations. It can be calculated using the following formula:

\text{Net profit margin ratio}=\frac{\text{Net income after taxes}}{\text{Sales}}\times 100

The net profit margin ratio reveals the management's efficiency in producing and selling the products.

Operating Profit Margin Ratio: This ratio indicates the current earning power of the company. It acts as a yardstick to measure the firm's ability to turn its sales into pre-tax profit:

\text{Operating profit margin ratio}=\frac{\text{Sales}-\text{Operating cost}}{\text{Sales}}\times 100

Figure 1: Ways to do profitability analysis

Break-even Analysis: The break-even point is the point at which the cost incurred equals the revenue earned, so the company earns no profit and incurs no loss. Any units sold above this point generate profit. The break-even point can be calculated as follows:

\text{Break-even point (in units)}=\frac{\text{Fixed cost}}{\text{Unit selling price}-\text{Unit variable cost}}

Return on Assets (ROA) and Return on Investment (ROI): These are further profitability measures: return on assets indicates how efficiently the firm is operating, and return on investment reveals how much the firm is earning compared to the investment made. They can be calculated using the following formulas:

\text{Return on Assets}=\frac{\text{Net income before taxes}}{\text{Total Assets}}\times 100

\text{Return on Investment}=\frac{\text{Net income before taxes}}{\text{Net worth}}\times 100
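These formulas map directly onto code. A minimal sketch in Python (my own illustration; the function names and the figures in the example calls are assumed, not taken from the page):

def gross_profit_margin(sales, cost_of_goods_sold):
    """Gross profit as a percentage of sales."""
    return (sales - cost_of_goods_sold) / sales * 100

def net_profit_margin(net_income_after_taxes, sales):
    """Net income after taxes as a percentage of sales."""
    return net_income_after_taxes / sales * 100

def operating_profit_margin(sales, operating_cost):
    """Pre-tax operating profit as a percentage of sales."""
    return (sales - operating_cost) / sales * 100

def break_even_units(fixed_cost, unit_selling_price, unit_variable_cost):
    """Units at which total revenue equals total cost (a count, not a percentage)."""
    return fixed_cost / (unit_selling_price - unit_variable_cost)

# Assumed example figures:
print(gross_profit_margin(500_000, 350_000))   # 30.0 (%)
print(break_even_units(90_000, 25, 10))        # 6000.0 units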
https://www.coursehero.com/file/12444641/Series-Test-Proofs/
# Series Test Proofs - Math 142
Theorem (The Monotone Convergence Theorem): If $a_n$ is a decreasing sequence that is bounded below, then it converges. Similarly, if $a_n$ is increasing and bounded above, then it converges.

Proof: Suppose $a_n$ is decreasing and bounded below. Let $\epsilon > 0$, and consider the greatest lower bound $L$ of the sequence (this exists by the completeness axiom). By the definition of greatest lower bound, $L + \epsilon$ is not a lower bound of $a_n$, so we may let $N$ be the smallest value such that $a_N < L + \epsilon$. Since $a_n$ is decreasing, we know that $a_n < L + \epsilon$ for all $n \geq N$. This says that $a_n - L < \epsilon$ for all $n \geq N$, and since $L$ is a lower bound of $a_n$, we know that $a_n - L \geq 0$. Thus $|a_n - L| < \epsilon$ for all $n \geq N$, so $\lim_{n \to \infty} a_n = L$ by definition.

Now suppose $a_n$ is increasing and bounded above. The proof is identical, except this time we let $L$ be the least upper bound of the sequence, note that $L - \epsilon$ is not an upper bound of $a_n$, and find an $N$ such that $a_n > L - \epsilon$ for all $n \geq N$. Since $L - a_n \geq 0$, we get that $|a_n - L| < \epsilon$.

Theorem (Geometric Series): The geometric series $\sum_{n=1}^{\infty} ar^{n-1}$ converges to $\frac{a}{1-r}$ when $|r| < 1$ and diverges when $|r| \geq 1$.

Proof: First, we'll get an expression for the partial sum $s_n$:

$$s_n = a + ar + ar^2 + ar^3 + \dots + ar^{n-1}$$
$$rs_n = ar + ar^2 + ar^3 + \dots + ar^{n-1} + ar^n$$

Subtracting these two equations, we get that $s_n - rs_n = a - ar^n$, so $s_n(1 - r) = a(1 - r^n)$, and finally $s_n = \frac{a(1 - r^n)}{1 - r}$.

We now take the limit of $s_n$. If $|r| < 1$, then $\lim_{n \to \infty} \frac{a(1 - r^n)}{1 - r} = \frac{a(1 - 0)}{1 - r} = \frac{a}{1 - r}$, so the series converges to $\frac{a}{1-r}$. If $|r| > 1$, then $\lim_{n \to \infty} r^n$ diverges, so $s_n$ diverges and hence the series diverges. If $r = 1$, the series is simply $\sum_{n=1}^{\infty} a = a + a + a + \dots$, which diverges. If $r = -1$, the series is $\sum_{n=1}^{\infty} a(-1)^{n-1} = a - a + a - a + \dots$, which diverges.

Theorem (The Divergence Test): If $\lim_{n \to \infty} a_n \neq 0$ or the limit does not exist, then $\sum_{n=1}^{\infty} a_n$ diverges.

Proof: We'll prove the contrapositive: if the series $\sum_{n=1}^{\infty} a_n$ is convergent, then $\lim_{n \to \infty} a_n = 0$. Notice that $a_n = s_n - s_{n-1}$, where $s_n$ is the $n$th partial sum of $\sum_{n=1}^{\infty} a_n$. Since $\sum_{n=1}^{\infty} a_n$ converges, $s_n \to s$ for some $s$. Clearly, this means that $s_{n-1} \to s$ as well. So $\lim_{n \to \infty} a_n = \lim_{n \to \infty} (s_n - s_{n-1}) = \lim_{n \to \infty} s_n - \lim_{n \to \infty} s_{n-1} = s - s = 0$.

Theorem (Constant Multiples of Series): If $\sum_{n=1}^{\infty} a_n$ converges, then $\sum_{n=1}^{\infty} ca_n$ converges to $c \sum_{n=1}^{\infty} a_n$. If $\sum_{n=1}^{\infty} a_n$ diverges and $c \neq 0$, then $\sum_{n=1}^{\infty} ca_n$ diverges.

Proof: Let $s_n$ be the partial sums of $\sum_{n=1}^{\infty} a_n$. If $\sum_{n=1}^{\infty} a_n$ converges, say $s_n \to s$. The $n$th partial sum of $\sum_{n=1}^{\infty} ca_n$ is $ca_1 + ca_2 + \dots + ca_n = c(a_1 + a_2 + \dots + a_n) = cs_n$. So $\lim_{n \to \infty} cs_n = c \lim_{n \to \infty} s_n = cs$. If $\sum_{n=1}^{\infty} a_n$ diverges, then $\lim_{n \to \infty} s_n$ diverges. Since $c \neq 0$, $\lim_{n \to \infty} cs_n$ diverges as well, so $\sum_{n=1}^{\infty} ca_n$ diverges.
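As a quick numerical companion (my addition, not part of the original notes), the closed form $s_n = \frac{a(1-r^n)}{1-r}$ can be checked against a direct term-by-term sum:

# Compare the direct partial sum with the closed form, for a = 3, r = 0.5.
a, r = 3.0, 0.5

def partial_sum(n):
    return sum(a * r**k for k in range(n))  # a*r^0 + ... + a*r^(n-1)

for n in (1, 5, 20, 60):
    closed = a * (1 - r**n) / (1 - r)
    print(n, partial_sum(n), closed)

print("limit a/(1-r) =", a / (1 - r))  # partial sums approach 6.0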
https://civil.gateoverflow.in/304/gate-civil-2012-question-2
The annual precipitation data of a city is normally distributed with mean and standard deviation as $1000$ mm and $200$ mm, respectively. The probability that the annual precipitation will be more than $1200$ mm is
1. $<50 \%$
2. $50 \%$
3. $75 \%$
4. $100\%$
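For a quick check (my addition; the question page itself gives no solution): 1200 mm lies exactly one standard deviation above the mean, and for a normal distribution $P(X > \mu + \sigma) \approx 15.9\%$, which points to option 1, $<50\%$:

from statistics import NormalDist

# P(annual precipitation > 1200 mm) for mean 1000 mm, sd 200 mm.
p = 1 - NormalDist(mu=1000, sigma=200).cdf(1200)
print(round(p, 4))  # 0.1587, well below 50%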
https://ift.world/booklets/economics-currency-exchange-rates-part3/
IFT Notes for Level I CFA® Program
# Part 3
## 3. Currency Exchange Rate Calculations
### 3.1. Exchange Rate Quotations
Exchange rate is the price of one currency relative to another. Exchange rates are typically quoted at four decimal places. The ratio or exchange rate is quoted as price currency per unit of base currency.
Consider this quote: USD/EUR = 1.4000
It means you can buy 1.4 U.S. dollars for one euro. The currency in the denominator (one unit of the currency) is the base currency. The currency in the numerator is the price currency.
The same quote may also be written as EUR/USD = 0.7143, which is simply the reciprocal of 1.4000.
Direct quote: A direct quote takes domestic currency as the price currency and the foreign currency as the base currency.
Indirect quote: An indirect quote takes the domestic currency as the base currency and the foreign currency as the price currency.
Direct and indirect quotes are the reciprocal of each other.
For example: From a German investor's perspective, is USD/EUR = 1.4000 a direct quote?
The domestic currency for a German investor is the Euro. In this case, the Euro is shown as the base currency. Therefore, from the German investor’s perspective this quote is an indirect quote.
Bid-ask: Currencies are always quoted as bid-ask. (This is from the perspective of a dealer, not from the client’s perspective). Bid rate is the rate at which the dealer will buy the base currency. Ask rate is the rate at which the dealer will sell the base currency.
For example: A bid-ask quote of USD/EUR = 1.3990 - 1.4010 means that the dealer is willing to buy 1 euro for $1.3990 and sell 1 euro for $1.4010.
The bid price is always lower than the ask price, as the dealer makes money on the bid-ask spread.
Example
Appreciation of one currency is the depreciation of the other. Say the USD/EUR rate changed from 1.4 to 1.5. What is the appreciation/depreciation of each currency?
Solution:
The base currency is EUR; the price currency is USD.
The exchange rate goes up from 1.4 to 1.5. It means the base currency (EUR) has appreciated/strengthened. The USD has depreciated or weakened.
% appreciation of EUR = (1.5 - 1.4)/1.4 = 7.142%
To calculate the depreciation in USD, we must first convert the quote into EUR/USD terms.
Take the reciprocal of the quote to get the EUR/USD values.
Initial value: EUR/USD = 1/1.4 = 0.7143. Later value: EUR/USD = 1/1.5 = 0.6667.
% depreciation of USD = (0.6667 - 0.7143)/0.7143 = -6.67%
Note: The percentage amount by which one currency goes up (appreciates) is not necessarily the same as the percentage amount by which the other currency goes down. In our example, while the Euro appreciated by 7.142%, the U.S. dollar did not depreciate by 7.142%, instead it only depreciated by 6.67%.
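A short sketch of this arithmetic in Python (my own illustration of the calculation above):

usd_per_eur_old, usd_per_eur_new = 1.4, 1.5

# EUR is the base currency of USD/EUR, so this is the EUR move.
eur_appreciation = (usd_per_eur_new - usd_per_eur_old) / usd_per_eur_old
print(f"EUR appreciation: {eur_appreciation:.3%}")  # 7.143%

# Invert the quotes to measure the USD move in EUR/USD terms.
eur_per_usd_old = 1 / usd_per_eur_old
eur_per_usd_new = 1 / usd_per_eur_new
usd_change = (eur_per_usd_new - eur_per_usd_old) / eur_per_usd_old
print(f"USD change: {usd_change:.3%}")              # -6.667%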
### 3.2. Cross-rate Calculations
Given two exchange rates and three currencies, it is possible to determine the third exchange rate. This way of determining the third exchange rate by converting one foreign exchange quote into another is called the cross rate.
Given the two exchange rates below, what is the PKR/INR rate?

PKR/USD spot rate: 100.0000
INR/USD spot rate: 60.0000

Solution:

PKR/INR = PKR/USD × USD/INR. In this equation, the USD cancels out, giving us PKR/INR.
We are given the value of INR/USD; to get the value of USD/INR, we take the reciprocal of the INR/USD rate that is given.
PKR/INR = 100 × (1/60) = 1.667
Triangular arbitrage: If the implied cross rate is not equal to the quoted cross rate, then an arbitrage opportunity exists; this is called triangular arbitrage. In such cases, one would profit by buying low and selling high. For example, in the case above, if the bank quoted a rate of 1.8 for PKR/INR, you could buy INR (sell PKR) at the implied 1.667 and sell INR (buy PKR) at 1.8 to profit from the mispricing.
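The cross-rate and triangular-arbitrage logic can be sketched the same way (the PKR/USD and INR/USD quotes are the ones assumed in the example above, and the 1.8 bank quote is the hypothetical one from the text):

pkr_per_usd = 100.0
inr_per_usd = 60.0

# USD cancels: PKR/INR = (PKR/USD) * (USD/INR) = (PKR/USD) / (INR/USD).
implied_pkr_per_inr = pkr_per_usd / inr_per_usd
print(round(implied_pkr_per_inr, 3))  # 1.667

quoted_pkr_per_inr = 1.8  # hypothetical bank quote
if quoted_pkr_per_inr > implied_pkr_per_inr:
    # Buy INR cheaply via USD, sell INR at the rich quoted rate.
    profit = quoted_pkr_per_inr - implied_pkr_per_inr
    print(f"arbitrage profit: {profit:.3f} PKR per INR traded")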
https://blog.rizauddin.com/2009/01/latex-title-page_4313.html
## Friday, January 30, 2009
### LaTeX title page
A document is usually divided into several parts. One of these parts is the title page. To produce the title page, use the \maketitle command. The format of the title page depends on the document class used, such as article, book, or report.
A document title page usually consists of several items: title, author(s), date, and sometimes a footnote. The first item is the title. The command to use is \title{}. The standard LaTeX title page centers all entries on the lines in which they appear. The title will be broken automatically if it is too long; to break the title manually, use the \\ command. For example, \title{...\\...\\...}.
The second item is the author(s). To display the author, use the \author{} command. If there are several authors, separate their names with \and. For example, \author{S. Kasim \and P. Ramli}. The author names will be printed in parallel next to each other on the same line. Replace \and with \\ to display the author(s) on top of one another.
To include the address, use the \\ command, such as
\author{S. Kasim\\Company\\Address
\and
P. Ramli\\Company\\Address}
Extra items such as a telephone number or an email address may be produced in a footnote via the \thanks{} command. For example, \author{S. Kasim\thanks{Tel. 03--3367638}}.
The last item is the date, which can be produced using the \date{} command. If the \date{} command is omitted, then the current date is printed automatically below the author entries on the title page.
### An example of a title page
\documentclass{article}
\title{A simple LaTeX title page}
\author{
S. Kasim \thanks{Tel. 03--3367638}\\Company1\\Address1
\and
P. Ramli \thanks{Email. ramli@ramli.com}\\Company2\\Address2
}
\date{Kuala Lumpur, \today}
\begin{document}
\maketitle
\end{document}
https://brilliant.org/problems/periodic-function/
# Periodic function...??!!
Algebra Level 3
If $$f(x) = \sin^2 x + \cos^2 x - 2$$, what can you say about its periodicity?
Note: here, by "period" I mean the fundamental period.
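A one-line check (my addition, not part of the problem page): the Pythagorean identity collapses $$f$$ to a constant, and a constant function is periodic with every positive period, hence it has no fundamental period.

import sympy as sp

x = sp.symbols("x")
f = sp.sin(x)**2 + sp.cos(x)**2 - 2
print(sp.simplify(f))  # -1: f is constant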
https://math.stackexchange.com/questions/3319838/countable-set-of-elements-chosen-from-countable-collection
# Countable set of elements chosen from Countable collection
Let $$U$$ be a countable collection of subsets of $$X$$. Then there exists a countable set which consists of elements of the elements in $$U$$.
Proof: As $$U$$ is a countable collection of subsets of $$X$$, the elements of $$U$$ can be indexed by the natural numbers. Furthermore, by the axiom of choice we can choose an element from each element of $$U$$ and form a new set $$C$$ which consists of elements of each element in $$U$$. So $$C = \{ x_n : x_n \in U_n\}$$, and we can define a map $$f$$ from $$C$$ to $$U$$ by $$f(x_n)=U_n$$. It then suffices to show that the map is injective, making $$C$$ countable. Suppose $$f(x_n)=f(x_m)$$, i.e. $$U_n = U_m$$; as $$U$$ is countable, $$n=m$$, hence $$x_n=x_m$$, so the map is injective and therefore $$C$$ is countable.
Is the proof correct? How could I improve it?
Your proof is mostly correct. It however needs correction on these two points:
$$\boxed{\textit{Firstly:}}$$ the axiom of choice is used on collections of non-empty sets and you never forbade your collection $$U$$ from containing $$\emptyset$$ as that is still a valid subset of $$X$$.
Hence you want to eliminate the empty set from your collection first before using the axiom of choice on it. That is you want proceed in your proof with $$U' = U - \{\emptyset\}$$ instead or say something to the effect of "without loss of generality, assume that $$U$$ does not contain $$\emptyset$$; otherwise just replace $$U$$ with $$U - \{\emptyset\}$$".
$$\boxed{\textit{Secondly:}}$$ this map you are defining from $$C$$ to $$U$$:
So $$C = \{ x_n : x_n \in U_n\}$$ so we can define a map $$f$$ from $$C$$ to $$U$$ by $$f(x_n)=U_n$$
may not be well-defined. For instance, if $$x_{1,2} \in C$$ were a common element of $$U_1$$ and $$U_2$$, then do you define $$f(x_{1,2})$$ as $$U_1$$ or $$U_2$$? And your hypotheses never mentioned that $$U$$ is a collection of disjoint subsets of $$X$$. Hence it is totally possible for $$U_1$$ and $$U_2$$ to have non-empty intersection.
The correct way to go about it is this. Simply define a map from $$C$$ to $$\Bbb N$$ directly like so (there is no need to map into $$U$$):
For each $$x \in C$$, certainly $$x = x_k$$ for some $$k \in \Bbb N$$, i.e. $$x$$ was chosen at some index. So the set $$C_x \subseteq \Bbb N$$ of all $$n \in \Bbb N$$ such that $$x_n = x$$ is non-empty (as $$n = k$$ is in there). Thus the well-ordering of $$\Bbb N$$ says that the least element of $$C_x$$ must exist; so we can define $$f(x) = \min C_x \in \Bbb N$$.
You can check that this gives an injective map from $$C$$ into $$\Bbb N$$: if $$f(x) = f(y) = m$$, then $$x = x_m = y$$. And you can now conclude that $$C$$ is countable (where I am assuming finiteness is a possibility when I say "countable").
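A finite toy illustration of this map (my own, just to make the construction concrete; the real use is of course on infinite collections):

# Index the collection and record which element x_n was chosen from each U_n.
U = {1: {"a", "b"}, 2: {"a", "b"}, 3: {"c"}}
chosen = {1: "a", 2: "b", 3: "c"}   # the choice function's output
C = set(chosen.values())

# f(x) = least n with x_n = x; distinct outputs certify injectivity here.
f = {x: min(n for n, x_n in chosen.items() if x_n == x) for x in C}
print(f)  # {'a': 1, 'b': 2, 'c': 3}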
• Thanks! Can I make an argument for countability which does not require maps? – topologicalmagician Aug 11 at 9:14
• Definitions of countability themselves involve maps! One such definition is: a set $S$ is countable (where finiteness is included as a possibility) iff $\exists$ an injection $f : S \to \Bbb N$. So at some level you will have to use maps or a theorem that itself uses maps to conclude countability. – 0XLR Aug 11 at 9:17
• @topologicalmagician By the way, if my answer is satisfactory consider accepting it. – 0XLR Aug 11 at 10:39
http://www.citeulike.org/user/linchen11/article/11465194
# Multilevel distillation of magic states for quantum computing
by:
(27 Mar 2013) Key: citeulike:11465194
### Abstract
We develop a procedure for distilling magic states used in universal quantum computing that requires substantially fewer initial resources than prior schemes. Our distillation circuit is based on a family of concatenated quantum codes that possess a transversal Hadamard operation, enabling each of these codes to distill the eigenstate of the Hadamard operator. A crucial result of this design is that low-fidelity magic states can be consumed to purify other high-fidelity magic states to even higher fidelity, which we call "multilevel distillation." When distilling in the asymptotic regime of infidelity $\epsilon \rightarrow 0$ for each input magic state, the number of input magic states consumed on average to yield an output state with infidelity $O(\epsilon^{2^r})$ approaches $2^r+1$, which comes close to saturating the conjectured bound in [Phys. Rev. A 86, 052329]. We show numerically that there exist multilevel protocols such that the average number of magic states consumed to distill from error rate $\epsilon_{\mathrm{in}} = 0.01$ to $\epsilon_{\mathrm{out}}$ in the range $10^{-5}$ to $10^{-40}$ is about $14\log_{10}(1/\epsilon_{\mathrm{out}}) - 40$; the efficiency of multilevel distillation dominates all other reported protocols when distilling Hadamard magic states from initial infidelity 0.01 to any final infidelity below $10^{-7}$. These methods are an important advance for magic-state distillation circuits in high-performance quantum computing, and they provide insight into the limitations of nearly resource-optimal quantum error correction.
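To get a feel for the quoted scaling (my illustration, simply evaluating the abstract's approximate cost formula over its stated range):

import math

# Average input magic states to reach eps_out from eps_in = 0.01,
# per the abstract's fit: ~14*log10(1/eps_out) - 40.
for eps_out in (1e-5, 1e-10, 1e-20, 1e-40):
    cost = 14 * math.log10(1 / eps_out) - 40
    print(f"eps_out = {eps_out:.0e}: ~{cost:.0f} input states")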
https://homework.cpm.org/category/CC/textbook/cc3/chapter/8/lesson/8.2.3/problem/8-93
8-93.
1. Determine the coordinates of each point of intersection without graphing.
1. y = 2x − 3
y = 4x + 1
2. y = 2x − 5
y = −4x −2
Use the Equal Values Method.
Set both equations equal to each other.
2x − 3 = 4x + 1
Get all the x terms on one side of the equation.
−2x = 4
Divide both sides by −2 to solve for x.
x = −2
Substitute this value into one of the original equations to solve for y.
(−2, −7)
Follow the steps in part (a).
$\left(\frac{1}{2}, -4\right)$
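A hedged check of both parts with sympy (my addition, not part of the lesson):

from sympy import Eq, solve, symbols

x, y = symbols("x y")

# Part (a): y = 2x - 3 and y = 4x + 1
print(solve([Eq(y, 2*x - 3), Eq(y, 4*x + 1)], [x, y]))   # {x: -2, y: -7}

# Part (b): y = 2x - 5 and y = -4x - 2
print(solve([Eq(y, 2*x - 5), Eq(y, -4*x - 2)], [x, y]))  # {x: 1/2, y: -4}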
https://jira.lsstcorp.org/browse/DM-11571
# Complete and test use of jointcal results in validate_drp
#### Details
• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels: None
• Story Points: 4
• Epic Link:
• Sprint: Alert Production F17 - 9, Alert Production F17 - 10, Alert Production F17 - 11, AP S18-1
• Team: Alert Production
#### Description
DM-10729 added incomplete support for using meas_mosaic (and soon, jointcal) results to calibrate the catalogs used by validate_drp. This feature has only been tested in a one-off sense, because we currently don't have any CI processing of a dataset large enough to run meas_mosaic/jointcal. Once that's addressed, we should finish making it possible to utilize jointcal results in the main driver scripts used by SQuaSH and enable these tests in CI.
If anyone knows of a ticket for adding larger datasets to CI, please add it as a blocker.
#### Activity
Hide
John Parejko added a comment -
I'm taking this on for September, to facilitate the meas_mosaic/jointcal comparison.
Show
John Parejko added a comment - I'm taking this on for September, to facilitate the meas_mosaic/jointcal comparison.
Hide
John Parejko added a comment -
As to the question about larger datasets: I don't see why validation_data_hsc isn't big enough? It's got 4 full focal plane exposures in I band, which should be enough to get a reasonably interesting fit in jointcal.
Show
John Parejko added a comment - As to the question about larger datasets: I don't see why validation_data_hsc isn't big enough? It's got 4 full focal plane exposures in I band, which should be enough to get a reasonably interesting fit in jointcal.
John Parejko added a comment -
Note that WIDE tract: 9372 filter: HSC-R has only 27 visits, so is a reasonably small test case. I have a script for it here:
/scratch/parejkoj/compare/scripts/validate-SSP_WIDE_9372_HSC-R.sl
John Parejko added a comment -
Simon Krughoff: can you please review this PR? It's short.
Jim Bosch: can you please confirm that the meas_mosaic output in /project/parejkoj/DM-11783/validate-meas_mosaic is "reasonable" (for whatever definition of reasonable you like)? You can compare it with the processCcd output for the same tracts in /project/parejkoj/DM-11783/validate-singleFrame.
Simon Krughoff added a comment -
Seems fine. There are a few comments. The major one is how the skipTEx parameter is passed around.
Sorry for the delay. The SciPlat workshop took all my attention last week.
Jim Bosch added a comment -
I've just spot-checked a few of the plots, but everything I've looked at seems fine. As I think we've discussed, some of the model-fitting in the check_astrometry and check_photometry plots isn't very robust and probably can't be trusted, but most of the plots seem quite usable. It was interesting to see that sometimes running meas_mosaic produces essentially the same AM1 (astrometric scatter) with a much smaller AF1 (astrometric outlier fraction), but I don't think that's problematic.
John Parejko added a comment -
Thanks for the reviews. I've filed DM-12975 about merging the new astrometry KPMs into verify_metric, and fixed the things you commented on, Simon Krughoff.
Merged and done.
#### People
Assignee: John Parejko
Reporter: Jim Bosch
Reviewers: Jim Bosch, Simon Krughoff
Watchers: Jim Bosch, John Parejko, Michael Wood-Vasey, Simon Krughoff
https://ask.libreoffice.org/en/question/136418/base-report-text-fields-variable-height/
# Base, report, text fields : variable height
Hello,
Using Base and ReportBuilder, I want to insert a text field that contains comments. But the comment length is variable and I would like the field height to vary according to the text length.
How can I do that?
Denis
Hello,
I don't see any way of doing what you want without possibly some extensive macro coding. None of the report writers I use appear to have this capability, although there are some limited options.
In your case, only what will fit within the field will be in the result.
EDIT:
Had a moment to do more research. Turns out Jaspersoft Studio has this function already. It is a simple setting on the field - "Stretch with Overflow". Since I don't print much or have had any need for this function I never knew it was there.
Sample: screenshots comparing the not-stretched and stretched output were attached to the original answer.
Just one other note, you cannot use this if running an Embedded DB. Any other should work as you connect to DB via JDBC connector.
BTW - I believe Report Builder is well over 10 years old now but has seen little if any modification in some time.
Edit 12/12/2017:
@Denis_R Saw again your link to the other post. Have had that entire code (the posted code is a small portion - all the code is quite immense & somewhat complex!) for a while now but decided to give it a whirl to take my mind off another large project.
Spent a few hours to get one field on one row to expand successfully. Here was the result using Report Builder:
This report was run entirely from a macro. Here is the code used:
Sub SetHeaderAutofit2
    ' Connect to the database if not already connected
    oController = ThisDatabaseDocument.CurrentController
    If Not oController.IsConnected Then oController.connect
    ' Open the report; its output opens as a Writer document
    oReportDoc = ThisDatabaseDocument.ReportDocuments.getByName("Report1").open
    oDocument = oReportDoc.CurrentController.Frame
    oDispatcher = createUnoService("com.sun.star.frame.DispatchHelper")
    ' Switch the generated output document into edit mode
    oDispatcher.executeDispatch(oDocument, ".uno:EditDoc", "", 0, Array())
    ' The report output is a text table; select it and grab the "Detail" table
    oTextTables = oReportDoc.TextTables
    oReportDoc.getCurrentController().Select(oTextTables)
    oDetail = oTextTables.getByName("Detail")
    ' Put a cursor at the end of the target cell, then let the row auto-grow
    oCell = oDetail.getCellByPosition(2, 0)
    oCurs = oCell.Text.createTextCursor()
    oCurs.gotoEnd(True)
    oDetail.getRows().getByIndex(0).IsAutoHeight = True
End Sub
What makes this so difficult is that you have to configure what & where to expand because you need to work with the output document in edit mode. This means working with a text table (similar to calc) and locating specific cells. The lines from getCellByPosition onward were specific to the single field and are what would need repetition. The rest of the lines were just to get there.
So the code you pointed to works with some slight modifications (it was written for AOO older version) but it is a bear to implement. In my opinion, Jaspersoft was much easier to get running than to use this method.
My DB is embedded in LibreOffice ... and I moved from Ms Access to OpenOffice when I found the Sun Report Builder, 10 years ago :)
Since then I have reports with a text field that is either too big or too small! But I have professionally used report builders with such functionality, for sure.
Waiting for another 10 years ? :D
(2017-10-30 09:20:52 +0200)
Hi,
Thanks for the answer. It's something I have been waiting for since the 1st Sun Report Builder, 10 years ago ... Strange that nobody has such a need.
I found something here but I don't know how to apply that in my report. I will search ...
Denis
http://www.modelenginemaker.com/index.php?action=printpage;topic=2203.0
Model Engine Maker
Supporting => Tooling & Machines => Topic started by: wheeltapper on July 15, 2013, 01:31:05 PM
Title: 5"sine bar
Post by: wheeltapper on July 15, 2013, 01:31:05 PM
Hi
I thought I'd see how accurate I can be so I made this.
(photos of the finished sine bar were attached to the original post)
I'm quite pleased with how it came out.
Roy.
Title: Re: 5"sine bar
Post by: b.lindsey on July 15, 2013, 02:02:42 PM
Looks very nice Roy!! So how were you able to check the accuracy?
Bill
Title: Re: 5"sine bar
Post by: wheeltapper on July 15, 2013, 03:48:07 PM
Hi
I know the two rollers are 1/2" diameter exactly (as exactly as I can be with a micrometer) and the distance between the inside edges is 4 1/2", again, as exact as I can be.
parallelism was checked by placing it on the mill table and running a dial gauge along the top.
I've previously checked the mill table with a gauge and get no discernable difference along it .
so as I see it its as accurate as I can make it.
Roy.
Title: Re: 5"sine bar
Post by: mklotz on July 15, 2013, 04:19:57 PM
It's worth checking what its effective length is.
Borrow an accurate angle plate of angle 'A'. Put it on the bar and use gage blocks to pack the bar up until the angle plate is horizontal as measured with your DTI. Call this stack height 'h'. Now the effective length of the bar is:
EL = h / sin(A)
The effective length should be very close to your design value of 5".
Title: Re: 5"sine bar
Post by: b.lindsey on July 15, 2013, 04:20:17 PM
Thanks Roy, pretty much as i had assumed, just thought I might have missed something.
Bill
Title: Re: 5"sine bar
Post by: arnoldb on July 15, 2013, 08:45:15 PM
Very nice indeed Roy. That's a handy bit of kit I've wished I'd already made on many occasions.
Kind regards, Arnold
Title: Re: 5"sine bar
Post by: wheeltapper on July 15, 2013, 09:22:16 PM
It's worth checking what its effective length is.
Borrow an accurate angle plate of angle 'A'. Put it on the bar and use gage blocks to pack the bar up until the angle plate is horizontal as measured with your DTI. Call this stack height 'h'. Now the effective length of the bar is:
EL = h / sin(A)
The effective length should be very close to your design value of 5".
Thanks for the formula, I haven't really got anything that accurate to test this properly yet but I'll paste this into my 'things to remember ' folder.
I did do a quick check, I used 30 degrees which needs a stack 2.5" high.
I made a stack of brass blocks and a feeler gauge that measured 2.5" with a micrometer.
then I set the bar on a flat surface and put one of those electronic angle gauges on top, zeroed the gauge then put the stack under the gauge.
I got exactly 30 degrees. :cartwheel:
so I know I'm in the zone.
further testing will follow.
Roy.
Title: Re: 5"sine bar
Post by: pgp001 on July 15, 2013, 10:27:43 PM
Just trying to understand what you meant by:- (and the distance between the inside edges is 4 1/2")
I assume you mean the gap between the two 1/2" diameters and not the distance between the two locating corners, otherwise you have made a 4 1/2" sine bar :)
It does not really matter it just makes the maths a bit different, I actually have a little 2 1/2" sine bar, and my sine table is 8" centres.
It looks to be a nice bit of workmanship by the way.
Phil
Title: Re: 5"sine bar
Post by: ttrikalin on July 15, 2013, 10:51:01 PM
Say $h=2.5$ inches, the stack of gages you are using for a target angle of $A=30$ degrees ($\pi/6$), and $x = 5.0$ inches, the target length of the sine bar (we assume the rolls are dead on 1/2", though you can revise the calculations below).
The partial derivative of $A$ w.r.t. $x$ gives an indication of how off your actual angle would be for small mistakes in the knowledge of $x$. (good enuf calculation for what we do)
$$\frac{\partial}{\partial x}\text{asin}\Big(\frac{h}{x}\Big) = -\frac{h}{x^2 \sqrt{1-h^2/x^2}}$$
So, if you are off in the length of the bar by small amounts that you should be able to measure, then the percentage (%) you are off in the angle is very small.
If you are wrong 0.001" in x, the angle is off by 0.022%
If you are wrong 0.005" in x, the angle is off by 0.11%
If you are wrong 0.010" in x, the angle is off by 0.22%
Unless you have a very expensive electronic angle gizmo, I doubt that it has an accuracy that can get close to getting 0.22% around a 30 degree angle...
You should be happy with the sine bar, and enjoy it without fear!
tom
Title: Re: 5"sine bar
Post by: ttrikalin on July 15, 2013, 10:52:42 PM
I am a lover of life, not a mathematician.
:facepalm:
Title: Re: 5"sine bar
Post by: wheeltapper on July 15, 2013, 11:03:55 PM
Quote from ttrikalin's post above.
I did mean the distance between the 1/2" rollers.
also, I did not understand one single thing in your post :lolb: :lolb:
when I see lots of numbers together my brain runs into a corner and whimpers :shrug: :shrug: :shrug:
Roy
Title: Re: 5"sine bar
Post by: ttrikalin on July 15, 2013, 11:09:05 PM
[...] I did not understand one single thing in your post :lolb: :lolb:
when I see lots of numbers together my brain runs into a corner and whimpers :shrug: :shrug: :shrug:
Then you are a fellow lover of life, my friend...
This is the message to take home...
If you are off by 0.001" in the length of the sine bar, the target angle of 30 degrees is missed by 0.022% -- peanuts.
Title: Re: 5"sine bar
Post by: mklotz on July 15, 2013, 11:25:42 PM
Tom's computations are implemented in my SINEBAR program - no thinking on your part required. It will tell you the angle error resulting from a small error in either the stack height or in the length of the sine bar.
[It will also tell you the stack height required for a given angle and the blocks from a standard set needed to form that height.]
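mklotz's SINEBAR program isn't posted here, but the two calculations described in this thread are easy to script. A minimal Python sketch of them (my own, assuming an exact 5" bar, not the actual SINEBAR code):

```python
import math

BAR_LENGTH = 5.0  # center-to-center roller distance, inches

def stack_height(angle_deg):
    """Gauge block stack height needed to set the bar to angle_deg."""
    return BAR_LENGTH * math.sin(math.radians(angle_deg))

def angle_error_deg(angle_deg, bar_length_error):
    """Angle error (degrees) caused by a small error in the bar length,
    via dA/dx = -h / (x^2 * sqrt(1 - h^2/x^2)) from tom's post."""
    h = stack_height(angle_deg)
    dA_dx = -h / (BAR_LENGTH**2 * math.sqrt(1.0 - (h / BAR_LENGTH)**2))
    return math.degrees(dA_dx * bar_length_error)

print(stack_height(30.0))            # 2.5 (inches)
print(angle_error_deg(30.0, 0.001))  # about -0.0066 deg, i.e. ~0.022% of 30 deg
```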
http://www.nehalemlabs.net/prototype/page/2/
## The link between thermodynamics and inference
In recent blog posts I talked a bit about how many aspects of maximum entropy were analogous to methods in statistical physics. In this short post, I’ll summarize the most interesting similarities. In bayesian inference, we are usually interested in the posterior distribution of some parameters $\theta$ given the data d. This posterior can be written as a boltzmann distribution: $$P(\theta|d)=\frac{P(\theta,d)}{P(d)}=\left.\frac{e^{-\beta H(\theta,d)}}{Z}\right|_{\beta=1}$$ with $H(\theta,d) = -\log P(\theta,d)/\beta$ and $Z=\int d\theta\;e^{-\beta H(\theta,d)}$. I’ll note that we are working with units such that $k_B=1$ and thus $\beta=1/T$.
The energy is just the expectation value of the hamiltonian H (note that the expectation is taken with respect to $P(\theta|d)$): $$E = \langle H \rangle = -\frac{\partial \log Z}{\partial \beta}$$
And the entropy is equal to $$S=-\int d\theta\;P(\theta|d)\log P(\theta|d)=\beta\langle H \rangle + \log Z$$ (you can check the sign by plugging the boltzmann form into the integral).
We can also define the free energy, which is $$F=E - \frac{S}{\beta}=-\frac{\log Z}{\beta}$$
A cool way to approximate Z if we can’t calculate it analytically (we usually can’t calculate it numerically for high dimensional problems because the integrals take a very long time to calculate) is to use laplace’s approximation: $$Z=\int d\theta\;e^{-\beta H(\theta,d)}\simeq\sqrt{\frac{2\pi}{\beta|H”(\theta^*)|}}e^{-\beta H(\theta^*)}$$ where $|H”(\theta^*)|$ is the determinant of the hessian of the hamiltonian (say that 3 times real fast) and $\theta^*$ is such that $H(\theta^*)=\min H(\theta)$ (minimum because of the minus sign). Needless to say this approximation works best for small temperature ($\beta\rightarrow\infty$) which might not be close to the correct value at $\beta=1$. $\theta^*$ is known as the maximum a posteriori (MAP) estimate. Expectation values can also be approximated in a similar way: $$\langle f(\theta) \rangle = \int d\theta \; f(\theta) P(\theta|d) \simeq\sqrt{\frac{2\pi}{\beta|H”(\theta^*)|}} f(\theta^*)P(\theta^*|d)$$
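To see the approximation in action, here is a quick numpy/scipy sketch on a 1D toy hamiltonian (my own example, not from the original post):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

beta = 1.0
H = lambda theta: 0.5 * (theta - 1.0)**2 + 0.1 * theta**4  # toy hamiltonian

# "Exact" partition function by quadrature
Z_exact, _ = quad(lambda t: np.exp(-beta * H(t)), -np.inf, np.inf)

# Laplace approximation around the MAP estimate theta*
theta_star = minimize_scalar(H).x
eps = 1e-5
Hpp = (H(theta_star + eps) - 2 * H(theta_star) + H(theta_star - eps)) / eps**2
Z_laplace = np.sqrt(2 * np.pi / (beta * Hpp)) * np.exp(-beta * H(theta_star))

print(Z_exact, Z_laplace)  # they get closer as beta grows (sharper peak)
```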
So the MAP estimate is defined as $\text{argmax}_{\theta} P(\theta|d)$. The result won't change if we take the log of the posterior: $$\theta_{\text{MAP}}=\text{argmax}_{\theta} (-\beta H(\theta,d) - \log Z)=\text{argmax}_{\theta} (-\beta H(\theta,d))$$ since $\log Z$ does not depend on $\theta$. The temperature trade-off is easier to see at the level of distributions: the posterior is exactly the distribution $q$ that minimizes the free energy functional $F[q]=\langle H \rangle_q - S[q]/\beta$. Funny, huh? For infinite temperature ($\beta=0$) the entropy term dominates and $q$ is flat, reflecting total lack of knowledge. As we lower the temperature, the energy term contributes more, reflecting the information provided by the data, until at temperature zero the distribution collapses onto the MAP estimate and we only care about the data contribution.
(This is also the basic idea for the simulated annealing optimization algorithm, where in that case the objective function plays the role of the energy and the algorithm walks around phase space randomly, with jump size proportional to the temperature. The annealing schedule progressively lowers the temperature, restricting the random walk to regions of high objective function value, until it freezes at some point.)
Another cool connection is the fact that the heat capacity is given by $$C(\beta)=\beta^2\langle (\Delta H)^2 \rangle=\beta^2\langle (H-\langle H \rangle)^2 \rangle=\beta^2\frac{\partial^2 \log Z}{\partial \beta^2}$$
In the paper I looked at last time, the authors used this fact to estimate the entropy: they calculated $\langle (\Delta H)^2 \rangle$ by MCMC for various betas and used the relation $$S = \, \int_{1}^{\infty} d\beta\; \frac{1}{\beta} C(\beta)$$
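The same relation is easy to check on a toy system where the variance can be computed exactly instead of by MCMC; a sketch with a two-state system (my own construction, not the paper's code):

```python
import numpy as np
from scipy.integrate import quad

energies = np.array([0.0, 1.0])  # toy two-state system (discrete, so S -> 0 as beta -> inf)

def var_H(beta):
    w = np.exp(-beta * energies)
    p = w / w.sum()
    E = (p * energies).sum()
    return (p * energies**2).sum() - E**2

# Integrand C(beta)/beta = beta * Var(H)
S_integrated, _ = quad(lambda b: b * var_H(b), 1.0, np.inf)

# Direct check at beta = 1: S = -sum p log p
p = np.exp(-energies); p /= p.sum()
S_direct = -(p * np.log(p)).sum()
print(S_integrated, S_direct)  # should agree
```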
## Review of ‘Searching for Collective Behavior in a Large Network of Sensory Neurons’
Last time I reviewed the principle of maximum entropy. Today I am looking at a paper which uses it to create a simplified probabilistic representation of neural dynamics. The idea is to measure the spike trains of each neuron individually (in this case there are around 100 neurons from a salamander retina being measured) and simultaneously. In this way, all correlations in the network are preserved, which allows the construction of a probability distribution describing some features of the network.
Naturally, a probability distribution describing the full network dynamics would need a model of the whole network dynamics, which is not what the authors are aiming at here. Instead, they wish to just capture the correct statistics of the network states. What are the network states? Imagine you bin time into small windows. In each window, each neuron will be spiking or not. Then, for each time point you will have a binary word with 100 bits, where a 1 corresponds to a spike and a −1 to silence. This is a network state, which we will represent by $\boldsymbol{\sigma}$.
So, the goal is to get $P(\boldsymbol{\sigma})$. It would be more interesting to have something like $P(\boldsymbol{\sigma}_{t+1}|\boldsymbol{\sigma}_t)$ (subscript denoting time) but we don't always get what we want, now do we? It is a much harder problem to get this conditional probability, so we'll have to settle for the overall probability of each state. According to maximum entropy, this distribution will be given by $$P(\boldsymbol{\sigma})=\frac{1}{Z}\exp\left(-\sum_i \lambda_i f_i(\boldsymbol{\sigma})\right)$$
## Maximum entropy: a primer and some recent applications
I’ll let Caticha summarize the principle of maximum entropy:
Among all possible probability distributions that agree with whatever we know select that particular distribution that reflects maximum ignorance about everything else. Since ignorance is measured by entropy, the method is mathematically implemented by selecting the distribution that maximizes entropy subject to the constraints imposed by the available information.
It appears to have been introduced by Jaynes in 1957, and has seen a resurgence in the past decade with people taking bayesian inference more seriously. (As an aside, Jaynes's posthumously published book is well worth a read, in spite of some cringeworthy rants peppered throughout.) I won't dwell too much on the philosophy as the two previously mentioned sources have already gone into great detail to justify the method.
Usually we consider constraints which are linear in the probabilities, namely we constrain the probability distribution to have specific expectation values. Consider that we know the expectation values of a certain set of functions $f^k$. Then, $p(x)$ should be such that $$\langle f^k \rangle = \int dx \; p(x) f^k(x)$$ for all k. Let's omit the notation $(x)$ for simplicity. Then, we can use variational calculus to find the p which extremizes the functional $$S[p]\; - \alpha \int dx\; p\; - \sum_k \lambda_k \langle f^k \rangle$$ that is, which maximizes the entropy subject to the constraints. The constraint with $\alpha$ is the normalization condition and $S$ is the shannon entropy.
The solution to this is $$p = \frac{1}{Z}\exp\left(-\sum_k\lambda_k f^k \right)$$ with $$Z=\int dx \; \exp \left(-\sum_k\lambda_k f^k \right)$$ the partition function (which is just the normalization constant). Now, we can find the remaining multipliers by solving the system of equations $$-\frac{\partial \log Z}{\partial \lambda_k} = \langle f^k \rangle$$ I’ll let you confirm that if we fix the mean and variance we get a gaussian distribution. Go on, I’ll wait.
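For the impatient, the check goes like this: constraining the first two moments gives $$p(x) = \frac{1}{Z}e^{-\lambda_1 x - \lambda_2 x^2} \propto \exp\left(-\lambda_2\left(x+\frac{\lambda_1}{2\lambda_2}\right)^2\right)$$ which is a gaussian with mean $-\lambda_1/2\lambda_2$ and variance $1/2\lambda_2$; matching $\langle x \rangle = \mu$ and $\mathrm{Var}(x)=\sigma^2$ fixes $\lambda_2 = 1/2\sigma^2$ and $\lambda_1 = -\mu/\sigma^2$.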
## How to do inverse transformation sampling in scipy and numpy
Let’s say you have some data which follows a certain probability distribution. You can create a histogram and visualize the probability distribution, but now you want to sample from it. How do you go about doing this with python?
You do inverse transform sampling, which is just a method to rescale a uniform random variable to have the probability distribution we want. The idea is that the cumulative distribution function for the histogram you have maps the random variable’s space of possible values to the region [0,1]. If you invert it, you can sample uniform random numbers and transform them to your target distribution!
To implement this, we calculate the CDF for each bin in the histogram and interpolate it using scipy's interpolate functions. Then we just need to sample uniform random points and pass them through the inverse CDF!
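The code from the original post didn't survive extraction, so here is a minimal reconstruction of the idea (the gamma data is just a stand-in for "your data"):

```python
import numpy as np
from scipy import interpolate

data = np.random.gamma(2.0, 2.0, size=10_000)  # stand-in for your data

hist, edges = np.histogram(data, bins=50, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
cdf = np.cumsum(hist * np.diff(edges))
cdf /= cdf[-1]                                  # force the CDF to end exactly at 1

# Drop duplicate CDF values (empty bins) so the inverse is well defined
keep = np.concatenate(([True], np.diff(cdf) > 0))
inv_cdf = interpolate.interp1d(cdf[keep], centers[keep], bounds_error=False,
                               fill_value=(centers[0], centers[-1]))

u = np.random.uniform(0.0, 1.0, size=10_000)
samples = inv_cdf(u)                            # distributed like `data`
```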
## A lovely new minesweeper on android I made
Today I am finally releasing my little minesweeper for android! I’ve been working on this as a hobby for the past few weekends, and now it is finally smooth enough to let other people see it! The problem with most minesweeper applications in the market is that they are either really ugly or haven’t really figured out how to adapt the original mouse controls to a touchscreen. I set out to solve these two problems so I can play some mines on my phone!
To solve the ugliness problem, I drew some tiles in photoshop in a very minimal style, to disturb the eyes as little as possible and let you focus on the game. Here is how it turned out (screenshots were attached to the original post).
To navigate the board, you can use the normal multitouch gestures like pan and pinch to zoom. To place a flag, you can long press a tile or you can double tap an open tile and drag to a closed tile (these gestures won’t let you win speed competitions, but they’re pretty good if you’re lazily solving the board)
You also get some pretty sweet statistics when you win or lose!
## Gamma distribution approximation to the negative binomial distribution
In a recent data analysis project I was fitting a negative binomial distribution to some data when I realized that the gamma distribution was an equally good fit. And by equally good I mean the MLE fits were numerically indistinguishable. This intrigued me. On the internet I could find only a cryptic sentence on wikipedia saying the negative binomial is a discrete analog to the gamma and a paper talking about bounds on how closely the negative binomial approximates the gamma, but nobody really explains why this is the case. So here is a quick physicist's derivation of the limit for large k.
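The derivation itself was cut off in extraction; the gist, as far as I can reconstruct it, is Stirling's approximation, $$\frac{\Gamma(k+r)}{\Gamma(k+1)} \approx k^{r-1} \quad (k \to \infty),$$ so that $$P(k)=\frac{\Gamma(k+r)}{k!\,\Gamma(r)}(1-p)^r p^k \approx \frac{(1-p)^r}{\Gamma(r)}\, k^{r-1} e^{k\log p},$$ which is proportional to a gamma density with shape $r$ and rate $-\log p$. For $p$ close to 1 (large mean), $-\log p \approx 1-p$, so the negative binomial looks like a Gamma with shape $r$ and rate $1-p$, and the two MLE fits become numerically indistinguishable.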
## Negative binomial with continuous parameters in python
So scipy doesn’t support a negative binomial for a continuous r parameter. The expression for its pdf is $P(k)=\frac{\Gamma(k+r)}{k!\,\Gamma(r)} (1-p)^rp^k$. I coded a small class which computes the pdf and is also able to find MLE estimates for p and k given some data. It relies on the mpmath arbitrary precision library since the gamma function values can get quite large and overflow a double. It might be useful to someone so here’s the code below.
## Automatic segmentation of microscopy images
A few months back I was posed the problem of automatically segmenting brightfield images of bacteria, like the example image shown in the original post.
I thought this was a really simple problem so I started applying some filters to the image and playing with morphology operations. You can isolate dark spots in the image by applying a threshold to each pixel. The resulting binary image can be modified using the different morphological operators, hopefully identifying each individual cell; a sketch of this kind of pipeline is below. Turns out there is a reason people stopped using these methods in the 90s: they don't really work. If the cells are close enough, there won't be a great enough difference in brightness to separate the two particles and they will remain stuck.
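For reference, the kind of threshold-plus-morphology pipeline I mean (scikit-image, with a stock sample image standing in for the bacteria):

```python
from skimage import data, filters, morphology, measure

img = data.coins()                        # stand-in for a brightfield image
thresh = filters.threshold_otsu(img)
binary = img > thresh                      # for dark cells you would use img < thresh
binary = morphology.binary_opening(binary, morphology.disk(3))  # clean up specks
labels = measure.label(binary)             # touching cells end up as one label -- the problem
print(labels.max(), "objects found")
```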
## How to import structured matlab data into python with scipy
So a few days ago I received this really nice data set from an experimental group in matlab format which contains a list of structs with some properties, some of which are structs themselves. I usually just open it in matlab using my university’s license and export the data as a .csv , but in this case with the structs there was no direct way to export the data and preserve all the associated structure. Luckily scipy has a method to import .mat files into python, appropriately called loadmat.
In the case of a struct array the resulting file is kind of confusing to navigate. You'd expect to access each record with data[i], where data is the struct list. For some reason I cannot hope to understand, you need to iterate over data in the following way: data[0,i].
Each record is loaded as a numpy structured array, which allow you to access the data by its original property names. That’s great, but what I don’t understand is why some data gets nested inside multiple one dimensional arrays which you need to navigate out of. An example: I needed to access a 2d array of floats which was a property of a property of a struct (…). You’d expect to access it as record[‘property’][‘subproperty’]. But actually you have to dig it out of record[‘property’][‘subproperty’][0][0]. I’m not sure if this is due to the way .mat files are structured or scipy’s behavior. This is relatively easy to figure out using the interactive shell, although it makes for some ugly code to parse the whole file.
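To make the indexing concrete, a short sketch (the file name and the 'property'/'subproperty' fields are hypothetical):

```python
from scipy.io import loadmat

mat = loadmat("experiment.mat")   # hypothetical file name
data = mat["data"]                # struct arrays load with shape (1, n)

rows = []
for i in range(data.shape[1]):
    record = data[0, i]                              # note the [0, i] indexing
    value = record["property"]["subproperty"][0][0]  # dig out of the 1x1 wrappers
    rows.append(value)
```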
The best way to map the structure would be to create an array of dicts in python with the corresponding properties. However I wanted to have the data in numpy format which led to a slightly awkward design decision: I create a table where each row contains the (unique) value of the properties in the child structs and the corresponding values for the properties in the parent structs. This means that the properties in the parent structs are duplicated across all rows corresponding to their children. With this I traded off memory space for being able to directly access all values for a single property without traversing some complicated structure. I believe this was a reasonable tradeoff.
What about selecting subsets of data based on the parent properties? To solve this problem, I actually converted the massive numpy table into a pandas dataframe. Pandas is extremely useful when your data fits the “spreadsheet” paradigm (i.e. each column corresponds to a different kind of data type), and its advanced selection operations allow you to do SQL-like queries on the data (yes, you can even do joins!), which is what I have been using to do advanced selections.
https://e-hir.org/journal/view.php?number=1011
Healthc Inform Res, Volume 26(1), 2020
Symum and Zayas-Castro: Prediction of Chronic Disease-Related Inpatient Prolonged Length of Stay Using Machine Learning Algorithms
### Objectives
The study aimed to develop and compare predictive models based on supervised machine learning algorithms for predicting the prolonged length of stay (LOS) of hospitalized patients diagnosed with five different chronic conditions.
### Methods
An administrative claim dataset (2008–2012) of a regional network of nine hospitals in the Tampa Bay area, Florida, USA, was used to develop the prediction models. Features were extracted from the dataset using the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes. Five learning algorithms, namely, decision tree C5.0, linear support vector machine (LSVM), k-nearest neighbors, random forest, and multi-layered artificial neural networks, were used to build the model with semi-supervised anomaly detection and two feature selection methods. Issues with the unbalanced nature of the dataset were resolved using the Synthetic Minority Over-sampling Technique (SMOTE).
### Results
LSVM with wrapper feature selection performed moderately well for all patient cohorts. Using SMOTE to counter data imbalances triggered a tradeoff between the model's sensitivity and specificity, which can be masked under a similar area under the curve. The proposed aggregate rank selection approach resulted in a balanced performing model compared to other criteria. Finally, factors such as comorbidity conditions, source of admission, and payer types were associated with the increased risk of a prolonged LOS.
### Conclusions
Prolonged LOS is mostly associated with pre-intraoperative clinical and patient socioeconomic factors. Accurate patient identification with the risk of prolonged LOS using the selected model can provide hospitals a better tool for planning early discharge and resource allocation, thus reducing avoidable hospitalization costs.
### I. Introduction
In recent years, congestive heart failure (CHF), acute myocardial infarction (AMI), chronic obstructive pulmonary disease (COPD), pneumonia (PN), and type 2 diabetes (DB) have become the top most costly hospitalized conditions in the United States [1]. The majority of these conditions are characterized by longer than the national average length of stay (LOS) of 4.5 days [2]. Moreover, in 2013, the number of hospitalizations for these conditions equaled 3.621 million stays (10.2% of inpatient admissions) [1]. Likewise, the average inpatient treatment costs incurred for these conditions were high, between $7,400 and $18,400 per stay, compared to the national average [3]. Due to the recent substantial increase in medical costs and hospital expenditures, predicting the likelihood of prolonged LOS has become increasingly important to reduce the waste of hospital resources and improve patient satisfaction. Determining influential risk factors for prolonged LOS is useful for planning interventions or care management for patients with multiple chronic conditions. Furthermore, the prediction of prolonged LOS can improve the process of arranging a continuum of care for the patients, thus allowing family members to prepare for the return of their loved one. Additionally, under the government's inpatient prospective payment system (IPPS), reimbursements are paid in fixed payments based on the patient's diagnosis-related group (DRG) rather than the volume of services [4].
Several studies have explored the use of various predictive models to improve performance in predicting LOS [5]. Multiple variations of artificial neural network (ANN)-based models have been applied in a variety of hospital settings (e.g., emergency department, psychiatric, and intensive care unit) [6,7]. Several other classification algorithms (e.g., support vector machine, logistic regression, and random forest) have also been applied for predicting LOS, and they have achieved diverse levels of accuracy [8,9]. However, these models did not combine machine learning-based feature selection, anomaly detection, and class imbalance techniques in a single framework, which can result in overfitting and a weak learner. Only one study attempted to apply a class imbalance technique in predicting prolonged emergency department (ED) LOS [10]. Hence, the use of machine learning algorithms to predict condition-specific prolonged LOS needs further exploration. Therefore, a prolonged LOS prediction model is crucial and indispensable to healthcare providers, especially those with an alternative payment contract (e.g., accountable care organizations) with the Centers for Medicare and Medicaid Services (CMS). Thus, there is a need to develop a predictive decision support system that (1) identifies patients with prolonged LOS risk and (2) helps to develop individual discharge planning to reduce inpatient usage and eventually improve quality of care.
This study constructed and compared predictive models based on supervised machine learning algorithms to identify patients with the risk of prolonged LOS hospitalized with chronic conditions. Condition-specific prolonged LOS prediction represents a significant benchmark in providing healthcare providers a better tool to plan for discharge planning and resource allocation to reduce LOS; therefore, it can lower hospitalization costs. We developed a robust framework for prolonged LOS prediction using data mining algorithms to extract important features, handle missing values, eliminate multicollinearity, detect outlier observations, and balance imbalanced class. Based on previous studies, we chose five algorithms: decision tree C5.0, linear support vector machine (LSVM), k-nearest neighbors (KNN), random forest (RF), and multi-layered ANNs. Twenty different model combinations for each cohort were constructed and compared in terms of several performance metrics.
### II. Methods
### 1. Study Design
Prediction models were constructed using an administrative claim dataset provided by a network of nine hospitals geographically localized within three adjacent counties in the Tampa Bay region, Florida, USA. The types of hospitals in the study included general, teaching, and specialized hospitals. The initial dataset included 594,751 patients accounting for 1,093,177 patient discharges from January 2008 through July 2012. The five disease cohorts included in this study were AMI, CHF, COPD, DB, and PN. These conditions were identified by a primary diagnosis ICD-9 code for the inpatient claims. ICD-9 codes are used to identify hospital admission for AMI (codes 410.*), CHF (codes 428.*, 402.01, 402.91, 404.01, 404.03, 404.11, 404.13, 404.91, 404.93), COPD (codes 491.0, 491.1, 491.2, 491.20, 491.21, 490, 492, 496), DB (codes 250.*2), and PN (codes 480–483, 485–486, 510, 511.0, 511.1, 511.9, 780.6, 786.00, 786.05, 786.06, 786.07, 786.2, 786.3, 786.4, 786.5, 786.51, 786.52, 786.7). The final subsets of AMI, CHF, COPD, DB, and PN cohorts consisted of 10,983, 9,194, 7,189, 3,476, and 21,317 inpatient admissions, respectively.
For each discharge claim, we extracted 82 common features (including patient demographics, hospital information, and comorbidity) and several disease-specific features from the inpatient diagnosis and revenue codes based on insights from previous studies [5,11,12]. Descriptive statistics for the data and variables (common and cohort specific) are shown in Tables 1 and 2, respectively. Features were extracted from the diagnosis codes using ICD-9 numeric, E, and V codes. For example, one of the features, accidental fall, was identified from 30 diagnosis ICD-9 codes by filtering E88–E89. The severity index was calculated as the severity of illness (from 1 = minor to 4 = extreme) defined by 3M all-patient refined-diagnosis-related groups (APR-DRG) [13].
### 2. Outcome Variable
We defined prolonged LOS in our study as >7 days by calculating the 85th percentile threshold for the entire study population cohort's LOS [14]. The uniform prolonged LOS criterion (>7 days) for all cohorts was applied to simplify hospital resource allocation in discharge planning and to reduce the hospital-wise risk of post-discharge complications. Hospital stays longer than seven days are associated with a higher risk of post-discharge adverse outcomes and complications than short stays (≤7 days) regardless of admission causes [15].
### 3. Modeling Framework
The modeling framework comprised three major steps: data preprocessing, model training, and performance evaluation. The data preprocessing comprised missing value handling, zero-variance test, correlation test, novelty detection, and feature selection. Figure 1 illustrates the steps involved in data preprocessing. Using the RStudio caret packages [16], we performed the data preprocessing steps. First, missing values in the patient records were handled using established strategies. If a feature contained over 15% missing cases, we excluded that feature. If less than 15% of the records were missing, the mean or median value replaced the blanks for continuous and ordinal features, respectively.
We used the one-class support vector machine (O-SVM) to identify outliers from noisy observations [17]. The O-SVM identified a similar proportion of anomalies (1.92%–2.44%) and excluded them from the dataset. Then we identified correlated features using the Pearson correlation and chi-square test for the continuous and nominal features, respectively, with a 0.05 level of significance. Among the correlated pairs of features, we dropped those with the highest variance inflation factor. Finally, features with in-class imbalances or zero variances were dropped after the zero-variance test with a 1.0% cutoff. Table 3 summarizes the results obtained from the data preprocessing steps.
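The paper's pipeline was built in R (caret), but the anomaly-screening step is easy to illustrate; for illustration only, a scikit-learn sketch on random stand-in data, with nu set near the ~2% anomaly fraction reported above:

```python
import numpy as np
from sklearn.svm import OneClassSVM

X = np.random.rand(1000, 20)             # stand-in for the preprocessed feature matrix

ocsvm = OneClassSVM(nu=0.02, kernel="rbf", gamma="scale")  # nu ~ expected anomaly fraction
labels = ocsvm.fit_predict(X)            # -1 = anomaly, +1 = inlier
X_clean = X[labels == 1]
print(f"dropped {np.mean(labels == -1):.2%} of records as anomalies")
```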
We separated the records into training (70%), and testing (30%) sets for each cohort. Figure 2 illustrates the process of model building and the evaluation process. Next, using the same training dataset for each patient cohort, two different types of feature selection methods, chi-square filtering, and the SVM-based wrapper algorithm were applied to identify significant variables [18]. In the chi-square filtering method, features were selected at a 0.05 level of significance. For the wrapper algorithm, we limited our algorithm to a maximum of 200 iterations for each training model. The selected features from chi-square filtering and wrapper feature selection methods are shown in Supplementary Tables S1 and S2. After selecting features from both methods, we trained C5.0, LSVM, KNN, RF, and multi-layered ANN models for each cohort.
While training these models, we also explored the issues with the imbalanced nature of the data. When training with imbalanced data, the algorithm tends to learn more from the majority class than the minority class, resulting in a weak learner with limited predictability. For the five cohorts, we had varying imbalance ratios (0.09 to 0.15). To resolve this issue, we over-sampled the training data set using the Synthetic Minority Over-sampling Technique (SMOTE) and created new balanced data [19]. We trained 20 different models for each cohort and compared the performance of the models using the testing dataset under several metrics. Although the area under the curve (AUC) metric was unaffected by imbalances, the AUC tends to mask poor performance [20]. Therefore, we considered two other performance metrics, namely, sensitivity and specificity, with the AUC to minimize imbalance biases.
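Again for illustration only (the study itself used R): a sketch of oversampling the training split with imbalanced-learn's SMOTE on synthetic stand-in data:

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for an imbalanced LOS dataset (~0.1 imbalance ratio)
X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample training set only
print(Counter(y_tr), Counter(y_res))
```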
We propose a new rank average aggregate metric approach for selecting the best performing model to deal with the dilemma of performance tradeoff initiated by data imbalance. In our approach, the performance of each model was ranked separately for the three metrics of AUC, sensitivity, and specificity, and each was assigned a score (between 1 to 20) based on the rank. For example, if the LSVM model was ranked third by the AUC, we assigned a score of 18 out of 20. These three scores were multiplied by the assigned weights and summed to obtain a single aggregate metric where the summation of all weights must be equal to 1. Finally, we selected a single model by comparing the composite weighted sum metrics among the 20 different models, and the steps were repeated for each cohort.
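To make the procedure concrete, here is a small Python sketch of the aggregate rank metric as described (my own rendering; model names and scores are invented):

```python
import numpy as np
import pandas as pd

def aggregate_rank(scores: pd.DataFrame, weights=(1/3, 1/3, 1/3)) -> pd.Series:
    """scores: one row per model, columns = ['auc', 'sensitivity', 'specificity'].
    The best model on a metric gets n points, the worst gets 1; points are
    multiplied by the weights (which must sum to 1) and summed."""
    points = scores.rank(ascending=True)   # 3rd-best of 20 models -> 18 points, etc.
    return (points * np.asarray(weights)).sum(axis=1)

models = pd.DataFrame(
    {"auc": [0.81, 0.78, 0.80], "sensitivity": [0.70, 0.85, 0.75],
     "specificity": [0.28, 0.60, 0.75]},
    index=["lsvm_wrapper", "rf_smote", "ann_chi2"])
print(aggregate_rank(models).idxmax())     # picks the balanced performer
```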
### III. Results
### 1. Assessment of the Prediction Models
We fitted 20 different model combinations comprising two feature selection methods, with or without SMOTE, and five learning algorithms for each cohort. Table 4 summarizes the performance of the learning algorithms for each cohort. As shown, the KNN models did not outperform any of the other algorithms. LSVM outperformed every other algorithm in all cohorts with AUC, while it only outperformed CHF in terms of specificity. RF models outperformed for the AMI and DB cohorts, whereas for CHF and PN, the ANN models worked better according to the specificity metric. However, it is evident that selecting a model solely based on the AUC masked a model's poor specificity. For example, the best model for CHF under AUC is LSVM using the wrapper feature selection, with 0.81 AUC and 0.28 specificity. A specificity of 0.28, meaning only a 28% chance of detecting a true negative, represents poor model performance, which was completely shadowed by the AUC. Between the two feature selection methods, the SVM-based wrapper method yielded better prediction by AUC, whereas chi-square filtering methods achieved better true-negative rates. SMOTE used with feature selection did not improve the model's AUC. However, chi-square feature selection with SMOTE resulted in the highest specificity in all cohorts.
Figure 3 illustrates the changes in sensitivity and specificity with and without using SMOTE for the chi-square feature selection method. As shown, there is a significant tradeoff between the sensitivity and specificity. Furthermore, all the learning algorithms showed a positive tradeoff, specificity increase, and sensitivity decrease, except for the RF models. SMOTE yielded the highest and lowest increment of sensitivity for the C5.0 and KNN algorithms, respectively. The performance of each model depends on the algorithm as well as the feature selection and data balancing technique. Additionally, the tradeoff between the metrics due to the dataset imbalance makes it more challenging to select the best performing model. Table 5 shows the selected models based on AUC, sensitivity, F1 score, proposed aggregate rank, and a custom rule for each cohort. Based on our proposed metric, we selected several variations of LSVM models that showed balanced performance in every metric. In healthcare decision making, administrators or decision makers select the best model either empirically or by custom decision criteria favorable to the budgetary constraints. Therefore, we tested a custom rule comprising a minimum 0.75 specificity and the maximum for the AUC metric. If there was no model associated with more than 0.75 specificity, we selected the final model with the highest specificity. In general, LSVM with the wrapper feature selection was selected based on the AUC criteria, while different algorithms with SMOTE were selected based on the specificity metric. The selected models obtained by custom rules comprised different machine learning algorithms with SMOTE, and the results showed a moderate performance across all metrics.
### 2. Important Features
Tables 6 and 7 show significant features using the LSVM algorithm and regression analysis, respectively. The most important variable in all disease cohorts except DB for making a prolonged LOS prediction was the disease severity index with varying relative weights. In addition, the presence of different types of comorbidity was a strong predictor of prolonged LOS. For example, in AMI, COPD, and PN, the presence of comorbidities related to blood and blood-forming organ diseases resulted in a longer LOS. In addition, AMI and CHF patients admitted with pneumoconiosis and other lung-related conditions tended to stay longer in hospital inpatient settings. Several non-comorbidity-related features, such as the number of PX, source of admission, and payer class, were also associated with a prolonged LOS. The number of tests (PX) required to assess patient condition was highly associated with prolonged inpatient stays in all cohorts except AMI, and the higher the number of tests, the greater the risk of a prolonged LOS. Specifically, PN patients with non-commercial payers and COPD patients admitted through the ED showed a greater likelihood of prolonged LOS.
### IV. Discussion
In this study, we analyzed prolonged inpatient stays using an administrative claim dataset, performed extensive data preprocessing, and then developed and compared several variants of predictive models for the five disease cohorts. We identified several important factors that increase the risk of a prolonged stay for each disease. We found that prolonged LOS is associated with blood-forming and skin-disease-related comorbidities in most of the chronic conditions. The findings of several previous studies also support this result [21]. Some other studies have reported that patient demographics, gender, and hospital locations were contributors to identifying the risk of a prolonged LOS. However, our results do not conform to the findings in those studies [22,23]. One possible reason for this discrepancy is that the previous studies have mostly used homogenous data consisting of a single hospital or specific type of operation (e.g., knee replacement, heart surgery). When examining data from nine different hospitals over more than four years, other factors had more weight than demographic factors (e.g., race, gender) in terms of the prediction of prolonged LOS.
Significant factors found in our study could be used to formulate individual disease-specific treatment pathways and early discharge planning to decrease inpatient LOS. By identifying patients with risk of prolonged LOS at the time of admission or inpatient care, the hospital can assign a dedicated hospitalist and prepare a plan for the advanced discharge planning process. Studies show that having a dedicated hospitalist after four days of inpatient care and effective early discharge planning with a continuum of care can significantly reduce inpatient LOS [24,25]. Additionally, prioritizing laboratory tests and avoiding duplication of tests using hospital information exchange (HIE) can effectively decrease the LOS [26]. Furthermore, implementing improved care management and care coordination for patients with specific comorbidities in accountable care organizations (ACO) could reduce inpatient care utilization [27]. In addition, we found that the type of payer or insurance, which are typically considered to be significant socioeconomic factors, significantly affects the likelihood of a prolonged LOS. This insight agrees with the claims made in previous studies that social deprivation or economic inequality has a negative effect on the expected length of hospital stays of admitted patients [28]. Furthermore, an individual prolonged LOS risk profile can be used as a decision-making aid to the physician's subjective judgment while adjusting a patient's LOS [21]. This study of disease-specific prolonged LOS prediction may also assist in reducing the financial burden of the numerous outlier claims under CMS IPPS resulting from extended hospital stays [29]. Outlier payments exert tremendous pressure on Medicare expenditures and are responsible for an average of $4.04 billion each year [30].
The prediction model we developed was compared to other published models in terms of predictive power and robustness. The selected cohort-specific models showed a variation of prediction performance depending on the model evaluation criteria. We found that although predictive power (AUC) was similar across certain methods, the range in detecting true-positive and true-negative events varied greatly. Analyzing multiple aspects of models provides the health administrators or decision-makers a stronger understanding of those models and real-time applicability. LSVM models with wrapper feature selection showed overall better performance for all cohorts. Furthermore, integration of O-SVM for outlier detection in data preprocessing also improved model robustness when dealing with noisy observations. Implementation of the SMOTE technique along with feature-selection algorithms showed a significant tradeoff between sensitivity and specificity in all prediction models except RF, which made the final model selection based on a single performance metric difficult. Moreover, our results showed that using only the AUC as a baseline metric may mask a model's poor prediction performance, especially regarding the true-negative rate. Our proposed aggregate rank-based selection approach resolves this tradeoff dilemma by choosing a model with balanced performance, and it can provide a decision support tool to health administrators when comparing predictive models.
In conclusion, the accurate prediction of a prolonged LOS and prognosis of the risks associated with chronic disease are challenging. We adapted five machine learning techniques with feature selection, anomaly detection, and SMOTE balancing to predict prolonged LOS. The performance of the methods varies in complex ways, including discrimination and predictive range. We found that LSVM models performed better in terms of AUC and sensitivity. We also found that clinical and socioeconomic factors are the main features driving patient prolonged LOS. Designing predictive models would help to accelerate the stratification of patients according to prolonged LOS risk for improved care. The proposed prolonged LOS prediction model can be used to plan for advanced discharge planning, healthcare personnel allocation, and care coordination programs to reduce the usage of inpatient care. Some limitations of the present study should be addressed because they may restrict generalizability and are indicative of the need for further research. Our research did not include potential pathological (e.g., hemoglobin level) and sociocultural (e.g., education) features due to data availability; such features might be useful for improving accuracy.
### Notes
Conflict of Interest: No potential conflict of interest relevant to this article was reported.
### Supplementary Materials
Supplementary materials can be found via https://doi.org/10.4258/hir.2020.26.1.20.
#### Table S1
Selected features from chi-square filtering feature selection
hir-26-20-s001.pdf
#### Table S2
Selected features from SVM-based wrapper feature selection
hir-26-20-s002.pdf
### Descriptive statistics for all common features
Values are presented as number (%) or mean ± standard deviation. For binary (Yes or No) variables, descriptive statistics are shown for the "Yes" level only.
CHF: congestive heart failure, AMI: acute myocardial infarction, COPD: chronic obstructive pulmonary disease, PN: pneumonia, DB: type 2 diabetes, LOS: length of stay.
### Descriptive statistics for disease cohort-specific variables
For binary (Yes or No) variables, descriptive statistics are shown for the "Yes" level only.
CHF: congestive heart failure, COPD: chronic obstructive pulmonary disease, AMI: acute myocardial infarction, DB: type 2 diabetes, PN: pneumonia.
### Summary statistics in data preprocessing steps
Values are presented as number (%).
CHF: congestive heart failure, AMI: acute myocardial infarction, COPD: chronic obstructive pulmonary disease, PN: pneumonia, DB: type 2 diabetes.
### Performance comparison of predictive models
AMI: acute myocardial infarction, CHF: congestive heart failure, COPD: chronic obstructive pulmonary disease, DB: type 2 diabetes, PN: pneumonia, AUC: area under the curve, SP: specificity, KNN: k-nearest neighbor, LSVM: linear support vector machine, RF: random forest, NN: multi-layer neural network, WR: support vector machine-based wrapper method, WR+ST: wrapper method with SMOTE (Synthetic Minority Over-sampling Technique), CQ: chi-square filtering method, CQ+ST: chi-square with SMOTE.
a: best model based on AUC; b: best model based on specificity.
### Best performing model based on several criteria
AMI: acute myocardial infarction, CHF: congestive heart failure, COPD: chronic obstructive pulmonary disease, DB: type 2 diabetes, PN: pneumonia, AUC: area under the curve, SP: specificity, SN: sensitivity, LSVM: linear support vector machine, RF: random forest, NN: multi-layer neural network, KNN: k-nearest neighbor, WR: support vector machine-based wrapper method, WR+ST: wrapper method with SMOTE (Synthetic Minority Over-sampling Technique), CQ: chi-square filtering method, CQ+ST: chi-square with SMOTE.
Example: a+b+c, where a = machine learning method, b = feature selection technique, and c = presence of SMOTE balancing; LSVM+CQ+ST = linear support vector machine with chi-square feature selection and SMOTE data balancing.
### Important features extracted by LSVM algorithm
LSVM: linear support vector machine, AMI: acute myocardial infarction, CHF: congestive heart failure, COPD: chronic obstructive pulmonary disease, DB: type 2 diabetes, PN: pneumonia.
### Important features extracted using regression analysis
CHF: congestive heart failure, AMI: acute myocardial infarction, COPD: chronic obstructive pulmonary disease, DB: type 2 diabetes, PN: pneumonia, ED: emergency department.
https://www.iacr.org/cryptodb/data/paper.php?pubkey=13745
## CryptoDB
### Paper: A Synthetic Indifferentiability Analysis of Block Cipher based Hash Functions
Authors: Zheng Gong, Xuejia Lai, and Kefei Chen
URL: http://eprint.iacr.org/2007/465
Abstract: Nowadays, investigating which construction is best suited to be a cryptographic hash function is a red-hot topic. In TCC'04, Maurer et al. first introduced the notion of indifferentiability as a generalization of the concept of the indistinguishability of two cryptosystems. In AsiaCrypt'06, Chang et al. analyzed the indifferentiability security of some popular block-cipher-based hash functions, such as the PGV constructions and MDC-2. In this paper, we investigate Chang et al.'s analysis of the PGV constructions and the PBGV double-block-length constructions. In particular, we point out a more precise adversarial advantage of indifferentiability by considering the two situations of whether the hash function is keyed or not. Furthermore, Chang et al. designed attacks on 4 PGV hash functions and the PBGV hash function to prove that they are differentiable from a random oracle with prefix-free padding. We find a limitation in their differentiability attacks and construct our own simulations, obtaining the contrary result: those schemes are indifferentiable from a random oracle with prefix-free padding, as are some other popular constructions.
##### BibTeX
@misc{eprint-2007-13745,
title={A Synthetic Indifferentiability Analysis of Block Cipher based Hash Functions},
booktitle={IACR Eprint archive},
keywords={foundations / Hash Function, Block Cipher, Indifferentiability, Random Oracle},
url={http://eprint.iacr.org/2007/465},
note={under a journal's review neoyan@sjtu.edu.cn 13859 received 11 Dec 2007},
author={Zheng Gong and Xuejia Lai and Kefei Chen},
year=2007
}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-10-section-10-2-arithmetic-sequences-exercise-set-page-1060/53
## Precalculus (6th Edition) Blitzer
We know that $a_n=a_1+(n-1)d$. Here, we have $a_{1}=1$, $a_n=-83$, and $d=-4$. Thus, $-83=1+(n-1)(-4)$, or $-4n+4=-84 \implies n=22$. Hence, there are $22$ terms.
https://www.hepdata.net/record/22663
Spin rotation parameters A and R for pi+ p and pi- p elastic scattering from 427-MeV/c to 657-MeV/c
Phys.Rev.D 47 (1993) 1762-1775, 1993.
Abstract (data abstract)
LAMPF. Measurement of the spin rotation parameters A and R, coded here as DSL and DSS respectively, in pi+ p and pi- p elastic scattering at incident momenta from 427 to 657 MeV.
http://es.wikidoc.org/index.php/Homogeneous_function
# Homogeneous function
In mathematics, a homogeneous function is a function with multiplicative scaling behaviour: if the argument is multiplied by a factor, then the result is multiplied by some power of this factor.
## Formal definition
Suppose that $f:V\rightarrow W$ is a function between two vector spaces over a field $F$.
We say that $f$ is homogeneous of degree $k$ if
$$f(\alpha \mathbf{v}) = \alpha^{k} f(\mathbf{v})$$
for all nonzero $\alpha \in F$ and $\mathbf{v} \in V$.
## Examples
• A linear function $f:V\rightarrow W$ is homogeneous of degree 1, since by the definition of linearity
$$f(\alpha \mathbf{v}) = \alpha f(\mathbf{v})$$
for all $\alpha \in F$ and $\mathbf{v} \in V$.
• A multilinear function $f:V_{1}\times \ldots \times V_{n}\rightarrow W$ is homogeneous of degree $n$, since by the definition of multilinearity
$$f(\alpha \mathbf{v}_{1},\ldots ,\alpha \mathbf{v}_{n}) = \alpha^{n} f(\mathbf{v}_{1},\ldots ,\mathbf{v}_{n})$$
for all $\alpha \in F$ and $\mathbf{v}_{1}\in V_{1},\ldots ,\mathbf{v}_{n}\in V_{n}$.
• It follows from the previous example that the $n$th Fréchet derivative of a function $f:X\rightarrow Y$ between two Banach spaces $X$ and $Y$ is homogeneous of degree $n$.
• Monomials in $n$ real variables define homogeneous functions $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$. For example,
$$f(x,y,z) = x^{5}y^{2}z^{3}$$
is homogeneous of degree 10 since
$$(\alpha x)^{5}(\alpha y)^{2}(\alpha z)^{3} = \alpha^{10} x^{5}y^{2}z^{3}.$$
• Similarly,
$$x^{5}+2x^{3}y^{2}+9xy^{4}$$
is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions.
## Elementary theorems
• Euler's theorem: Suppose that the function $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ is differentiable and homogeneous of degree $k$. Then
$$\mathbf{x}\cdot \nabla f(\mathbf{x}) = kf(\mathbf{x}).$$
This result is proved as follows. Writing $f = f(x_{1},\ldots ,x_{n})$ and differentiating the equation
$$f(\alpha \mathbf{y}) = \alpha^{k} f(\mathbf{y})$$
with respect to $\alpha$, we find by the chain rule that
$$\frac{\partial}{\partial x_{1}} f(\alpha \mathbf{y})\,\frac{\mathrm{d}}{\mathrm{d}\alpha}(\alpha y_{1}) + \cdots + \frac{\partial}{\partial x_{n}} f(\alpha \mathbf{y})\,\frac{\mathrm{d}}{\mathrm{d}\alpha}(\alpha y_{n}) = k\alpha^{k-1} f(\mathbf{y}),$$
so that
$$y_{1}\frac{\partial}{\partial x_{1}} f(\alpha \mathbf{y}) + \cdots + y_{n}\frac{\partial}{\partial x_{n}} f(\alpha \mathbf{y}) = k\alpha^{k-1} f(\mathbf{y}).$$
The above equation can be written in del notation as
$$\mathbf{y}\cdot \nabla f(\alpha \mathbf{y}) = k\alpha^{k-1} f(\mathbf{y}), \qquad \nabla = \left(\frac{\partial}{\partial x_{1}},\ldots ,\frac{\partial}{\partial x_{n}}\right),$$
from which the stated result is obtained by setting $\alpha = 1$.
• Suppose that $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ is differentiable and homogeneous of degree $k$. Then its first-order partial derivatives $\partial f/\partial x_{i}$ are homogeneous of degree $k-1$.
This result is proved in the same way as Euler's theorem. Writing $f = f(x_{1},\ldots ,x_{n})$ and differentiating the equation
$$f(\alpha \mathbf{y}) = \alpha^{k} f(\mathbf{y})$$
with respect to $y_{i}$, we find by the chain rule that
$$\frac{\partial}{\partial x_{i}} f(\alpha \mathbf{y})\,\frac{\mathrm{d}}{\mathrm{d}y_{i}}(\alpha y_{i}) = \alpha^{k}\,\frac{\partial}{\partial x_{i}} f(\mathbf{y})\,\frac{\mathrm{d}}{\mathrm{d}y_{i}}(y_{i}),$$
so that
$$\alpha\,\frac{\partial}{\partial x_{i}} f(\alpha \mathbf{y}) = \alpha^{k}\,\frac{\partial}{\partial x_{i}} f(\mathbf{y})$$
and hence
$$\frac{\partial}{\partial x_{i}} f(\alpha \mathbf{y}) = \alpha^{k-1}\,\frac{\partial}{\partial x_{i}} f(\mathbf{y}),$$
which says precisely that $\partial f/\partial x_{i}$ is homogeneous of degree $k-1$.
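As a quick numeric sanity check of Euler's theorem (an illustrative sketch added here, not part of the original article), take $f(x,y,z) = x^{5}y^{2}z^{3}$, homogeneous of degree 10, and compare $\mathbf{x}\cdot\nabla f(\mathbf{x})$ with $10f(\mathbf{x})$ at an arbitrary point:

# Numeric check of Euler's theorem for f(x,y,z) = x^5 y^2 z^3 (degree 10).
import numpy as np

def f(p):
    x, y, z = p
    return x**5 * y**2 * z**3

def grad_f(p, h=1e-6):
    # gradient via central finite differences
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

p = np.array([1.3, 0.7, 2.1])
print(np.dot(p, grad_f(p)))  # x . grad f(x)
print(10 * f(p))             # k f(x) with k = 10; the two values agree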
## Application to ODEs
The substitution $v = y/x$ converts the ordinary differential equation
$$I(x,y)\,\frac{\mathrm{d}y}{\mathrm{d}x} + J(x,y) = 0,$$
where $I$ and $J$ are homogeneous functions of the same degree, into the separable differential equation
$$x\,\frac{\mathrm{d}v}{\mathrm{d}x} = -\frac{J(1,v)}{I(1,v)} - v.$$
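For example (an illustrative case added here), take $I(x,y) = x$ and $J(x,y) = -(x+y)$, both homogeneous of degree 1. Then
$$x\,\frac{\mathrm{d}v}{\mathrm{d}x} = -\frac{J(1,v)}{I(1,v)} - v = (1+v) - v = 1,$$
so $v = \ln|x| + C$ and hence $y = x\ln|x| + Cx$, which indeed solves $x\,y' = x + y$.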
https://search.r-project.org/CRAN/refmans/ANTs/html/met.diameter.html
met.diameter {ANTs} R Documentation
## Diameter
### Description
Calculates the network diameter.
### Usage
met.diameter(
M,
df = NULL,
weighted = TRUE,
shortest.weight = FALSE,
normalization = TRUE,
directed = TRUE,
out = TRUE
)
### Arguments
M: a square adjacency matrix, or a list of square adjacency matrices, or an output of the ANT functions stat.ds.grp, stat.df.focal, or stat.net.lk.
df: a data frame of the same length as the input matrix, or a list of data frames if argument M is a list of matrices or an output of the ANT functions stat.ds.grp, stat.df.focal, or stat.net.lk.
weighted: if FALSE, binarizes the square adjacency matrix M; geodesic distances and the diameter are then based only on the presence or absence of edges.
shortest.weight: if FALSE, considers the highest met.strength as the shortest path.
normalization: normalizes the weights of the links, i.e., divides them by the average strength of the network. Argument normalization can't be TRUE when argument weighted is FALSE.
directed: if FALSE, symmetrizes the matrix; otherwise, calculates geodesic distances and the diameter according to the directionality of the links.
out: if TRUE, considers outgoing ties.
### Details
The diameter is the longest geodesic distance.
### Value
• A double representing the diameter of the network if argument M is a square matrix.
• A list of doubles if argument M is a list of matrices and argument df is NULL. Each double represents the diameter of the corresponding matrix of the list.
• A list of data frames df with a new column of network diameter if argument df is not NULL and argument M is a list of matrices. The name of the column is adapted according to the values of the arguments weighted, shortest.weight, normalization, directed, and out.
• A list of data frames df with a new column of network diameter if argument df is not NULL, argument M is an output from the ANT functions stat.ds.grp, stat.df.focal, or stat.net.lk for multiple matrix permutations, and argument df is a list of data frames of the same length as argument M.
### Author(s)
Sebastian Sosa, Ivan Puga-Gonzalez.
### References
Opsahl, T., Agneessens, F., & Skvoretz, J. (2010). Node centrality in weighted networks: Generalizing degree and shortest paths. Social networks, 32(3), 245-251.
Sosa, S. (2018). Social Network Analysis, in: Encyclopedia of Animal Cognition and Behavior. Springer.
### Examples
met.diameter(sim.m)
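Outside of R, the same quantity can be sketched with Python and networkx (an illustration added here, not part of ANTs). Note that ANTs' shortest.weight option controls whether high strength counts as a short path; this sketch simply treats edge weights as distances:

# Weighted network diameter: the longest shortest-path distance.
import numpy as np
import networkx as nx

def weighted_diameter(M):
    G = nx.from_numpy_array(np.asarray(M, dtype=float), create_using=nx.DiGraph)
    lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    return max(d for dists in lengths.values() for d in dists.values())

M = np.array([[0, 1, 4],
              [1, 0, 2],
              [4, 2, 0]])
print(weighted_diameter(M))  # 3: the 0 -> 1 -> 2 path beats the direct 0 -> 2 edge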
[Package ANTs version 0.0.16 Index]
https://math.stackexchange.com/questions/2230907/prove-if-a-bounded-function-is-integrable-the-difference-between-the-upper-sum-a
# Prove if a bounded function is integrable the difference between the upper sum and lower sum of the regular partition tends to 0.
How do I prove that a condition for (Riemann) integrability is that the difference between the upper and lower sums of the regular partition $D_n:0=a_0<a_1<...<a_n=1$, where $a_k=k/n$, tends to 0 as $n$ tends to infinity? I've tried using the Riemann criterion: for all $ε>0$ there is a dissection $D$ such that the difference between the upper and lower sums of $D$ is less than $ε$. This is fine in the case where $D$ is composed only of rationals, but I can't figure out how to make the argument rigorous in the case that $D$ may contain irrationals. Any help would be greatly appreciated! :)
• Nobody can prove that a condition in some definition (which you don't give) is necessary. Formulating a definition is a free act. Some definitions are successful, others aren't. In short: You have to give more details. – Christian Blatter Apr 12 '17 at 18:13
• Sorry, I don't understand... my question is about the upper sum being the sum of the suprema of the function over the dissection, and the lower sum the infima, with $D_n$ being the regular dissection of the interval [0,1]. The condition for integrability is that, over all possible dissections of [0,1], the supremum of the lower sums equals the infimum of the upper sums, but the question I have is asking about regular dissections in particular. (Sorry, I've edited the wording of my question so it's clear this is about regular partitions.) – user294388 Apr 12 '17 at 19:20
## 1 Answer
First, it is a sufficient condition.
As long as $f$ is bounded on $[0,1]$, the upper and lower sums corresponding to arbitrary partitions are bounded. Let $\mathcal{P}$ denote the set of all partitions of $[0,1].$ Consequently, the sets $\{L(P,f): P \in \mathcal{P}\}$ and $\{U(P,f): P \in \mathcal{P}\}$ are bounded, and this guarantees the existence of
$$\underline{\int}_0^1 f(x) \, dx = \sup_{P \in \mathcal{P}}\, L(P,f), \\ \overline{\int}_0^1 f(x) \, dx = \inf_{P \in \mathcal{P}}\, U(P,f) ,$$
which are called the lower and upper integrals.
Given any regular partition $D_n$ we have
$$L(D_n,f) \leqslant \underline{\int}_0^1 f(x) \, dx \leqslant \overline{\int}_0^1 f(x) \, dx \leqslant U(D_n,f).$$
The central inequality follows because for any partitions $P$ and $Q$ we have $L(P,f) \leqslant U(Q,f)$ (take a common refinement of the partitions to show this) and, thus $\sup_{P \in \mathcal{P}} \,L(P,f) \leqslant \inf_{Q \in \mathcal{P}} \,U(Q,f)$.
Hence,
$$0 \leqslant \overline{\int}_0^1 f(x) \, dx - \underline{\int}_0^1 f(x) \, dx \leqslant U(D_n,f) - L(D_n,f).$$
The right-hand side converges to $0$ as $n \to \infty$, by hypothesis, which implies that $f$ is integrable since we must have
$$\underline{\int}_0^1 f(x) \, dx = \overline{\int}_0^1 f(x) \, dx,$$
where the common value of lower and upper integrals is by definition the value of the integral.
To show it is a necessary condition, consider
$$\left|U(D_n,f) - L(D_n,f) \right| \leqslant \left|U(D_n,f) - \int_0^1 f(x) \, dx \right| + \left|L(D_n,f) - \int_0^1 f(x) \, dx \right|.$$
The two terms on the RHS go to zero as $n \to \infty$. This is a consequence of the equivalent condition for integrability where for arbitrary Riemann sums corresponding to tagged partitions we have
$$\tag{*}\int_0^1 f(x) \, dx = \lim_{\|P\| \to 0} S(P,f).$$
Here $\|P\| = \max_{1 \leqslant j \leqslant n} (x_j - x_{j-1})$ is the norm of the partition $P = (x_0,x_1, \ldots, x_n)$ and, clearly, $\|D_n\| \to 0$ if and only if $n \to \infty$.
It takes a bit of effort to prove the equivalence of $(*)$ to the definition of the Riemann integral in terms of partition refinement or the Darboux approach. It has been shown a number of times on this site including here.
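A quick numerical illustration of the hypothesis (a sketch added here, not part of the original answer): for a smooth function the gap $U(D_n,f) - L(D_n,f)$ visibly shrinks as $n$ grows, e.g. for $f(x) = x^2$ on $[0,1]$:

import numpy as np

def upper_lower_gap(f, n):
    # U(D_n, f) - L(D_n, f) on the regular partition of [0, 1] into n cells;
    # sup/inf on each cell are estimated by sampling (adequate for smooth f).
    edges = np.linspace(0.0, 1.0, n + 1)
    gap = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        s = f(np.linspace(a, b, 50))
        gap += (s.max() - s.min()) * (b - a)
    return gap

for n in (10, 100, 1000):
    print(n, upper_lower_gap(lambda x: x**2, n))  # gap shrinks roughly like 1/n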
• Sorry for only just responding to this but that was a really good explanation thank you!! – user294388 Apr 24 '17 at 20:29
• You're welcome. Glad to help. – RRL Apr 25 '17 at 18:58
http://rin.io/category/code/matlab/
## Matlab: Smooth Rotating Animation for Line Plots
I recently became stuck trying to create an animation which consists of a smooth rotation of a viewpoint around the Lorenz attractor. Methods I use for changing viewpoints with respect to surface objects were creating jerky, lagging animations when applied to my line plot.
This will work for any 3D line plot.
plot3(x,y,z);
axis vis3d
fps = 60; sec = 10;
vidObj = VideoWriter('plotrotation.avi');
vidObj.Quality = 100;
vidObj.FrameRate = fps;
open(vidObj);
for i=1:fps*sec
camorbit(0.9,-0.1);
writeVideo(vidObj,getframe(gcf));
end
close(vidObj);
This results in the following smooth animation:
## Matlab: Coaxial Cylinders (Polar Coordinates)
Let’s say we want to create an aesthetically pleasing visualization of 2 coaxial cylinders.
To do this, we’ll be adjusting the lighting, proportions, and transparency of the figure.
The code to create this figure utilizes the coordinate transformation from Cartesian to 3D-Polar coordinates. Recall that the transformation between coordinate systems is as follows.
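With $r$ the radial distance and $\phi$ the angular coordinate, the standard relations are
$$x = r\cos\phi, \qquad y = r\sin\phi, \qquad z = z.$$
(The code below instead sweeps the cylinder around the Y axis, so it computes X and Z from the angle.)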
Instead of using Matlab's built-in cart2pol method, we will manually convert each Cartesian (x, y, z) coordinate to its equivalent polar coordinate (r, phi, z).
Side note:
I used phi above in the transformation equations and theta below in the code to make a point. The angular coordinate in the cylindrical (a.k.a polar) coordinate system is generally represented as either phi or theta.
These are equivalent, and the variable chosen is purely a matter of notation.
Y=-5:5;
theta=linspace(0,2*pi,40);
[Y,theta]=meshgrid(Y,theta);
r = 1.5
% calculate x and z
X=r*cos(theta);
Z=r*sin(theta);
hs = surf(X,Y,Z)
set(hs,'EdgeColor','None', ...
'FaceColor', [0.5 0.5 0.5], 'FaceLighting', 'phong');
alpha(0.7);
hold on
camlight right;
r = 0.5
% recalculate x and z
X=r*cos(theta);
Z=r*sin(theta);
hs = surf(X,Y,Z)
axis equal
set(hs,'EdgeColor','None', ...
'FaceColor', [0.5 0.5 0.5], 'FaceLighting', 'phong');
alpha(0.7);
axis off
camlight right;
lighting gouraud
view(140, 24)
% white background
set(gcf,'color','white')
## Matlab: Lorenz Attractor
I’m a big fan of the Lorenz Attractor, which, when plotted, resembles the half open wings of a butterfly. This attractor was derived from a simplified model of convection in the earth’s atmosphere. One simple version of the Lorenz attractor is pictured below:
The Lorenz system is a set of ordinary differential equations notable for its chaotic solutions (see below). Here $x$, $y$, and $z$ make up the system state, $t$ is time, and $\sigma$, $\rho$, and $\beta$ are the system parameters.
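Written out (matching the code below, where A, R, and B play the roles of $\sigma$, $\rho$, and $\beta$), the equations are
$$\frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z.$$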
The Lorenz attractor is a chaotic solution to this system, found when $\rho = 28$, $\sigma = 10$, and $\beta = 8/3$.
The series does not form limit cycles nor does it ever reach a steady state.
We can calculate and render the aforementioned chaotic solution to this ODE as follows:
function loren3
clear;clf
global A B R
A = 10;   % sigma
B = 8/3;  % beta
R = 28;   % rho
% random initial condition in [-50, 50]^3
u0 = 100*(rand(3,1) - 0.5);
[t,u] = ode45(@lor2,[0,100],u0);
% discard the transient (t <= 10) before plotting
N = find(t>10); v = u(N,:);
x = v(:,1);
y = v(:,2);
z = v(:,3);
plot3(x,y,z);
view(158, 14)

function uprime = lor2(t,u)
% right-hand side of the Lorenz system
global A B R
uprime = zeros(3,1);
uprime(1) = -A*u(1) + A*u(2);
uprime(2) = R*u(1) - u(2) - u(1)*u(3);
uprime(3) = -B*u(3) + u(1)*u(2);
This results in the figure:
To create a surface/mesh from this line plot, we proceed…
## Matlab: Create Mesh or Surface From Line Plot
This is a continuation of Matlab: Lorenz Attractor; however, these methods can be applied to any line plot or collection of points.
A slightly more aesthetically pleasing representation of the Lorenz attractor can be achieved by adding axis off and altering the view's azimuth and elevation: view(15, 48).
Now we’re talking. Let’s say I want to make a surface or mesh from this dandy line plot. Using surf or mesh will throw an error, since x, y, and z are all 1D vectors! Whatever shall we do!
Never fear, mathematicians will save the day.
Delaunay created a sweet method of triangulating points. If we treat this line plot as a collection of points, we can triangulate to find an approximate surface.
tri = delaunay(x,y);
plot(x,y,'.')
%determine number of triangles
[r,c] = size(tri);
disp(r)
%plot with trimesh
h = trimesh(tri, x, y, z, 'FaceAlpha', 0.6);
alpha(0.4);
view(15, 48)
axis vis3d
axis off
l = light('Position',[-50 -15 29])
lighting phong
shading interp
What if we’d like a surface instead of the mesh? Then we’ll change trimesh to trisurf add transparency (alpha = 0.7) and find:
https://stats.stackexchange.com/questions/337340/m-g-1-queue-and-pollaczek-khintchine-formula
# M/G/1 queue and Pollaczek-Khintchine formula
My question is about the interpretation of symbols used in a description of the derivation of the Pollaczek-Khintchine formula, as outlined on pp 240 - 242 of Cox and Miller's "The theory of stochastic processes"
In the book they write of the "Takács process", but I think a more modern description would be an M/G/1 queue: arrivals follow a Poisson process with rate $$\lambda$$ (so interarrival times are exponentially distributed), service times have a general distribution, and there is one server (e.g., this is a typical model of requests to a hard disk with one head, or so I read).
The waiting time (for the customer at the end of the queue to complete service) is $$X(t)$$. When $$X(t)=0$$ the system is empty and when an arrival occurs $$X(t)$$ jumps up by the amount of time taken to serve that customer, which is distributed randomly according to $$b(x)$$, otherwise $$X(t)$$ is reduced in unit time i.e. by $$\Delta t$$ in $$\Delta t$$.
So the distribution formula for $$X(t)$$ is:
$$F(x,t)=p_o(t) +\int_{0}^{x}p(z,t)dz$$
Where $$p_o(t)$$ is "the discrete probability ... that $$X(t)=0$$ i.e., that the system is empty, and a density $$p(x,t)$$ for $$X(t)>0$$".
Where I struggle is with this:
$$p_0(t + \Delta t) = p_0(t)(1-\lambda\Delta t) + p(0,t)\Delta t(1 - \lambda\Delta t) +o(t)$$
The first term on the RHS seems clear enough - the probability that the system is empty at $$t$$ multiplied by the probability there will be no arrivals in $$\Delta t$$. But what does the second term mean? And what is $$p(0,t)\Delta t$$: this term presumably represents the probability of "draining" the system - ie that $$X(t) \leq \Delta t$$ -and is then multiplied by the probability of there being no arrivals in $$\Delta t$$ - but how is that "drainable" probability represented by $$p(0,t)\Delta t$$?
Naïvely I thought $$p(0, t)$$ was the same as $$p_0(t)$$, but if we differentiate the equilibrium condition, i.e., where $$p(x, t) = p(x)$$ and $$p_0(t) = p_0$$, we can see that $$p(0) = \lambda p_0$$ (as $$p^{\prime}_0(t) = 0$$ at equilibrium).
This has been driving me mad for days, so I'd love it if someone can put me out of my misery!
What you are struggling with is part of the derivation of the Takacs integrodifferential equation.
The derivation of the expression you are trying to understand starts with:
$$P_w(t+\Delta t) = (1-\lambda \Delta t)P_{w+\Delta t}(t) + \dots$$
where the $w$ represents a generic waiting time. This expression says that (part of) the probability that the waiting time is $\leq w$ at time $t + \Delta t$ (denoted by $P_w(t + \Delta t)$) is equal to the probability that the waiting time be $\leq w + \Delta t$ at time $t$ (denoted by $P_{w+\Delta t}(t)$) and no arrivals during $\Delta t$. (There's another term for the case where arrivals do occur, but that's not part of the expression you're dealing with - it's part of the $\dots$.) I use a capital $P$ as we are dealing with cumulative distribution functions, which are typically denoted by capital letters.
Now we need to tackle the expression $P_{w+\Delta t}(t)$, because it's in terms of $w + \Delta t$, but we want everything in terms of just $w$. For $w=0$, $P$ has a jump of magnitude $P_0(t)$, and for $w > 0$, $P$ is continuous. We can construct a Taylor expansion:
$$P_{w+\Delta t}(t) = P_w(t) + {\partial P_w(t) \over \partial w}\Delta t + o(\Delta t)$$
Due to the jump at $w=0$, $P_w(t)$ is not continuous at $w=0$, however, it is continuous to the right. We can define the derivative at $0$ to be the right-hand derivative, which does exist at $w=0$ (it's equal to $\lim_{w \downarrow 0}(P_w(t)-P_0(t))/w$.)
Notationally, we define:
$${\partial P_w(t) \over \partial w} = p(w,t)$$
and substituting results in:
$$P_w(t+\Delta t) = (1-\lambda \Delta t)[P_w(t) + p(w,t)\Delta t]\dots$$
In your case, you're dealing with equations where $w$ has been set equal to $0$. Substituting gives us your original equation.
So, $p(0,t)$ is the right hand derivative of the cumulative distribution of waiting time at time $t$, evaluated at waiting time $= 0$.
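To make the waiting-time process concrete, here is a minimal simulation sketch (my own illustration, not from Cox and Miller): the Lindley recursion $W_{n+1} = \max(W_n + S_n - A_n, 0)$ gives the waiting time seen by successive arrivals, and by PASTA the fraction of arrivals finding $W = 0$ estimates $p_0$, which should approach $1-\rho$ for a stable queue.

import random

def mg1_idle_fraction(lam, service_sampler, n=200_000, seed=1):
    # Fraction of arrivals that find the system empty (estimates p_0).
    random.seed(seed)
    w, zeros = 0.0, 0
    for _ in range(n):
        if w == 0.0:
            zeros += 1
        s = service_sampler()            # service time S_n ~ b(x)
        a = random.expovariate(lam)      # interarrival time A_n ~ Exp(lam)
        w = max(w + s - a, 0.0)          # Lindley recursion
    return zeros / n

# M/D/1 example: deterministic unit service, lam = 0.5, so rho = 0.5
print(mg1_idle_fraction(0.5, lambda: 1.0))  # close to 1 - rho = 0.5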
http://soft-matter.seas.harvard.edu/index.php?title=Many-Body_Force_and_Mobility_Measurements_in_Colloidal_Systems&diff=15894
Many-Body Force and Mobility Measurements in Colloidal Systems
Jason W. Merrill, Sunil K. Sainis, Jerzy Bławzdziewicz and Eric R. Dufresne
Soft Matter 6 (2010) p.2187-2192
wiki entry by Emily Russell, Fall 2010
The article can be found here.
Overview
This paper introduces a technique whereby the mobility tensor of a system of particles can be determined from measurements of trajectories. Particles in close proximity can affect one another's mobility via hydrodynamic interactions through the medium, so that a force exerted on one particle indirectly causes motion of another. Thus the scalar mobility used in elementary fluid dynamics is not sufficient to describe the system; instead a mobility tensor is needed to take into account interactions between particles. The authors describe the calculation of this tensor in systems of three and seven particles, and find that it is well described by theoretical predictions.
Experiments
The experiments performed use the same system as in another recent paper by three of the same authors, Many-Body Electrostatic Forces Between Colloidal Particles at Vanishing Ionic Strength: colloidal PMMA particles of 600 nm radius in hexadecane, here with 500 $\mu$M NaAOT surfactant to introduce charging of the particles. The particles were arranged using optical tweezers, then released, and trajectories were recorded using a fast camera. This paper considers both an equilateral triangle of three particles and a hexagonal arrangement of seven particles. Note that the experiments and analysis are all done in two dimensions; since the particles and forces are all in the same plane, the analysis remains viable.
Results
Mean displacement and displacement covariance. (a) Mean displacement and (b) displacement covariance versus time for each coordinate of three particles arranged in an equilateral triangle as shown with side length s = 4.4a = 2.6 $\mu$m. Lines through the data are best fits of eqn (7) and (8) to the data. Light gray lines are drawn as an aid to the eye in locating zero.
From the trajectories, the authors calculate the mean displacement of each particle in each dimension as a function of delay time, and the displacement covariance of each pair of particle coordinates. The results are given in Figure 1.
From the mean displacement data are calculated the drift velocities. (The authors note that a contribution to the mean displacements can also come from the gradients of the diffusion constants; however these effects would be significant only at small particle separations, and the authors verify a posteriori that the gradients are much smaller than the drift velocities.) Note that the data in Fig. 1a are well-fit by the lines giving the velocity.
A linear fit of the displacement covariance data gives the elements of the diffusion tensor: $cov_{\tau}(x_i(t+\tau) - x_i(\tau), x_j(t+\tau)-x_j(\tau)) = 2D_{ij}(t) + \epsilon_{ij}$. In Fig. 1b, the diagonal elements, $cov(x_i,x_i)$, are all linear with time with roughly the same slope, giving the scalar diffusion constant of the particles. The interesting physics, however, is in those off-diagonal elements which are non-zero, indicating that the fluctuations of one particle are correlated with the fluctuations of another; this is a deeper statement than that the particle exert forces on each other, and is caused by the hydrodynamic interactions through the viscous medium.
The authors measure the drift velocities and diffusion tensors for arrangements of various side-lengths, investigating the effect of distance on these interactions (Fig. 2; for brevity, I have only included the diffusion tensor, which I think is the more interesting). Note that the diagonal elements are all high, constant, and have roughly the same value, while the off-diagonal elements, indicating the hydrodynamic influence of one particle on another, decay to zero with increasing distance between the particles, as expected.
Velocity and diffusion. (b) Diffusion/mobility tensor for particles arranged in an equilateral triangle as a function of side length, s. Diffusion (mobility) values are normalized to $D_0 = k_BT/6\pi\eta a = 117\ nm^2\,ms^{-1}$ ($b_0 = 1/6\pi\eta a = 29.5\ mm\,s^{-1}\,pN^{-1}$). Lines through the data on the diffusion/mobility tensor plot are predictions based on eqn (11) and (12).
The experimental results are compared to theoretical predictions under the Stokeslet Superposition Approximation, and agree well with the predictions. The authors note that the hydrodynamic interactions seem to be pairwise (that is, the hydrodynamic effect of one particle on another is not so great as to change the interaction with a third), unlike the electrostatic forces as found in Many-Body Electrostatic Forces Between Colloidal Particles at Vanishing Ionic Strength.
In their previous paper, the authors considered only the force on the breathing mode of each configuration; here, they determine the forces on each particle in each direction, from which they can also extract the forces on the normal modes of the system. The present technique is of course more general.
Discussion
The paper is well-written, with a good introduction and by and large clear explanations of the meaning of, for example, the mobility tensor. The results are a nice demonstration that complex fluids - even relatively simple complex fluids with only a few particles - do indeed have complex interactions, such that even so basic a concept as a diffusion constant is no longer scalar and constant, but depends on nearby particles. The paper also clearly states assumptions, and argues well that their technique is quite general and makes few assumptions about the particle interactions.
I find the closing sentence intriguing: "It should be possible to use the same technique to measure torques on anisotropic particles." This is an even cooler idea, and I look forward to seeing if anything comes of it.
http://www.ntg.nl/pipermail/ntg-context/2008/035962.html
# [NTG-context] on imposition and local or external files
Pablo Rodríguez oinos at web.de
Fri Nov 7 00:08:34 CET 2008
Hi there,

I would like to use imposition to rearrange an existing PDF as a
booklet, but I don't have ConTeXt installed on the machine I would like
to do it on.

Using http://live.contextgarden.net/ might do the job, but I have
problems loading the local file.

Is there any way to load a local file (or even a URL) with ConTeXt? How
should I rewrite the following command to load a local file or a URL in
ConTeXt live?
\insertpages[original_file.pdf][width=0pt]
http://mfleck.cs.illinois.edu/study-problems/unrolling/unrolling-1-hints.html
# Unrolling Problem 1
Here is the recursive definition, for reference
$$T(1) = 5$$
$$T(n) = 3 T(n/2) + 7$$ for $$n \ge 2$$
### Hints for getting started
If you are stuck getting started, remember the basic steps of the method:
1. Substitute the recursive equation for T into itself twice.
2. Guess the pattern for what you'll get after k substitutions.
3. Solve for when the input to T will equal the base case input.
4. Substitute these base-case values into the equation.
5. Clean up the result.
### Step 1
To substitute the equation into itself, you'll need to work out the value for T(n/2). Write out what T(n/2) is before doing the substitution:
$$T(n/2) = 3 T(n/4) + 7$$
Your next substitution will be for T(n/4). Write out what that is. Then substitute these two equations into the original one.
You should have this.
### Step 2
That's enough substitutions to satisfy the instructions for the problem. However, if you don't see the pattern emerging, you might try one more substitution. Also try cleaning up your equation by (for example) collapsing several multiplications by 3 into a power of 3. Write all powers in exponential notation to make patterns more obvious, e.g. $$3^3$$ rather than 27.
If your equation is written out using a list of terms with $$\dots$$, convert this to summation notation.
You should have something like this.
### Steps 3 and 4
Now, you need to eliminate k from the equation.
For step 3, you need to figure out what value of k will make the input to T equal the base-case input. In the base case of the original definition of T, the input is 1. What's the input to T in your partial solution? Set this equal to 1 and solve the resulting equation for k.
Finally, substitute your value for k into the equation. You should have an equation that shows how T(n) is related to T(1). What is the value of T(1)? It's a constant, right? Look back at the definition of T and substitute this base-case value into your equation. You should now have an equation expressing T(n) just in terms of n. No recursive calls to T, no use of the temporary variable k.
When you've got the first four steps worked out, check your partial solution.
### Step 5
Now, you need to clean up your solution. There's two steps that may not be obvious:
To deal with logs in exponents, first remember that $$k^{\log_k n} = n$$. That's what log does: it undoes the effect of exponentiation. This simple version works only when the base of the exponent matches the base of the log. If they don't match, you'll need to look up the formula for changing the base of a log.
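For reference, carrying the five steps through for this recurrence gives the closed form (worked here as a check). After $k$ substitutions the pattern is
$$T(n) = 3^k\, T\!\left(\frac{n}{2^k}\right) + 7\sum_{i=0}^{k-1} 3^i.$$
Setting $n/2^k = 1$ gives $k = \log_2 n$, and using $T(1) = 5$ together with $\sum_{i=0}^{k-1} 3^i = (3^k-1)/2$ and $3^{\log_2 n} = n^{\log_2 3}$:
$$T(n) = 5\, n^{\log_2 3} + \frac{7\left(n^{\log_2 3} - 1\right)}{2} = \frac{17}{2}\, n^{\log_2 3} - \frac{7}{2}.$$
As a spot check, $T(2) = 3T(1) + 7 = 22$, and the formula gives $(17/2)\cdot 3 - 7/2 = 22$.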
https://fizalihsan.github.io/technology/camel.html
# Apache Camel
• Camel is an integration framework, not an ESB.
• Camel is a routing engine builder
• Camel is based on patterns defined in Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf
# Messaging Model
• org.apache.camel.Message —The fundamental entity containing the data being carried and routed in Camel
• org.apache.camel.Exchange — The Camel abstraction for an exchange of messages. This exchange of messages has an “in” message and as a reply, an “out” message
## Messages
• Messages are the entities used by systems to communicate with each other when using messaging channels.
• Messages flow in one direction from a sender to a receiver
• Messages are uniquely identified with an identifier of type java.lang.String. The identifier’s uniqueness is enforced and guaranteed by the message creator, it’s protocol dependent, and it doesn’t have a guaranteed format. For protocols that don’t define a unique message identification scheme, Camel uses its own UID generator.
• During routing, messages are contained in an exchange.
• Headers are values associated with the message, such as sender identifiers, hints about content encoding, authentication information, and so on.
• Headers are name-value pairs; the name is a unique, case-insensitive string, and the value is of type java.lang.Object. Headers are stored as a map within the message.
• A message can also have optional attachments, which are typically used for the web service and email components.
• Body
• The body is of type java.lang.Object, and can store any kind of content.
• When the sender and receiver use different body formats, Camel provides a number of mechanisms to transform the data into an acceptable format, and in many cases the conversion happens automatically with type converters, behind the scenes.
• Fault Flag
• Messages also have a fault flag. Some protocols and specifications, such as WSDL and JBI, distinguish between output and fault messages. They’re both valid responses to invoking an operation, but the latter indicates an unsuccessful outcome. In general, faults aren’t handled by the integration infrastructure. They’re part of the contract between the client and the server and are handled at the application level.
## Exchanges
• An exchange is the message’s container during routing.
• Message Exchange Patterns (MEPs)
• A pattern that denotes whether you’re using the InOnly or InOut messaging style.
• MEPs are used to differentiate between one-way and request-response messaging styles.
• InOnly — A one-way message (also known as an Event message). For example, JMS messaging is often one-way messaging.
• InOut — A request-response message. For example, HTTP-based transports are often request reply, where a client requests to retrieve a web page, waiting for the reply from the server.
• Exchange ID
• A unique ID that identifies the exchange.
• If not provided explicitly it will be auto-generated by default.
• Exception
• Properties
• Similar to message headers, but they last for the duration of the entire exchange.
• Used to contain global-level information, whereas message headers are specific to a particular message.
• Camel itself adds various properties to the exchange during routing. Developers can store and retrieve properties at any point during the lifetime of an exchange (see the sketch at the end of this section).
• In message
• This is the input message, which is mandatory. The in message contains the request message.
• Out message
• This is an optional message that only exists if the MEP is InOut. The out message contains the reply message.
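A short sketch of the Exchange API covering the MEP and the property/header distinction (classic Camel 2.x-style Java API; names are illustrative):

```java
import org.apache.camel.Exchange;
import org.apache.camel.ExchangePattern;
import org.apache.camel.Processor;

// Shows how a processor can inspect the MEP and use exchange-scoped properties.
public class ExchangeInspector implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // properties last for the whole exchange, unlike per-message headers
        exchange.setProperty("receivedAt", System.currentTimeMillis());
        // InOut means the caller expects a reply; InOnly is one-way
        if (exchange.getPattern() == ExchangePattern.InOut) {
            // the out message carries the reply for InOut exchanges
            exchange.getOut().setBody("ACK");
        }
    }
}
```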
# Components
• Routing Engine
• is what actually moves messages under the hood
• uses routes as specifications for where messages are routed.
• Routes are defined using one of Camel’s domain-specific languages (DSLs).
• Routes
• Simplest way to define a route is as a chain of processors.
• Each route has a unique identifier that’s used for logging, debugging, monitoring, and starting and stopping routes.
• Routes also have exactly one input source for messages, so they’re effectively tied to an input endpoint.
• To define a route, a DSL is used (a complete example appears after the Components list below)
• Endpoints
• In Camel, endpoints are configured using URIs. e.g., file:data/inbox?delay=5000.
• file denotes which Camel component handles that type of endpoint. In this case, the scheme of file selects the FileComponent. The FileComponent then works as a factory creating the FileEndpoint based on the remaining parts of the URI.
• The context path tells the FileComponent that the starting folder is data/inbox.
• The option delay=5000 indicates that files should be polled at 5-second intervals.
• Processors
• used to transform and manipulate messages during routing and also to implement all the EIP patterns.
• The processor represents a node capable of using, creating, or modifying an incoming exchange.
• During routing, exchanges flow from one processor to another; as such, you can think of a route as a graph having specialized processors as the nodes, and lines that connect the output of one processor to the input of another.
• Components
• are the extension points in Camel for adding connectivity to other systems. To expose these systems to the rest of Camel, components provide an endpoint interface.
• There are over 80 components in the Camel ecosystem, ranging in function from data transports to DSLs, data formats, and more.
• Components are associated with a name that’s used in a URI, and they act as a factory of endpoints. E.g., a FileComponent is referred to by file in a URI, and it creates FileEndpoints.
• Producers
• an entity capable of creating and sending a message to an endpoint.
• When a message needs to be sent to an endpoint, the producer will create an exchange and populate it with data compatible with that particular endpoint. For example,
• a FileProducer will write the message body to a file.
• A JmsProducer, on the other hand, will map the Camel message to a javax.jms.Message before sending it to a JMS destination.
• It hides the complexity of interacting with particular transports. All you need to do is route a message to an endpoint, and the producer does the heavy lifting.
• Consumers
• A consumer is the service that receives messages produced by a producer, wraps them in an exchange, and sends them to be processed.
• Consumers are the source of the exchanges being routed in Camel.
• To create a new exchange, a consumer will use the endpoint that wraps the payload being consumed.
• A processor is then used to initiate the routing of the exchange in Camel using the routing engine.
• Consumer Types
• Event-driven consumer
• Polling consumer
[Figure: event-driven consumer vs. polling consumer]
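Tying routes, endpoints, processors, producers, and consumers together, here is a minimal runnable sketch using the Java DSL and the file endpoint URI from the example above (classic Camel 2.x-style API; a sketch under those assumptions, not the book's exact listing):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileCopyRoute {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // polling consumer endpoint: scan data/inbox every 5 seconds
                from("file:data/inbox?delay=5000")
                    // a processor node in the middle of the route
                    .process(exchange ->
                        exchange.getIn().setHeader("processedBy", "FileCopyRoute"))
                    // producer endpoint: write the message body to data/outbox
                    .to("file:data/outbox");
            }
        });
        context.start();
        Thread.sleep(10_000);  // let the consumer poll a couple of times
        context.stop();
    }
}
```

The file consumer here is a polling consumer; a JMS endpoint such as from("jms:queue:orders") would be event-driven instead.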
# Bibliography
• Books
• Camel in Action by Claus Ibsen and Jonathan Anstey
|
2020-11-28 11:02:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34390512108802795, "perplexity": 3718.6454834261885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195417.37/warc/CC-MAIN-20201128095617-20201128125617-00601.warc.gz"}
|
https://www.biostars.org/p/9549193/#9549208
|
Reference proteomes from uniprot in FASTA format, why is there only one sequence per gene?
Asked 3 months ago by Jobbe
While downloading the human proteome in FASTA format from the UniProt site, I noticed it said there is one protein sequence per gene (20,594). However, above that, a protein count of 81,837 is shown, and this made me wonder. I need this file to interpret spectra obtained from a bottom-up proteomics experiment. Doesn't this give a very poor representation of the proteins present? Additionally, how is it decided which sequence is displayed when alternative splicing occurs at a gene? Lastly, is there an alternative approach that searches the entire proteome rather than the gene-centric subset?
uniprot FASTA proteome • 439 views
You could use "unreviewed" Human set (186K): https://www.uniprot.org/uniprotkb?facets=reviewed%3Afalse%2Cmodel_organism%3A9606&query=Human
Use Protein Existence filters in left column to trim this down (transcript level etc).
Answered 3 months ago:
On the human proteome page, https://www.uniprot.org/proteomes/UP000005640, both a protein count and a gene count are provided. The gene count is only provided for reference proteomes and is computed algorithmically: for each gene, a single representative protein sequence is chosen from the proteome. Where possible, reviewed (Swiss-Prot) protein sequences are chosen as the representatives. For more detail, I suggest you look at this help page: https://www.uniprot.org/help/gene_centric_isoform_mapping
There are use cases for both approaches - some users prefer seeing only one entry per gene, others prefer using the complete proteome set with potentially several entries per gene. The latter can be downloaded from the website by clicking on the "Protein count" link in https://www.uniprot.org/proteomes/UP000005640 - or directly at https://www.uniprot.org/uniprotkb?query=proteome:UP000005640
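If you want to script the download of the complete (non-gene-centric) set rather than click through the website, a sketch along these lines should work. The rest.uniprot.org streaming endpoint and its parameters are my assumption; check the current UniProt REST documentation before relying on them:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

// Sketch: download the complete human proteome set as FASTA.
// Assumes UniProt's REST "stream" endpoint (verify against current API docs).
public class ProteomeDownload {
    public static void main(String[] args) throws Exception {
        String url = "https://rest.uniprot.org/uniprotkb/stream"
                + "?query=proteome:UP000005640&format=fasta";
        HttpClient client = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NORMAL)
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        // Stream the response straight to disk; the file is large (~80k entries).
        client.send(request, HttpResponse.BodyHandlers.ofFile(Path.of("UP000005640.fasta")));
    }
}
```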
Please don't hesitate to contact the UniProt helpdesk if you have any additional questions.
|
2023-03-26 08:29:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3977947235107422, "perplexity": 4783.881343080112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945440.67/warc/CC-MAIN-20230326075911-20230326105911-00563.warc.gz"}
|
https://askdev.io/questions/53927/fastest-english-thesaurus-on-the-internet
|
Fastest English thesaurus on the internet?
The content on dictionary.com is clear and thorough ... once ... it ... finishes ... loading.
It is also slow if I disable advertising, and I prefer not to do so, as I recognize that ad revenue may be necessary for the services I use.
As I type, it is still loading. I'm giving up ...
What is the fastest (reasonably comprehensive) dictionary on the web?
2019-05-18 22:02:17
Well, if you are not into massive definition lists or basic synonyms, Google Translate translates instantly. Like, really fast.
2019-05-21 06:44:44
Have you tried Wiktionary? Same caveats and benefits as Wikipedia, and it should be pretty darn fast.
2019-05-21 06:40:13
Google Dictionary seems to be good & fast.
2019-05-21 06:30:53
If you use Chrome, you can do a couple of things:
1. Maybe you already know about it, but use the Google Dictionary extension. With it, you simply double-click a word and the definition pops up.
2. Next, what I've done is completely slim down Merriam-Webster and dictionary.com using the Stylebot (stylebot.me) extension. That way, they feel a lot less intrusive and also feel much faster.
Here is the custom CSS I use for both:
dictionary.reference.com : gist.github.com/587652
merriam-webster.com : gist.github.com/606237
PS: I couldn't post more than one link since I'm new here, so excuse the lack of proper reference links.
2019-05-19 15:18:27
You said you are looking for an etymological dictionary. In that case, http://www.etymonline.com/ is probably your best bet.
Note that with Firefox and Opera, you can set up a custom search, which saves you having to visit the site's homepage. If you set up ed, for example, as your custom search keyword, you can just type ed dictionary into the address bar and be taken straight to the results page.
2019-05-19 15:14:46
I love Wordnik. It is fast, covers many rare words, and the examples (usage) are good. It also has an API (a sketch of calling it is below). It is great, I think.
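Since Wordnik has an API, a lookup can be scripted. This sketch assumes Wordnik's v4 REST path and its api_key parameter (check their docs); the key is a hypothetical placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of calling the Wordnik definitions API; path and api_key parameter
// are assumptions based on Wordnik's v4 REST documentation.
public class WordnikLookup {
    public static void main(String[] args) throws Exception {
        String word = "ninja";
        String apiKey = "YOUR_API_KEY";  // hypothetical placeholder
        String url = "https://api.wordnik.com/v4/word.json/" + word
                + "/definitions?limit=3&api_key=" + apiKey;
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());  // raw JSON definitions
    }
}
```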
2019-05-19 09:42:23
If you don't need a full dictionary but just a short definition, use Google with define: in the search query.
If you do need a full dictionary, Google Dictionary has already been mentioned.
2019-05-19 09:36:18
Surprised nobody has mentioned the meta-dictionary tool:
It is fast, gives you a short definition, but also looks words up in hundreds of other dictionaries.
2019-05-19 09:35:19
Try thefreedictionary.com in print format.
2019-05-19 08:46:44
Ninjawords' tagline is:
A really fast dictionary ... fast like a ninja.
For comprehensive and reasonably fast, try The Free Dictionary. Tagline:
The world's most comprehensive dictionary
2019-05-19 08:44:33
|
2021-12-09 11:39:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2347593754529953, "perplexity": 6519.516147507193}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00512.warc.gz"}
|
https://www.javascriptkit.net/the-miracle-of-pics-of-short-hair-cuts/304667/
|
# The Miracle Of Pics Of Short Hair Cuts
The Miracle Of Pics Of Short Hair Cuts | Everything we present about pics of short hair cuts has a background, to help you find inspiration. It continues to motivate us to provide the best content on our page. Everything about pics of short hair cuts is well described and accompanied by supporting images and information, so that what we present is easier to understand.
[Image gallery of short haircut photos; sources include hearstapps.com, timeinc.net, latest-hairstyles.com, ytimg.com, suite102salonandspa.com, herstylecode.com, and pinimg.com]
|
2019-07-21 04:48:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8354254961013794, "perplexity": 8512.584023247004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526888.75/warc/CC-MAIN-20190721040545-20190721062545-00089.warc.gz"}
|
https://dougo.info/medical-retirement-residual-income-opportunities.html
|
Peer-to-Peer Lending: Earn up to 10% in returns by lending to individuals, organizations, and small companies that don't qualify for traditional financing, through peer-to-peer lending platforms like Lending Club. You can lend $100, $1,000, or more to borrowers who meet the lending platform's financial standards. Like a bank, you'll earn interest on the loan - often at higher returns than banks usually get.
The retail industry, excluding wholesale, contributed $482 billion (22% of GDP) and employed 249.94 million people (57% of the workforce) in 2016. The industry is the second largest employer in India, after agriculture.[153] The Indian retail market is estimated to be US$600 billion and one of the top five retail markets in the world by economic value. India has one of the fastest-growing retail markets in the world,[243][244] and is projected to reach $1.3 trillion by 2020.[245][246]

If you are good at some subject, especially maths, science subjects, accounts, or economics, you will have an upper hand in this business. There are plenty of such coaching centers, so you will face stiff competition in the beginning. But if you can get good results from your students, congratulations! You have made a name for yourself, and parents will now send their children to your center in large groups.

However, I think for those who are willing to do what it takes, the sky is the absolute limit. As an example, I'm trying to take a page out of Financial Samurai's book and create an online personal finance and investing blog. It is an enormous undertaking, and as a new blogger, there is a seemingly endless amount of work to be done. That said, I hope that one day I can not only generate some passive income from the hours of work I have put and will put into the project, but also help others reach their financial goals.

How do you do this? Well, try to get the highest-paying job you can! Ask for a raise! Use services such as Glassdoor.com to see how your salary compares with others in the same job. Some companies really force employees to leave to get a raise, and then come back for another raise. This industry-jumping promotional strategy is very common and could work.

5) Determine What Income Level Will Make You Happy. Think back to when you made little to no income as a student. Now think back to the days when you just got started in your career. Were you happy then? Now go over every single year you got a raise or made more money doing something else. How did your happiness change, if at all? Everybody has a different level of income that brings maximum happiness, due to different desires, needs, and living arrangements. It's up to you to find your optimum income level.

One last thing to mention before you start your research: I was truly impressed by the panel of gentlemen who put their minds together behind all of this; they have such amazing, impressive backgrounds and innovative minds that it's no wonder this is taking off so fast. Founded in 2009, they worked through all the legalities for years and started enrolling this past November, 2012. Second largest growth in MLM the past two months in a row, ever since it hit our state, Arizona. Canada is now launched, too.

Passive income is attractive because it frees up your time so you can focus on the things you actually enjoy. A highly successful doctor, lawyer, or publicist, for instance, cannot "inventory" their profits. If they want to earn the same amount of money and enjoy the same lifestyle year after year, they must continue to work the same number of hours at the same pay rate, or more, to keep up with inflation. Although such a career can provide a very comfortable lifestyle, it requires far too much sacrifice unless you truly enjoy the daily grind of your chosen profession. However, this comes back to the old discussion of pain versus pleasure. We will always do more to avoid pain than we will to gain pleasure.
When our backs are against the wall, we act. When they're not, we relax. The truth is that the pain-versus-pleasure paradigm only operates in the short term; we'll only avoid pain in the here and now, often not in the long term.

Almost all of these ideas require starting a personal blog or website. But the great thing about that is that it's incredibly cheap to do. We recommend using Bluehost to get started. You get a free domain name, and hosting starts at just $2.95 per month - a deal that you won't find many other places online! You can afford that to start building a passive income stream.
What I find most interesting is the fact that I had never considered options like LendingTree or RealtyShares as other income sources. Investing in property has been bad luck for people I know personally, so I am interested in getting involved in a situation where I would not have to deal with maintenance issues or tenants. There are services for that, but I had not come across any that didn't eat most, if not all, of the earnings. Then again, I live in the NY area. Investing in the Midwest would not be reasonably possible for me directly, but reading about RealtyShares is something I am going to look into further. That might be a real possibility.
Today I sent my Annual Message to the Congress, as required by the Constitution. It has been my custom to deliver these Annual Messages in person, and they have been broadcast to the Nation. I intended to follow this same custom this year. But like a great many other people, I have had the "flu", and although I am practically recovered, my doctor simply would not let me leave the White House to go up to the Capitol. Only a few of the newspapers of the United States can print the Message in full, and I am anxious that the American people be given an opportunity to hear what I have recommended to the Congress for this very fateful year in our history — and the reasons for those recommendations. Here is what I said …[4]
This venture requires both time and money, but it is certainly worth it. Making low-risk investments with your savings offers higher dividends than leaving the money in the bank. While buying stocks in large corporations comes with a high degree of risk, mutual funds are relatively safer and less volatile. They also offer higher returns on investment compared to fixed or recurring deposits made in banks.
I own several rental properties in the Midwest and I live in CA. I have never even seen them in person. With good property management in place (not easy to find, but possible) it is definitely possible to own cash-flowing properties across the country. Not for everyone and not without its drawbacks, but it seems to be working for me so far. I'm happy to answer any questions about my experience with this type of investing.
What I like about p2p investing on Lending Club is the website's automated investing tool. You pick the criteria for loans in which you want to invest and the program does the rest. It will look for loans every day that meet those factors and automatically invest your money. It's important because you're collecting money on your loan investments every day, so you want that money reinvested as soon as possible.
I’m on board with having more than one source of income, but I definitely want to make my “extra” income as passive as possible. I don’t want to end up pushing myself to always earn more, more, more and never enjoy the life I have. Having said that, it’s nice to have the security blanket. My blog doesn’t earn much, but I also know it could earn more if I really needed it to. It also helps to l
Employee Income: This is the income almost everybody earns via a job. In short, if you are working for someone as an employee, you are making employee income. This income carries the maximum risk, since all the decision-making power is in someone else's hands. Once they decide to let you go, you will not make a living until you find another source of employee income.
It is our duty now to begin to lay the plans and determine the strategy for the winning of a lasting peace and the establishment of an American standard of living higher than ever before known. We cannot be content, no matter how high that general standard of living may be, if some fraction of our people—whether it be one-third or one-fifth or one-tenth—is ill-fed, ill-clothed, ill-housed, and insecure.
Marin County had by far the highest per capita income during that period ($58,004); its per capita income was almost $10,000 higher than San Francisco County, which ranked second. Of the ten counties in California with the highest per capita income, all but Orange were in Northern California, and all but three are located in the San Francisco Bay Area. Of the three not located there, two are smaller counties in the Sacramento metropolitan area. Orange County's per capita income ranks last among these ten, and is about $5,000 more than that of the state.

You must sacrifice the pleasures of today for the freedom you will earn tomorrow. In my 20s, I shared a studio with my best friend from high school and drove beater cars worth less than 10% of my annual gross income. I'd stay until after 7:30 p.m. at work in order to eat the free cafeteria food. International vacations were replaced with staycations, since work already sent me overseas two to four times a year. Clothes were bought at thrift shops, of course.

Last but not least, blogging, which is close to my heart. It requires a lot of patience, skill, knowledge, and a flair for writing to be a successful blogger. Besides basic skills, you need expertise in SEO and SEM to drive traffic to your blog. For successful bloggers, blogging is a full-time income source. This space is full of copycats, but trust me, originality pays. Bloggers earn from content writing, affiliate programs, advertisements, and public appearances/consultancy. Organizations have realized the impact of social media, and blogs are considered the best way to drive traffic to a website and engage customers. In fact, many organizations have started hiring full-time bloggers.

The one thing I learned from all those childhood experiences, though, is that you can never depend on one source of income. Eventually my mom caught on and stopped giving me all those extra bags of chips, and I had to figure out a new way to make money. No matter how safe something seems, there's always the chance that you could lose that income and be stuck with nothing.

My returns are based on full cash purchases of the properties, as it is hard to compare the attractiveness of properties at different price ranges when only calculating the down payment, or for properties that need very little rehab/updating. I did think about the scores assigned to each factor, but I believe tax deductions are a SIGNIFICANT factor when comparing passive income streams.

I live in NYC, where I never thought buying rental property would be possible, but I am looking into buying rental property in the Midwest, where it cash flows, and having someone manage it for me (turnkey real estate investing, I guess some would call it). I agree with what Mike said about leverage and tax advantages, but I'm still a newbie to real estate investing, so I can't say how it will go. I have a very small amount in P2P... I'm at around 6.3%. It's okay, but I don't know how liquid it is, and it is still relatively new... I'd prefer investing in the stock market.

4. Save, build, and run a bed-and-breakfast place. Look at Airbnb (Vacation Rentals, Homes, Apartments & Rooms for Rent - Airbnb) for inspiration. My wife runs one (Firdaus, Naukuchiatal); it is not an income as of now, but if you are at it long enough, it will be, when you grow old. It doesn't have to be a fancy and glamorous thing. It could be two nice-and-clean rooms in a city where you live. If you have a big house, it could be a part of your house.
Under British rule, India's share of the world economy declined from 24.4% in 1700 down to 4.2% in 1950. India's GDP (PPP) per capita was stagnant during the Mughal Empire and began to decline prior to the onset of British rule.[103] India's share of global industrial output declined from 25% in 1750 down to 2% in 1900.[78] At the same time, the United Kingdom's share of the world economy rose from 2.9% in 1700 up to 9% in 1870. The British East India Company, following their conquest of Bengal in 1757, had forced open the large Indian market to British goods, which could be sold in India without tariffs or duties, compared to local Indian producers who were heavily taxed, while in Britain protectionist policies such as bans and high tariffs were implemented to restrict Indian textiles from being sold there, whereas raw cotton was imported from India without tariffs to British factories which manufactured textiles from Indian cotton and sold them back to the Indian market. British economic policies gave them a monopoly over India's large market and cotton resources.[104][105][106] India served as both a significant supplier of raw goods to British manufacturers and a large captive market for British manufactured goods.[107] # Who doesn’t like some down and dirty affiliate fees?! Especially if you realize it can be even easier to make money this way than with an ebook. After all, you simply need to concentrate on pumping out some content for your own site and getting the traffic in, often via Google or social media. Unsurprisingly, most people can enjoy their first affiliate sale within 30 days of starting a blog. Continue reading > India is one of the largest centres for polishing diamonds and gems and manufacturing jewellery; it is also one of the two largest consumers of gold.[183][184] After crude oil and petroleum products, the export and import of gold, precious metals, precious stones, gems and jewellery accounts for the largest portion of India's global trade. The industry contributes about 7% of India's GDP, employs millions, and is a major source of its foreign-exchange earnings.[185] The gems and jewellery industry, in 2013, created ₹251,000 crore (US$35 billion) in economic output on value-added basis. It is growing sector of Indian economy, and A.T. Kearney projects it to grow to ₹500,000 crore (US$70 billion) by 2018.[186] If you answered " YES!", then you will profit from Robert G. Allen' s Multiple Streams of Income, Second Edition. In these pages, the bestselling author of the #1 megahits Nothing Down and Creating Wealth shows you how to create multiple streams of lifetime cash flow. You' ll learn ten revolutionary new methods for generating over$100,000 a year- - on a part-time basis, working from your home, using little or none of your own money.
You can select any of the above-mentioned, based on your interest, skill, and capability to generate a second income source. However, these are just to name a few, there exist multiple ways to generate a secondary income channel. You just need to identify the right one, which suits you the best. Remember there is no shortcut to success and you need to work hard to be successful and rich in the long run!
The book is not bad but it's not that great either. Think of it as an idea book in which you see him mention something and then research it futher. The rambling just becomes too much as you move along to the point where it becomes annoying to read. The tone the author uses is very nonchalant and he doesn't really explain anything. Ideas are just thrown out.
I need to create a passive income stream that has a definable risk profile.I have \$250k cash as a safety net in my savings account getting a measily 40 bps but I am somewhat ok with this as it is Not at risk or fluctuation (walk street is tougher nowadays). i have 270k in equity in my house, thinking of paying off the mortgage but probably does make sense since my rate is 3.125 on a 30 yr. I have 275k in my 401(k) and another 45k in a brokerage account that is invested in stocks that pay dividends.
I see you include rental income, e-book sales and P2P loans as part of your passive income. Do you not consider your other internet income as passive? Is that why it’s not in the chart? Or did you not include it because you would rather not reveal it at this point? (I apologize if this question was already answered – I didn’t read through all the comments, and it’s been about a week since I actually read this post via Feedly on my phone)
This equation implies two things. First, buying one more unit of good $x$ implies buying $\frac{P_x}{P_y}$ fewer units of good $y$. So $\frac{P_x}{P_y}$ is the relative price of a unit of $x$ in terms of the number of units of $y$ given up. Second, if the price of $x$ falls for a fixed income $Y$, then its relative price falls. The usual hypothesis is that the quantity demanded of $x$ would increase at the lower price - the law of demand. The generalization to more than two goods consists of modelling $y$ as a composite good.
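The equation itself is not shown in this excerpt; presumably it is the standard two-good budget constraint, which (my reconstruction) reads:

```latex
P_x x + P_y y = Y
\qquad\Longrightarrow\qquad
y = \frac{Y}{P_y} - \frac{P_x}{P_y}\,x
```

The slope $-\frac{P_x}{P_y}$ is exactly the number of units of $y$ given up per extra unit of $x$, matching the first implication above.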
Having discussed why it is important to have a second source of income, let me now highlight some sources of a second income, or second income ideas. A second source of income typically means having regular income from a source other than your primary one. This means that if you belong to the salaried class, you might want an income from some other source, and if you are from the business class, you might want an income from some other business. You can be as creative as it gets. Some good sources of income are highlighted below:
If you watched the video, he goes into a discussion of shocks (about 8 minutes in), like bad investments, and how they don't really matter as much if r (the rate of return) is greater than g (the rate of economic growth). If r = 5% and g = 1%, then you can lose 80% of the return (the difference, 4 of the 5 percentage points) and still be ahead, because the remaining return keeps pace with economic growth.
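A worked version of that arithmetic (my notation): a shock that destroys a fraction $s$ of the return still keeps you ahead of growth as long as

```latex
(1-s)\,r \ge g
\quad\Longleftrightarrow\quad
s \le 1 - \frac{g}{r} = 1 - \frac{0.01}{0.05} = 0.80
```

so with r = 5% and g = 1% you can lose up to 80% of the return and still match economic growth.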
|
2018-11-17 06:28:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2536620497703552, "perplexity": 1206.2023741917574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743294.62/warc/CC-MAIN-20181117061450-20181117083450-00295.warc.gz"}
|