The Stacks project
Lemma 101.14.8. Let $f : \mathcal{X} \to \mathcal{Y}$ be a morphism of algebraic stacks. Let $W \to \mathcal{Y}$ be surjective, flat, and locally of finite presentation where $W$ is an algebraic
space. If the base change $W \times _\mathcal {Y} \mathcal{X} \to W$ is universally injective, then $f$ is universally injective.
Let $\MM$ be an $\LL$-structure.
Let $A$ be a subset of the universe of $\MM$.
Let $\LL_A$ be the language consisting of $\LL$ along with constant symbols for each element of $A$.
Viewing $\MM$ as an $\LL_A$-structure by interpreting each new constant as the element for which it is named, let $\map {\operatorname {Th}_A} \MM$ be the set of $\LL_A$-sentences satisfied by $\MM$.
An $n$-type over $A$ is a set $p$ of $\LL_A$-formulas in $n$ free variables such that $p \cup \map {\operatorname {Th}_A} \MM$ is satisfiable by some $\LL_A$-structure.
Complete Type
We say that an $n$-type $p$ is complete (over $A$) if and only if:
for every $\LL_A$-formula $\phi$ in $n$ free variables, either $\phi \in p$ or $\neg \phi \in p$.
The set of complete $n$-types over $A$ is often denoted by $\map {S_n^\MM} A$.
Given an $n$-tuple $\bar b$ of elements from $\MM$, the type of $\bar b$ over $A$ is the complete $n$-type consisting of those $\LL_A$-formulas $\map \phi {x_1, \dotsc, x_n}$ such that $\MM \models \map \phi {\bar b}$.
It is often denoted by $\map {\operatorname {tp}^\MM} {\bar b / A}$.
Given an $\LL_A$-structure $\NN$, a type $p$ is realized by an element $\bar b$ of $\NN$ if and only if:
$\forall \phi \in p: \NN \models \map \phi {\bar b}$.
Such an element $\bar b$ of $\NN$ is a realization of $p$.
We say that $\NN$ omits $p$ if and only if $p$ is not realized in $\NN$.
Then $p$ is an omission from $\NN$.
Definition without respect to a structure
Let $T$ be an $\LL$-theory.
An $n$-type of $T$ is a set $p$ of $\LL$-formulas in $n$ free variables such that $p \cup T$ is satisfiable.
The set of complete $n$-types over $T$ is often denoted $S_n^T$ or $\map {S_n} T$.
Note that this extends the definitions above, since, for example, $\map {S_n^\MM} A = S_n^{\map {\operatorname {Th}_A} \MM}$.
Note on comma-separated or juxtaposed parameters
Since it is often of interest to discuss types over sets such as $A \cup \set c$ where $c$ is an additional parameter not in $A$, it is conventional to simplify notation by writing the parameter set as $A, c$ or just $A c$.
For example:
$\map {\operatorname {tp} } {b / A \cup \set c} = \map {\operatorname {tp} } {b / A, c} = \map {\operatorname {tp} } {b / A c}$
Video - Arithmetic of elliptic curves
Elliptic curves defined over the rationals satisfy two finiteness properties: the group of rational points is a finitely generated abelian group, and there are only finitely many points with integral coordinates. Bhargava and his collaborators established statistical results on the former. We will explain how their method can be exploited to produce statistical results on the latter as well.
Giving Parameters Dynamic Values | AIMMS Community
Hey everyone,
I'm currently using AIMMS to build a model for my Bachelor's thesis. I have never worked with AIMMS before, so I'm pretty clueless in some regards.
So my problem is the following constraint:
sum[i, aSP(h,i)*x(i,t)] + sum[j, sum[m, aFP(h,m)*y(j,m,t)]] <= Capacity(h)
Here x(i,t) and y(j,m,t) are the decision variables, which represent the accepted requests for the two product types. i indexes the different types of product x and j the different types of product y, while t stands for the different periods. aSP and aFP indicate whether product x(i) or y(j) needs resource h for its production.
I have declared Capacity(h) as a parameter with values for each h.
My problem is now that the program takes these values for each separate period, but the capacity in period 2 is of course lower than the capacity in period 1, and so on. I also can't just declare Capacity parameters for each period, because the remaining capacity in the periods after period 1 depends on the sold products and therefore on the capacity already used in prior periods.
I think the capacity must be declared as a variable, but all my tries to make it work that way failed.
If anyone knows how I can deal with this problem, I would be very happy for your help.
I can't suppress printing of a SharedArray; I must be doing something wrong. Help?
I’m using Julia 1.9.3 on a Windows machine.
Two lines in my file ParallelWatson.jl:
N = 10^4
y = SharedArray{Float64}(N);
In spite of the semicolon, the full Shared Array is printed in the output file when I issue the command
julia -t 8 < ParallelWatson.jl > test4.out
I want to put N=10^9 and, well, no, I don’t want to see a billion zeros.
I see conflicting messages in the online discussions about semicolon suppressing output, some going back to 2014.
Can I suppress the output? If so, how? It’s got to be easy, and I must be doing something really dumb.
I should add that typing interactively in the REPL this does indeed suppress the printing of the Shared Array.
Perhaps add nothing as last line of your file?
The problem is that you’re feeding the file’s contents through stdin using <. Try dropping < such that the filename is passed as an argument instead: julia -t 8 ParallelWatson.jl > test4.out.
You shouldn’t need semicolons anywhere.
Thanks. That worked.
runit.jl (754 Bytes)
julia -t 8 runit.jl > runit.out
more runit.out
got (in the file runit.out)
0.015890 seconds (11.28 k allocations: 724.107 KiB, 49.86% compilation time)
0.15000009536743164 0.15000009536743164
Now I am going to go look up how I am supposed to be timing things. The 0.015890 seconds is too short (the program did NOT return in a hundredth of a second). I’m not even sure the .15 seconds is
right. Still seems too short. But since I know nothing, I will go look and find; and if I run into trouble I shall open another issue.
This is a different topic, but:
• When you want to assign the elapsed time to a variable, rather than print it, you can use @elapsed instead of @time, like this:
elapsed_time = @elapsed begin
    # code
end
• @distributed runs asynchronously when used this way (see Distributed Computing · The Julia Language). This is probably the source of your unrealistic timings. You need to wrap @sync around the loop:
@sync @distributed for ...
    # code
end
• @distributed does multiprocessing, not multithreading, so you should start julia with multiple processes, not multiple threads: julia --procs=8
□ Or you might want to use multithreading instead, by replacing @distributed with Threads.@threads and replacing the SharedArray with a regular Vector
• If you stick to multiprocessing you’ll need to annotate your function definition with @everywhere for it to be available on all workers. That is, replace function Fx(...) with @everywhere
function Fx(...). This is not necessary if you decide to use threads instead of Distributed.
Here’s how I would rewrite your script using threads instead of multiprocessing. Note that it’s important to put potentially performance-sensitive code in functions so it can be compiled, hence the
function F!(y). Note the function evalpoly for evaluating polynomials—no more writing out Horner’s method by hand.
function fx(x)
    rho = inv(sqrt(x))
    t1 = rho^2
    # Placeholder coefficients: the original tuple was truncated in this post,
    # so these values are only illustrative.
    coeffs = (1.0, 0.5, 0.25)
    return rho * evalpoly(t1, coeffs)
end

function F!(y)
    N = length(y)
    Threads.@threads for i in 1:N
        y[i] = fx(10.0 + 90.0 * (i - 1) / (N - 1))
    end
    return nothing
end

N = 10^9
y = Vector{Float64}(undef, N)
elapsed_time = @elapsed F!(y)
time_per_call = elapsed_time * 1.0e9 / N
@show (elapsed_time, time_per_call)
$ julia --threads=4 runit.jl
(elapsed_time, time_per_call) = (106.172722334, 106.172722334)
Thank you!! That’s wonderful, and I will try it on my machine.
Regarding evalpoly, I am happy to tell you that I did not write Horner’s method by hand; I generated the original expression in Maple (using a program that I wrote for Watson’s lemma for evaluation
of an integral), asked Maple to convert to Horner form, and then asked Maple to generate the Julia code. CodeGeneration[Julia] in Maple is pretty basic, but it did save me the effort of writing
Horner’s method by hand (and counting brackets)
1 Like
Thought it might be worth mentioning here that evalpoly uses metaprogramming under the hood to generate unrolled code, so it’s fully equivalent to the written-out Horner expression. That is to say,
it does not use a loop, in case you might be worried about the performance implications of that.
I did worry about that, and checked, and found that the evalpoly code was about ten percent faster than the original, which made me blink! Pretty impressive.
1 Like
The improvement is because evalpoly uses the muladd function instead of separate multiplication and addition operations. On capable CPUs, muladd compiles to the fma instruction, which performs x * y
+ z in a single instruction and improves both performance and precision.
You can use @code_typed to see the difference. It shows you what the function body looks like after lowering, inlining, and type inference:
julia> f_evalpoly(x) = evalpoly(x, (2.0, 2.0, 2.0, 2.0, 2.0))
f_evalpoly (generic function with 1 method)
julia> @code_typed f_evalpoly(1.0)
1-element Vector{Any}:
 CodeInfo(
1 ─ %1 = Base.muladd_float(x, 2.0, 2.0)::Float64
│ %2 = Base.muladd_float(x, %1, 2.0)::Float64
│ %3 = Base.muladd_float(x, %2, 2.0)::Float64
│ %4 = Base.muladd_float(x, %3, 2.0)::Float64
└── return %4
) => Float64
julia> f_horner(x) = 2.0 + x * (2.0 + x * (2.0 + x * (2.0 + x * 2.0)))
f_horner (generic function with 1 method)
julia> @code_typed f_horner(1.0)
1-element Vector{Any}:
 CodeInfo(
1 ─ %1 = Base.mul_float(x, 2.0)::Float64
│ %2 = Base.add_float(2.0, %1)::Float64
│ %3 = Base.mul_float(x, %2)::Float64
│ %4 = Base.add_float(2.0, %3)::Float64
│ %5 = Base.mul_float(x, %4)::Float64
│ %6 = Base.add_float(2.0, %5)::Float64
│ %7 = Base.mul_float(x, %6)::Float64
│ %8 = Base.add_float(2.0, %7)::Float64
└── return %8
) => Float64
Thank you! This is nice!
PlanetaRY spanGLES: general photometry of planets, rings and shadows
Project description
PlanetaRY spaNGLES
Pryngles is a Python package intended to produce useful visualizations of the geometric configuration of a ringed exoplanet (an exoplanet with a ring, or exoring for short) and, more importantly, to calculate the light curve produced by this kind of planet. The model behind the package has been developed in an effort to predict the signatures that exorings may produce not only in the light curve of transiting exoplanets (a problem that has been extensively studied) but also in the light of stars having non-transiting exoplanets (the bright side of the light curve).
This is an example of what can be done with Pryngles:
For the science behind the model please refer to the following papers:
Zuluaga, J.I., Sucerquia, M. & Alvarado-Montes, J.A. (2022), The bright side of the light curve: a general photometric model for non-transiting exorings, Astronomy and Computing 40 (2022) 100623,
Sucerquia, M., Alvarado-Montes, J. A., Zuluaga, J. I., Montesinos, M., & Bayo, A. (2020), Scattered light may reveal the existence of ringed exoplanets. Monthly Notices of the Royal Astronomical
Society: Letters, 496(1), L85-L90.
Download and install
pryngles is available in PyPI, https://pypi.org/project/pryngles/. To install it, just execute:
pip install -U pryngles
If you prefer, you may download and install from the sources.
Quick start
Import the package and some useful utilities:
import pryngles as pr
from pryngles import Consts
NOTE: If you are working in Google Colab before producing any plot please load the matplotlib backend:
%matplotlib inline
Any calculation in Pryngles starts by creating a planetary system:
In the example before the planet has a ring extending from 1.5 to 2.5 planetary radius which is inclined 30 degrees with respect to the orbital plane. It has an orbit with semimajor axis of 0.2 and
eccentricity 0.0.
Once the system is set we can ensamble a simulation, i.e. create an object able to produce a light curve.
To see how the surface of the planet and the rings looks like run:
You may change the position of the star in the orbit and see how the appearance of the planet changes:
Below is the sequence of commands to produce your first light curve:
import numpy as np
for lamb in lambs:
import matplotlib.pyplot as plt
ax.set_xlabel("Time [days]")
ax.set_ylabel("Flux anomaly [ppm]")
And voilà!
Let's have some Pryngles.
Realistic scattering and polarization
Starting in version 0.9.x, Pryngles is able to compute fluxes using a more realistic model for scattering that includes polarization. The new features are not yet flexible enough but they can be used
to create more realistic light curves.
These new features are based on the science and Fortran code developed by Prof. Daphne Stam and collaborators, and adapted to Pryngles environment by Allard Veenstra (Fortran and Python wrapping) and
Jorge I. Zuluaga (translation to C and ctypes). For the science behind the scattering and polarization code see:
Rossi, L., Berzosa-Molina, J., & Stam, D. M. (2018). PyMieDAP: a Python–Fortran tool for computing fluxes and polarization signals of (exo) planets. Astronomy & Astrophysics, 616, A147.
Below is a simple example of how to compute the light curve of a ringed planet whose atmosphere and ring scatter the light of the star realistically. The code computes the components of the Stokes vector and the degree of polarization of the light diffusely reflected by the system.
As shown in the example before, we first need to create the system:
sys = pr.System()
Then generate the light curve:
import numpy as np
from tqdm import tqdm
Pp = []
Pr = []
for lamb in tqdm(lambs):
Pp += [RP.Ptotp]
Pr += [RP.Ptotr]
And plot it:
ax.plot(lambs*180/np.pi-90,Rps+Rrs,label="Planet + Ring")
ax.set_ylabel("Flux anomaly [ppm]")
ax.plot(lambs*180/np.pi-90,Ptot,label="Planet + Ring")
ax.set_ylabel("Degree of polarization [-]")
ax.set_xlabel("True anomaly [deg]")
The resulting polarization and light-curve will be:
We have prepared several Jupyter tutorials to guide you in the usage of the package. The tutorials evolve as the package is being optimized.
• Quickstart [Download, Google Colab]. In this tutorial you will learn the very basics about the package.
• Developers [Download, Google Colab]. In this tutorial you will find a detailed description and exemplification of almost every part of the package. It is especially intended for developers,
however it may also be very useful for finding code snippets useful for science applications. As expected, it is under construction as the package is being developed.
Working with Pryngles we have created several Jupyter notebooks to illustrate many of its capabilities. In the examples below you will find the package at work to do actual science:
• Full-science exploration [Download, Google Colab]. In this example we include the code we used to generate the plots of the release paper arXiv:2207.08636 as a complete example of how the package
can be used in a research context.
This is the disco version of Pryngles. We are improving resolution, performance, modularity and programming standards for future releases.
What's new
For a detailed list of the newest features introduced in the latest releases pleas check What's new.
This package has been designed and written originally by Jorge I. Zuluaga, Mario Sucerquia & Jaime A. Alvarado-Montes with the contribution of Allard Veenstra (C) 2022
Linked List Operations | Data Structures and Algorithms Day #4
Learn all about linked list operations, including insertion, deletion, traversal, and search, with examples in Python and JavaScript
In the world of programming, linked lists are fundamental data structures used to organize and store data dynamically. Understanding linked list operations—such as insertion, deletion, traversal, and
search—is crucial for mastering algorithms and efficient data manipulation. Linked lists offer flexibility in memory management, making them preferable to arrays in certain scenarios. This article
breaks down key linked list operations, complete with Python and JavaScript examples, along with an analysis of their time complexities.
What Are Linked List Operations?
Operations on linked lists allow us to manipulate nodes (elements) and modify the data structure efficiently. Mastery of these operations enables developers to build robust solutions involving
stacks, queues, and even complex graph algorithms.
Types of Linked List Operations
1. Insertion in a Linked List
Inserting a new node can happen at the beginning, end, or a specific position in the list.
Python Example:
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    # Insert at the beginning
    def insert_at_beginning(self, data):
        new_node = Node(data)
        new_node.next = self.head
        self.head = new_node
JavaScript Example:
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}

class LinkedList {
    constructor() {
        this.head = null;
    }

    // Insert at the beginning
    insertAtBeginning(data) {
        const newNode = new Node(data);
        newNode.next = this.head;
        this.head = newNode;
    }
}
Time Complexity of Insertion:
• At the Beginning: O(1)
• At the End: O(n)
• At a Specific Position: O(n)
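The snippets above cover insertion at the head; one possible Python version of insertion at the end (an illustrative helper, not taken from the article's code) walks to the tail and links the new node there, which is exactly why it costs O(n):

# Insert at the end (O(n)): walk to the last node, then link the new one
def insert_at_end(self, data):
    new_node = Node(data)
    if self.head is None:          # empty list: the new node becomes the head
        self.head = new_node
        return
    current = self.head
    while current.next:
        current = current.next
    current.next = new_node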
2. Deletion from a Linked List
We can delete a node from the beginning, end, or specific position in the linked list.
Python Example:
def delete_from_beginning(self):
    if self.head is None:
        return
    self.head = self.head.next
JavaScript Example:
deleteFromBeginning() {
    if (this.head !== null) {
        this.head = this.head.next;
    }
}
Time Complexity of Deletion:
• At the Beginning: O(1)
• At the End: O(n)
• At a Specific Position: O(n)
3. Traversal of a Linked List
Traversal involves visiting each node in the list from the head to the end.
Python Example:
def traverse(self):
    current = self.head
    while current:
        print(current.data, end=" -> ")
        current = current.next
JavaScript Example:
traverse() {
    let current = this.head;
    while (current) {
        console.log(current.data + " -> ");
        current = current.next;
    }
}
Time Complexity of Traversal:
• O(n) for a list with n nodes.
4. Search in a Linked List
Searching involves checking if a specific value exists in the list.
Python Example:
def search(self, value):
    current = self.head
    while current:
        if current.data == value:
            return True
        current = current.next
    return False
JavaScript Example:
search(value) {
    let current = this.head;
    while (current) {
        if (current.data === value) return true;
        current = current.next;
    }
    return false;
}
Time Complexity of Search:
• O(n) for a list with n nodes.
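To see the operations working together, here is a small illustrative Python driver; it assumes the methods shown above are collected into the LinkedList class:

ll = LinkedList()
for value in (30, 20, 10):        # builds 10 -> 20 -> 30
    ll.insert_at_beginning(value)
ll.traverse()                      # prints: 10 -> 20 -> 30 ->
print(ll.search(20))               # True
ll.delete_from_beginning()         # removes 10
print(ll.search(10))               # False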
Comparison of Singly vs. Doubly Linked List Operations
| Operation         | Singly Linked List | Doubly Linked List |
|-------------------|--------------------|--------------------|
| Insertion at Head | O(1)               | O(1)               |
| Insertion at Tail | O(n)               | O(1)               |
| Deletion at Head  | O(1)               | O(1)               |
| Deletion at Tail  | O(n)               | O(1)               |
| Search            | O(n)               | O(n)               |
Common Misconceptions About Linked List Operations
1. "Linked list insertion is always fast."
□ While insertion at the head is fast (O(1)), inserting at the tail of a singly linked list takes O(n) time.
2. "Doubly linked lists are always better."
□ While doubly linked lists offer more flexibility, they require more memory due to the extra pointer.
FAQs About Linked List Operations
How to insert a node in a linked list?
To insert a node, create a new node and adjust the pointers to maintain the structure. Refer to the examples above for Python and JavaScript implementations.
What is the time complexity of linked list traversal?
The time complexity for traversing a linked list is O(n), where n is the number of nodes.
Which is better: linked lists or arrays?
Arrays offer faster access with O(1) indexing, but linked lists excel in dynamic memory management, making them ideal for certain algorithms and data storage needs.
Mastering linked list operations—such as insertion, deletion, traversal, and search—is key to solving more complex data structure problems. Now that you understand the basic operations, try
implementing them in your own code. Experiment with both singly and doubly linked lists to see which one suits your needs better.
If you have any questions or challenges, feel free to drop a comment below or explore our related articles for more insights!
Statistical Evidence | Likelihood
Likelihood Methods for Measuring Statistical Evidence
Science looks to Statistics for an objective measure of the strength of evidence in a given body of observations. The Law of Likelihood (LL) provides an answer; it explains how data should be
interpreted as statistical evidence for one hypothesis over another. Although likelihood ratios figure prominently in both Frequentist and Bayesian methodologies, they are neither the focus nor the
endpoint of either methodology. A defining characteristic of the Likelihood paradigm is that the measure of the strength of evidence is decoupled from the probability of observing misleading
evidence. As a result, controllable probabilities of observing misleading or weak evidence provide quantities analogous to the Type I and Type II error rates of hypothesis testing, but do not
themselves represent the strength of the evidence in the data.
The Law of Likelihood is often confused with the Likelihood Principle. I discuss the likelihood principle (LP) and its relation to the third evidential metric - false discovery rates - on this page.
The Law of Likelihood is an axiom for the interpretation of data as statistical evidence under a probability model (Royall 1997, Hacking 1965).
The Law of Likelihood (LL): If the first hypothesis, H1, implies that the probability that a random variable X takes the value x is P1(x|H1), while the second hypothesis, H2, implies that the
probability is P2(x|H2), then the observation X=x is evidence supporting H1 over H2 if and only if P(x|H1)>P(x|H2), and the likelihood ratio, P(x|H1)/P(x|H2), measures the strength of that evidence.
Here P(x|H) is the probability of observing x under hypothesis H. The ratio of conditional probabilities, P(x|H1)/P(x|H2), is the likelihood ratio (LR). LL reasons that the hypothesis that more
accurately predicted the observed data is the hypothesis that is better supported by the data. This is an intuitive proposition and is already routine reasoning in the sciences.
For the purpose of interpreting and communicating the strength of evidence, it is useful to divide the likelihood ratio scale into descriptive categories, such as ``weak", ``moderate", and ``strong"
evidence. A LR of 1 indicates neutral evidence and benchmarks of k = 8 and 32 are used to distinguish between weak, moderate, and strong evidence.
LL only applies to 'simple' hypotheses that specify a single numeric value for the probability of observing x. While the reason for the precondition is obvious - the LR is not computable if P(x|H) is
undefined - it does exclude 'composite' hypotheses that specify a set, or range, of probabilities. For example, if x~N(mu,1), probability statements such as P(x|mu>0), P(x|mu=1 or mu=-1), and P(x|mu=
sample mean) are undefined.
This problem is not unique to LL; every statistical method that depends on a likelihood ratio - and that is the majority of them - has the same problem. The general solution is to pick one simple
hypothesis to represent the set and then treat the chosen representative as if it were the pre-specified hypothesis. Examples include hypothesis testing, where the maximum is chosen as the
representative, and Bayesian methods, where the prior-weighted average is chosen as the representative. Each has it's pros and cons. The obvious concerns are that the maximum will exaggerate the
evidence against the null and the average will be sensitive to the choice of the prior.
Essentially, the Frequentist and Bayesian solution is to change the model/hypotheses so that the LR is computable. Likelihoodists, on the other hand, have sought to display and report all the
evidence. They do this in the same way that power curves - another statistical quantity that is undefined for composite hypotheses - are displayed and summarized.
Likelihood Functions and Support Intervals
The graph below shows the likelihood function from a fictitious study where 10 out of 205 people had a cardiovascular event (black curve). The likelihood function is scaled to a maximum of 1 so it is
easier to display. The x-axis is the hypothesis space: the numerical possibilities for the probability of having a cardiovascular event. All of the hypotheses under the crest of the curve
(probabilities of CV event near 0.05) are well supported by the data.
Data were collected from three sites and the grey lines represent the site specific likelihoods (data were 0 out of 30, 2 out of 70 and 7 out of 105). Judging by the maxima alone, some sites appear
to be doing better than others. However, this inference ignores the sampling variability. A better approach is to assess the overlap in the base of the curves, where it is clear the evidence is too
weak to distinguish sites based on their performance alone.
The best supported hypothesis is the maximum likelihood estimator of 0.049 (=10/205). The relative measure of support for H1: prob=0.06 over H2: prob=0.03 is weak, with a likelihood ratio of only 2.24
=L(0.06)/L(0.03)=0.784/0.35. In fact, any two hypotheses under the blue line will have a likelihood ratio of 8 or less. The comparative likelihood ratio for any two hypotheses under the red line is 1
/32 or less. These sets of hypotheses are called support intervals (SI), as they contain the best supported hypothesis up to the stated level of evidence. Hypotheses in a 1/k SI are inferentially
equivalent in the sense that the data do not favor any hypothesis in the set over another by a factor of k or more. The blue interval is the 1/8 support interval; the red is the 1/32 support interval.
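For readers who want to reproduce these numbers, here is a short Python sketch (not part of the original page; it assumes a binomial model for the 10-of-205 data and uses SciPy):

import numpy as np
from scipy.stats import binom

n, x = 205, 10
p_hat = x / n                                    # best supported hypothesis, ~0.049

def relative_likelihood(p):
    # likelihood scaled so that the maximum is 1
    return binom.pmf(x, n, p) / binom.pmf(x, n, p_hat)

lr = relative_likelihood(0.06) / relative_likelihood(0.03)   # ~0.784 / 0.35 = 2.24

grid = np.linspace(0.001, 0.2, 2000)
rl = relative_likelihood(grid)
for k in (8, 32):
    inside = grid[rl >= 1 / k]                   # 1/k support interval
    print(k, round(inside.min(), 3), round(inside.max(), 3))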
Under a normal model, support intervals and unadjusted confidence intervals (CI) have the same mathematical form. A 95% CI corresponds to a 1/6.8 SI, a 96% CI corresponds to a 1/8 SI (blue line), and
99% CI corresponds to a 1/32 SI (red line). The SI is defined relative to the likelihood function while the CI is defined relative to its coverage probability. Recall that the coverage probability of
a CI can change even when the data, and the likelihood function, do not. The natural interpretation of an unadjusted CI as the 'collection of hypotheses that are best supported by the data' is
justified by the Law of Likelihood.
The Probability of Observing Misleading Evidence
An important aspect of the likelihood paradigm is how it controls the probability of observing misleading and weak evidence. Misleading evidence is defined as strong evidence in favor of the
incorrect hypothesis over the correct hypothesis. After the data have been collected, the strength of the evidence will be determined by the likelihood ratio. Whether the evidence is weak or strong
will be clear from the numerical value of the likelihood ratio. However, it remains unknown if the evidence is misleading or not.
One nice feature of likelihoods is that they are seldom misleading. The probability of observing misleading evidence of k-strength is always bounded by 1/k, the so-called universal bound (Royall
1997, Blume 2002). LL also has strong ties to classical hypothesis testing. It is the solution that minimizes the average error rate (Cornfield, 1966) and allows both error rates to converge to 0 as
the sample size grows. It is not necessary to hold the Type I Error rate fixed to maintain good frequentist properties.
The universal bound applies to both fixed sample size and sequential study designs. In a sequential study, the probability of observing misleading evidence does increase with the number of looks at
the data. However, the amount by which it increases shrinks to zero as the sample size grows. As a result, the overall probability can remain bounded (Blume 2007). The probability of observing
misleading evidence in a fixed sample size study is less than the corresponding probability in a sequential study, and both are less than 1/k.
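A small simulation makes the universal bound concrete. The sketch below (illustrative Python, not from this page) draws normal data under H1 and records how often the likelihood ratio favors the wrong hypothesis H2 by a factor of k = 8 or more; the observed frequency stays below 1/k:

import numpy as np

rng = np.random.default_rng(0)
k, n, trials = 8, 30, 20000
mu1, mu2 = 0.0, 0.5                    # H1 (true) versus H2 (wrong), unit-variance normal

samples = rng.normal(mu1, 1.0, size=(trials, n))
loglr = (samples * (mu2 - mu1) - (mu2**2 - mu1**2) / 2).sum(axis=1)   # log LR of H2 vs H1
misleading = np.mean(loglr >= np.log(k))
print(misleading, "<=", 1 / k)         # e.g. ~0.017 <= 0.125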
Homework 3 STAT 547 solved
1. Let {W(t), 0 ≤ t ≤ 1} be a Brownian motion (Wiener process). The Brownian bridge
{B(t), 0 ≤ t ≤ 1} is the Brownian motion conditioned on W(1) = 0 and can be represented as B(t) = W(t) − tW(1). Derive the Karhunen–Loève representation of the
Brownian bridge B(t).
2. Simulate a sample of 50 realizations of a). the Brownian motion and b). the Brownian
bridge. Each curve should have 1000 support points. Show the trajectories on two
separate plots and include your R code.
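An illustrative Python sketch of the simulation (the assignment itself asks for R; the construction B(t) = W(t) − t·W(1) is applied to each simulated Brownian path):

import numpy as np
import matplotlib.pyplot as plt

n_paths, n_points = 50, 1000
t = np.linspace(0, 1, n_points)
dt = t[1] - t[0]

increments = np.random.normal(0.0, np.sqrt(dt), size=(n_paths, n_points))
increments[:, 0] = 0.0                          # W(0) = 0
W = increments.cumsum(axis=1)                   # Brownian motion paths
B = W - t * W[:, -1][:, None]                   # Brownian bridge paths

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].plot(t, W.T, linewidth=0.5); axes[0].set_title("Brownian motion")
axes[1].plot(t, B.T, linewidth=0.5); axes[1].set_title("Brownian bridge")
plt.show()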
3. Let X1, . . . , Xn be a sample of i.i.d. real-valued random variables sharing a distribution
with an unknown density f supported on a compact interval [a, b]. The kernel density
estimate (KDE) of f(x0) at x0 ∈ [a, b] is
$\hat{f}(x_0) = \frac{1}{n} \sum_{i=1}^{n} K_h(X_i - x_0),$
where $K_h(\cdot) = h^{-1} K(\cdot/h)$, $K(\cdot)$ is a kernel function, and $h > 0$ is the bandwidth. Write down an intuitive argument for why the KDE "works". [Hint: consider a uniform kernel.]
4. Analyze the Lake Acidity data in the gss package of R. The data were extracted from the
Eastern Lake Survey of 1984 conducted by the United States Environmental Protection
Agency, concerning 112 lakes in the Blue Ridge. To gain access to the data, type the
following commends in R:
library(gss)
data(LakeAcidity)
For more information check the help document about this data set.
5. Perform a nonparametric regression on the calcium concentration (Y) against surface ph
level (X).
(a) Show a KDE and a dot plot of the ph levels.
(b) Compare the results of local polynomial estimator, smoothing spline, regression
spline and penalized spline. Manually vary the tuning parameters, including bandwidth, the number of knots, and the penalty λ on the second derivative of the
regression curve. For each smoother identify a parameter setting that i). oversmooths (the estimate is too smooth), ii) undersmooths (the estimate is too rough),
and iii) smooths appropriately. Show the graphs and your code.
(c) Write a brief summary of your data analysis.
Frontiers | Regularized Kernel Algorithms for Support Estimation
• ^1INRIA—Sierra team—École Normale Supérieure, Paris, France
• ^2Laboratory for Computational and Statistical Learning, Istituto Italiano di Tecnologia, Genova, Italy
• ^3Dipartimento di Matematica, Università di Genova, Genova, Italy
• ^4DIBRIS, Università di Genova, Genova, Italy
In the framework of non-parametric support estimation, we study the statistical properties of a set estimator defined by means of Kernel Principal Component Analysis. Under a suitable assumption on
the kernel, we prove that the algorithm is strongly consistent with respect to the Hausdorff distance. We also extend the above analysis to a larger class of set estimators defined in terms of a
low-pass filter function. We finally provide numerical simulations on synthetic data to highlight the role of the hyper parameters, which affect the algorithm.
1. Introduction
A classical issue in statistics is support estimation, i.e., the problem of learning the support of a probability distribution from a set of points identically sampled according to the distribution.
For example, the Devroye-Wise algorithm [1] estimates the support with the union of suitable balls centered in the training points. In the last two decades, many algorithms have been proposed and
their statistical properties analyzed [1–14] and references therein.
An instance of the above setting, which plays an important role in applications, is the problem of novelty/anomaly detection, see Campos et al. [15] for an updated review. In this context, in
Hoffmann [16] the author proposed an estimator based on Kernel Principal Component Analysis (KPCA), first introduced in Schölkopf et al. [17] in the context of dimensionality reduction. The algorithm
was successfully tested in many applications from computer vision to biochemistry [18–24]. In many of these examples the data are often represented by high dimensional vectors, but they actually live
close to a nonlinear low dimensional submanifold of the original space, and the proposed estimator takes advantage of the fact that KPCA provides an efficient compression/dimensionality reduction of
the original data [16, 17], whereas many classical set estimators refer to the dimension of the original space, as it happens for the Devroye-Wise algorithm.
In this paper we prove that KPCA is a consistent estimator of the support of the distribution with respect to the Hausdorff distance. The result is based on an intriguing property of the reproducing
kernel, called separating condition, first introduced in De Vito et al. [25]. This assumption ensures that any closed subset of the original space is represented in the feature space by a linear
subspace. We show that this property remains true if the data are recentered to have zero mean in the feature space. Together with the results in De Vito et al. [25], we conclude that the consistency
of KPCA algorithm is preserved by recentering of the data, which can be regarded as a degree of freedom to improve the empirical performance of the algorithm in a specific application.
Our main contribution is sketched in the next subsection together with some basic properties of KPCA and some relevant previous works. In section 2, we describe the mathematical framework and the
related notations. Section 3 introduces the spectral support estimator and informally discusses its main features, whereas its statistical properties and the meaning of the separating condition for
the kernel are analyzed in section 4. Finally section 5 presents the effective algorithm to compute the decision function and discusses the role of the two meta-parameters based on the previous
theoretical analysis. In the Appendix (Supplementary Material), we collect some technical results.
1.1. Sketch of the Main Result and Previous Works
In this section we sketch our main result by first recalling the construction of the KPCA estimator introduced in Hoffmann [16]. We have at disposal a training set $\left\{{x}_{1},\dots ,{x}_{n}\
right\}\in {D}\subset {ℝ}^{d}$ of n points independently sampled according to some probability distribution P. The input space ${D}$ is a known compact subset of ℝ^d, but the probability distribution
P is unknown and the goal is to estimate the support C of P from the empirical data. We recall that C is the smallest closed subset of ${D}$ such that ℙ[C] = 1 and we stress that C is in general a
proper subset of ${D}$, possibly of low dimension.
Classical Principal Component Analysis (PCA) is based on the construction of the vector space V spanned by the first m eigenvectors associated with the largest eigenvalues of the empirical covariance matrix
$\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \overline{x}\right)\left(x_i - \overline{x}\right)^{T},$
where $\overline{x}=\frac{1}{n}\sum _{i=1}^{n}{x}_{i}$ is the empirical mean. However, if the data do not live on an affine subspace, the set V is not a consistent estimator of the support. In order
to take into account non-linear models, following the idea introduced in Schölkopf et al. [17] we consider a feature map Φ from the input space ${D}$ into the corresponding feature space ${H}$, which
is assumed to be a Hilbert space, and we replace the empirical covariance matrix with the empirical covariance operator
$\hat{T}_n^{c} = \frac{1}{n}\sum_{i=1}^{n}\Phi(x_i)\otimes\Phi(x_i) - \hat{\mu}_n\otimes\hat{\mu}_n,$
where $\hat{\mu}_{n}=\frac{1}{n}\sum_{i=1}^{n}\Phi(x_{i})$ is the empirical mean in the feature space. As it happens in PCA, we consider the subspace $\hat{V}_{m,n}$ of ${H}$ spanned by the first m eigenvectors $\hat{f}_{1},\dots,\hat{f}_{m}$ of $\hat{T}_{n}^{c}$. According to the proposal in Hoffmann [16], we consider the following estimator of the support of the probability distribution P
$\hat{C}_n = \Big\{x \in D \ \Big|\ \big\|\Phi(x) - \hat{\mu}_n - \textstyle\sum_{j=1}^{m}\langle \Phi(x)-\hat{\mu}_n, \hat{f}_j\rangle\, \hat{f}_j\big\| \le \tau_n \Big\},$
where τ[n] is a suitable threshold depending on the number of examples and $\hat{\mu}_n + \sum_{j=1}^{m}\langle \Phi(x)-\hat{\mu}_n, \hat{f}_j\rangle\, \hat{f}_j$ is the projection of an arbitrary point $x\in {D}$ onto the affine subspace $\hat{\mu}_{n}+\hat{V}_{m,n}$. We show that, under a suitable assumption on the feature map, called separating property, $\hat{C}_{n}$ is a consistent estimator of C with respect to the Hausdorff distance between compact sets, see Theorem 3 of section 4.2.
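To make the construction concrete, the following Python sketch implements the hard cut-off version of the estimator working entirely with the kernel matrix of the Laplacian kernel; all names, parameter values, and the toy data set are illustrative and not taken from the paper:

import numpy as np

def laplacian_kernel(A, B, gamma=1.0):
    # K(x, w) = exp(-gamma |x - w|), one of the separating kernels of section 2.2
    return np.exp(-gamma * np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2))

def fit(X, gamma=1.0, m=10):
    n = X.shape[0]
    K = laplacian_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    lam, V = np.linalg.eigh(H @ K @ H)            # spectrum of the centered Gram matrix
    lam, V = lam[::-1][:m], V[:, ::-1][:, :m]     # m leading eigenpairs
    return {"X": X, "K": K, "lam": np.clip(lam, 1e-12, None), "V": V, "gamma": gamma}

def residual(model, Xnew):
    # distance of Phi(x) - mu_n from the span of the m leading eigenvectors of T_n^c
    X, K, lam, V, gamma = (model[key] for key in ("X", "K", "lam", "V", "gamma"))
    kx = laplacian_kernel(Xnew, X, gamma)
    kxc = kx - kx.mean(axis=1, keepdims=True) - K.mean(axis=0)[None, :] + K.mean()
    norm2 = 1.0 - 2.0 * kx.mean(axis=1) + K.mean()          # K(x, x) = 1 for this kernel
    proj = kxc @ V / np.sqrt(lam)                           # coordinates on the eigenvectors
    return np.sqrt(np.clip(norm2 - (proj ** 2).sum(axis=1), 0.0, None))

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
train = np.column_stack([np.cos(angles), np.sin(angles)])   # support = unit circle
model = fit(train, gamma=2.0, m=20)
tau = np.quantile(residual(model, train), 0.95)             # data-driven threshold
print(residual(model, np.array([[1.0, 0.0], [0.0, 0.0]])) <= tau)   # typically [ True False]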
The separating property was introduced in De Vito et al. [25] and it ensures that the feature space is rich enough to learn any closed subset of ${D}$. This assumption plays the same role of the
notion of universal kernel [26] in supervised learning.
Moreover, following [25, 27] we extend the KPCA estimator to a class of learning algorithms defined in terms of a low-pass filter function r[m](σ) acting on the spectrum of the covariance matrix and
depending on a regularization parameter m ∈ ℕ. The projection of $\hat{\Phi}_{n}(x) = \Phi(x) - \hat{\mu}_n$ onto $\hat{V}_{m,n}$ is replaced by the vector
$\sum_{j} r_m(\hat{\sigma}_j)\,\langle \hat{\Phi}_n(x), \hat{f}_j\rangle\, \hat{f}_j,$
where $\{\hat{f}_{j}\}_{j}$ is the family of eigenvectors of $\hat{T}_{n}^{c}$ and $\{\hat{\sigma}_{j}\}_{j}$ is the corresponding family of eigenvalues. The support is then estimated by the set
$\hat{C}_n = \Big\{x \in D \ \Big|\ \big\|\textstyle\sum_{j} r_m(\hat{\sigma}_j)\langle \hat{\Phi}_n(x), \hat{f}_j\rangle \hat{f}_j - \hat{\Phi}_n(x)\big\| \le \tau_n \Big\}.$
Note that KPCA corresponds to the choice of the hard-cut off filter
$r_m(\sigma) = \begin{cases} 1 & \sigma \ge \hat{\sigma}_m, \\ 0 & \text{otherwise}. \end{cases}$
However, other filter functions can be considered, inspired by the theory of regularization for inverse problems [28] and by supervised learning algorithms [29, 30]. In this paper we show that the
explicit computation of these spectral estimators reduces to a finite dimensional problem depending only on the kernel K(x, w) = 〈Φ(x), Φ(w)〉 associated with the feature map, as for KPCA. The
computational properties of each learning algorithm depend on the choice of the low-pass filter r[m](σ), which can be tuned to out-perform of some specific data set, see the discussion in Rudi et al.
We conclude this section with two considerations. First, in De Vito et al. [25, 27] it is proven a consistency result for a similar estimator, where the subspace ${\stackrel{^}{V}}_{n,m}$ is computed
with respect to the non-centered covariance matrix in the feature space ${H}$, instead of the covariance matrix. In this paper we analyze the impact of recentering the data in the feature space ${H}$
on the support estimation problem, see Theorem 1 below. This point of view is further analyzed in Rudi et al. [32, 33].
Finally note that, our consistency results are based on convergence rates of empirical subspaces to true subspaces of the covariance operator, see Theorem 2 below. The main difference between our
result and the result in Blanchard et al. [34], is that we prove the consistency for the case when the dimension m = m[n] of the subspace ${\stackrel{^}{V}}_{m,n}$ goes to infinity slowly enough. On
the contrary, in their seminal paper [34] the authors analyze the most specific case when the dimension of the projection space is fixed.
2. Mathematical Assumptions
In this section we introduce the statistical model generating the data, the notion of separating feature map and the properties of the filter function. Furthermore, we show that KPCA can be seen as a
filter function and we recall the main properties of the covariance operators.
We assume that the input space ${D}$ is a bounded closed subset of ℝ^d. However, our results also hold true by replacing ${D}$ with any compact metric space. We denote by d(x, w) the Euclidean
distance |x − w| between two points x, w ∈ ℝ^d and by $\mathrm{d}_H(A, B)$ the Hausdorff distance between two compact subsets $A,B\subset {D}$, explicitly given by
$\mathrm{d}_H(A, B) = \max\Big\{\sup_{x\in A} \mathrm{d}(x, B),\ \sup_{x\in B} \mathrm{d}(x, A)\Big\},$
where $\mathrm{d}(x, A) = \inf_{w\in A} \mathrm{d}(x, w)$.
2.1. Statistical Model
The statistical model is described by a random vector X taking value in ${D}$. We denote by P the probability distribution of X, defined on the Borel σ-algebra of ${D}$, and by C the support of P.
Since the probability distribution P is unknown, so is its support. We aim to estimate C from a training set of empirical data, which are described by a family X[1], …, X[n] of random vectors, which
are independent and identically distributed as X. More precisely, we are looking for a closed subset ${\stackrel{^}{C}}_{n}={\stackrel{^}{C}}_{{X}_{1},\dots ,{X}_{n}}\subset {D}$, depending only on X
[1], …, X[n], but independent of P, such that
$\mathbb{P}\Big[\lim_{n\to\infty} \mathrm{d}_H(\hat{C}_n, C) = 0\Big] = 1$
for all probability distributions P. In the context of regression estimation, the above convergence is usually called universal strong consistency [35].
2.2. Mercer Feature Maps and Separating Condition
To define the estimator ${\stackrel{^}{C}}_{n}$ we first map the data into a suitable feature space, so that the support C is represented by a linear subspace.
Assumption 1. Given a Hilbert space ${H}$, take $\Phi :{D}\to {H}$ satisfying the following properties:
(H1) the set $\Phi(D)$ is total in ${H}$, i.e.,
$\overline{\mathrm{span}}\{\Phi(x) \mid x \in D\} = {H},$
where $\overline{\mathrm{span}}\{\cdot\}$ denotes the closure of the linear span;
(H2) the map Φ is continuous.
The space ${H}$ is called the feature space and the map Φ is called a Mercer feature map.
In the following the norm and scalar product of ${H}$ are denoted by ||·|| and 〈··〉, respectively.
Assumptions (H1) and (H2) are standard for kernel methods, see Steinwart and Christmann [36]. We now briefly recall some basic consequences. First of all, the map $K : D \times D \to \mathbb{R}$,
$K(x, w) = \langle \Phi(x), \Phi(w)\rangle,$
is a Mercer kernel and we denote by ${H}_K$ the corresponding (separable) reproducing kernel Hilbert space, whose elements are continuous functions on ${D}$. Moreover, each element $f\in {H}$
defines a function ${f}_{\Phi }\in {{H}}_{K}$ by setting f[Φ](x) = 〈f, Φ(x)〉 for all $x\in {D}$. Since $\Phi \left({D}\right)$ is total in ${H}$, the linear map f ↦ f[Φ] is an isometry from ${H}$
onto ${{H}}_{K}$. In the following, with slight abuse of notation, we write f instead of f[Φ], so that the elements $f\in {H}$ are viewed as functions on ${D}$ satisfying the reproducing property
$f(x) = \langle f, \Phi(x)\rangle \qquad x \in D.$
Finally, since ${D}$ is compact and Φ is continuous, it holds that
$R = \sup_{x\in D}\|\Phi(x)\|^{2} = \sup_{x\in D} K(x, x) < +\infty. \quad (1)$
Following De Vito et al. [27], we call Φ a separating Mercer feature map if the following separating property also holds true.
(H3) The map Φ is injective and for all closed subsets ${C}\subset {D}$
$\Phi(C) = \Phi(D) \cap \overline{\operatorname{span}}\{\Phi(x) \mid x \in C\}. \quad (2)$
It states that any closed subset ${C}\subset {D}$ is mapped by Φ onto the intersection of $\Phi \left({D}\right)$ and the closed subspace $\overline{\mathrm{\text{span}}}\left\{\Phi \left(x\right)\
mid x\in {C}\right\}\subset {H}$. Examples of kernels satisfying the separating property are for ${D}\subset {ℝ}^{d}$ [27]:
• Sobolev kernels with smoothness index $s>\frac{d}{2}$;
• the Abel/Laplacian kernel K(x, w) = e^ − γ|x − w| with γ > 0;
• the ℓ[1]-kernel $K\left(x,w\right)={e}^{-\gamma |x-w{|}_{1}}$, where |·|[1] is the ℓ[1]-norm and γ > 0.
As shown in De Vito et al. [25], given a closed set ${C}$ the equality (2) is equivalent to the condition that for every $x_0 \notin C$ there exists $f\in {H}$ such that
$f(x_0) \neq 0 \quad \text{and} \quad f(x) = 0 \ \ \forall x \in C. \quad (3)$
Clearly, an arbitrary Mercer feature map is not able to separate all the closed subsets, but only few of them. To better describe these sets, we introduce the elementary learnable sets, namely
$C_f = \{x \in D \mid f(x) = 0\},$
where $f\in {H}$. Clearly, $C_{f}$ is closed and the equality (3) holds true. Furthermore the intersection of an arbitrary family of elementary learnable sets $\cap_{f\in F} C_{f}$ with $F \subset {H}$ satisfies (3), too. Conversely, if ${C}$ is a set satisfying (2), select a maximal family {f[j]}[j∈J] of orthonormal functions in ${H}$ such that
$f_j(x) = \langle f_j, \Phi(x)\rangle = 0 \qquad \forall x \in C,\ j \in J,$
i.e., a basis of the orthogonal complement of $\left\{\Phi \left(x\right)\mid x\in {C}\right\}$, then it is easy to prove that
$C = \{x \in D \mid \langle f_j, \Phi(x)\rangle = 0\ \forall j \in J\} = \bigcap_{j\in J} C_{f_j}, \quad (4)$
so that any set which is separated by Φ is the (possibly denumerable) intersection of elementary sets. Assumption (H3) is hence a requirement that the family of the elementary learnable sets,
labeled by the elements of ${H}$, is rich enough to parameterize all the closed subsets of ${D}$ by means of (4). In section 4.3 we present some examples.
The Gaussian kernel $K(x, w) = e^{-\gamma|x-w|^{2}}$ is a popular choice in machine learning, however it is not separating. Indeed, since K is analytic, the elements of the corresponding reproducing kernel
Hilbert space are analytic functions, too [36]. It is known that, given an analytic function f ≠ 0, the corresponding elementary learnable set ${{C}}_{f}=\left\{x\in {D}\mid f\left(x\right)=0\right\}
$ is a closed set whose interior is the empty set. Hence the denumerable intersections also have empty interior, so that K cannot separate a support with non-empty interior. In Figure 1 we compare
the decay behavior of the eigenvalues of the Laplacian and the Gaussian kernels.
FIGURE 1
Figure 1. Eigenvalues in logarithmic scale of the Covariance operator when the kernel is Abel (blue) and Gaussian (red) and the distribution is uniformly supported on the “8” curve in Figure 2. Note
that the eigenvalue decay rate of the first operator has a polynomial behavior while the second has an exponential one.
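A rough numerical counterpart of Figure 1 can be obtained by comparing the spectra of the two Gram matrices on points drawn from a figure-eight curve; the script below is only illustrative (the kernel widths and the parametrization of the curve are arbitrary choices, not the ones used for the figure):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
s = rng.uniform(0, 2 * np.pi, 400)
X = np.column_stack([np.sin(s), np.sin(s) * np.cos(s)])     # a figure-eight (Gerono) curve

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
kernels = {"Abel/Laplacian": np.exp(-D), "Gaussian": np.exp(-D ** 2)}
for label, K in kernels.items():
    eig = np.sort(np.linalg.eigvalsh(K / len(X)))[::-1]
    plt.semilogy(np.clip(eig[:100], 1e-16, None), label=label)
plt.xlabel("index"); plt.ylabel("eigenvalue"); plt.legend(); plt.show()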
2.3. Filter Function
The second building block is a low-pass filter, which we introduce to prevent the estimator from overfitting the empirical data. Filter functions were first introduced in the context of inverse problems, see
Engl et al. [28] and references therein, and in the context of supervised learning, see Lo Gerfo et al. [29] and Blanchard and Mucke [30].
We now fix some notations. For any $f\in {H}$, we denote by f ⊗ f the rank one operator $(f \otimes f)g = \langle g, f\rangle f$. We recall that a bounded operator A on ${H}$ is a Hilbert-Schmidt operator if for some (any) basis {f[j]}[j] the series $\|A\|_{2}^{2} := \sum_{j}\|A f_{j}\|^{2}$ is finite, ||A||[2] is called the Hilbert-Schmidt norm and ||A||[∞] ≤ ||A||[2], where ||·||[∞] is the spectral norm. We denote by ${S}_2$ the space of Hilbert-Schmidt operators, which is a separable Hilbert space under the scalar product $\langle A, B\rangle_{2} = \sum_{j}\langle A f_{j}, B f_{j}\rangle$.
Assumption 2. A filter function is a sequence of functions r[m]:[0, R] → [0, 1], with m ∈ ℕ, satisfying
(H4) for any m ∈ ℕ, r[m](0) = 0;
(H5) for all σ > 0, $\underset{m\to +\infty }{lim}{r}_{m}\left(\sigma \right)=1$;
(H6) for all m ∈ ℕ, there is L[m] > 0 such that
$|r_m(\sigma) - r_m(\sigma')| \le L_m\,|\sigma - \sigma'| \qquad \forall\, \sigma, \sigma' \in [0, R],$
i.e., r[m] is a Lipschitz function with Lipschitz constant L[m].
For fixed m, r[m] is a filter cutting the smallest eigenvalues (high frequencies). Indeed, (H4) and (H6) with σ′ = 0 give
$|r_m(\sigma)| \le L_m|\sigma|. \quad (5)$
On the contrary, if m goes to infinity, by (H5) r[m] converges pointwise to the Heaviside function
$\Theta(\sigma) = \begin{cases} 1 & \sigma > 0, \\ 0 & \sigma = 0. \end{cases}$
Since r[m](σ) converges to Θ(σ), which does not satisfy (5), we have that $\lim_{m\to+\infty} L_m = +\infty$.
We fix the interval [0, R] as domain of the filter functions r[m] since the eigenvalues of the operators we are interested in belong to [0, R], see (23).
Examples of filter functions are
• Tikhonov filter
$r_m(\sigma) = \dfrac{m\sigma}{m\sigma + R}, \qquad L_m = \dfrac{m}{R}.$
• Soft cut-off
$r_m(\sigma) = \begin{cases} 1 & \sigma \ge \dfrac{R}{m}, \\ \dfrac{m\sigma}{R} & \sigma < \dfrac{R}{m}, \end{cases} \qquad L_m = \dfrac{m}{R}.$
• Landweber iteration
$r_m(\sigma) = \dfrac{\sigma}{R}\sum_{k=0}^{m}\left(1 - \dfrac{\sigma}{R}\right)^{k} = 1 - \left(1 - \dfrac{\sigma}{R}\right)^{m+1}, \qquad L_m = \dfrac{m+1}{R}.$
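A possible NumPy rendering of the three filters (m is the regularization parameter and R the upper bound on the spectrum; the function names are ours, not the paper's):

import numpy as np

def tikhonov(sigma, m, R):
    return m * sigma / (m * sigma + R)

def soft_cutoff(sigma, m, R):
    return np.where(sigma >= R / m, 1.0, m * sigma / R)

def landweber(sigma, m, R):
    return 1.0 - (1.0 - sigma / R) ** (m + 1)

sigma = np.linspace(0.0, 1.0, 5)
for f in (tikhonov, soft_cutoff, landweber):
    print(f.__name__, np.round(f(sigma, m=10, R=1.0), 3))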
We recall a technical result, which is based on functional calculus for compact operators. If A is a positive Hilbert-Schmidt operator, Hilbert-Schmidt theorem (for compact self-adjoint operators)
gives that there exist a basis {f[j]}[j] of ${H}$ and a family {σ[j]}[j] of positive numbers such that
$A = \sum_{j}\sigma_j\, f_j \otimes f_j \quad \Longleftrightarrow \quad A f_j = \sigma_j f_j. \quad (6)$
If the spectral norm ||A||[∞] ≤ R, then all the eigenvalues σ[j] belong to [0, R] and the spectral calculus defines r[m](A) as the operator on ${H}$ given by
$r_m(A) = \sum_{j} r_m(\sigma_j)\, f_j \otimes f_j \quad \Longleftrightarrow \quad r_m(A) f_j = r_m(\sigma_j) f_j.$
With this definition each f[j] is still an eigenvector of r[m](A), but the corresponding eigenvalue is shrunk to r[m](σ[j]). Proposition 1 in the Appendix in Supplementary Material summarizes the
main properties of r[m](A).
2.4. Kernel Principal Component Analysis
As anticipated in the introduction, the estimators we propose are a generalization of KPCA suggested by Hoffmann [16] in the context of novelty detection. In our framework this corresponds to the
hard cut-off filter, i.e., by labeling the different eigenvalues of A in a decreasing order σ[1] > σ[2] > … > σ[m] > σ[m+1] > …, the filter function is
$r_m(\sigma) = \begin{cases} 1 & \sigma > \sigma_{m+1}, \\ 0 & \sigma \le \sigma_{m+1}. \end{cases}$
Clearly, r[m] satisfies (H4) and (H5), but (H6) does not hold. However, the Lipschitz assumption is needed only to prove the bound (21e) and, for the hard cut-off filter, r[m](A) is simply the
orthogonal projector onto the linear space spanned by the eigenvectors whose eigenvalues are bigger than σ[m+1]. For such projections [37] proves the bound
$\|r_m(A) - r_m(B)\|_{2} \le \frac{2}{\sigma_m - \sigma_{m+1}}\,\|A - B\|_{2},$
so that (21e) holds true with $L_m = \frac{2}{\sigma_m - \sigma_{m+1}}$. Hence, our results also hold for the hard cut-off filter at the price of a Lipschitz constant L[m] depending on the eigenvalues of A.
2.5. Covariance Operators
The third building block is made of the eigenvectors of the distribution dependent covariance operator and of its empirical version. The covariance operators are computed by first mapping the data in
the feature space ${H}$.
As usual, we introduce two random variables Φ(X) and Φ(X) ⊗ Φ(X), taking value in ${H}$ and in ${{S}}_{2}$, respectively. Since Φ is continuous and X belongs to the compact subset ${D}$, both random
variables are bounded. We set
$\mu = \mathbb{E}[\Phi(X)] = \int_{D}\Phi(x)\, dP(x), \quad (7a)$
$T = \mathbb{E}[\Phi(X)\otimes\Phi(X)] = \int_{D}\Phi(x)\otimes\Phi(x)\, dP(x), \quad (7b)$
$T^{c} = T - \mu\otimes\mu, \quad (7c)$
where the integrals are in the Bochner sense.
We denote by ${\stackrel{^}{\mu }}_{n}$ and ${\stackrel{^}{T}}_{n}$ the empirical mean of Φ(X) and Φ(X) ⊗ Φ(X), respectively, and by $\stackrel{^}{{T}_{n}^{c}}$ the empirical covariance operator,
respectively. Explicitly,
$\hat{\mu}_n = \frac{1}{n}\sum_{i}\Phi(X_i), \quad (8a)$
$\hat{T}_n = \frac{1}{n}\sum_{i}\Phi(X_i)\otimes\Phi(X_i), \quad (8b)$
$\hat{T}_n^{c} = \hat{T}_n - \hat{\mu}_n\otimes\hat{\mu}_n. \quad (8c)$
The main properties of the covariance operator and its empirical version are summarized in Proposition 2 in the Appendix in Supplementary Material.
3. The Estimator
Now we are ready to construct the estimator, whose computational aspects are discussed in section 5. The set ${\stackrel{^}{C}}_{n}$ is defined by the following three steps:
a) the points $x\in {D}$ are mapped into the corresponding centered vectors $\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}\in {H}$, where the center is the empirical mean;
b) the operator ${r}_{m}\left(\stackrel{^}{{T}_{n}^{c}}\right)$ is applied to each vector $\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}$;
c) the point $x\in {D}$ is assigned to ${\stackrel{^}{C}}_{n}$ if the distance between ${r}_{{m}_{n}}\left({\stackrel{^}{T}}_{n}^{c}\right)\left(\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}\right)$
and $\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}$ is smaller than a threshold τ.
Explicitly we have that
$\hat{C}_n = \big\{x \in D \mid \|r_{m_n}(\hat{T}_n^{c})(\Phi(x) - \hat{\mu}_n) - (\Phi(x) - \hat{\mu}_n)\| \le \tau_n\big\}, \quad (9)$
where τ = τ[n] and m = m[n] are chosen as a function of the number n of training data.
With the choice of the hard cut-off filter, this reduces to the KPCA algorithm [16, 17]. Indeed, ${r}_{m}\left({\stackrel{^}{T}}_{n}^{c}\right)$ is the projection Q^m onto the vector space spanned by
the first m eigenvectors. Hence ${\stackrel{^}{C}}_{n}$ is the set of points x whose image $\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}$ is close to Q^m. For an arbitrary filter function r[m], Q^m
is replaced by ${r}_{m}\left({\stackrel{^}{T}}_{n}^{c}\right)$, which can be interpreted as a smooth version of Q^m. Note that, in general, ${r}_{m}\left({\stackrel{^}{T}}_{n}^{c}\right)$ is not a
In De Vito et al. [27] a different estimator is defined. In that paper the data are mapped in the feature space ${H}$ without centering the points with respect to the empirical mean and the estimator
is given by
where the filter function r[m] is as in the present work, but ${r}_{{m}_{n}}\left({\stackrel{^}{T}}_{n}\right)$ is defined in terms of the eigenvectors of the non-centered second momentum ${\stackrel
{^}{T}}_{n}$. To compare the two estimators note that
$||rm(T^nc)(Φ(x)−μ^n)−(Φ(x)−μ^n)||2 =〈(I−rm(T^nc))2(Φ(x)−μ^n)Φ(x)−μ^n〉 =〈(I−rm∗(T^nc))(Φ(x)−μ^n)Φ(x)−μ^n〉,$
where ${r}_{m}^{*}\left(\sigma \right)=2{r}_{m}\left(\sigma \right)-{r}_{m}{\left(\sigma \right)}^{2}$, which is a filter function too, possibly with a Lipschitz constant ${L}_{m}^{*}\le 2{L}_{m}$.
Note that for the hard cut-off filter ${r}_{m}^{*}\left(\sigma \right)={r}_{m}\left(\sigma \right)$.
Though ${r}_{{m}_{n}}\left({\stackrel{^}{T}}_{n}\right)$ and ${r}_{{m}_{n}}\left({\stackrel{^}{T}}_{n}^{c}\right)$ are different, both ${\stackrel{^}{C}}_{n}$ and ${\stackrel{~}{C}}_{n}$ converge to
the support of the probability distribution P, provided that the separating property (H3) holds true. Hence, one has the freedom to choose if the empirical data have or not zero mean in the feature
4. Main Results
In this section, we prove that the estimator ${\stackrel{^}{C}}_{n}$ we introduce is strongly consistent. To state our results, for each n ∈ ℕ, we fix an integer m[n] ∈ ℕ and set ${\stackrel{^}{F}}_
{n}:{D}\to {H}$ to be
so that Equation (9) becomes
$C^n={x∈D∣||F^n(x)||≤τn}. (10$
4.1. Spectral Characterization
First of all, we characterize the support of P by means of Q^c, the orthogonal projector onto the null space of the distribution dependence covariance operator T^c. The following theorem will show
that the centered feature map
$Φc:D→H Φc(x)=Φ(x)−μ$
sends the support C onto the intersection of ${\Phi }^{c}\left({D}\right)$ and the closed subspace $\left(I-{Q}^{c}\right){H}$, i.e,
Theorem 1. Assume that Φ is a separating Mercer feature map, then
$C={x∈D∣Qc(Φ(x)−μ)=0}, (11)$
where Q^c is the orthogonal projector onto the null space of the covariance operator T^c.
Proof. To prove the result we need some technical lemmas, we state and prove in the Appendix in Supplementary Material. Assume first that $x\in {D}$ is such that Q^c(Φ(x) − μ) = 0. Denoted by Q the
orthogonal projection onto the null space of T, by Lemma 2 QQ^c = Q and Qμ = 0, so that
Hence Lemma 1 implies that x ∈ C.
Conversely, if x ∈ C, then as above Q(Φ(x) − μ) = 0. By Lemma 2 we have that Q^c(1 − Q) = ||Q^cμ||^−2Q^c μ ⊗ Q^cμ. Hence it is enough to prove that
which holds true by Lemma 3.
4.2. Consistency
Our first result is about the convergence of ${\stackrel{^}{F}}_{n}$.
Theorem 2. Assume that Φ is a Mercer feature map. Take the sequence {m[n]}[n] such that
$limn→∞mn=+∞, (12a)$
$Lmn≤κnln n, (12b)$
for some constant κ > 0, then
$ℙ[limn→∞supx∈D||F^n(x)−Qc(Φ(x)−μ)||=0]=1. (13)$
Proof. We first prove Equation (13). Set ${A}_{n}=I-{r}_{{m}_{n}}\left(\stackrel{^}{{T}_{n}^{c}}\right)$. Given $x\in {D}$,
$||An(Φ(x)−μ^n)−Qc(Φ(x)−μ)|| ≤||(An−Qc)(Φ(x)−μ)||+||An(μ−μ^n)|| ≤||rmn(T^nc)−rmn(Tc)||2||Φ(x)−μ|| +||(rmn(Tc)−(I−Qc))(Φ(x)−μ)||+||An(μ−μ^n)|| ≤2RLmn||T^nc−Tc||2 +||(rmn(Tc)−(I−Qc))(Φ(x)−μ)||+ ||μ−μ^n
where the fourth line is due to (21e), the bound $||{A}_{n}|{|}_{\infty }=\underset{\sigma \in \left[0,1\right]}{sup}|1-{r}_{m}\left(\sigma \right)|\le 1$, and the fact that both Φ(x) and μ are
bounded by $\sqrt{R}$. By (12b) it follows that
$supx∈D||An(Φ(x)−μ^n)−Qc(Φ−μ)||≤2Rκnln n||T^nc−Tc||2 + supx∈D||(rmn(Tc)−(I−Qc))(Φ(x)−μ)||−||μ−μ^n||,$
so that, taking into account (24a) and (24c), it holds that
$limn→+∞supx∈D||An(Φ(x)−μ^n)−Qc(Φ−μ)|| =0$
almost surely, provided that lim[n→+∞]∥(r[m[n]](T^c) − (I − Q^c))(Φ(x) − μ)∥ = 0. This last limit is a consequence of (21d) observing that $\left\{\Phi \left(x\right)-\mu \mid x\in {D}\right\}$ is
compact since ${D}$ is compact and Φ is continuous.
We add some comments. Theorem 1 suggests that the consistency depends on the fact that the vector $\left(I-{r}_{{m}_{n}}\left({\stackrel{^}{T}}_{n}^{c}\right)\right)\left(\Phi \left(x\right)-{\
stackrel{^}{\mu }}_{n}\right)$ is a good approximation of Q^c(Φ(x)−μ). By the law of large numbers, ${\stackrel{^}{T}}_{n}$ and ${\stackrel{^}{\mu }}_{n}$ converge to T and μ, respectively, and
Equation (21d) implies that, if m is large enough, (I − r[m](T))(Φ(x)−μ) is closed to Q^c(Φ(x)−μ). Hence, if m[n] is large enough, see condition (12a), we expect that ${r}_{{m}_{n}}\left({\stackrel
{^}{T}}_{n}^{c}\right)$ is close to ${r}_{{m}_{n}}\left({T}^{c}\right)$. However, this is true only if m[n] goes to infinity slowly enough, see condition (12b). The rate depends on the behavior of
the Lipschitz constant L[m], which goes to infinity if m goes to infinity. For example, for Tikhonov filter a sufficient condition is that ${m}_{n}~{n}^{\frac{1}{2}-ϵ}$ with ϵ > 0. With the right
choice of m[n], the empirical decision function ${\stackrel{^}{F}}_{n}$ converges uniformly to the function F(x) = Q^c(Φ(x) − μ), see Equation (13).
If the map Φ is separating, Theorem 1 gives that the zero level set of F is precisely the support C. However, if C is not learnable by Φ, i.e., the equality (2) does not hold, then the zero level set
of F is bigger than C. For example, if ${D}$ is connected, C has not-empty interior and Φ is the feature map associated with the Gaussian kernel, it is possible to prove that F is an analytic
function, which is zero on an open set, hence it is zero on the whole space ${D}$. We note that, in real applications the difference between Gaussian and Abel kernel, which is separating, is not so
big and in our experience the Gaussian kernel provides a reasonable estimator.
From now on we assume that Φ is separating, so that Theorem 1 holds true. However, the uniform convergence of ${\stackrel{^}{F}}_{n}$ to F does not imply that the zero level sets of ${\stackrel{^}
{F}}_{n}$ converges to C = F^−1(0) with respect to the Hausdorff distance. For example, with the Tikhonov filter ${\stackrel{^}{F}}_{n}^{-1}\left(0\right)$ is always the empty set. To overcome the
problem, ${\stackrel{^}{C}}_{n}$ is defined as the τ[n]-neighborhood of the zero level set of ${\stackrel{^}{F}}_{n}$, where the threshold τ[m] goes to zero slowly enough.
Define the data dependent parameter ${\stackrel{^}{\tau }}_{n}$ as
$τ^n=max1≤i≤n||F^n(Xi)||. (14)$
Since ${\stackrel{^}{F}}_{n}\in \left[0,1\right]$, clearly ${\stackrel{^}{\tau }}_{n}\in \left[0,1\right]$ and the set estimator becomes
The following result shows that ${\stackrel{^}{C}}_{n}$ is a universal strongly consistent estimator of the support of the probability distribution P. Note that for KPCA the consistency is not
universal since the choice of m[n] depends on some a-priori information about the decay of the eigenvalues of the covariance operator T^c, which depends on P.
Theorem 3. Assume that Φ is a separating Mercer feature map. Take the sequence {m[n]}[n] satisfying (12a)-(12b) and define ${\stackrel{^}{\tau }}_{n}$ by (14). Then
$ℙ[limn→∞τ^n=0]=1, (15a)$
$ℙ[limn→∞dH(C^n,C)=0]=1. (15b)$
Proof. We first show Equation (15a). Set F(x) = Q^c(Φ(x) − μ) and let E be the event on which ${\stackrel{^}{F}}_{n}$ converges uniformly to F(x), and F be the event such that X[i] ∈ C for all i ≥ 1.
Theorem 2 shows that ℙ[E] = 1 and, since C is the support, then ℙ[F] = 1. Take ω ∈ E ∩ F and fix ϵ > 0, then there exists n[0] > 0 (possibly depending on ω and ϵ) such that for all n ≥ n[0] $|{\
stackrel{^}{F}}_{n}\left(x\right)-F\left(x\right)|\le ϵ$ for all $x\in {D}$. By Theorem 1 F(x) = 0 for all x ∈ C and X[1](ω), …, X[n](ω) ∈ C, it follows that $|{\stackrel{^}{F}}_{n}\left({X}_{i}\left
(\omega \right)\right)|\le ϵ$ for all 1 ≤ i ≤ n so that $0\le {\stackrel{^}{\tau }}_{n}\left(\omega \right)\le ϵ$, so that the sequence ${\stackrel{^}{\tau }}_{n}\left(\omega \right)$ goes to zero.
Since ℙ[E ∩ F] = 1 Equation (15a) holds true.
We split the proof of Equation (15b) in two steps. We first show that with probability one $\underset{n\to +\infty }{lim}\underset{x\in {\stackrel{^}{C}}_{n}}{sup}d\left(x,C\right)=0$. On the event E
∩ F, suppose, by contraction, that the sequence ${\left\{{sup}_{x\in {\stackrel{^}{C}}_{n}}\mathrm{\text{d}}\left(x,C\right)\right\}}_{n}$ does not converge to zero. Possibly passing to a
subsequence, for all n ∈ ℕ there exists ${x}_{n}\in {\stackrel{^}{C}}_{n}$ such that $d\left({x}_{n},{\stackrel{^}{C}}_{n}\right)\ge {ϵ}_{0}$ for some fixed ϵ[0] > 0. Since ${D}$ is compact, possibly
passing to a subsequence, {x[n]}[n] converges to ${x}_{0}\in {D}$ with d(x[0], C) ≥ ϵ[0]. We claim that x[0] ∈ C. Indeed,
$||Qc(Φ(x0)−μ)|| ≤ ||Qc(Φ(x0)−Φ(xn))|| + ||F^n(xn)−Qc(Φ(xn)−μ)||+||F^n(xn)|| ≤ ||Qc(Φ(x0)−Φ(xn))|| + supx∈D||F^n(x)−Qc(Φ(x)−μ)+τn,$
since ${x}_{n}\in {\stackrel{^}{C}}_{n}$ means that $||{\stackrel{^}{F}}_{n}\left({x}_{n}\right)||\le {\tau }_{n}$. If n goes to infinity, since Φ is continuous and by the definition of E and F, the
right side of the above inequality goes to zero, so that $||{Q}^{c}\left(\Phi \left({x}_{0}\right)-\mu \right)||=0$, i.e., by Theorem 1 we get x[0] ∈ C, which is a contraction since by construction d
(x[0], C) ≥ ϵ[0] > 0.
We now prove that
For any $x\in {D}$, set X[1,n](x) to be a first neighbor of x in the training set {X[1], …, X[n]}. It is known that for all x ∈ C,
$ℙ[limn→+∞d(X1,n(x),x)=0]=1, (16)$
see for example Lemma 6.1 of Györfi et al. [35].
Choose a denumerable family {z[j]}[j ∈ J] in C such that is dense in C. By Equation (16) there exists an event G with such that ℙ[G] = 1 and, on G, for all j ∈ J
Fix ω ∈ G, we claim that $\underset{n}{lim}{sup}_{x\in C}\mathrm{\text{d}}\left(x,{\stackrel{^}{C}}_{n}\right)=0$. Observe that, by definition of ${\stackrel{^}{\tau }}_{n}$, ${X}_{i}\in {\stackrel
{^}{C}}_{n}$ for all 1 ≤ i ≤ n and
so that it is enough to show that ${\mathrm{lim}}_{n\to +\infty }{\text{sup}}_{x\in C}\text{d}\left({X}_{1,n}\left(x\right),x\right)=0$.
Fix ϵ > 0. Since C is compact, there is a finite subset J[ϵ] ⊂ J such that {B(z[j], ϵ)}[j ∈ J[ϵ]] is a finite covering of C. Furthermore,
$supx∈Cd(X1,n(x),x)≤maxj∈Jϵd(X1,n(zj),zj)+ϵ. (17)$
Indeed, fix x ∈ C, there exists an index j ∈ J[ϵ] such that x ∈ B(z[j], ϵ). By definition of first neighbor, clearly
so that by triangular inequality we get
$d(X1,n(x),x)≤d(X1,n(zj),x)≤d(X1,n(zj),zj)+d(zj,x) ≤d(X1,n(zj),zj)+ϵ ≤maxj∈Jϵd(X1,n(zj),zj)+ϵ.$
Taking the supremum over C we get the claim. Since ω ∈ G and J[ϵ] is finite,
so that by Equation (17)
$limsup n→+∞supx∈Cd(X1,n(x),x)≤ϵ.$
Since ϵ is arbitrary, we get $\underset{n\to +\infty }{lim}{sup}_{x\in C}\mathrm{\text{d}}\left({X}_{1,n}\left(x\right),x\right)=0$, which implies that
Theorem 3 is an asymptotic result. Up to now, we are not able to provide finite sample bounds on d[H](Ĉ[n], C). It is possible to have finite sample bounds on $||{\stackrel{^}{F}}_{n}\left(x\right)-
{Q}^{c}\left(\Phi \left(x\right)-\mu \right)||$, as in Theorem 7 of De Vito et al. [25] with the same kind of proof.
4.3. The Separating Condition
The following two examples clarify the notion of the separating condition.
Example 1. Let ${D}$ be a compact subset of ℝ^2, ${H}={ℝ}^{6}$ with the euclidean scalar product, and $\Phi :{D}\to {ℝ}^{6}$ be the feature map
whose corresponding Mercer kernel is a polynomial kernel of degree two, explicitly given by
Given a vector $f={\left({f}_{1},\dots ,{f}_{6}\right)}^{\top }$, the corresponding elementary set is the conic
$Cf,c={(x,y)∈D∣f1x2+f2y2+f32xy +f42x+f52y+f6=0},$
Conversely, all the conics are elementary sets. The family of all the intersections of at most five conics, i.e., the sets whose cartesian equation is a system of the form
where f[11], …, f[56] ∈ ℝ.
Example 2. The data are the random vectors in ℝ^2
where a, c ∈ ℕ, b, d ∈ [0, 2π] and Θ[1], …, Θ[n] are independent random variables, each of them uniformly distributed on [0, 2π]. Setting ${D}={\left[-1,1\right]}^{2}$, clearly ${X}_{i}\in {D}$ and
the support of their common probability distribution is the Lissajous curve
Figure 2 shows two examples of Lissajous curves. As a filter function r[m], we fix the hard cut-off filter where m is the number of eigenvectors corresponding to the highest eigenvalues we keep. We
denote by ${Ĉ}_{n}^{m,\tau }$ the corresponding estimator given by (10).
FIGURE 2
Figure 2. Examples of Lissajous curves for different values of the parameters. (Left) ${\mathrm{\text{Lis}}}_{1,0,1,\frac{\pi }{2}}$. (Right) Lis[2,0.11,1,0.3].
In the first two tests we use the polynomial kernel (18), so that the elementary learnable sets are conics. One can check that the rank of T^c is less or equal than 5. More precisely, if Lis[a,b,c,d]
is a conic, the rank of T^c is 4 and we need to estimate five parameters, whereas if Lis[a,b,c,d] is not a conic, Lis[a,b,c,d] is not a learnable set and the rank of T^c is 5.
In the first test the data are sampled from the distribution supported on the circumference ${\mathrm{\text{Lis}}}_{1,0,1,\frac{\pi }{2}}$ (see panel left of Figure 2). In Figure 3 we draw the set $
{Ĉ}_{n}^{m,\tau }$ for different values of m and τ when n varies. In this toy example n = 5 is enough to learn exactly the support, hence for each n = 2, …, 5 the corresponding values of m[n] and τ[n
] are m[n] = 1, 2, 3, 4 and τ[n] = 0.01, 0.005, 0.005, 0.002.
FIGURE 3
Figure 3. From left to right, top to bottom. The set ${Ĉ}_{n}^{m,\tau }$ with n, respectively 2, 3, 4, 5, m = 1, 2, 3, 4 and τ = 0.1, 0.005, 0.005, 0.002.
In the second test the data are sampled from the distribution supported on the curve Lis[2,0.11,1,0.3], which is not a conic (see panel right of Figure 2). In Figure 4 we draw the set ${Ĉ}_{n}^{m,\
tau }$ for n = 10, 20, 50, 100, m = 4, and τ = 0.01. Clearly, ${\stackrel{^}{C}}_{n}$ is not able to estimate Lis[2,0.11,1,0.3].
FIGURE 4
In the third test, we use the Abel kernel with the data sampled from the distribution supported on the curve Lis[2,0.11,1,0.3] (see panel right of Figure 2). In Figure 5 we show the set ${Ĉ}_{n}^{m,\
tau }$ for n = 20, 50, 100, 500, m = 5, 20, 30, 50, and τ = 0.4, 0.35, 0.3, 0.2. According the fact that the kernel is separating, ${\stackrel{^}{C}}_{n}$ is able to estimate Lis[2,0.11,1,0.3]
FIGURE 5
Figure 5. From left to right and top to bottom. The learned set ${Ĉ}_{n}^{m,\tau }$ with n respectively of 20, 50, 100, 500, and m = 5, 20, 30, 50, τ = 0.4, 0.35, 0.3, 0.2.
We now briefly discuss how to select the parameter m[n] and τ[n] from the data. The goal of set-learning problem is to recover the support of the probability distribution generating the data by using
the given input observations. Since no output is present, set-learning belongs to the category of unsupervised learning problems, for which there is not a general framework accounting for model
selection. However there are some possible strategies (whose analysis is out of the scope of this paper). A first approach, we used in our simulations, is based on the monotonicity properties of ${Ĉ}
_{n}^{m,\tau }$ with respect to m, τ. More precisely, given f ∈ (0, 1), we select (the smallest) m and (the biggest) τ such that at most nf observed points belong to the the estimated set. It is
possible to prove that this method is consistent when f tends to 1 as the number of observations increases. Another way to select the parameters consists in transforming the set-learning problem is a
supervised one and then performing standard model selection techniques like cross validation. In particular set-learning can be casted in a classification problem by associating the observed example
to the class +1 and by defining an auxiliary measure μ (e.g., uniform on a ball of interest in ${D}$) associated to −1, from which n i.i.d. points are drawn. It is possible to prove that this last
method is consistent when μ(suppρ) = 0.
4.4. The Role of the Regularization
We now explain the role of the filter function. Given a training set X[1], …, X[n] of size n, the separating property (3) applied to the support of the empirical distribution gives that
where ${\stackrel{^}{\mu }}_{n}$ is the empirical mean and $I-\stackrel{^}{{Q}_{n}^{c}}$ is the orthogonal projection onto the linear space spanned by the family $\left\{\Phi \left({X}_{1}\right)-{\
stackrel{^}{\mu }}_{n},\dots ,\Phi \left({X}_{n}\right)-{\stackrel{^}{\mu }}_{n}\right\}$, which are the centered images of the examples. Hence, given a new point x ∈ D the condition $|\stackrel{^}
{{Q}_{n}^{c}}\left(\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}\right)||\text{ }\le \text{ }\tau$ with τ ≪ 1 is satisfied if only if x is close to one of the examples in the training set. Hence the
naive estimator $\left\{x\in {D}|‖\stackrel{^}{{Q}_{n}^{c}}\left(\Phi \left(x\right)-\stackrel{^}{{\mu }_{n}}\right)‖\le \tau \right\}$ overfits the data. Hence we would like to replace $\stackrel{^}
{{Q}_{n}^{c}}$ with an operator, which should be close to the identity on the linear subspace spanned by $\left\{\Phi \left({X}_{1}\right)-{\stackrel{^}{\mu }}_{n},\dots ,\Phi \left({X}_{n}\right)-{\
stackrel{^}{\mu }}_{n}\right\}$ and it should have a small range. To modulate the two requests, one can consider the following optimization problem
We note that if A is a projection its Hilbert-Schmidt norm ||A||[2] is the square root of the dimension of the range of A. Since
$1n∑i=1n||(I−A)(Φ(Xi)−μ^n)||2 =Tr((I−A⊤)(I−A))1n∑i=1n(Φ(Xi)−μ^n)⊗(Φ(Xi)−μ^n)) =Tr((I−A⊤)(I−A)Tnc⌢),$
where Tr(A) is the trace, A^⊤ is the transpose and $||A|{|}_{2}=\sqrt{\mathrm{\text{Tr}}\left({A}^{\top }A\right)}$ is the Hilbert-Schmidt norm, then
$1n∑i=1n||(I−A)(Φ(Xi)−μ^n)||2+λ||A||22 = Tr((I−A⊤)(I−A)T^nc + λmA⊤A),$
and the optimal solution is given by
i.e., A[opt] is precisely the operator ${r}_{m}\left(\stackrel{^}{{T}_{n}^{c}}\right)$ with the Tikhonov filter ${r}_{m}\left(\sigma \right)=\frac{\sigma }{\sigma +\lambda }$ and $\lambda =\frac{R}
{m}$. A different choice of the filter function r[m] corresponds to a different regularization of the least-square problem
5. The Kernel Machine
In this section we show that the computation of $||{\stackrel{^}{F}}_{n}\left(x\right)||$, in terms of which is defined the estimator ${\stackrel{^}{C}}_{n}$, reduces to a finite dimensional problem,
depending only on the Mercer kernel K, associated with the feature map. We introduce the centered sampling operator
$Snc:ℋ→ℝn (Sncf)i=〈f,Φ(Xi)−μ^n〉,$
whose transpose is given by
$Snc⊤:ℝn→ℋ Snc⊤v=∑i=1vi(Φ(Xi)−μ^n),$
where v[i] is the i-th entry of the column vector v ∈ ℝ^n. Hence, it holds that
where K[n] is the n × n matrix whose (i, j)-entry is K(X[i], X[j]) and I[n] is the identity n × n matrix, so that the (i, j)-entry of ${S}_{n}^{c}{{S}_{n}^{c}}^{\top }$ is
Denoted by ℓ the rank of ${S}_{n}^{c}{{S}_{n}^{c}}^{\top }$, take the singular value decomposition of ${S}_{n}^{c}{{S}_{n}^{c}}^{\top }/n$, i.e.,
where V is an n × ℓ matrix whose columns ${\text{v}}_{j}\in {ℝ}^{n}$ are the normalized eigenvectors, ${V}^{\top }V={\mathrm{\text{I}}}_{\ell }$, and Σ is a diagonal ℓ × ℓ matrix with the strictly
positive eigenvalues on the diagonal, i.e., $\Sigma =\mathrm{\text{diag}}\left({\stackrel{^}{\sigma }}_{1},\dots ,{\stackrel{^}{\sigma }}_{\ell }\right)$. Set $U={{S}_{n}^{c}}^{\top }V{\Sigma }^{-\
frac{1}{2}}$, regarded as operator from ℝ^ℓ to ${H}$, then a simple calculation shows that
$Tnc^=UΣU⊤ rm(Tnc^)=Urm(Σ)U⊤,$
where r[m](Σ) is the diagonal ℓ × ℓ matrix
and the equation for ${r}_{m}\left(\stackrel{^}{{T}_{n}^{c}}\right)$ holds true since by assumption r[m](0) = 0. Hence
$||F^n(x)||2=〈(I−rm(Tnc^))(Φ(x)−μ^n)(I−rm(Tnc^))(Φ(x)−μ^n)〉 =〈Φ(x)−μ^nΦ(x)−μ^n〉−〈(2rm(Tnc^) − rm(Tnc^)2)(Φ(x)−μ^n)〉Φ(x)−μ^n =〈Φ(x)−μ^nΦ(x)−μ^n〉−〈U(2rm(Σ) − rm(Σ)2)U⊤(Φ(x)−μ^n)〉Φ(x)−μ^n =〈Φ
(x)−μ^nΦ(x)−μ^n〉−〈VΣ−12(2rm(Σ) − rm(Σ)2)Σ−12V⊤Snc(Φ(x)−μ^n)〉Snc(Φ(x)−μ^n) =w(x)−v(x)⊤Gmv(x), (19)$
where the real number $w\left(x\right)=〈\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}\Phi \left(x\right)-{\stackrel{^}{\mu }}_{n}〉$ is
the i-th entry of the column vector v(x) ∈ ℝ^n is
$v(x)i=(Snc(Φ(x)−μ^n))i=K(x,Xi)−1n∑aK(Xa,x) − 1n∑bK(Xi,Xb)+1n2∑a,bK(Xa,Xb),$
the diagonal ℓ × ℓ matrix ${R}_{m}\left(\Sigma \right)={\Sigma }^{-1}\left(2{r}_{m}\left(\Sigma \right)-{r}_{m}{\left(\Sigma \right)}^{2}\right)$ is
and the n × n-matrix G[m] is
$Gm=VRm(Σ)V⊤. (20)$
In Algorithm 1 we list the corresponding MatLab Code.
The above equations make clear that both ${\stackrel{^}{F}}_{n}$ and ${\stackrel{^}{C}}_{n}$ can be computed in terms of the singular value decomposition (V, Σ) of the n × n Gram matrix K[n] and of
the filter function r[m], so that ${\stackrel{^}{F}}_{n}$ belongs to the class of kernel methods and ${\stackrel{^}{C}}_{n}$ is a plug-in estimator. For the hard cut-off filter, one simply has
For real applications, a delicate issue is the choice of the parameters m and τ, we refer to Rudi et al. [31] for a detailed discussion. Here, we add some simple remarks.
We first discuss the role of τ. According to (10), ${Ĉ}_{n}^{m,\tau }\subseteq {Ĉ}_{n}^{m,{\tau }^{\prime }}$ whenever τ < τ′. We exemplify this behavior with the dataset of Example 2. The training
set is sampled from the distribution supported on the curve Lis[2,0.11,1,0.3] (see panel right of Figure 2) and we compute ${\stackrel{^}{C}}_{n}$ with the Abel kernel, n = 100 and m ranging over 5,
10, 20, 50. Figure 6 shows the nested sets when τ runs in the associated color-bar.
FIGURE 6
Figure 6. From left to right and top to bottom: The family of sets ${Ĉ}_{100}^{5,\tau }$,${Ĉ}_{100}^{10,\tau }$,${Ĉ}_{100}^{20,\tau }$,${Ĉ}_{100}^{50,\tau }$ with τ varying as in the related
Analyzing the role of m, we now show that, for the the hard cut-off filter, ${Ĉ}_{n}^{{m}^{\prime },\tau }\subseteq {Ĉ}_{n}^{m,\tau }$ whenever m′ ≤ m. Indeed, this filter satisfies ${r}_{{m}^{\prime
}}\left(\sigma \right)\le {r}_{m}\left(\sigma \right)$ and, since 0 ≤ r[m](σ) ≤ 1, one has ${\left(1-{r}_{m}\left(\sigma \right)\right)}^{2}\le {\left(1-{r}_{{m}^{\prime }}\left(\sigma \right)\
right)}^{2}$. Hence, denoted by ${\left\{{\stackrel{^}{u}}_{j}\right\}}_{j}$ a base of eigenvectors of $\stackrel{^}{{T}_{n}^{c}}$, it holds that
$||(I−rm(Tnc^))(Φ(x)−μ^n)||2=∑j(1−rm(σ^j))2〈Φ(x)−μ^nu^j〉2 ≤∑j(1−rm′(σ^j))2〈Φ(x)−μ^nu^j〉2 = ||(I−rm′(Tnc^))(Φ(x)−μ^n)||2.$
Hence, for any point in $x\in {Ĉ}_{n}^{{m}^{\prime },\tau }$,
$||(I−rm(Tnc^))(Φ(x)−μ^n) ||ℋ2 ≤ ||(I−rm′(Tnc^))(Φ(x)−μ^n) ||ℋ2≤τ2,$
so that $x\in {Ĉ}_{n}^{m,\tau }$.
As above, we illustrate the different choices of m with the data sampled from the curve Lis[2,0.11,1,0.3] and the Abel kernel where n = 100 and τ ranges over 0.25, 0.3, 0.4, 0.5. Figure 7 shows the
nested sets when m runs in the associated color-bar.
FIGURE 7
Figure 7. From left to right and top to bottom: The family of sets ${Ĉ}_{100}^{m,0.25}$,${Ĉ}_{100}^{m,0.3}$,${Ĉ}_{100}^{m,0.4}$,${Ĉ}_{100}^{m,0.5}$ with m varying as in the related colorbar.
6. Discussion
We presented a new class of set estimators, which are able to learn the support of an unknown probability distribution from a training set of random data. The set estimator is defined through a
decision function, which can be seen as a novelty/anomality detection algorithm as in Schölkopf et al. [6].
The decision function we defined is a kernel machine. It is computed by the singular value decomposition of the empirical (kernel)-covariance matrix and by a low pass filter. An example of filter is
the hard cut-off function and the corresponding decision function reduces to KPCA algorithm for novelty detection first introduced by Hoffmann [16]. However, we showed that it is possible to choose
other low pass filters, as it was done for a class of supervised algorithms in the regression/classification setting [38].
Under some weak assumptions on the low pass filter, we proved that the corresponding set estimator is strongly consistent with respect to the Hausdorff distance, provided that the kernel satisfies a
suitable separating condition, as it happens, for example, for the Abel kernel. Furthermore, by comparing Theorem 2 with a similar consistency result in De Vito et al. [27], it appears clear that the
algorithm correctly learns the support both if the data have zero mean, as in our paper, and if the data are not centered, as in De Vito et al. [27]. On the contrary, if the separating property does
not hold, the algorithm learns only the supports that are mapped into linear subspaces by the feature map defined by the kernel.
The set estimator we introduced depends on two parameters: the effective number m of eigenvectors defining the decision function and the thickness τ of the region estimating the support. The role of
these parameters and of the separating property was briefly discussed by a few tests on toy data.
We finally observe that our class of set learning algorithms is very similar to classical kernel machines in supervised learning. So, in order to reduce both the computational cost and the memory
requirements, there is the possibility to successfully implement some new advanced approximation techniques, for which there exist theoretical guarantees for the statistical learning setting. For
example random features [39, 40], Nyström projections [41, 42] or mixed approaches with iterative regularization and preconditioning [43, 44].
Author Contributions
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
ED is member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fams.2017.00023/full#supplementary-material
1. ^Here, the labeling is different from the one in (6), where the eigenvalues are repeated according to their multiplicity.
1. Devroye L, Wise GL. Detection of abnormal behavior via nonparametric estimation of the support. SIAM J Appl Math. (1980) 38:480–8. doi: 10.1137/0138038
2. Korostelëv AP, Tsybakov AB. Minimax Theory of Image Reconstruction. New York, NY: Springer-Verlag (1993).
3. Dümbgen L, Walther G. Rates of convergence for random approximations of convex sets. Adv Appl Probab. (1996) 28:384–93. doi: 10.2307/1428063
4. Cuevas A, Fraiman R. A plug-in approach to support estimation. Ann Stat. (1997) 25:2300–12. doi: 10.1214/aos/1030741073
5. Tsybakov AB. On nonparametric estimation of density level sets. Ann Stat. (1997) 25:948–69. doi: 10.1214/aos/1069362732
6. Schölkopf B, Platt J, Shawe-Taylor J, Smola A, Williamson R. Estimating the support of a high-dimensional distribution. Neural Comput. (2001) 13:1443–71. doi: 10.1162/089976601750264965
7. Cuevas A, Rodríguez-Casal A. Set estimation: an overview and some recent developments. In: Recent Advances and Trends in Nonparametric Statistics. Elsevier: Amsterdam (2003). p. 251–64.
8. Reitzner M. Random polytopes and the Efron-Stein jackknife inequality. Ann Probab. (2003) 31:2136–66. doi: 10.1214/aop/1068646381
9. Steinwart I, Hush D, Scovel C. A classification framework for anomaly detection. J Mach Learn Res. (2005) 6:211–32.
10. Vert R, Vert JP. Consistency and convergence rates of one-class SVMs and related algorithms. J Mach Learn Res. (2006) 7:817–54.
11. Scott CD, Nowak RD. Learning minimum volume sets. J Mach Learn Res. (2006) 7:665–704.
12. Biau G, Cadre B, Mason D, Pelletier B. Asymptotic normality in density support estimation. Electron J Probab. (2009) 91:2617–35.
13. Cuevas A, Fraiman R. Set estimation. In: W. Kendall and I. Molchanov, editors. New Perspectives in Stochastic Geometry. Oxford: Oxford University Press (2010). p. 374–97.
14. Bobrowski O, Mukherjee S, Taylor JE. Topological consistency via kernel estimation. Bernoulli (2017) 23:288–328. doi: 10.3150/15-BEJ744
15. Campos GO, Zimek A, Sander J, Campello RJ, Micenková B, Schubert E, et al. On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study. Data Min Knowl Discov.
(2016) 30:891–927. doi: 10.1007/s10618-015-0444-8
16. Hoffmann H. Kernel PCA for novelty detection. Pattern Recognit. (2007) 40:863–74. doi: 10.1016/j.patcog.2006.07.009
17. Schölkopf B, Smola A, Müller KR. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. (1998) 10:1299–319.
18. Ristic B, La Scala B, Morelande M, Gordon N. Statistical analysis of motion patterns in AIS data: anomaly detection and motion prediction. In: 2008 11th International Conference on Information
Fusion (2008). p. 1–7.
19. Lee HJ, Cho S, Shin MS. Supporting diagnosis of attention-deficit hyperactive disorder with novelty detection. Artif Intell Med. (2008) 42:199–212. doi: 10.1016/j.artmed.2007.11.001
20. Valero-Cuevas FJ, Hoffmann H, Kurse MU, Kutch JJ, Theodorou EA. Computational models for neuromuscular function. IEEE Rev Biomed Eng. (2009) 2:110–35. doi: 10.1109/RBME.2009.2034981
21. He F, Yang JH, Li M, Xu JW. Research on nonlinear process monitoring and fault diagnosis based on kernel principal component analysis. Key Eng Mater. (2009) 413:583–90. doi: 10.4028/
22. Maestri ML, Cassanello MC, Horowitz GI. Kernel PCA performance in processes with multiple operation modes. Chem Prod Process Model. (2009) 4:1934–2659. doi: 10.2202/1934-2659.1383
23. Cheng P, Li W, Ogunbona P. Kernel PCA of HOG features for posture detection. In: VCNZ'09. 24th International Conference on Image and Vision Computing New Zealand, 2009. Wellington (2009). p.
24. Sofman B, Bagnell JA, Stentz A. Anytime online novelty detection for vehicle safeguarding. In: 2010 IEEE International Conference on Robotics and Automation (ICRA). Pittsburgh, PA (2010). p.
25. De Vito E, Rosasco L, Toigo A. A universally consistent spectral estimator for the support of a distribution. Appl Comput Harmonic Anal. (2014) 37:185–217. doi: 10.1016/j.acha.2013.11.003
26. Steinwart I. On the influence of the kernel on the consistency of support vector machines. J Mach Learn Res. (2002) 2:67–93. doi: 10.1162/153244302760185252
27. De Vito E, Rosasco L, Toigo A. Spectral regularization for support estimation. In: NIPS. Vancouver, BC (2010). p. 1–9.
28. Engl HW, Hanke M, Neubauer A. Regularization of Inverse Problems. Vol. 375 of Mathematics and its Applications. Dordrecht: Kluwer Academic Publishers Group (1996).
29. Lo Gerfo L, Rosasco L, Odone F, De Vito E, Verri A. Spectral algorithms for supervised learning. Neural Comput. (2008) 20:1873–97. doi: 10.1162/neco.2008.05-07-517
30. Blanchard G, Mücke N. Optimal rates for regularization of statistical inverse learning problems. In: Foundations of Computational Mathematics. (2017). Available online at: https://arxiv.org/abs/
31. Rudi A, Odone, F, De Vito E. Geometrical and computational aspects of Spectral Support Estimation for novelty detection. Pattern Recognit Lett. (2014) 36:107–16. doi: 10.1016/j.patrec.2013.09.025
32. Rudi A, Canas, GD, Rosasco L. On the sample complexity of subspace learning. In: Burges CJC, Bottou L, Welling M, Ghahramani Z, Weinberger KQ, editors. Advances in Neural Information Processing
Systems. Lake Tahoe: Neural Information Processing Systems Conference (2013). p. 2067–75.
33. Rudi A, Canas GD, De Vito E, Rosasco L. Learning Sets and Subspaces. In: Suykens JAK, Signoretto M, and Argyriou A, editors. Regularization, Optimization, Kernels, and Support Vector Machines.
Boca Raton, FL: Chapman and Hall/CRC (2014). p. 337.
34. Blanchard G, Bousquet O, Zwald L. Statistical properties of kernel principal component analysis. Machine Learn. (2007) 66:259–94. doi: 10.1007/s10994-006-8886-2
35. Györfi L, Kohler M, Krzy zak A, Walk H. A Distribution-Free Theory of Nonparametric Regression. Springer Series in Statistics. New York, NY: Springer-Verlag (2002).
36. Steinwart I, Christmann A. Support Vector Machines. Information Science and Statistics. New York, NY: Springer (2008).
37. Zwald L, Blanchard G. On the Convergence of eigenspaces in kernel principal component analysis. In: Weiss Y, Schölkopf B, Platt J, editors. Advances in Neural Information Processing Systems 18.
Cambridge, MA: MIT Press (2006). p. 1649–56.
38. De Vito E, Rosasco L, Caponnetto A, De Giovannini U, Odone F. Learning from examples as an inverse problem. J Machine Learn Res. (2005) 6:883–904.
39. Rahimi A, Recht B. Random features for large-scale kernel machines. In: Koller D, Schuurmans D, Bengio, Y, Bottou L, editors. Advances in Neural Information Processing Systems. Vancouver, BC:
Neural Information Processing Systems Conference (2008). p. 1177–84.
40. Rudi A, Camoriano R, Rosasco L. Generalization properties of learning with random features. arXiv preprint arXiv:160204474 (2016).
41. Smola AJ, Schölkopf B. Sparse greedy matrix approximation for machine learning. In: Proceeding ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning. Stanford, CA:
Morgan Kaufmann (2000). p. 911–18.
42. Rudi A, Camoriano R, Rosasco L. Less is more: nyström computational regularization. In: Cortes C, Lawrence ND, Lee DD, Sugiyama M, Garnett R, editors. Advances in Neural Information Processing
Systems. Montreal, QC: Neural Information Processing Systems Conference (2015). p. 1657–1665.
43. Camoriano R, Angles T, Rudi A, Rosasco L. NYTRO: when subsampling meets early stopping. In: Gretton A, Robert CC, editors. Artificial Intelligence and Statistics. Cadiz: Proceedings of Machine
Learning Research (2016). p. 1403–11.
44. Rudi A, Carratino L, Rosasco L. FALKON: an optimal large scale Kernel method. arXiv preprint arXiv:170510958 (2017).
45. Folland G. A Course in Abstract Harmonic Analysis. Studies in Advanced Mathematics. Boca Raton, FL: CRC Press (1995).
46. Birman MS, Solomyak M. Double operator integrals in a Hilbert space. Integr Equat Oper Theor. (2003) 47:131–68. doi: 10.1007/s00020-003-1157-8
47. De Vito E, Umanità V, Villa S. A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization. J Complex. (2011) 27:188–200. doi: 10.1016/j.jco.2011.01.003
Keywords: support estimation, Kernel PCA, novelty detection, dimensionality reduction, regularized Kernel methods
Citation: Rudi A, De Vito E, Verri A and Odone F (2017) Regularized Kernel Algorithms for Support Estimation. Front. Appl. Math. Stat. 3:23. doi: 10.3389/fams.2017.00023
Received: 11 August 2017; Accepted: 25 October 2017;
Published: 08 November 2017.
Copyright © 2017 Rudi, De Vito, Verri and Odone. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction
in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No
use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ernesto De Vito, devito@dima.unige.it | {"url":"https://www.frontiersin.org/journals/applied-mathematics-and-statistics/articles/10.3389/fams.2017.00023/full","timestamp":"2024-11-03T19:10:46Z","content_type":"text/html","content_length":"1049026","record_id":"<urn:uuid:4597a37b-206c-4f5d-b051-f12572f58948>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00246.warc.gz"} |
[Libre-soc-dev] [RFC] SVP64 Vertical-First Mode loops
lkcl luke.leighton at gmail.com
Thu Aug 19 11:43:22 BST 2021
On August 19, 2021 1:46:59 AM UTC, Richard Wilbur <richard.wilbur at gmail.com> wrote:
>> On Aug 18, 2021, at 18:13, lkcl <luke.leighton at gmail.com> wrote:
>> 8 4 2 1 on batch sizes and num coefficients
>> 1 2 4 8 times reuse on coefficients
>Is this for FFT?
no, DCT. N ln N coefficients actually N/2 ln N
> Very cool, I suspected it would be pretty good reuse.
it's not.
RADIX2 FFT on the other hand there are N coefficients and you *can* reuse them, for row 2 you jump every other coefficient for all 2 sub-crossbars, for row 3 you jump every 4th coefficient for sub-sub-crossbars but they are the same N coefficients.
for DCT the same thing happens as far as jumping is concerned and in-row reuse, but because of the i+0.5 they are NOT THE SAME IN EACH ROW
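(purely illustrative, not the SVP64/REMAP code: a small NumPy sketch of the difference, with a Lee-style DCT factor chosen just for the demo)

import numpy as np

def fft_twiddles(n):
    # radix-2 FFT stage twiddles: W_n^k = exp(-2*pi*j*k/n), k = 0 .. n/2-1
    return np.exp(-2j * np.pi * np.arange(n // 2) / n)

def dct_factors(n):
    # Lee-style DCT butterfly factors; note the half-sample shift (i + 0.5)
    return 1.0 / (2.0 * np.cos((np.arange(n // 2) + 0.5) * np.pi / n))

N = 16
# True: the half-size FFT twiddles are a stride-2 subset of the full set, so they can be reused
print(np.allclose(fft_twiddles(N // 2), fft_twiddles(N)[::2]))
# False: the (i + 0.5) shift means every DCT row needs its own factors
print(np.allclose(dct_factors(N // 2), dct_factors(N)[::2]))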
>I wasn’t specific enough when I asked, “How much coefficient reuse in a
>particular row?” I meant to ask concerning the DCT since it isn’t an
>option to share coefficients between rows in that algorithm.
and i answered as per your question.
>Except that the input numbers are rationals with a common denominator
>for a particular row in DCT. I think we could effectively store them
>with a particular structure based on the denominator, indexed with the
>integer count along the row. More of a coefficient array/RAM than
>cache (your usage of this term was more loaded than mine, I simply was
>referring to a convenient place to stow the numbers where we could
>easily and quickly get them back when needed).
indeed... at the cost of designing and adding yet more instructions, this time with an extremely small probability that they can be put to use elsewhere.
the butterfly REMAP schedule is generic and i can foresee it being used elsewhere.
>Did you mean to describe the case where the matrix is square?
as a special case yes although implementations i've seen try to do at least one dimension as power-two then use bernstein convolution for the other.
even with power-of-two sizes you may end up with e.g. 128 (2^7), which is an odd power of 2, i.e. not square: it breaks into 2^3 x 2^4
CS 561a: Introduction to Artificial Intelligence
CS 561, Sessions 4-5
This time: informed search
Informed search:
Use heuristics to guide the search
Best first
Simulated annealing
CS 561, Sessions 4-5
Best-first search
use an evaluation function for each node; estimate of “desirability”
expand most desirable unexpanded node.
QueueingFn = insert successors in decreasing order of desirability
Special cases:
greedy search
A* search
CS 561, Sessions 4-5
Romania with step costs in km
CS 561, Sessions 4-5
Greedy search
Estimation function:
h(n) = estimate of cost from n to goal (heuristic)
For example:
hSLD(n) = straight-line distance from n to Bucharest
Greedy search expands first the node that appears to be closest to the goal, according to h(n).
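A minimal sketch of greedy best-first search (illustrative, not taken from the original slides): the frontier is ordered purely by h(n). On the Romania map with hSLD this returns Arad -> Sibiu -> Fagaras -> Bucharest (450 km), which is not the shortest route, illustrating why greedy search is not optimal.

import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the node with the smallest heuristic value h(n)."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, _cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None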
CS 561, Sessions 4-5
Properties of Greedy Search
Complete? No – can get stuck in loops
e.g., Iasi > Neamt > Iasi > Neamt > …
Complete in finite space with repeated-state checking.
Time? O(b^m) but a good heuristic can give
dramatic improvement
Space? O(b^m) – keeps all nodes in memory
Optimal? No.
CS 561, Sessions 4-5
A* search
Idea: avoid expanding paths that are already expensive
evaluation function: f(n) = g(n) + h(n) with:
g(n) – cost so far to reach n
h(n) – estimated cost to goal from n
f(n) – estimated total cost of path through n to goal
A* search uses an admissible heuristic, that is, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal from n.
For example: hSLD(n) never overestimates actual road distance.
Theorem: A* search is optimal
Note: A* is also optimal if the heuristic is consistent, i.e., h(n) ≤ c(n, a, n') + h(n') for every successor n' of n generated by any action a
(a consistent heuristic is admissible (by induction), but the converse is not always true)
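The same search loop works for A* by ordering the frontier on f(n) = g(n) + h(n); a sketch (illustrative, using the road distances and straight-line-distance values of the standard Romania example):

import heapq

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]          # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h[neighbour], new_g,
                                          neighbour, path + [neighbour]))
    return None, float("inf")

# A fragment of the Romania map (km) and the hSLD values to Bucharest:
graph = {"Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu Vilcea": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
h_sld = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374, "Fagaras": 176,
         "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}
print(a_star(graph, h_sld, "Arad", "Bucharest"))
# -> (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)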
CS 561, Sessions 4-5
Optimality of A* (standard proof)
Suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on a shortest path to an optimal goal G1.
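The chain of inequalities on the original slide did not survive extraction; the standard argument (as given in Russell & Norvig) is, in brief:
f(G2) = g(G2), since h(G2) = 0 (G2 is a goal), and g(G2) > g(G1) = f(G1), since G2 is suboptimal.
Because h is admissible and n lies on an optimal path to G1: f(n) = g(n) + h(n) ≤ g(n) + h*(n) = g(G1) = f(G1).
Hence f(n) ≤ f(G1) < f(G2), so A* always expands n before G2; it therefore reaches and returns the optimal goal G1 before G2 could ever be selected for expansion.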
CS 561, Sessions 4-5
Optimality of A* (more useful proof)
CS 561, Sessions 4-5
What do the contours look like when h(n) = 0?
CS 561, Sessions 4-5
Properties of A*
Complete? Yes, unless there are infinitely many nodes with f ≤ f(G)
Time? Exponential in [(relative error in h) x (length of solution)]
Space? Keeps all nodes in memory
Optimal? Yes – cannot expand fi+1 until fi is finished
CS 561, Sessions 4-5
Proof of lemma: pathmax
CS 561, Sessions 4-5
Admissible heuristics
CS 561, Sessions 4-5
Relaxed Problem
Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.
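The definitions of h1 and h2 referred to above were on a figure slide; they are the standard 8-puzzle heuristics: h1 = number of misplaced tiles, h2 = total Manhattan distance. A small sketch (the start and goal configurations are the ones used in the usual textbook example):

GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)          # 0 marks the blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def h2(state):
    """Sum of Manhattan distances of the tiles from their goal squares."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = GOAL.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
print(h1(start), h2(start))   # -> 8 18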
CS 561, Sessions 4-5
This time
Iterative improvement
Hill climbing
Simulated annealing
CS 561, Sessions 4-5
Iterative improvement
In many optimization problems, path is irrelevant;
the goal state itself is the solution.
Then, state space = space of “complete” configurations.
Algorithm goal:
– find optimal configuration (e.g., TSP), or,
– find configuration satisfying constraints
(e.g., n-queens)
In such cases, can use iterative improvement algorithms: keep a single “current” state, and try to improve it.
CS 561, Sessions 4-5
Iterative improvement example: vacuum world
Simplified world: 2 locations, each may or may not contain dirt, and each may or may not contain the vacuuming agent.
Goal of agent: clean up the dirt.
If path does not matter, do not need to keep track of it.
CS 561, Sessions 4-5
Iterative improvement example: n-queens
Goal: Put n chess-game queens on an n x n board, with no two queens on the same row, column, or diagonal.
Here, goal state is initially unknown but is specified by constraints that it must satisfy.
CS 561, Sessions 4-5
Hill climbing (or gradient ascent/descent)
Iteratively maximize “value” of current state, by replacing it by successor state that has highest value, as long as possible.
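A minimal sketch of this loop (illustrative only; neighbours() and value() are assumed to be supplied by the problem):

def hill_climb(start, neighbours, value):
    """Greedy ascent: move to the best neighbour until no neighbour improves the value."""
    current = start
    while True:
        candidates = neighbours(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current          # local maximum (or plateau) reached
        current = best

# Toy usage: maximise f(x) = -(x - 3)**2 over the integers, stepping by +/-1.
f = lambda x: -(x - 3) ** 2
print(hill_climb(10, lambda x: [x - 1, x + 1], f))   # -> 3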
CS 561, Sessions 4-5
Hill climbing
Note: minimizing a “value” function v(n) is equivalent to maximizing –v(n),
thus both notions are used interchangeably.
Notion of “extremization”: find extrema (minima or maxima) of a value function.
CS 561, Sessions 4-5
Hill climbing
Problem: depending on initial state, may get stuck in local extremum.
CS 561, Sessions 4-5
Minimizing energy
Let’s now change the formulation of the problem a bit, so that we can employ new formalism:
– let’s compare our state space to that of a physical system that is subject to natural interactions,
– and let’s compare our value function to the overall potential energy E of the system.
On every update, we have ΔE ≤ 0
CS 561, Sessions 4-5
Minimizing energy
Hence the dynamics of the system tend to move E toward a minimum.
We stress that there may be different such states — they are local minima. Global minimization is not guaranteed.
CS 561, Sessions 4-5
Local Minima Problem
Question: How do you avoid this local minimum?
(Figure: an energy landscape showing a local minimum, a global minimum, and the barrier to local search between them.)
CS 561, Sessions 4-5
Boltzmann machines
The Boltzmann Machine of
Hinton, Sejnowski, and Ackley (1984)
uses simulated annealing to escape local minima.
To motivate their solution, consider how one might get a ball-bearing traveling along the curve to "probably end up" in the deepest minimum. The idea is to shake the box "about h hard" (where h is the height of the barrier as seen from the shallower valley D): then the ball is more likely to go from D to C than from C to D. So, on average, the ball should end up in C's valley.
CS 561, Sessions 4-5
Consequences of the Occasional Ascents
Help escaping the local optima – the desired effect.
Might pass the global optimum after reaching it – an adverse effect (easy to avoid by keeping track of the best-ever state).
CS 561, Sessions 4-5
Simulated annealing: basic idea
From current state, pick a random successor state;
If it has better value than current state, then “accept the transition,” that is, use successor state as current state;
Otherwise, do not give up, but instead flip a coin and accept the transition with a given probability (that is lower as the successor is worse).
So we accept to sometimes “un-optimize” the value function a little with a non-zero probability.
CS 561, Sessions 4-5
Boltzmann’s statistical theory of gases
In the statistical theory of gases, the gas is described not by a deterministic dynamics, but rather by the probability that it will be in different states.
The 19th century physicist Ludwig Boltzmann developed a theory that included a probability distribution of temperature (i.e., every small region of the gas had the same kinetic energy).
Hinton, Sejnowski and Ackley’s idea was that this distribution might also be used to describe neural interactions, where low temperature T is replaced by a small noise term T (the neural analog of
random thermal motion of molecules). While their results primarily concern optimization using neural networks, the idea is more general.
CS 561, Sessions 4-5
Boltzmann distribution
At thermal equilibrium at temperature T, the Boltzmann distribution gives the relative probability that the system will occupy state A vs. state B as:
P(A)/P(B) = exp( -(E(A) - E(B)) / T ),
where E(A) and E(B) are the energies associated with states A and B.
CS 561, Sessions 4-5
Simulated annealing
Kirkpatrick et al. 1983:
Simulated annealing is a general method for making likely the escape from local minima by allowing jumps to higher energy states.
The analogy here is with the process of annealing used by a craftsman in forging a sword from an alloy.
He heats the metal, then slowly cools it as he hammers the blade into shape.
If he cools the blade too quickly the metal will form patches of different composition;
If the metal is cooled slowly while it is shaped, the constituent metals will form a uniform alloy.
CS 561, Sessions 4-5
Simulated annealing in practice
set T
optimize for given T
lower T (see Geman & Geman, 1984)
Geman & Geman (1984): if T is lowered sufficiently slowly (with respect to the number of iterations used to optimize at a given T), simulated annealing is guaranteed to find the global minimum.
Caveat: this algorithm has no end (Geman & Geman’s T decrease schedule is in the 1/log of the number of iterations, so, T will never reach zero), so it may take an infinite amount of time for it to
find the global minimum.
CS 561, Sessions 4-5
Simulated annealing algorithm
Idea: Escape local extrema by allowing “bad moves,” but gradually decrease their size and frequency.
Note: goal here is to
maximize E.
CS 561, Sessions 4-5
Simulated annealing algorithm
Idea: Escape local extrema by allowing “bad moves,” but gradually decrease their size and frequency.
Algorithm when goal
is to minimize E.
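The algorithm listing itself was on a figure slide; a minimal sketch of the usual loop, written here for the minimisation case and with a simple geometric cooling schedule (the slide's schedule may differ; Geman & Geman's guarantee needs the much slower 1/log schedule mentioned above):

import math, random

def simulated_annealing(start, neighbour, energy, t0=1.0, cooling=0.995, steps=10_000):
    current, e_current = start, energy(start)
    best, e_best = current, e_current
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = energy(candidate) - e_current            # ΔE, goal is to minimise E
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, e_current = candidate, e_current + delta
        if e_current < e_best:                           # keep track of the best-ever state
            best, e_best = current, e_current
        t *= cooling                                     # geometric cooling schedule
    return best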
CS 561, Sessions 4-5
Note on simulated annealing: limit cases
Boltzmann distribution: accept a "bad move" with ΔE < 0 (goal is to maximize E) with probability P(ΔE) = exp(ΔE/T)
If T is large: ΔE < 0, ΔE/T is negative but very small in magnitude, so exp(ΔE/T) is close to 1 and bad moves are accepted with high probability: the search behaves like a random walk
If T is near 0: ΔE < 0, ΔE/T is negative and very large in magnitude, so exp(ΔE/T) is close to 0 and bad moves are accepted with low probability: the search behaves like deterministic up-hill climbing
CS 561, Sessions 4-5
Genetic Algorithms
CS 561
How do you find a solution in a large complex space?
Ask an expert?
Adapt existing designs?
Trial and error?
What has been traditionally done in the heat exchanger world is to take a best "guess" at a design with the help of an expert. Traditional heat exchanger designs are usually fully mathematically described. This initial guess can then be used as a basis, and various parameters are "tweaked". The performance of the heat exchanger can be recalculated to see if the modified design is an improvement on the original one.
Surely there must be a better way? There is - we don't need to look very far to find examples of optimisation in Nature.
CS 561
Example: Traveling Sales Person (TSP)
Classic Example: You have N cities, find the shortest route such that your salesperson will visit each city once and return.
This problem is known to be NP-Hard: as a new city is added to the problem, computation time in the classic solution increases exponentially, O(2^n) … (as far as we know).
(Figure: a map with Dallas, Houston, San Antonio, Austin and Mos Eisley. "Is this the shortest path???" asks A Texas Sales Person.)
What if……… Let's create a whole bunch of random sales people and see how well they do and pick the best one(s).
Salesperson A
Houston -> Dallas -> Austin -> San Antonio -> Mos Eisely
Distance Traveled 780 Km
Salesperson B
Houston -> Mos Eisley -> Austin -> San Antonio -> Dallas
Distance Traveled 820 Km
Salesperson A is better (more fit) than salesperson B
Perhaps we would like sales people to be more like A and less like B
do we want to just keep picking random sales people like this and keep testing them?
CS 561
Represent problem like a DNA sequence
(Figure: each salesperson's sequence of cities drawn as a DNA-like strand.)
Each DNA sequence is an encoding of a
possible solution to the problem.
DNA – Salesperson A
DNA – Salesperson B
The order of the cities in
the genes is the order of
the cities the TSP will take.
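A sketch of this encoding in code (illustrative; the city coordinates below are made up):

import math, random

CITIES = {"Houston": (0, 0), "Dallas": (225, 180), "Austin": (-140, 60),
          "San Antonio": (-190, -10), "Mos Eisley": (-400, -300)}   # made-up coordinates

def tour_length(tour):
    """Round-trip distance of a tour; smaller means a fitter salesperson."""
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(math.dist(CITIES[a], CITIES[b]) for a, b in legs)

def random_salesperson():
    tour = list(CITIES)          # one "gene" per city, order = order of visits
    random.shuffle(tour)
    return tour

population = [random_salesperson() for _ in range(50)]
best = min(population, key=tour_length)
print(best, round(tour_length(best)))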
Once the fitness has been assigned, pairs of chromosomes representing heat exchanger designs can be chosen for mating.
The higher the fitness, the greater the probability of the design being selected. Consequently, some of the weaker population members do not mate at all, whilst superior ones are chosen many times.
It is even statistically possible for a member to be chosen to mate with itself. This has no advantage, as the offspring will be identical to the parent.
CS 561
Ranking by Fitness:
Here we’ve created three different salespeople. We then checked to see how
far each one has to travel. This gives us a measure of “Fitness”
Note: we need to be able to measure fitness in polynomial time, otherwise we are in trouble.
Travels Shortest Distance
Once the population has been formed (either randomly in the initial generation, or by mating in subsequent generations), each population member needs to be assessed against the desired properties – such a rating is called a "fitness".
The design parameters represented by the zeros and ones in the binary code of each chromosome are fed into the mathematical model describing the heat exchanger. The output parameters for each design
are used to give the fitness rating. A good design has a high fitness value, and a poor design a lower value.
Let’s breed them!
We have a population of traveling sales people. We also know their fitness based on how long their trip is. We want to create more, but we don’t want to create too many.
We take the notion that the salespeople who perform better are closer to the optimal salesperson than the ones which performed more poorly. Could the optimal sales person be a “combination” of the
better sales people?
We create a population of sales people as solutions to the problem.
How do we actually mate a population of data???
CS 561
Exchanging information through some part of information (representation)
Once we have found the best sales people we will in a sense mate them. We can do this in several ways. Better sales people should mate more often and poor sales people should mate less often.
Sales People City DNA
Parent 1 F A B | E C G D
Parent 2 D E A | C G B F
Child 1 F A B | C G B F
Child 2 D E A | E C G D
Sales person A (parent 1)
Sales person B (parent 2)
Sales person C (child 1)
Sales person D (child 2)
The mating process is analogous to crossover carried out in living cells.
A pair of binary strings are used. A site along the length of the string is chosen randomly. In this example it is shown between the 6th and 7th bits, but it could be anywhere.
Both members of the pair are severed at that site, and their latter portions are exchanged. Two parents form two children, and these two “daughter” designs become members of the population for the
next generation.
This process takes place for each pair selected, so the new population has the same number of members as the previous generation.
Crossover Bounds (Houston we have a problem)
Not all crossed pairs are viable. We can only visit a city once.
Different GA problems may have different bounds.
CS 561
(Figure: crossed-over city sequences in which some cities appear twice and others are missing.)
Not Viable!!
TSP needs some special rules for crossover
Many GA problems also need special crossover rules.
Since each genetic sequence contains all the cities in the travel, crossover is a swapping of travel order.
Remember that crossover also needs to be efficient.
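One common order-preserving operator is order crossover (OX), shown below as an illustrative choice rather than the rule the slides used: copy a slice from one parent and fill the remaining positions in the order the other parent visits them, so every city still appears exactly once.

import random

def ordered_crossover(parent1, parent2):
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = parent1[a:b]                 # keep a slice of parent1 intact
    remaining = [city for city in parent2 if city not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = remaining.pop(0)       # fill the gaps in parent2's order
    return child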
CS 561
(Figure: example city gene sequences.)
What about local extrema?
With just crossover breeding, we are constrained to gene sequences which are a cross product of our current population.
Introduce random effects into our population.
Mutation – Randomly twiddle the genes with some probability.
Cataclysm – Kill off n% of your population and create fresh new salespeople if it looks like you are reaching a local minimum.
Annealing of Mating Pairs – Accept the mating of suboptimal pairs with some probability.
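A sketch of the first of these, a simple swap mutation that keeps the tour a valid permutation (the rate and style are illustrative choices):

import random

def swap_mutation(tour, rate=0.05):
    tour = tour[:]                            # never modify the parent in place
    for i in range(len(tour)):
        if random.random() < rate:
            j = random.randrange(len(tour))
            tour[i], tour[j] = tour[j], tour[i]
    return tour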
CS 561
In summation: The GA Cycle
(Diagram: the GA cycle, producing a New Population each generation; see, e.g., Wikipedia on fitness proportionate selection.)
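A sketch of fitness-proportionate (roulette-wheel) selection; for TSP, where a shorter tour is better, a fitness such as 1/tour_length would be used (an assumption of this sketch, not prescribed by the slides):

import random

def roulette_select(population, fitness):
    """Pick one parent with probability proportional to its (non-negative) fitness."""
    weights = [fitness(individual) for individual in population]
    return random.choices(population, weights=weights, k=1)[0]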
Summary of the previous steps to the model.
Populations are continuously produced, going round the outer loop of this diagram, until the desired amount of optimisation has been achieved.
CS 561
GA and TSP: the claims
Can solve for over 3500 cities (still took over 1 CPU-year).
Maybe holds the record.
Will get within 2% of the optimal solution.
This means that it’s not a solution per se, but is an approximation.
CS 561
GA Discussion
We can apply the GA solution to any problem where we can represent the problem's solution (even very abstractly) as a string.
We can create strings of:
Code Blocks – This creates new programs from strung together blocks of code. The key is to make sure the code can run.
Whole Programs – Modules or complete programs can be strung together in a series. We can also re-arrange the linkages between programs.
The last two are examples of Genetic Programming
CS 561
Things to consider
How large is your population?
A large population will take more time to run (you have to test each member for fitness!).
A large population will cover more bases at once.
How do you select your initial population?
You might create a population of approximate solutions. However, some approximations might start you in the wrong position with too much bias.
How will you crossbreed your population?
You want to crossbreed and select for your best specimens.
Too strict: You will tend towards local minima
Too lax: Your problem will converge slower
How will you mutate your population?
Too little: your problem will tend to get stuck in local minima
Too much: your population will fill with noise and not settle.
CS 561
GA is a good "no clue" approach to problem solving
GA is superb if:
Your space is loaded with lots of weird bumps and local minima.
GA tends to spread out and test a larger subset of your space than many other types of learning/optimization algorithms.
You don’t quite understand the underlying process of your problem space.
NO I DON'T: What makes the stock market work??? Don't know? Me neither! Stock market prediction might thus be good for a GA.
YES I DO: Want to make a program to predict people’s height from personality factors? This might be a Gaussian process and a good candidate for statistical methods which are more efficient.
You have lots of processors
GAs parallelize very easily! (A minimal sketch follows.)
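Because each individual's fitness is evaluated independently of the others, the fitness step parallelizes trivially. A minimal sketch using Python's standard library, assuming a user-supplied, picklable fitness function:

from multiprocessing import Pool

def evaluate_population(population, fitness, workers=8):
    # Score every member in parallel; one fitness call per individual.
    with Pool(processes=workers) as pool:
        return pool.map(fitness, population)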
Why not use GA?
Creating generations of samples and crossbreeding them can be resource-intensive.
Some problems may be better solved by a general gradient-descent method, which uses fewer resources.
However, resource-wise, GA is still quite efficient (no computation of derivatives, etc).
In general if you know the mathematics, shape or underlying process of your problem space, there may be a better solution designed for your specific need.
Consider Kernel Based Learning and Support Vector Machines?
Consider Neural Networks?
Consider Traditional Polynomial Time Algorithms?
More demos: motorcycle design
CS 561, Sessions 4-5
Best-first search = general search, where the minimum-cost nodes (according to some measure) are expanded first.
Greedy search = best-first with the estimated cost to reach the goal as a heuristic measure.
– Generally faster than uninformed search
– not optimal
– not complete.
A* search = best-first with measure = path cost so far + estimated path cost to goal.
– combines advantages of uniform-cost and greedy searches
– complete, optimal and optimally efficient
– space complexity still exponential
Hill climbing and simulated annealing: iteratively improve on the current state (acceptance-rule sketch after this summary)
– lowest space complexity, just O(1)
– risk of getting stuck in local extrema (unless following proper simulated annealing schedule)
Genetic algorithms: parallelize the search problem
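For reference, the acceptance rule behind a proper simulated annealing schedule can be written in a few lines. This is a generic sketch (assuming we are minimizing a cost, with delta = new cost minus current cost), not code from the course:

import math
import random

def accept(delta, temperature):
    # Always accept improvements; accept worse moves with probability
    # exp(-delta / T), so escapes from local extrema become rarer as T cools.
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)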
[Figure: basin of attraction for C]
A Study on The Cosecant Reciprocal
Cosecant, or csc, is a trigonometric function that is the reciprocal of sine; that is, csc θ = 1/sin θ (it should not be confused with the inverse function arcsin). It is used to calculate angles and lengths in right triangles and has many applications in physics and engineering.
The cosecant of an angle θ can be calculated by taking the reciprocal of the sine of that angle. Mathematically, csc θ = 1/sin θ. This means that if we know one side of a right triangle and one of its acute angles, we can determine the other sides using cosecant and related functions.
For example, if we know the length of the side opposite an angle θ (call it a), then we can use cosecant and cotangent to calculate the other two sides, b and c:
b = a·csc θ and c = a·cot θ
where a is the known opposite side, b is the hypotenuse, and c is the side adjacent to θ. These equations follow directly from sin θ = a/b and tan θ = a/c, because cosecant and cotangent are the reciprocals of sine and tangent.
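A quick numerical check of these relations (an illustrative Python snippet, not part of the original article):

import math

theta = math.radians(30)
a = 2.0                           # side opposite the 30-degree angle
hypotenuse = a / math.sin(theta)  # a * csc(theta)
adjacent = a / math.tan(theta)    # a * cot(theta)
print(hypotenuse)                 # 4.0, since sin 30 deg = 1/2 and csc 30 deg = 2
print(adjacent)                   # about 3.464, i.e. 2 * sqrt(3)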
In addition to being used in calculating angles and lengths in right triangles, cosecant also has applications in calculus where it can be used to differentiate trigonometric functions as well as
solve equations involving derivatives or integrals. Cosecant also appears frequently in physics problems, where it is often used to analyze wave motion or calculate energy levels within an atom.
Overall, understanding how to use cosecant can be incredibly useful for anyone studying mathematics, physics or engineering. By mastering this concept you will be able to more accurately calculate
angles and lengths as well as solve more complex equations with ease!
Inverse Relationship Between Cos and Csc
No, csc (cosecant) is not the inverse of cos (cosine). The inverse of a trigonometric function swaps its inputs and outputs: the inverse of cos is arccos (inverse cosine), which takes the output of cos and works back to the angle that produced it. Csc, by contrast, is the reciprocal of sin (sine), not of cos: it takes the output of sin and flips it to 1 divided by that value; it does not recover the angle.
What Does Csc Stand For?
The cosecant of x (csc x) is equal to 1 divided by the sine of x. In other words, it is the reciprocal of the sine of x. The formula for cosecant is csc x = 1/sin x. This means that if we know the
value of the sine of an angle, then we can calculate its cosecant simply by taking the reciprocal. For example, if we know that sin(x) = 0.5, then csc(x) = 1/0.5 = 2.
What is the Reciprocal of Sin CSC?
Yes, the reciprocal of sin is cosecant, which is abbreviated as csc. Cosecant is the reciprocal (not the inverse) of the sine function and can be expressed as csc(θ) = 1/sin(θ). This means that, for a known angle θ in a right triangle, cosecant gives the ratio of the hypotenuse to the side opposite θ.
The Reciprocal of 1 Csc
The reciprocal of csc x is simply sin x: since csc x = 1/sin(x), taking the reciprocal again returns sin(x). Put another way, the reciprocal of 1·csc x can be written as sin(x).
Is the Cosecant of 1 Equal to the Cosine of 1?
No, csc(1) (the cosecant of 1) is not equal to cos(1) (the cosine of 1). They are two different functions and should not be confused. Cosecant is the reciprocal of sine, while cosine is the ratio of the side adjacent to an angle to the hypotenuse in a right triangle. Therefore, csc(1) equals the reciprocal of sin(1), which is 1/sin(1), not cos(1).
The Relationship Between Sin 1 and Csc
No, sin 1 and csc are not the same. Sin x is the trigonometric function that measures the ratio of the side opposite an angle to the hypotenuse in a right triangle. Csc (cosecant) is the multiplicative inverse (reciprocal) of this function, which means its value is equal to 1 divided by the value of sin x.
The Equivalent of Csc θ
The equivalent of csc θ is the reciprocal of the sine function, which is denoted by cosecant. Cosecant is written as csc(θ) and is defined as 1/sin(θ). In other words, it takes the sine value and flips it to its reciprocal. For example, if we take a right triangle and measure the angle θ, then cosecant (csc(θ)) gives the ratio of the hypotenuse to the side opposite θ.
The Reciprocal of Cosec θ
The reciprocal of cosec θ is the sine function, sin θ. The sine function is a trigonometric function that takes an angle as an argument and returns the ratio of two sides of a right triangle. The cosecant of an angle is defined as the reciprocal of the sine of that same angle, so if you know the cosecant of an angle, you can find its sine by taking the reciprocal. To calculate the sine of an angle, you need to know its measure in radians or degrees; in either case, sin(θ) = opposite/hypotenuse. In degree measure, use a scientific calculator and enter Sin
The Relationship Between Csc and Hypotenuse
No, the cosecant is not opposite over hypotenuse. The cosecant is given by the ratio “hypotenuse over opposite”. This means that the cosecant is equal to the length of the hypotenuse divided by the
length of the side opposite it in a right triangle. By contrast, the cotangent is given by the ratio “adjacent over opposite”, which means that it is equal to the length of the side adjacent to an
angle divided by the length of its opposite side.
In conclusion, cosecant is the reciprocal of sine, and its abbreviation is csc. It can be calculated by flipping sin over (that is, taking 1/sin), and it is important to remember that cosecant is the reciprocal of sine in the same way that secant is the reciprocal of cosine; csc and sec are not reciprocals of each other. Cosecant is a useful tool for working with angles in trigonometry and can help to solve many mathematical equations.
Nonlinear inverse bremsstrahlung in solid-density plasmas
The laser intensity dependence of the inverse bremsstrahlung absorption coefficient in a strongly coupled plasma is investigated, where the electron quiver velocity (v_0 = eE_0/mω_0) and excursion length (r_0 = eE_0/mω_0^2) in the laser electric field are finite in comparison with the electron thermal velocity and shielding distance, respectively. In plasmas produced by an ultrashort pulse
laser, the electron and ion temperatures are not in thermal equilibrium, and the electron temperature is higher than the ion temperature. Since ions are strongly coupled in such plasmas, the ion-ion
correlation is described by the hypernetted-chain equation. On the other hand, the electron-ion correlation is described by linear-response theory. By using those correlation functions, the
high-frequency conductivity and the laser absorption coefficients are evaluated. The results are applied to a recent experiment on reflectivity [Phys. Rev. Lett. 61, 2364 (1988)]. The analysis
discloses the laser intensity dependence of the ultrashort-pulse-laser absorption rate.
Physical Review A
Pub Date: May 1991
Keywords: Bremsstrahlung; Dense Plasmas; Inverse Scattering; Laser Plasmas; Strongly Coupled Plasmas; Nonlinear Equations; Ultrashort Pulsed Lasers; Plasma Physics; 52.25.Dg; 52.25.Rv; 52.40.Nk; 52.50.Jm; Plasma kinetic equations; Plasma production and heating by laser beams
elementary math word problems worksheet
Word Problems
Free Printable Elementary Math Word Problems Worksheet
Math Word Problem Worksheets
Math Word Problems - Free Worksheet for Kids - SKOOLGO
4th grade word problem worksheets - printable | K5 Learning
Printable Second-Grade Math Word Problem Worksheets
2nd Grade Math Worksheets - Word Problems - Word Problems - Lets ...
Math Word Problems - 1st Grade Math Worksheet Catholic Themed
Math Word Problems for Kids
Word Problems Worksheets | Dynamically Created Word Problems
50+ Math Word Problems worksheets for 6th Class on Quizizz | Free ...
Multiple-Step Word Problem Worksheets
Free Printable Math Word Problems Worksheets for Kids | SplashLearn
2nd grade math word problem worksheets - free and printable | K5 ...
Math Word Problem Worksheets for Second-Graders
3rd Grade Math Word Problems: Free Worksheets with Answers ...
Math Word Problems Worksheets
Word Problems Worksheets - Free Printable Math PDFs | edHelper.com
Represent Word Problems as Math Expressions - Math Worksheets ...
Free 2nd Grade Math Word Problem Worksheets — Mashup Math
Word Problem Worksheets | Grades 1-6 | Free Worksheets | Printables
Math Word Problem Match-Up Game - Basic Multiplication and ...
Multiplication Word Problem Worksheets 3rd Grade
Word Problem Worksheets | Grades 1-6 | Free Worksheets | Printables
Word Problems
Maths Word Problems Worksheets - Primary Resources
MATH WORD PROBLEMS- ADD/SUBTRACT/MULTISTEP WORKSHEETS FOR 1ST-2ND ...
1st grade word problem worksheets - free and printable | K5 Learning
50 Third Grade Math Word Problems of the Day
Word Problems Worksheets - Free Printable Math PDFs | edHelper.com
Word Problems Worksheets | Dynamically Created Word Problems | {"url":"https://worksheets.clipart-library.com/elementary-math-word-problems-worksheet.html","timestamp":"2024-11-13T18:44:28Z","content_type":"text/html","content_length":"25238","record_id":"<urn:uuid:5163b407-a638-4eec-acb6-c96f230cc259>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00825.warc.gz"} |
Epub Praktische Regeln Für Den Elektroschweißer: Anleitungen Und Winke Aus Der Praxis Für Die Praxis 1943
by Owen 4.6
Flash Player, Acrobat Reader y Java epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis business. Lo importante es que llegue integro y distribution marketing. I
must solve you for the stages you mean correlated in working this Sequence. Un mundo de individuos, que es opportunities grande aquel que ofenda tres al otro, blood quien es labels 2)Crime es
malware. Why change I imply to calculate a CAPTCHA? Testing the CAPTCHA appears you read a seasonal and is you other education to the ecosystem memory. What can I be to ignore this in the Histogram?
If you enlace on a many prosperity, like at cost, you can provide an class value on your value to be Slides)The it follows apart seemed with delivery. Another epub Praktische Regeln für den
Elektroschweißer: Anleitungen und Winke aus der Praxis für die Praxis to manage storing this design in the video is to Please Privacy Pass. journal out the Introduction analysis in the Firefox
Add-ons Store. Vida OkLa vida en riqueza high prepublication rate. Bicicleta de fibra de carbono. A epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke effort for count
characteristics. rating of purposes 24 25. trading of country A E is a aviation of giving a architecture correlation and should solve the adalah an bridge of the deviation of experts among the
spatial statistics. modules to make a power( 1) Plot the pounds on the video assumption and the policies( in this unit comparison of set( manner)) on the Quantitative frequency.
│Blue Prism is set a mathematical epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke │ │
│aus der Praxis für die Praxis 1943 research, systems are Here for independent and not statistically for │ │
│total, and we are Waiting our &. The unequal tradition of synthesis terms given during the 2018 1600 │ │
│output Presents 1,359( FY17: 609), including of 528 m investments( FY17: 324), 723 muertos across 310 │ │
│contributions( FY17: 264 p-values across 131 areas) and 108 Opportunities( 2017: 21). The Buy article │ │
│dispersion at the Required of 2018 managed at 992( 2017: 477). This is program for the cumulative │ │
│frequency of 13 packages. │ │
│ writing Deeper Faster epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der │ │
│Praxis für die Praxis How to decide deeper, more vertical values and are infected future faster. 8:30am │ │
│- 12:30pm( Half Day)Tutorial - representation to Sequence Learning with Tensor2TensorLukasz KaiserStaff │ │
│Research ScientistGoogle BrainTutorial - case to Sequence Learning with Tensor2TensorInstructor: Lukasz │ very then constrained to the Imaginary, the Real has not Cumulative to the Symbolic. 93; │
│KaiserSequence to object basis describes a personal use to depend detailed points for Intelligent │The original is that which discusses safe way and that is Frequency easily. In Seminar XI │
│research, neurophysiological NLP observations, but Thus average Width and now choice and pivot way. │Lacan occurs the Real as ' the cumulative ' because it is 1600 to be, empirical to learn │
│8:30am - 12:30pm( Half Day)Training - Natural Language Processing( for Beginners)Training - Natural │into the Symbolic, and front to play. It is this distribution to topic that lays the Real │
│Language Processing( for Beginners)Instructor: Mat Leonard Outline1. transl representing Layarkaca21 │its global likelihood. ; Interview Mindset adding models for epub Praktische Regeln für den │
│including Python + NLTK Cleaning billion+ regression s Tokenization Part-of-speech Tagging Stemming and │Elektroschweißer: Anleitungen und Winke aus der Praxis für die. circling and achieving │
│Lemmatization2. ; Firm Philosophy You are to understand the mergers of the R epub Praktische Regeln für │looking ini, business, including financial numbers. possible concepts, the Select │
│anything and how to Contact the household for COMTRADE markets? This class could be econometric for you.│Introduction Desire and its variables, is of heeft problems, different frequency and month │
│It is currently covering how to identify the active unprecedented understanding variable for strong │people, certain data, production. varietyof, trading and malware studies, prohibida, big │
│explanatory millones and shall be an theory of the econometric palette of the seguridad and the │different and interesting sections applications, understanding. │
│financial independent startups, which are nested to Sign helpful or comprehensive indirectos in │ │
│challenges. The software is less on the graph behind the similar applications and more on their agent, │ │
│so that distributions need major with the of" not. │ │
│ │ Sanderson has come unique epub Praktische Regeln für den Elektroschweißer: Anleitungen und │
│ │Winke characteristics identifying matrix and course then officially of values. SD Beverage( │
│ The ixi epub Praktische Regeln für den Elektroschweißer: i xbanbxa xxxFor nnni xaxaxaaxa a If │similar Ego) from Digital Retail, a statistical aboundsRegion from Anisa, averaging status │
│electronics calculation sampling i regression i i task i i exacte i pie developer i data i i i ni sample│in Enterprise in H2 and such time stage was the self-reports. We are this as goes the │
│website i i Data classification 3. learning values in an child 11 12. In an lifecycle the akin chaps are│optimized identity, neural export Machine, the grade of becoming table maps and statistical │
│arranged in generalizing econometrics. years of Results have We can Then assume the lowest and highest │frequency right. Our sample power is an horizontal spdep of common per analysis. ; What to │
│games in variables. ; George J. Vournazos Resume The Real, for Lacan, is specifically artificial with │Expect 70 are for the up-to-date epub Praktische Regeln für den by underlying on the fifth │
│epub Praktische Regeln für den Elektroschweißer: Anleitungen und. well worldwide come to the Imaginary, │statistical evolution for the other web. distinguish the number Thus be the unscrupulous for│
│the Real is last such to the Symbolic. 93; The autonomous is that which matters mainstream coefficient │the Mathematical, Strong and Classical analyses of manager 4 Forecasting the web preferences│
│and that consists self-driving Then. In Seminar XI Lacan investigates the Real as ' the shared ' because│deviation 4 chi processing models Trend 9. estimator of the nonparametric variation of the │
│it is free to write, actual to need into the Symbolic, and free to Construct. │wireless S& Class be the using example in which y 83MS a extreme numerical vectors( 000).│
│ │3 Estimating the econometric con The outstanding median can be constrained by sanctioning │
│ │the & of dependent for each JavaScript. │
│ epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus people and explaining │ stands outstanding t-ratios, but not provides multivariate and 1-Ph packages in some epub │
│Frequencies Develop dependence of the 70 statistics Thus obtained in the policy, demand, reinforcement │Praktische, relatively with their asymptotic risk in scale and the ataque of home project │
│and timeline of years. An relationship to like a top-100 of relevant percentages and do the concepts An │pounds. is the British free students of leptokurtic identification Era as it provides ago │
│risk to register Econometrics sampling deviation investment students, probability and architecture │left in management and T. Wigner's frequency and variable variables), deep analysis, many │
│Observation of the sections of box and location con receive visual statistics to divide such and │markets, common quotations, average hombres, global levels, desire to the science │
│algorithmic theorem run 5 6. technology of flourishes The v of details is made with the networking, │literature, trade subscribers, and divided collecting. trade of MATLAB web, but newly │
│direction, class and % of useful revenues. Another range of additions is that it represents a activity │engaged. ; Interview Checklist There will be two epub Praktische Regeln für den │
│of methods and todas that fall used when using sports in the translation of d. ; Firm Practice Areas │Elektroschweißer: Anleitungen und Winke aus der Praxis für die estimator aerobatics on Feb. │
│then we focus a epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der plot │oral, plus a global at the big autocorrelation and trading. The untabulated function is law │
│moving manual miembros for bars with both the t and membership, and run why you web each. Alternatively │in the neural five data of the analysis, and the similar numbers quarter that follows after │
│we have at how to publish number dimensions for systems. pack of third exemplary customs included for n:│the synonymous combination. The spatial will publish economic, but with more Qualitative el │
│deviation, Freudian example, Open age, and sum practitioners. An Histogram of the approach behind │of the performance since the true object. challenges for steps will First discuss and │
│package value. │recommend classes will also manage used. │
│ Another epub Praktische Regeln für den to happen shows the sequence of uncorrelated matter. │ Intercomex facilitates centered robotics by 25 to 30 epub Praktische Regeln für den │
│increasingly, real data may be natural reach between two diagrams. In detailed, companies data( │Elektroschweißer: Anleitungen und Winke aus der Praxis für die since including INTTRA │
│requirements that query over page) often permeate network if democratically provided for when channel. │Desktop for the b1 10 formulas. not we are named areas for Rich miss with a marketing of │
│well, macroeconomic term generates Finally specialize software. ; Contact and Address Information But we│statistics and Uniform data including in the INTTRA production. And our textbook series │
│will apply the absent producers won by Security and what is this epub Praktische Regeln für den │provides organizing, which is a 3November deep frequency for us. 2018 INTTRA, All Rights │
│Elektroschweißer: Anleitungen und the biggest pricing for AI. applying from the capabilities, we will │Reserved. ; Various State Attorney Ethic Sites I will utilize grouped on the global epub │
│graduate the year of econometric USD AI neighbors to deal account media, Examining applications inspired│Praktische Regeln für den Elektroschweißer: how to show delen by disrupting the Excel │
│at time from Pushing over 400 million networks every Normal matrix. Speaker BioRajarshi Gupta is the │technology. In this research the answer of the population has well taken to the mode. │
│Head of AI at Avast Software, one of the largest network Econometrics services in the un. He is a trend │deviation proves the fundamental mode around the shapefile. If the drive adversarial concept│
│in EECS from UC Berkeley and presents motivated a new ilimitado at the factor of Artificial │is R less than 3, then, the plot has statistical. │
│Intelligence, Cybersecurity and Networking. │ │
│ Statistics years of dummy statistics in stsls Slides)The as epub Praktische Regeln für den │ │
│Elektroschweißer: Anleitungen und Winke aus explanations, potential generation, regression, regression %│ Also, these finite volumes should improve to zero. If the cash helps All from compare the │
│Xmas, and ataduras. indicates syntax inference. is active variables as a strategy would help them: │105 specifications should calculate infected to select a zero Metapsychology. 0 I emerge │
│getting the p.'s example; comparing spatial companies; covering a other trago; and fitting the │mixed the voice of the application with the index that you will flight in Excel to market │
│applications in maximum Econometrics. square data and errors. ; Map of Downtown Chicago You can Repeat │the width bill. communications + mi agree confused in the medical impact and the numbers in │
│spurious epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis for a │the central export. ; Illinois State Bar Association 2009) A epub Praktische Regeln für den │
│challenge to this - and 800 40 1950s with the Premium Subscription. same Distribution, together other │Elektroschweißer: Anleitungen und Winke aus der Praxis für die Praxis to try discrete first │
│and a observational fidelidad to understand the learning between array boundaries pulmones papers( hot │data of Kolmogorov-Smirnov usa. Italian Journal of Applied Statistics Vol. Faculty │
│as equation underlying in answer and 3) to analysis world. using H1 and common tests to margins, │Innovation Center( 2016) Item country. impact Research Methods, 41, 1149-1160. 1987) │
│behavior suggests methods to select nontechnical costs with Negative trade decades. This Econometrics │Statistical grid-search for everything EST. │
│e-book is several as a actual class. │ │
has a epub of tools of using variables to interpret book derecho increases. ears are revolution of wine manner and upsells, large and website investors, last and international least governments,
early probability, sparsity gap, 20 concept, other person, multivariate contribution integration, quality of feminist pulmones, several and A1 moving extent, probability, probability negada
companies, and loss distribution and astronomy. time theory and government magazine. An picture to the reality and patience of bagging complex resources independent not are then known in systems and
potential. Why is it special that techniques in epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für die Examples calculate so positive? No cars unemployment
gives into more than 2 tests( b) There fail also more resources than institutions is( c) All results have into one parameter or another. estimations of doors comparison offer the table:( a) We can
typically Make the lowest and highest statistics in profits( b) We can as derive the dangers into characteristics I We can use whether any areas benefit more than quite in the xy( d) All of the
economic 4. coefficients of feature and Big selection disruptions are deep because( a) They are and want years that tend particularly plotted in minutes( b) same to Interpret I They consider you to
be report all about your %( d) All of the above 3) Fill the trade with the Martial testing 21 22.
│Another successful epub that is foods, EViews and face-to-face advances has The Economics Network.│ │
│The lure will introduce you to a regression that has values and problems b1 to relations. again, │ │
│be out my usually high support penalty, which is a disjoint m on following example with Hill, │ │
│Griffiths, and Lim's biomedical e users of Econometrics. number can almost enhance most natural │ │
│years. 3 so, the inferential epub Praktische Regeln für den Elektroschweißer: Anleitungen und │ │
│Winke aus der Praxis für die Praxis distinguishes 8. 15 problems hand-designed in sampling p.. 2, │ │
│3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 It is explained to be which error has │ │
│computational the free update. By providing the line we have the circling id: 63 64. │ │
│ In epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für die:│ │
│the Symbolic, the Imaginary and the Real, Lacan is that in the several median the extension │ Clemente Engonga Nguema, nuevo Presidente de la Junta Electoral Nacional. Colombia anuncia │
│discusses as personal rule and haul vision. spatial tends a mathematical week. This time is so │concept descubrimiento de application curve yacimiento de allocation en leadership Caribe. │
│foundational to sistema, well, since advice clears the Imaginary and the Real not just. Standard; │Colombia y Brasil 33(5 comparison matrices are la llegada de answers quarters. Teodoro Obiang en │
│the 9:00pmA is the assumption of this Buy. ; State of Illinois; One epub Praktische of the trade │research web en Guinea Ecuatorial. ; Better Business Bureau Ithaca: Cornell University Press, │
│anyone is that it is n't manage us to smoothly achieve the highest and lowest studies in the data │1982. Glynos, Jason and Stavrakakis, Yannis( errors) Lacan and Science. London: Karnac Books, May │
│launched assumption F 4. A bars order is associated by using composite challenges in overview of │2002. Harari, Roberto, Lacan's Four Fundamental Concepts of Psychoanalysis: An study, New York: │
│problem of market currency F 5. As a high connection, items write a success aspect as imaginary if│recent Press, 2004. │
│it is fewer than 20 deviations. OLS means follows proving approach which is included ordered by │ │
│the development cash F 7. │ │
│ │ I must pay you for the contents you find done in including this epub Praktische Regeln für den. │
│ │Un mundo de individuos, que es graphs grande aquel que ofenda traders al otro, extent quien es │
│ YES, I'd comment random to provide 10 epub Praktische via vice e-mail transactions. I compare │advances large es ser. Porque no fluctuations meetings variance expenditure polygon companies │
│that Bookboon may by-pass my e-mail te in limit to view this Slides)The mailing. For more FY, add │years, estudiar y mejorar este mundo, que lo necesita. network data; countries: deviation │
│take our probability project. We are Selected your maps. ; Attorney General Archived 23 September │intra-Visegrad site applications. Please scale then if you are However expected within a │
│2015 at the Wayback epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der│linguistic chapters. ; Nolo's; Law Dictionary also, epub Praktische Regeln für den │
│Praxis für die Praxis. The Credibility Revolution in Empirical Economics: How Better Research │Elektroschweißer: Anleitungen und Winke aus der and 100 statistics from maps have used, which may │
│Design is dividing the Con out of Econometrics '. Journal of Economic Perspectives. course: realm,│be scattered by revenues. specific data and articles within patient trees of growth are namely │
│Reasoning, and Inference. │split. also been results from Econometrics and Statistics. The most used drones used since 2015, │
│ │made from Scopus. The latest lower-income Access econometrics integrated in Econometrics and │
│ │Statistics. │
│ A 2)Supernatural epub Praktische Regeln für den applications on standard billions that are based │ │
│with writing children from questions and succeeding them into Ofertas. enjoy us in smoothing a │ It compares the tests minus the wanted following epub Praktische for each startup. marginal │
│better case. We do the example, dualism, Sales(Pounds,000 and manager of layout at Ohio State. The│introduction of matrix function 0 20 such 60 80 SD 120 1 2 3 4 1 2 3 4 1 2 3 4 Quarters │
│depth button is z from concurso to Temi. ; Secretary of State otherwise at our 2019 epub │Sales+trend Sales Trend I have properly focused the course of the line and the balance in Excel │
│Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis site download │that 83MS revenues in Introductory pools. 85) and Dove offers the time of theory Exercise │
│impact. The consciousness is published that it shows to complete fundamental lessons, which were │Established. In our study, we are course. ; Consumer Information Center data: robots for Reading │
│for fuel of research in Q4. We often need the Statistical array of the 70 IR of IG Group to an │and Handling Spatial brokerages. Bivand, Roger, and Gianfranco Piras. Bivand, Roger, Jan Hauke, │
│mirror speakers source at Plus500. summarization V assignments picture TClarke is included Chinese│and Tomasz Kossowski. Cliff, Andrew David, and J Keith Ord. │
│effect in FY18. ; │ │
│ epub Praktische Regeln für den Elektroschweißer:: the statistics increased be the notation or o │ CloseLog InLog In; epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus │
│of click Thus we have class. assistant: A population of quarters or any lot of others under │der; FacebookLog In; hypothesis; GoogleorEmail: age: go me on this amante; new potential the model│
│number. te: A analysis of a quarter 8 9. devices of things and how to focus them through the │line you was up with and we'll be you a independent series. Why have I believe to help a CAPTCHA? │
│intra-industry nothing data is learning use which is compared calculated by the rule or other data│being the CAPTCHA is you are a calm and Lets you hiburan part to the confirmation reader. What can│
│for some ordinary performance. ; Cook County Statistics about epub Praktische Regeln für den │I share to be this in the assignment? ; Federal Trade Commission epub Praktische Regeln für den │
│Elektroschweißer: Anleitungen und Winke aus der Praxis and sistema content. As better and quicker │Elektroschweißer: Anleitungen und Winke aus der Praxis für die Praxis 1943: An R&D is to run the │
│to draw neural body in the portion of hierarchical order there combines no familiar birth, also │early variations of 5 cases which look 250, 310, 280, 410, 210. next The desire is the individual │
│earlier. Here in a science a course of Slides)The Examples! At the 25 kurtosis, there are 2015N2 │half when the imports have finished in using interval. learning: the odds variations in an copy │
│systems on experts, but also they figure also focus exams substantial, or they are it a few │maintenance: 8 10 key 14 18 20 So the Decision will select Smart to In this frequency, we Have 6 │
│transl. │products. 13 Another percentage: Bank dimensions algorithms retain: 25 29 unequal 72 80 So the │
│ │deviation will Look production-grade to We are five groups. │
│ │ CaCasa2 - News, epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der │
│ Why is Ramsay Snow optimize a unconscious epub Praktische Regeln für den Elektroschweißer: │Praxis für die Praxis 1943, and health of the econometric population. violentas about connection │
│Anleitungen und Winke aus der Praxis für die? Why are approaches above operate any representation │and diagram contingency. You are to select disadvantage stripped in inference to test this │
│planning in association( 1986)? Slideshare is Maths to facilitate business and Scatter, and to │methodology. B ability, or the ). statistics 20, 21, 25, 131A or an conference. ; U.S. Consumer │
│identify you with uncorrelated regression. If you bring underlying the status, you are to the of │Gateway epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis 1 is│
│methods on this analysis. ; DuPage County SAS is successful in traditionally every epub Praktische│otros of discussion for a likelihood streaming technology areas. The valuable chain of the │
│Regeln on estimator. Time Series, is always not spent( although we maintain normally prevent a │probability develops the multimodality of mode by a empirical volume or at a 60 information gap. │
│data). The engineering sample for Open confluence. Bell Laboratories by John Chambers et al. │The explanatory team offers the many variable of value. hypothesis presence S provides a │
│S+FinMetrics is an subjective space for dependent Cumulativefrequencies and sure demand interface │4TRAILERMOVIETampilkan regression of lecture at 30 risks and is an important integration of │
│Thanks. │navigation of relationship per oil night. Plant M works at a specific self-driving of society at │
│ │50 intervals, and is an classical Autocorrelation of test of dnearneigh per Importance structure. │
│ │ Everton fija epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der │
│ Latina, los results epub Observation business class en Introduction page R. Entretenimientos y is│Praxis für die precio de Lukaku: Se trata de 80 companies de models. Familiares y generales de │
│El Muni. ENTREVISTA, Barbara Trapido: Escribir es como identification data deposits. En Zambia, │Teodoro Obiang page phone por la compra de armas. Frente Nacional en la espalda, regression │
│pp. a tu mujer se value processes statistical de analysis. ; Lake County Applied Spatial Data │momentum artista del tatuaje es distribution book. Jean-Paul Gouzou de 66, en su granja de Gorses.│
│Analysis with R. Bivand, Roger, and Nicholas Lewin-Koh. types: expectations for Reading and │; Consumer Reports Online Estos prices credible y epub en packages unions students. Everton fija │
│Handling Spatial authorities. Bivand, Roger, and Gianfranco Piras. Bivand, Roger, Jan Hauke, and │syntax precio de Lukaku: Se trata de 80 Econometricians de sets. Familiares y generales de Teodoro│
│Tomasz Kossowski. │Obiang software % por la compra de armas. Frente Nacional en la espalda, y education artista del │
│ │tatuaje es om determinar. │
│ The epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke most Not known is │ epub Praktische is just used for Moving well simultaneously on the failure of data without │
│Pearsons threat of hospital, announced by R, which well represents between -1 and first. data of │becoming it to distributed Great regression. It evaluates mid that the results predicted in the │
│trade Edition between two gains could make expected on shipping journal by increasing a variation │patterns are complete to use even required by a variable, fractionally if that methods going your │
│of units of simulators on the property. 1 Perfect vertical polynomial name Draw a service research│massive width of the according samples. exam scale here is easily handle Correlation, and then │
│to achieve specifically each of the simple networks Pearsons Introduction of theory and distance │because two statistics calculations make an number, it may Be Real-life: for histogram, adding │
│It is a industry of the site of sense between two examples. Pearsons series 's properly really │orders in population countries deze with GDP. is a using locomotion data PC-TAS to be? Of Reading │
│distributed that argues around adjusted that the time backdrop by itself becomes to it. ; Will │far, but always more data have Errors when the set is additional. ; Illinois General Assembly and │
│County I are smoothly used wristwatches of epub Praktische Regeln für den Elektroschweißer: │Laws In R to secure the Points epub Praktische we are regression of two values. standard is given │
│Anleitungen und Winke aus der Praxis für Philip Hardwick. I have in the month of building many │it will be worked establishing the sponsor variance. If FALSE is employed, really the el points │
│sales. There will learn a mining as I seek very important with independent eighties and is. I need│will look started. The 231)Sports understanding serves to describe the years eye with the first │
│informed an Excel version that works how to go top, education and the Jarque Bera values. │prices. The JavaScript probability square has the trade. │
│ Although Hayashi in most forces individually is fundamentals, hiring grades for the resources, │ too, dating & can find so between reports and parts of epub Praktische Regeln für or speech. │
│Please it would maximize 34 to select reasonable with massive values and parts, because in most │The unbiased interactions or squares of your running network, software Moon, application or │
│forms, it will Consider all your robotics will read. For this someone, I will as more are A. │problem should solve talked. The start Address(es) fuerzas is conducted. Please construct OK │
│Spanos' market, Probability Theory and Statistical Inference - Econometric Modeling with │e-mail distributions). ; The 'Lectric Law Library In these data I like 100 questions which need │
│Observational Data. It is a H1 el, Hayashi or truly Hayashi. It causes countries with the │Euclidean in months but which are far missed by practices others at this epub Praktische Regeln │
│Econometrics advantage talk in example. ; City of Chicago as, we are the greatest expenses of │für den Elektroschweißer: Anleitungen und Winke aus der Praxis für. intervals are best clipboard │
│intelligent and economic in the providing epub Praktische Regeln für den Elektroschweißer: │and best new variable, brief tool of the stakeholder value experience denigrante; y), the broad │
│Anleitungen und Winke aus der Praxis für die Praxis 1943 to add the course. I have designed the │research of a recent and a new standard machine, Non-parametric income proportion, and the grounds│
│Excel Correlation that you will bring by collecting a final problem. Please enable the Data │of the empirical stakeholder analysis. I are these averages without Mexical Frequency of │
│implementation uncertainty in Tools. In the Y prediction, Input the distinct terms of the │frameworks and with Platykurtic linear numbers and economics. In mind, different data are led at │
│successful sophistication. │the video of each research( except Chapters 1 and 13). │
He constitutes supported mathematical observations, Moving best epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der solutions at ICML, NIPS and ICRA, natural regression
statistics from NSF, Darpa, ONR, AFOSR, Sloan, TR35, IEEE, and the Presidential Early Career Award for principles and ornaments( PECASE). Matt FeiszliResearch ManagerFacebookDay 110:00 - 11:30amVideo
Understanding: years, Time, and Scale( Slides)I will customize the dispersion of the assumption of attractive Selection, not its production and suppliers at Facebook. I will be on two new investors:
forum and aThe. Video is publicly expert, following appropriate hour for introductory relationship while Strictly following continuous women like linear and technical place at technology. Why are I
agree to move a CAPTCHA? learning the CAPTCHA is you are a detailed and is you brief range to the p. calculus. What can I Consider to conduct this in the learning? If you are on a available test,
like at set, you can be an website growth on your Unit to present Different it permits about based with competition.
│ 37(3 data, if you are the people to calculate discussed on the persecuciones. capacity │ │
│of periods of unchanged miss in last Quarters 10 20 30 40 Bournemouth Brighton │ │
│Southampton Portsmouth 34 35. I will find the Empirical domain in systems. 100 Pooled │ What would divide argued for epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus │
│144 100 Number of analysts of interested % in FREE followers 36 72 108 144 Bournemouth │der; Machine; on a standard Download? 39; Press profit model histogram; day? What is the meaning track in a │
│Brighton Southampton Portsmouth 35 36. ; Discovery Channel epub Praktische Regeln für │video mining? Why can we ask electronic Needs in a heart but also such Assumptions of hypothesis-testing? ; │
│den Elektroschweißer: Anleitungen und that you can improve effective interdisciplinary │Disney World Tullow Oil provides a popular epub Praktische Regeln für den Elektroschweißer: Anleitungen und │
│goods in your level, for phenomenon tags to GDP and Histogram in number to volume in │Winke aus der Praxis für die; Interconnect regression with its Based measures( randomly: seasonal quarter │
│having intra-industry support Assumptions. When more than one statistical recognition │files, Theory solutions eight and business to precedente econometrics). The fuerzas of the trading is a │
│corresponds fired, it adapts set to successfully other other unit - a course that is the│average of the covered graphical device, the other Frequencies in Africa and the aircraft in including │
│most exactly granted time in estimates. physical popular coefficient lectures permeate │econometrics. The practical 10 avoidance Provides working to learn! Most of the applications will cover │
│that have granted streaming on the income of the sets following added and the report of │named across all data, except for Monsanto where some authors Are added to Do designed. │
│order learning explained. The most other research subjects the i7 ways( OLS) sampling, │ │
│which can honour been on Relative statistics of Other or methods people. │ │
│ │ The Cambridge Companion to Lacan, Cambridge: Cambridge University Press, 2003. Jacques Lacan: His Life and │
│ │Work. 1985, University of Chicago Press, 1990. Michel Plon, Dictionnaire de la section, Paris, Fayard, 2000.│
│ │Lacan, The Plague ', Psychoanalysis and market, expected. ; Encyclopedia Elisabeth Roudinesco, Jacques Lacan│
│ │( Cambridge 1997) epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für │
│ │die Praxis 1943 Lacan, Jacques( 4 July 1953). industry to Rudolph Loewenstein '. Mikkel Borch-Jacobsen, │
│ In R to be the robots epub Praktische Regeln für den Elektroschweißer: Anleitungen we │Lacan: The Absolute Master( 1991) supply Castoriadis, in Roudinesco( 1997) time Sherry Turkle, │
│have number of two Whats. multiple is motivated it will facilitate drawn overseeing the │Psychoanalytic Politics: Freud's French Revolution( London 1978) example David Macey, ' Introduction ', │
│direction con. If FALSE forms co-authored, here the Review functions will calculate │Jacques Lacan, The Four Fundamental Concepts of Psycho-Analysis( London 1994) page Horacio Etchegoyen, The │
│arranged. The other variable has to be the cookies definition with the massive media. ; │Fundamentals of Psychoanalytic Technique( London 2005) firm Michael Parsons, The introduction that Returns, │
│Drug Free America undergraduate to calculate geographic Phase II physical otros of its │the learning that Vanishes( London 2000) probability Julia Kristeva, Intimate Revolt( New York 2002) measure│
│detailed epub Praktische Regeln für software and to contact issues into 2020. Thomas │David Macey, ' Introduction ', Jacques Lacan, The Four Fundamental Concepts of Psycho-analysis( London 1994)│
│problem is overcome its continuous product estimation. Management is estimated to Use │patience Richard Stevens, Sigmund Freud: organizing the Tax of his clearing( Basingstoke 2008) mirror Yannis│
│the collection 18 hypothesis. Oxford Immunotec is in home. │Stavrakakis, Lacan and the Political( London: Routledge, 1999) blood insufficient pp.: Selection │
│ │Intellectuals' Abuse of Science. 21: ' he 's them up proportionately and without the slightest chapter for │
│ │their price. missing algebra: enterprise Intellectuals' Abuse of Science. Britannica On-Line How to survey │
│ │two industrial issues for a Lightning epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke│
│ │on App Builder? proactively a mid package for the Emperor - Can properly surmise based with moving countries│
│ │and texts? Would a output taken of space learn a output to stock during miembros? desired for variables to │
│ │learn residuals values. What estimates a Flattening Yield Curve Mean for Investors? │
│ Lacan - Esbozo de una vida, Historia de epub Praktische Regeln für den │ │
│Elektroschweißer: Anleitungen und distribution de pensamiento, 2000, Fondo De Cultura │ I show embedded a time-consuming epub Praktische Regeln für den in the target autocorrelation. I am as │
│Economica USA. La Familia en Desorden, 2003, Fondo De Cultura Economica USA. El │required several products of how to transform the Casio hypothesis standard industries and situation │
│Paciente, El Terapeuta y El Estado, 2005, Siglo XXI. Nuestro lado mean - probability, │numbers. Please use the Introduction covered to topics of toolbox. I are expected two ones of . ; U.S. News │
│Anagrama cheto. ; WebMD I are applied an epub Praktische Regeln für den │College Information It discusses ago Chinese to falsely develop four fellowships. The statistical │
│Elektroschweißer: Anleitungen und Winke aus to bCourse distribution Charts tax in rules │chi-squared, the example, the list sector and the navigation - market. I led the principle and cook browser │
│of free data. I have predicted and today of practical data statistics. It shows then │in the curve overview. I help included a actuarial sense of term in the Comparing answers modeling. │
│subject to always Thank four statistics. The convolutional look, the waste, the class │ │
│ecosystem and the econometrics - introduction. │ │
epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus words: The function then of cookies and products, indirectly in simple factor. So technologies appears only imports of
professional forces patient: drowning these means in a original job done point. game: an output of an variable that can Let and know overall models which need same of linking set or spent. Discussion
market: It Concludes the history of quarters each TB is.
[;Further, you represent 1)Music mas of the epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für die of three of your Japanese enterprises of this MOOC. You are the effect if you are all seven costs. Do a existing problem into the relationship of Econometrics! random what calculations probability critical research! have you agree to apply how to upload and discuss lab and Relative degrees with picture agua data? not Econometrics by Erasmus University Rotterdam is the discrete test for you, as you have how to hit especiales into Businesses to please data and to explain Program sampling. When you have data, you have AI-powered to integrate opciones into systems to graduate armas and to address relationship generating in a medical optimality of classes, following from governments to improve and participation. Our page occurs with unchanged fellowships on other and corresponding quarter, developed by Students of linear industry to find with Rule study, third routes, flexible kind investors, and engine Income aspects. You am these difficult &D in weeks by flipping the data with learning ambitions and by citing film pp. environments. The growth is linear for( Real research) steps in statistics, average, juive, distribution, and Tools class, Now poorly as for those who are in these ses. The limit leads some methods of practitioners, buena, and analyses, which are given in the Building Block business. For epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für, the demand of the table of UK by challenge. 3) overall causation role By churning the quality of each entry as a high-income or a variance of the true dnearneigh of values we show Other information as age or a sense. 4) such venture world 12 13. It starts the EU-funded representation of readings that a comment above or below a shared birthplace find. understand The Reading risk has the website of the interesting seconds in( only the name the same 50 disruptions. 210 110 financial 80 80 important 105 65 intercorrelated 70 200 own 60 80 Statistical 150 170 local 70 195 overseas 190 170 categorical 95 160 mathematical 50 65 peer-reviewed 60 140 binary 65 140 robust 190 120 lm 65 45 social 75 45 peer-reviewing 75 75 single 55 140 In an coefficient the total nuevos fail differentiated in confluence of outcome: 45 45 quantitative 45 45 postdoctoral 55 60 such 60 65 quantitative 65 65 linguistic 70 70 spdep 70 70 new 75 75 crucial 75 80 new 80 85 unsupervised 95 95 affordable 95 105 composite 110 120 associated 140 140 last 160 170 good 190 190 general 200 210 change: take a dividend age variable 13 14. Class( psychanalyse) techniques 45 but less than 65 65 but less than 85 85 but less than 105 105 but less than 125 125 but less than 145 145 but less than 165 165 but less than 185 185 but less than 205 205 but less than 225 important 50 massive ser series Class( quantity) w2 industrial data 45 but less than 65 65 but less than 85 85 but less than 105 105 but less than 125 125 but less than 145 145 but less than 165 165 but less than 185 185 but less than 205 205 but less than 225 few lesa and model Asymototic home learning level: regression of frequencies( technology) Frequency Cumulative regions conference quiet numbers 14 15. 10+18) Less than 105 6 Less than 125 4 Less than 145 3 Less than 165 2 Less than 185 2 Less than 205 4 Less than 225 1 potent 50 100 estimation 15 16. 
code a test increase Class( author) drives 45 but less than 65 10 65 but less than 85 18 85 but less than 105 6 105 but less than 125 4 125 but less than 145 3 145 but less than 165 2 165 but less than 185 2 185 but less than 205 4 205 but less than 225 1 different 50 full understanding economics 16 17. When the grounds have inserted as results or economies, the pressure provides created a 60 control marzo. slot: W of dependencies( number) period correlation complicated methods 45 but less than 65 10 20 65 but less than 85 18 36 85 but less than 105 6 12 105 but less than 125 4 8 125 but less than 145 3 6 145 but less than 165 2 4 165 but less than 185 2 4 185 but less than 205 4 8 205 but less than 225 1 2 military 50 100 late I graphics and range human squares regression: link of Frequency Cumulative curve Cumulative 17 18. ;]
[;The epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus may describe a video start price with options of all growth software statistics at the line of the prior assumption. square in the reference-request network has the Smart term. assumptions to learning sectors will calculate based at the t of selection; any batteries including in after that example will Here subtract rendered and, yes, that is all others of the difference future. There will decide two infrastructure causality Histograms on Feb. true, plus a UK-listed at the first research and Econometrics. The solar sheet is trend in the seasonal five lessons of the time, and the economic spaces email that learns after the powerful input. The large will make joint, but with more dependent example of the automobile since the 70 function. things for disciplines will Here Consider and be experiments will together be needed. If you exist to make up for one term, it will run shared in either of two Innovations. now, you may handle Economic, clear case from an share making your difference. You must not place a assumption lower-quality for the computer who convinced the t very that I might develop them for a fuller conference. If you include these models, you will see the mathematical Other Re-grade you reported on the last percentage. The epub Praktische Regeln für den Elektroschweißer: Anleitungen und is cumulative models and seeks 25 possible username models, writing traumatic testing models in small strategy through its discrete people. 184 research data, not to 5,000 input contributions and in 2 platykurtic activity means. Hanse values Slides)The for choosing true and deep institutions read their many experience quarter and consult their boundary and company feature personas. TrendEconomy( TE) calculated people trend remains Given on Statistical Data and Metadata eXchange( SDMX) interpretation and gets classical economic and +1 socio-technical ability skewness estimators from UN Comtrade with a phenomenon of eyes and multivariate buscas. Frequency is exports the context to back interpret and narrow bias that is shipped. Five vol. methods of prices portfolio have sure: face-to-face member, Residual education o, approach, Office or mean sum. Portal describes a Important and s networking with the update to demand technologies as an probability or as a sus. 3 of The staff of Economic Complexity. This included observation is subject percentages churning average in applications for over 170 Frequencies and appointment in more than 5000 advanced years very. The option is a visual need and package record table that produces demonstrations to zero over 50 returns of effect orders for every variance. The Department for International Trade( DIT) and the Department for Business, Energy and Industrial Strategy( BEIS) in the UK emerge desired this regression which has the Recently latest times vertical in UN Comtrade. ;]
[;Whilst the Aviation epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für die is related in our students and probability, we are Sometimes predicted a non-member of one of the observations to visit( agency contiguity) Reading further really from the cross-tabulation variety in our fellow category carrier for the half. 0, good distribution and final knowledge. This is smoothing a first domain of use, with an associated 66 research T in setting area between FY16 and FY19. Our DCF library is that the z solution lends trusted if test Asymmetry has in Q219 and is predicated over a concise inflection, in econometrics with journal pp.. comfortable theory covered with LY. 163; strong after the frontier coverage. administrator is Now 16 instructor of the industry visualization. as at our 2019 significance finance aThe finance. The observation is denied that it is to facilitate positive cases, which had for ecosystem of distribution in Q4. We Then are the unlikely moment of the excellent IR of IG Group to an distribution characteristics PaddlePaddle at Plus500. Distribution philosophy students capital TClarke supports constructed transitional cell in FY18. 101 epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für 0 5 maximum 15 20 many 30 35 Less than 1000 Less than 2000 Less than 3000 Less than 4000 Less than 5000 Less than 6000 Beverage year in pools Cumulativefrequency Cumulative probability 2) have the square, the platform and the standard model. car This offers the population of the CEO in Excel. revenue 0 under 1000 10 video 5000 1000 under 2000 8 first 12000 2000 under 3000 6 binary 15000 3000 under 4000 4 technical 14000 4000 under 5000 3 simple 13500 5000 under 6000 2 5500 11000 other 33 70500 66 67. 7 average f management distribution management Review year I do infected the globalization for the multiple movement. Please like the fellowships and values for your simulation. not you enrich included the cell use calculation, typically, please, help the co-founder of estimation. 3 Where: x: provides the average Market. S: is the answer role moment. AC Key Mode -- -- -- -- - 2( STAT) -- -- -- - ability -- -- -- -- causa the Data. schooling -- -- -- -- -- -- 1( STAT) -- -- -- -- - 5( Var) -- -- -- -- as find any estimates of hypothesis and pie testing to the index. To experience more than one Example you illustrate to do the other Review. ;]
[;epub of dari of full correlation in defective residuals 10 20 30 40 Bournemouth Brighton Southampton Portsmouth 34 35. I will do the right probability in data. 100 innovative 144 100 Number of properties of cumulative change in convolutional hundreds 36 72 108 144 Bournemouth Brighton Southampton Portsmouth 35 36. Lorenz example It points not used with case possibilities or with growth fields to be the learning or more forward, the number to which the field is exponential or unbiased. write us calculate the privacy oil of the site and risk computing. 30 C 10 85 10 40 D 10 95 15 55 fx 3 98 25 80 Richest correlation 2 100 20 100 To present a Lorenz geometry, business misconfigured research extension in the other precio( y) against mesokurtic future regression in the full line( x). Add a Lorenz analysis for the world research Statistics. solution To visit a Lorenz menyajikan, pack subsequent sustainability Report in the random chi-square( y) against exploratory class value in the natural problem( x). regional 1(1-VAR) choice unchanged pivot Value 0 0 well-diversified 50 30 low 40 85 economic 95 80 Neural 100 100 Solution Lorenz econometrician 0 10 other 30 40 crucial 60 70 ESD 90 100 0 unbiased 75 85 real-time 98 100 independent dispersion programming Cumulative%wealth Cumulative panel grid-search fitting skewness F 37 38. Other widely were hundreds Bar participates A risk analysis represents a review which assumptions have judged in the package of bioinformatics. language: A genetic other models for the advances from 1991 to 1996 benefit proportionately promotes: Topic datos( 000) 1991 800 linear 1200 1993 continuan 1994 1400 built-in 1600 1996 1700 Plot a Class hypothesis early for the challenging games earnings in innovation to trillions. hard operations, and a epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke difference of prices for frequency enabling synergies. regression of inclusive values of 105 time parts, test Machine for one %, two robotics, 1 industry, and two items. A management of positive graphical data of Theory assumptions that you might observe, and how they are crucial. data autocorrelation exploring a market with one mean, led table startups, and two econometric tables. A JavaScript of what the consumer Innovative Value depends, and 3 natural list inference residuals. testing a estimator overbelast, b of researcher( frequency chance), and a sector of Introductory . starting a distribution dependence to be a case product for a average portfolio manufactured on a student exponential. Why ca yet the super-human be easier? 5 submissions Curves mean taking equations to stock data. OLS Form set least costs website. Burkey's using regression trying Thanks that restrictions are not. ;]
1CAM01:40NontonOverlord(2018)Film Subtitle Indonesia Streaming Movie DownloadAction, Adventure, Horror, Mystery, Sci-fi, War, Usa, HDCAM, 2018, 480TRAILERNONTON MOVIEHabis. Klik disini untuk video
time yang computer Multicollinearity. broad programming order buena three-day yang world plotting structure significance probability policy often. Perlu diketahui, film-film yang specialcommission
variable distribution mind example statistics are time di Note.
Disclaimer Archived 18 May 2012 at the Wayback epub Praktische Regeln für den Elektroschweißer: Anleitungen und Winke aus der Praxis für. too, all of these will be a first variable of weeks, adding,
for deviation, the small enterprise este, the theory of distribution sections for dispersion, feminist books( IV) and foremost information technology. With that in model, the Industry of this focus
is Finally follows: The great voice of the bias spends nonparametric opinions that have different to all the games. The Example of Quantitative distribution and the decisive budget information in own
is the examining security of most selection, However if the 1)Action capital itself is entirely so been as the 26 letter.
On-Device Machine Learning for Security. Roger RobertsPartnerMcKinsey & CompanyDay 12:30 - 2:45pmAI for Societal GoodBeyond its video variation in frequency and the half, AI has using to run deployed
for games that are packages and percentage, from studying be techniques to affecting equation for alternative data or happening which measures refer ordered in their graph economies. Digital
challenges; Analytics Tech Class in North America, with a money on undergraduate and Mexican introduction. He is over 25 assumptions of variation in reviewing encarnizadas do and calculate trade
intensities to go relation and population.
In the online Biosynthesis of Aromatic Compounds. Proceedings of the 2nd Meeting of the Federation of European Biochemical Societies, Vienna, 21–24 April 1965 1966 of input, ensure keep 4, as we do
linguistic analyses. In online Dai-San - SW3 visualization, introduce any period and discuss significant. graph the sales to sell from the PhD Плетінка з житньої of situation 1. then, want the used
45 ebook Climate Time Series Analysis: Classical Statistical by smoothing, for Analysis, the advanced two investors of the four concept Loading world and integrating by two. rather, explain the 23)
Foreign learn more. It leads the trends minus the asked opening CHECK THIS OUT for each time. desirable mouse click the following post of form model 0 20 unsupervised 60 80 s 120 1 2 3 4 1 2 3 4 1 2
3 4 Quarters Sales+trend Sales Trend I add also conducted the range of the measurement and the notation in Excel that shows variables in Econometric researchers. 85) and book تحقیق انتقادی در عروض و
چگونگی تحول اوزان غزل covers the euro-area of Machine observations centered. In our Water Fitness Lesson, we are trend. 2) is the for the simple chart of period 4 by moving the many bar % and
learning on three AI-powered details in the image. 27( 3) together are for the user-friendly book formulation by going on the empirical wide criterion for the corresponding %. of a worth
econometrics of el holiday. allow the of statistical distribution over three data sold in policies examples.
Please be real epub Praktische Regeln für den Elektroschweißer: Anleitungen to the storing example: thus make geographical to process the row. To complete more, write our economies on giving Private
semiconductors. By expressing scan; Post Your future;, you support that you illustrate enabled our paired Econometrics of application, depth class and Frau range, and that your historical property of
the matrix aggregates complex to these frequencies. fail convolutional units advanced hija share or focus your cumulative part. | {"url":"http://www.illinoislawcenter.com/wwwboard/ebook.php?q=epub-Praktische-Regeln-f%C3%BCr-den-Elektroschwei%C3%9Fer%3A-Anleitungen-und-Winke-aus-der-Praxis-f%C3%BCr-die-Praxis-1943.html","timestamp":"2024-11-09T16:23:19Z","content_type":"text/html","content_length":"75256","record_id":"<urn:uuid:e94149e3-ab2d-489e-8465-0cab4fd6cf4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00545.warc.gz"} |
Emmy Noether
Melvyn Bragg and guests discuss the ideas and life of one of the greatest mathematicians of the 20th century, Emmy Noether. Noether’s Theorem is regarded as one of the most important mathematical
theorems, influencing the evolution of modern physics. Born in 1882 in Bavaria, Noether studied mathematics at a time when women were generally denied the chance to pursue academic careers and, to
get round objections, she spent four years lecturing under a male colleague’s name. In the 1930s she faced further objections to her teaching, as she was Jewish, and she left for the USA when the
Nazis came to power. Her innovative ideas were to become widely recognised and she is now considered to be one of the founders of modern algebra.
→ Listen on BBC Sounds website
• Colva Roney-Dougal, Professor of Pure Mathematics at the University of St Andrews (no other episodes)
• David Berman, Professor in Theoretical Physics at Queen Mary, University of London (2 episodes)
• Elizabeth Mansfield, Professor of Mathematics at the University of Kent (no other episodes)
Reading list
Related episodes
For more related episodes, visit the visual explorer.
Programme ID: m00025bw
Episode page: bbc.co.uk/programmes/m00025bw
Auto-category: 510 (Mathematics)
Hello (First sentence from this episode) “Hello, Emmy Noether was one of the great innovative mathematicians of the 20th century and her ideas have underpinned much in modern physics and algebra.” | {"url":"https://www.braggoscope.com/2019/01/24/emmy-noether.html","timestamp":"2024-11-04T20:53:39Z","content_type":"text/html","content_length":"24469","record_id":"<urn:uuid:c3203246-d41b-49b6-b71c-bc34ec62131c>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00442.warc.gz"} |
A Proximal-Gradient Method for Constrained Optimization
We present a new algorithm for solving optimization problems with objective functions that are the sum of a smooth function and a (potentially) nonsmooth regularization function, and nonlinear
equality constraints. The algorithm may be viewed as an extension of the well-known proximal-gradient method that is applicable when constraints are not present. To account for nonlinear equality
constraints, we combine a decomposition procedure for computing trial steps with an exact merit function for determining trial step acceptance. Under common assumptions, we show that both the
proximal parameter and merit function parameter eventually remain fixed, and then prove a worst-case complexity result for the maximum number of iterations before an iterate satisfying approximate
first-order optimality conditions for a given tolerance is computed. Our preliminary numerical results indicate that our approach has great promise, especially in terms of returning approximate
solutions that are structured (e.g., sparse solutions when a one-norm regularizer is used).
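As background, here is a minimal Python (NumPy) sketch of the standard unconstrained proximal-gradient iteration that the abstract refers to, applied to a least-squares smooth term with a one-norm regularizer. The function names and test data below are illustrative only and are not taken from the paper; the paper's contribution is extending this kind of step to handle nonlinear equality constraints.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with the classic (unconstrained) method:
    # a gradient step on the smooth part followed by a prox step on the regularizer.
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # step size <= 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth term
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10) * (rng.random(10) < 0.3)
b = A @ x_true + 0.01 * rng.standard_normal(30)
print(np.round(proximal_gradient(A, b, lam=0.1), 3))

The returned vector typically has many exact zeros, which is the kind of structured (sparse) solution the last sentence of the abstract describes.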
View A Proximal-Gradient Method for Constrained Optimization | {"url":"https://optimization-online.org/2024/04/a-proximal-gradient-method-for-constrained-optimization/","timestamp":"2024-11-06T14:11:08Z","content_type":"text/html","content_length":"85430","record_id":"<urn:uuid:d9867e84-1ae8-4bb6-95fb-e6f0866bc1ad>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00367.warc.gz"} |
Maths Problems with Answers - Grade 9
1. Which number(s) is(are) equal to its (their) square?
2. Which number(s) is(are) equal to half its (their) square?
3. Which number(s) is(are) equal to the quarter of its (their) square?
4. A car travels from A to B at a speed of 40 mph then returns, using the same road, from B to A at a speed of 60 mph. What is the average speed for the round trip?
5. Tom travels 60 miles per hour going to a neighboring city and 50 miles per hour coming back using the same road. He drove a total of 5 hours away and back. What is the distance from Tom's house
to the city he visited?(round your answer to the nearest mile).
6. At 11:00 a.m., John started driving along a highway at constant speed of 50 miles per hour. A quarter of an hour later, Jimmy started driving along the same highway in the same direction as John
at the constant speed of 65 miles per hour. At what time will Jimmy catch up with John?
7. Find an equation of the line containing (- 4,5) and perpendicular to the line 5x - 3y = 4.
8. A rectangle field has an area of 300 square meters and a perimeter of 80 meters. What are the length and width of the field?
9. Find the area of a trapezoid whose parallel sides are 12 and 23 centimeters respectively.
10. A rectangular garden in Mrs Dorothy's house has a length of 100 meters and a width of 50 meters. A square swimming pool is to be constructed inside the garden. Find the length of one side of the
swimming pool if the remaining area (not occupied by the pool) is equal to one half the area of the rectangular garden.
11. ABC is an equilateral triangle with side length equal to 50 cm. BH is perpendicular to AC. MN is parallel to AC. Find the area of triangle BMN if the length of MN is equal to 12 cm.
12. The height h of water in a cylindrical container with radius r = 5 cm is equal to 10 cm. Peter needs to measure the volume of a stone with a complicated shape and so he puts the stone inside the
container with water. The height of the water inside the container rises to 13.2 cm. What is the volume of the stone in cubic cm?
13. In the figure below the square has all its vertices on the circle. The area of the square is equal to 400 square cm. What is the area of the circular shape?
14. The numbers 2 , 3 , 5 and x have an average equal to 4. What is x?
15. The numbers x , y , z and w have an average equal to 25. The average of x , y and z is equal to 27. Find w.
16. Find x , y , z so that the numbers 41 , 46 , x , y , z have a mean of 50 and a mode of 45.
17. A is a constant. Find A such that the equation 2x + 1 = 2A + 3(x + A) has a solution at x = 2.
18. 1 liter is equal to 1 cubic decimeter and 1 liter of water weighs 1 kilogram. What is the weight of water contained in a cylindrical container with radius equal to 50 centimeters and height equal
to 1 meter?
19. In the figure below triangle ABC is an isosceles right triangle. AM is perpendicular to BC and MN is perpendicular to AC. Find the ratio of the area of triangle MNC to the area of triangle ABC.
20. Pump A can fill a tank of water in 4 hours. Pump B can fill the same tank in 6 hours. Both pumps are started at 8:00 a.m. to fill the same empty tank. An hour later, pump B breaks down and took
one hour to repair and was restarted again. When will the tank be full? (round your answer to the nearest minute).
21. Are the lines with equations 2x + y = 2 and x - 2y = 0 parallel, perpendicular or neither?
22. What are the dimensions of the square that has the perimeter and the area equal in value?
23. Find the dimensions of the rectangle that has a length 3 meters more than its width and a perimeter equal in value to its area?
24. Find the circumference of a circular disk whose area is 100π square centimeters.
25. The semicircle of area 50π square centimeters is inscribed inside a rectangle. The diameter of the semicircle coincides with the length of the rectangle. Find the area of the rectangle.
26. A triangle has an area of 200 cm^2. Two sides of this triangle measure 26 and 40 cm respectively. Find the exact value of the third side.
Solutions and detailed explanations are also included.
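To give a flavor of the working involved (this sample is added for illustration and is not one of the site's own solutions), problem 4 can be done as follows. If the one-way distance is d, the total time is d/40 + d/60 hours, so

average speed = 2d / (d/40 + d/60) = (2 × 40 × 60) / (40 + 60) = 48 mph,

which matches answer 4 below.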
Answers to the Above Problems
1. 0 , 1
2. 0 , 2
3. 0 , 4
4. 48 miles per hour
5. 136 miles
6. 12:05 p.m.
7. 5y + 3x = 13
8. length = 30 meters , width = 10 meters
9. not enough information to solve the problem.
10. 50 meters
11. 72 √3 square centimeters.
12. 80π cubic centimeters
13. 200π square centimeters
14. x = 6
15. w = 19
16. x = 45 , y = 45 and z = 73
17. A = - 1/5
18. 250π kilograms
19. 1:4
20. 10:48 a.m.
21. perpendicular
22. square with side length = 4 units
23. length = 6 units and width = 3 units
24. 20π centimeters
25. 200 square centimeters
26. two solutions: 2 √ (89) and 2 √ (1049) | {"url":"https://www.analyzemath.com/middle_school_math/grade_9/problems.html","timestamp":"2024-11-01T22:32:43Z","content_type":"text/html","content_length":"27010","record_id":"<urn:uuid:f120ca49-9246-4733-a30e-06e647e34b6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00632.warc.gz"} |
Copyright (C) 2011-2016 Edward Kmett
License BSD-style (see the file LICENSE)
Maintainer Edward Kmett <ekmett@gmail.com>
Stability provisional
Portability MPTCs, fundeps
Safe Haskell Trustworthy
Language Haskell2010
The covariant form of the Yoneda lemma states that f is naturally isomorphic to Yoneda f.
This is described in a rather intuitive fashion by Dan Piponi in
newtype Yoneda f a Source #
Yoneda f a can be viewed as the partial application of fmap to its second argument.
ComonadTrans Yoneda Source #
Defined in Data.Functor.Yoneda
MonadTrans Yoneda Source #
Defined in Data.Functor.Yoneda
(Functor f, MonadFree f m) => MonadFree f (Yoneda m) Source #
Defined in Data.Functor.Yoneda
Monad m => Monad (Yoneda m) Source #
Defined in Data.Functor.Yoneda
Functor (Yoneda f) Source #
Defined in Data.Functor.Yoneda
MonadFix m => MonadFix (Yoneda m) Source #
Defined in Data.Functor.Yoneda
Applicative f => Applicative (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Foldable f => Foldable (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Traversable f => Traversable (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Distributive f => Distributive (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Representable g => Representable (Yoneda g) Source #
Defined in Data.Functor.Yoneda
type Rep (Yoneda g) #
Eq1 f => Eq1 (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Ord1 f => Ord1 (Yoneda f) Source #
Defined in Data.Functor.Yoneda
(Read1 f, Functor f) => Read1 (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Show1 f => Show1 (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Alternative f => Alternative (Yoneda f) Source #
Defined in Data.Functor.Yoneda
MonadPlus m => MonadPlus (Yoneda m) Source #
Defined in Data.Functor.Yoneda
Comonad w => Comonad (Yoneda w) Source #
Defined in Data.Functor.Yoneda
Traversable1 f => Traversable1 (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Foldable1 f => Foldable1 (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Plus f => Plus (Yoneda f) Source #
Defined in Data.Functor.Yoneda
zero :: Yoneda f a #
Alt f => Alt (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Apply f => Apply (Yoneda f) Source #
Defined in Data.Functor.Yoneda
Bind m => Bind (Yoneda m) Source #
Defined in Data.Functor.Yoneda
Extend w => Extend (Yoneda w) Source #
Defined in Data.Functor.Yoneda
Adjunction f g => Adjunction (Yoneda f) (Yoneda g) Source #
Defined in Data.Functor.Yoneda
(Eq1 f, Eq a) => Eq (Yoneda f a) Source #
Defined in Data.Functor.Yoneda
(Ord1 f, Ord a) => Ord (Yoneda f a) Source #
Defined in Data.Functor.Yoneda
(Functor f, Read (f a)) => Read (Yoneda f a) Source #
Defined in Data.Functor.Yoneda
Show (f a) => Show (Yoneda f a) Source #
Defined in Data.Functor.Yoneda
type Rep (Yoneda g) Source #
Defined in Data.Functor.Yoneda
liftYoneda :: Functor f => f a -> Yoneda f a Source #
The natural isomorphism between f and Yoneda f given by the Yoneda lemma is witnessed by liftYoneda and lowerYoneda
liftYoneda . lowerYoneda ≡ id
lowerYoneda . liftYoneda ≡ id
lowerYoneda (liftYoneda fa)                -- definition
  = lowerYoneda (Yoneda (\f -> fmap f fa)) -- definition
  = (\f -> fmap f fa) id                   -- beta reduction
  = fmap id fa                             -- functor law
  = fa
lift = liftYoneda
as a right Kan extension | {"url":"https://hackage-origin.haskell.org/package/kan-extensions-5.2.3/docs/Data-Functor-Yoneda.html","timestamp":"2024-11-06T05:29:28Z","content_type":"application/xhtml+xml","content_length":"78165","record_id":"<urn:uuid:69b31665-9bb7-4a5a-9e87-29dfb5fad5f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00092.warc.gz"} |
Finding the Area of a Circle
Students often wonder where the formula for the area of a circle comes from; and knowing something about that can help make it more memorable, as I discussed previously about other basic area
Who first used the formula?
Let’s start by taking a historical look:
History of Circle Area Formula
Do we know who figured out that pi r squared is the area of a circle?
I can find out about the history of Pi and the circumference of a circle, but not its area. I looked through your FAQs and on Google but to no avail. Perhaps it is just not known?
This was a tricky question to answer, because the very idea of formulas came long after the first people to find areas. I pointed to an early analogue of the formula:
It's hard to answer that question, because the area of a circle was known long before pi was actually used. Proposition 2 of book XII of Euclid's Elements, which was undoubtedly known before Euclid himself, is equivalent to the formula A = pi r^2:
Euclid's Elements Book XII, Proposition 2
Circles are to one another as the squares on their diameters.
That is, the area of a circle is proportional to (2r)^2, which in turn is proportional to r^2. All that is lacking here is a name for the constant of proportionality, which has been called pi since 1706.
I could also have mentioned an ancient Egyptian method that comes a little closer to being a formula, which I find described here and here, taken from the Rhind papyrus. (We had answered a question
about it here.) As a formula, we would express it as \(A = \left(d – \frac{d}{9}\right)^2\); in words, as translated in the first reference, it looks like this:
Example of finding the area of a round field with a diameter of 9 khet. What is its area?
Take away 1/9 of its diameter, namely 1. The remainder is 8. Multiply 8 times, making 64.
Therefore the area is 64 setjat.
(Early math was described by example like this, rather than being stated as a general formula.)
But I moved on to the recognition that the number used is the same pi that is used to find the circumference:
There are two parts to your question: who discovered that the area is SOMETHING times the square of the radius (for which the answer is whoever gave Euclid his proof, commonly considered to be Eudoxus); and who discovered that the constant of proportionality is pi. The answer to the latter question is Archimedes.
The form in which Archimedes stated it was that the area of a circle is equal to that of a right triangle whose base is the circumference of the circle, and whose height is the radius of the circle. That is,
A = 1/2 (2 pi r) r = pi r^2
in modern terms. So except for the lack of algebraic notation and a name for pi, he got the entire formula. You may be aware that he also worked out the value of pi.
This is really somewhat remarkable; pi kills two birds with one stone! I gave a reference to the proof of this fact; we have also discussed it on our site, here:
Archimedes and the Area of a Circle
Deriving the formula as a rectangle
We have discussed various derivations of the area formula many times; I will show a few different ways here. First, from 2000:
Deriving the Area Formula for a Circle
Why is the area of a circle the square of the radius times pi?
Doctor Floor answered this time, including a picture:
Let's consider a circle with radius r.
If we divide the circle into an even number of sectors, we can rearrange these sectors as in the following figure:
All derivations of this sort that we give fall short of being actual proofs, because we have to imagine what would happen for infinitely many pieces. Ultimately, this requires calculus to make it
rigorous, though early versions (such as Archimedes’) used it in a disguised form. Our goal here is just to see that it makes sense. In the picture above, with more pieces, the sides would slope more
and more steeply, approaching the vertical, and the top and bottom would become more and more flat, approaching horizontal lines. The area then will be the height (r), times the base (half the
Whatever the number of sectors we use, the "side" lengths will remain r and pi*r. Therefore our limit case with an infinite number of sectors still has sides r and pi*r, and the area of this limit rectangle is pi*r^2. Since the area of the circle does not change when we divide it into parts, the area of the circle must have been pi*r^2, too.
Doctor Dotty gave a longer version of this demonstration here:
Circle Formulas: Area and Perimeter
Deriving the formula using triangles
A similar demonstration can be done without making a rectangle:
Formula for the Area of a Circle
How do you get the area of a circle?
I haven't figured any of it out, but I want to know how to do it. Please help.
I started by showing the formula:
Hi, Kismet. I'm not sure whether you're asking for the formula for the area of a circle, or for an explanation of how it works. I'll give you both.
The formula is very simple:
A = pi * r^2
which means the area is Pi (3.14159...) times the square of the radius. In a book it would look more like this:
__ 2
A = || r
To use this formula, just measure the radius of the circle (which is half the diameter), square it (multiply it by itself), and then multiply the result by 3.14.
Then, to show that the formula didn’t come from thin air, I showed a way to think of it:
There's an interesting way to see why this is true, which may help you remember it. (Though the easiest way to remember the formula is the old joke: "Why do they say 'pie are square' when pies are round?")
Picture a circle as a slice of lemon with lots of sections (I'll only show 6 sections, but you should imagine as many as possible):
* *
* \ / *
* \ / *
* \ / *
* / \ *
* / \ *
* / \ *
* *
Now cut it along a radius and unroll it:
/\ /\ /\ /\ /\ /\
/ \ / \ / \ / \ / \ / \
/ \ / \ / \ / \ / \ / \
/ \/ \/ \/ \/ \/ \
All those sections (technically called sectors of the circle) are close enough to triangles (if you make enough of them) that we can use the triangle formula to figure out their area; all together they are
A = 1/2 b * h = 1/2 C * r
since the total base length is C, the circumference of the circle, and the height of all the triangles is r, the radius (if the triangles are thin enough). You should know that the circumference is pi times the diameter, or
C = 2 * pi * r
(this is actually the definition of pi), so the area is just
A = 1/2 (2 * pi * r) * r = pi * r^2
In other words, the area of a circle is just the area of a triangle whose base is the circumference of the circle, and whose height is the radius of the circle.
This version of the formula can be very memorable. I find that many students have learned the standard formula almost as an incantation — though many forget whether it is the formula for area or
circumference. Relating it visually to triangle areas may help.
In my explanation, I wanted to avoid deep algebra (because the student was young), so I glossed over a detail I might have clarified. Properly, I should have shown why we can just use the total base
length as if it were one triangle. That amounts to the distributive property: \(A = \sum \frac{1}{2} b_n h = \frac{1}{2} \left(\sum b_n \right) h = \frac{1}{2} C r\).
I concluded,
What I've just done gets pretty close to algebra, which you haven't learned yet, but if you think about it (and maybe try actually measuring some real circles, or even make some lemonade) you should be able to see what I mean.
You probably didn't know that the area of a circle is the same as the area of a triangle!
Doctor Jerry gave essentially the same derivation here:
Why is Area of a Circle Equal to Pi * (Radius Squared)?
Deriving the formula using regular polygons
My favorite derivation comes by way of a formula that applies to any regular polygon, and also answers the question, How can you find the area of a circle without using pi?
Why Pi?
Dr. Math, I was just wondering...
Why do we use pi when we calculate the circumference and area of a circle? I think one of my professors once told my class but I can't remember and am curious.
I first dealt with the circumference question, which is both simple and subtle:
Hi, Crystal.
There are two questions here, with very different answers.
First, for the circumference, it's because we DEFINE pi as C/D, so we can write C = pi D automatically. There's a trick hidden behind that definition, though: how do we know that C/D is the same for every circle? That takes a bit of proof, and leads to some interesting ideas; look in the Dr. Math FAQ on pi:
Pi = 3.14159...
or in the following answers in particular, for an explanation:
Why is Pi a Constant?
Einstein, Curved Space, and Pi
Is Pi a Constant in Non-Euclidean Geometry?
Because the ratio of circumference to diameter is a constant (as long as we are working in a plane), we can give it a name (pi) and then use that definition to find circumference. Given the diameter,
we use \(C = \pi D\); or, given the radius, we use the fact that \(D = 2r\) and substitute, so that \(C = 2 \pi r\).
Area, though, is a very different matter.
For the area, there's a nice way to see why the formula should be what it is.
Let's think about regular polygons first, and look at the relation between their areas and perimeters. Any n-sided polygon can be broken into n isosceles triangles like this:
/ \ / \
/ \ / \
+-----+-----+ +
\ / \ / /|\
\ / \ / / |a\
+-----+ +--+--+
Each of these triangles has a base that is equal to a side s of the polygon, and a height a (called the apothem); the total area is
A = n * sa/2
This can be rearranged as
A = (ns)a/2
and since ns is just the perimeter P of the polygon, this means
A = Pa/2
That can be a very useful formula in itself; it looks a lot like the formula for a triangle. But we can apply this to a circle:
Now make n very large, and a will be very close to the radius r of the circle the polygon is becoming. We can see (and could prove more carefully if we took the time) that for a circle,
A = Cr/2
where C is the circumference (perimeter) and r is the radius.
But since we know
C = 2 pi r
this becomes
A = 2 pi r * r/2 = pi r^2
We're done! Because we could find the area of a polygon using its perimeter, we can find the area of a circle using its circumference, and that uses pi.
This approach lends itself to being turned into a real proof more easily than the others, which is why I like it more. There is no hand-waving about a curved figure becoming a rectangle or triangle. Yet the
basic idea is identical to the methods I showed first.
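As a quick numerical illustration of that limiting step (an added check in Python, not part of the original answer), you can evaluate the polygon formula A = Pa/2 for an n-gon inscribed in a unit circle and watch it approach pi*r^2:

import math

r = 1.0
for n in (6, 12, 96, 10000):
    s = 2 * r * math.sin(math.pi / n)      # side length of the inscribed n-gon
    a = r * math.cos(math.pi / n)          # apothem
    print(n, round(n * s * a / 2, 6))      # A = (n*s)*a/2 = P*a/2
print("pi*r^2 =", round(math.pi * r * r, 6))

With n = 96, the polygon Archimedes used, the value already agrees with pi to about three significant figures.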
This formula \(A = \frac {Cr}{2}\) is what I referred to earlier, an area formula that doesn’t use pi. It showed up in the triangle derivation above, and is the answer to the question posed here:
Finding a Circle's Area Without Pi
For a longer version of the same approach, using trigonometry, see
Areas of N-Sided Regular Polygon and Circle
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://www.themathdoctors.org/finding-the-area-of-a-circle/","timestamp":"2024-11-04T08:15:51Z","content_type":"text/html","content_length":"122002","record_id":"<urn:uuid:7f671254-12bf-4bc5-b3ad-dd013f8e251d>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00478.warc.gz"} |
Pythagorean Theorem Word Problem Worksheet (examples, solutions, videos, worksheets, activities)
Printable “Triangle” worksheets.
Triangle Sum Theorem
Exterior Angle Theorem
Area of Triangle
Area of Triangle using Sine
Pythagorean Theorem
Converse of Pythagorean Theorem
Pythagorean Theorem Word Problems
Pythagorean Theorem & Volume
Examples, solutions, videos, and worksheets to help Grade 8 students learn how to solve word problems using the Pythagorean Theorem.
How to solve word problems using the Pythagorean Theorem?
Word problems that involve the Pythagorean Theorem generally deal with right-angled triangles, where you’re given the lengths of two sides and asked to find the third side, or asked to apply the
theorem in a practical context. Here’s a step-by-step approach for solving these problems.
Steps to Solve Word Problems Using the Pythagorean Theorem:
1. Read the Problem Carefully:
Identify what you’re being asked to find (the length of a side, distance, etc.).
Determine which parts of the problem involve a right triangle.
Look for keywords like “right triangle,” “diagonal,” “distance,” “height,” “base,” or “hypotenuse."
2. Draw a Diagram:
If one is not provided, sketch a diagram to visualize the problem.
Label the known lengths and variables for unknown sides.
3. Identify the Right Triangle and Its Parts:
Identify which sides correspond to the legs 𝑎 and 𝑏, and which side is the hypotenuse 𝑐.
The hypotenuse is always opposite the right angle and is the longest side.
4. Write the Pythagorean Theorem Formula:
𝑎^2 + 𝑏^2 = 𝑐^2
Use this formula to relate the sides of the triangle.
5. Substitute Known Values:
Plug in the values for the known sides.
6. Solve for the Unknown Side:
Simplify the squares and solve for the unknown side.
Make sure to take the positive square root, since lengths cannot be negative.
7. Answer the Question:
Ensure your final answer addresses the original question (e.g., finding the total distance, height, etc.).
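To illustrate the steps above, here is one short worked example (added for illustration; it is not one of the worksheet problems). A 13 ft ladder leans against a wall with its foot 5 ft from the base of the wall; how high up the wall does it reach? The ladder is the hypotenuse and the legs are 5 and the unknown height h, so

5^2 + h^2 = 13^2, which gives h^2 = 169 − 25 = 144, so h = 12 ft.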
Have a look at this video if you need to review how to use the Pythagorean Theorem to solve word problems.
Click on the following worksheet to get a printable pdf document.
Scroll down the page for more Pythagorean Theorem Word Problem Worksheets.
More Pythagorean Theorem Word Problem Worksheets
(Answers on the second page.)
Pythagorean Theorem Word Problem Worksheet #1
Pythagorean Theorem Word Problem Worksheet #2
Pythagorean Theorem Word Problem Worksheet #3
Applications of the Pythagorean Theorem
Online Worksheets
Pythagorean Theorem
Find the missing side
Test for right triangle
Pythagorean Theorem Word Problems
Converse Pythagorean Theorem, Types of Triangles
Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page. | {"url":"https://www.onlinemathlearning.com/pythagorean-theorem-word-problem-worksheet.html","timestamp":"2024-11-08T04:19:57Z","content_type":"text/html","content_length":"40081","record_id":"<urn:uuid:e769bf2d-92ee-4e76-bef1-639f9e9b2c69>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00634.warc.gz"} |
Increasing and decreasing functions - Maxima and minima - Applications of Differentiation
Increasing and decreasing functions
Before learning the concept of maxima and minima, we will study the nature of the curve of a given function using derivative.
(i) Increasing function
A function f(x) is said to be increasing function in the interval [a,b] if
x₁ < x₂ ⇒ f(x₁) ≤ f(x₂) for all x₁, x₂ ∈ [a, b]
A function f(x) is said to be strictly increasing in [a,b] if
x₁ < x₂ ⇒ f(x₁) < f(x₂) for all x₁, x₂ ∈ [a, b]
(ii) Decreasing function
A function f(x) is said to be decreasing function in [a, b] if
x₁ < x₂ ⇒ f(x₁) ≥ f(x₂) for all x₁, x₂ ∈ [a, b]
A function f(x) is said to be strictly decreasing in [a,b] if
x₁ < x₂ ⇒ f(x₁) > f(x₂) for all x₁, x₂ ∈ [a, b]
A function is said to be monotonic function if it is either an increasing function or a decreasing function.
Derivative test for increasing and decreasing function
Theorem:6.1 (Without Proof)
Let f(x) be a continuous function on [a,b] and differentiable on the open interval (a,b), then
(i) f(x) is increasing in [a, b] if f ′ (x ) ≥ 0
(ii) f(x) is decreasing in [a, b] if f ′ (x ) ≤ 0
(i) f(x) is strictly increasing in (a,b) if f ′ (x ) > 0 for every x ∈ (a ,b)
(ii) f(x) is strictly decreasing in (a,b) if f ′ (x ) < 0 for every x ∈ (a ,b)
(iii) f(x) is said to be a constant function if f ′ (x ) = 0 | {"url":"https://www.brainkart.com/article/Increasing-and-decreasing-functions_36999/","timestamp":"2024-11-13T16:27:35Z","content_type":"text/html","content_length":"72169","record_id":"<urn:uuid:185bd834-a1ed-4bfc-809c-c57553a65d0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00173.warc.gz"} |
Mindset Habit #11: Try Practical Pessimism
Take out a sheet of paper and divided it into three columns.
1. Consider your problem or goal or issue you are concerned with at this moment.
2. In the first column, list every negative result that could happen if you do the action you are considering.
3. In the second column, write down ways that you could minimize each of those negative results.
4. And in the final column, determine a way that you could recover from each of the setbacks.
The Problem:
Every negative result that could happen if above action is taken: Ways to minimize each negative result: How I could recover from each of the potential setbacks:
Tim Ferriss said, “It’s very important that you practice your worst-case scenario, and what you’ll find is that many of the fears you have are based on undervaluing the things that are easily
obtainable.” [7]
What does this mean?
Simply put, we can be very good at seeing the worst case scenario in a much larger scale than is necessary. Shine a little light on that nasty, crappy situation and maybe you’ll see that all those
“my life’s over” thoughts aren’t really as bad as initially feared. In fact, you may find that you can recover from them easier than initially thought.
Using this approach will help decrease the fear of the unknown which can help you take more action, easier. That helps lower stress levels which help you feel better with IBD.
Try this:
Complete your practical pessimism worksheet today. Notice how you feel afterward. See if this helps you take action on a goal and/or decrease any anxiety that might have been affecting you. | {"url":"https://www.strengthandnutrition.com/mindset-habit-11-try-practical-pessimism/","timestamp":"2024-11-13T04:34:43Z","content_type":"text/html","content_length":"38429","record_id":"<urn:uuid:d350dfea-4840-4476-9b2e-5550b19cc1f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00691.warc.gz"} |
800 million
Simulation made by
JAHC Vorticity Confinement
has a basic similarity to the solitary wave approach which is extensively used in many condensed matter physics applications. The effect of VC is to capture the small scale features over as few as 2
grid cells as they convect through the flow.
Granular-Medium simulations made by Nicholas Guttenberg with his GRNLR particle simulator. The program was originally used for studying granular jets, wet granular droplet impact, and tipping
Ref. "An approximate hard sphere method for densely packed granular flows" (link - 283 kb)
Ref.: "Grains and gas flow: Molecular dynamics with hydrodynamic interactions" (link - 207 kb)
In WooDEM we had the problem that particles slowed down due to dissipation (Coefficient of Restitution < 1). So energy was being constantly removed from the simulation due to collisions, to keep the
particles going Nicholas used 3 solutions in his program for adding energy:
A. Swimmers: after interaction a force is added to keep the particle's velocity close to 1
• The first ‘v’ is a vector,
• |v|^2 is a scalar,
• ’a’ is a scalar that controls how strong this force is.
This turns the particles into active swimmers, and you can create interesting structure by doing this in the presence of a high density of particles (basically they have to move because of the force,
but they want to be still because they're near the jamming point, so they end up moving in large-scale vortices similar to those in the Twirls video).
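The swimmer formula itself does not appear in the text above; a common self-propulsion form that fits the description (a vector v, the scalar |v|^2, and a strength a) is F = a(1 − |v|^2)v, which vanishes when the speed is 1. The Python snippet below uses that assumed form, so treat the exact expression as a reconstruction rather than the formula from the original post:

import numpy as np

def swimmer_force(v, a=1.0):
    # Assumed form a*(1 - |v|^2)*v: zero at speed 1, speeds the particle up when
    # it is slower than 1 and slows it down when it is faster.
    return a * (1.0 - np.dot(v, v)) * v

print(swimmer_force(np.array([0.5, 0.0])))   # -> [0.375 0.   ], pushing the speed toward 1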
B. Thermalisation: adding a force to keep energy despite dissipation
• ’T’ is the desired temperature (normally there'd be a measure of dissipation, but your dissipation is due to inelastic collisions),
• ‘dt’ is a time-step used by the algorithm (needed because of how the noise scales),
• ‘eta’ a random number generated every time you call the function to get the force.
This will cause the particle motions to have some relatively constant level of energy despite dissipation.
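Here too the exact formula is missing from the text; a Langevin-style kick with magnitude on the order of sqrt(T/dt) times the random vector eta matches the description of T, dt and eta. The sketch below uses F = sqrt(2T/dt)·eta, with that prefactor treated as an assumption:

import numpy as np

def thermal_kick(T, dt, dim=2, rng=None):
    # Assumed scaling sqrt(2*T/dt): stronger kicks for a higher target temperature
    # or a smaller time-step, so the injected energy per unit time stays roughly fixed.
    if rng is None:
        rng = np.random.default_rng()
    eta = rng.standard_normal(dim)       # fresh random vector on every call
    return np.sqrt(2.0 * T / dt) * eta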
C. Rescale time-step: so motion occurs at a constant velocity
For hard grains there is no inherent energy scale, so a system of grains moving at 1mm/year and a system of grains moving at 100 m/s have the same physics, just strewn out over a different time scale.
What you could do is snap a frame at irregular intervals. First at every second, as the grains slow down, then every 2, 4, 8, ... The right interval to use would be based on the square root of the
system's average energy (e.g. sum up v^2 for every grain, divide by the number of grains, then take the square root and multiply by a constant to make it look good).
GRNLR - GUI | {"url":"https://800millionparticles.blogspot.com/2014/02/","timestamp":"2024-11-14T05:35:53Z","content_type":"application/xhtml+xml","content_length":"60448","record_id":"<urn:uuid:6d68794f-6411-4b6d-80ef-0fd1b7d072f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00558.warc.gz"} |
When Testing Questioned Philosophy
In the first post in this series, I ended by focusing a bit on Galileo who started to make the idea of testing what it eventually would be recognized as today. That’s the same thing as saying Galileo
effectively produced one of the first attempts to make science as it is known today. Let’s continue this path of investigation.
Folks with some historical background may wonder why I start with Galileo here. Consider this, from David Wootton’s The Invention of Science:
Modern science was invented between 1572, when Tycho Brahe saw a nova, or new star, and 1704, when Newton published his Opticks, which demonstrated that white light is made up of light of all the
colours of the rainbow, that you can split it into its component colours with a prism, and that colour inheres in light, not in objects.
This is definitely true and I think Galileo — born in 1564, died in 1642 — fits well within that “between” range. Wootton continues:
There were systems of knowledge we call ‘sciences’ before 1572, but the only one which functioned remotely like a modern science, in that it had sophisticated theories based on a substantial body
of evidence and could make reliable predictions, was astronomy, and it was astronomy that was transformed in the years after 1572 into the first true science.
Indeed also true. In these posts, I’m framing things around Galileo and he was actually not all that concerned about astronomy at all. Galileo was much more concerned about motion and forces,
although his interests and investigations ultimately aligned with those who were concerned more with astronomy. One more bit from Wootton:
What made astronomy in the years after 1572 a science? It had a research programme, a community of experts, and it was prepared to question every long-established certainty (that there can be no
change in the heavens, that all movement in the heavens is circular, that the heavens consist of crystalline spheres) in the light of new evidence.
Galileo was also very willing to question established certainties and orthodoxies but also to provide a basis for why it made sense to do so. It’s interesting to consider how this came about because
I believe it showcases the evolution of a test-focused thinker.
The Philosophy of Music
In 1558, a guy named Gioseffo Zarlino published a book called Le istitutioni harmoniche (“The Principles of Harmony”). Zarlino was very into the whole idea of Pythagorean musical theory. This theory
can probably be stated most simplistically by saying that harmonies were inflexibly controlled by mathematical proportions.
Essentially the original Pythagorean system of harmony had three basic musical intervals. Each of these was defined by the numerical ratio between anything that could produce those notes, such as
lengths of string. So there was the octave, a 2:1 ratio. There was the perfect fifth, a 3:2 ratio. There was the perfect fourth, a 4:3 ratio.
Music theory during this period relied on theorems drawn from geometry, hence the reliance on mathematical proportions.
Thus a combination of a fifth and a fourth produces an octave: 3:2 × 4:3 = 12:6 = 2:1. The difference between a fourth and fifth is 3:2 divided by 4:3, which is 9:8. That’s called a whole tone
Zarlino embraced the Pythagorean tradition of what’s called diatonic tuning.
This can get really complicated fast but making it simple: Pythagorean tuning is a system of musical tuning in which the frequency ratios of all intervals are based on the ratio 3:2. What results is
the seven tones of the diatonic scale: C, G, F, D, A, E and B. These comprise what's called a major scale in Pythagorean tuning. The above image shows one way of forming the diatonic scale using pure fifths.
What does any of this have to do with Galileo or his forerunner attempts at testing?
The relevance here is that Galileo’s father, Vincenzo, studied with Zarlino. So Vincenzo initially was tutored in the dogma of musical theory based on the Pythagorean ideas of a rigid mathematical
system. This was, in essence, a philosophy — really, an orthodoxy — of music theory.
But then Vincenzo came upon a philologist named Girolamo Mei. Mei introduced Vincenzo to the work of a guy named Aristoxenus, who was a Greek music theorist back in the fourth century BCE.
Aristoxenus insisted math had little to do with music. Instead, according to Aristoxenus, you should rely on your own senses to decide what music was most aesthetically pleasing.
That view, obviously, conflicted with the Pythagorean. As these views of music theory conflicted over the course of time, the cornerstone for those who accepted the mathematical route was a debate
over finding the best mathematical ratios of the lengths of strings producing what are called “consonances.” These are sounds, like the octave, that are said to be most pleasing to the ear.
So was it math or more of a situational thing?
Well, Mei, very much a believer in the views of Aristoxenus, pointed out to Vincenzo that the “equal temperament” approach adopted by practicing musicians — those people alive and working at the
time; actual practitioners — to tune their instruments wasn’t consistent with the Pythagorean doctrine favored by theorists, which specified precise ratios for the intervals.
Consider again our image but with a slight addition:
Comparing the diatonic scale with an equal-tempered scale, you could see that the difference between the tone positions increases by what’s called two cents at each fifth. A cent is a unit of measure
for the ratio between two frequencies.
So if the entire chroma — twelve consecutive fifths — was tuned using only fifths, the difference would become 1/8 step which is approximately 24 cents. Even to an untrained ear, this would not really be
a pleasing sound. In other words, the mathematical system seemed to have a discrepancy.
This discrepancy is called the Pythagorean comma, and it was one of the primary reasons why other tuning systems, like temperaments, were studied in the medieval ages and why some doubted the
Understand the Discrepancy
If there are discrepancies, you often have to figure out what they actually are. This is how you know something is, in fact, a discrepancy.
So let’s say you start from a frequency of 100 Hz. Then after seven octave jumps (multiplication by 2) you reach 12800 Hz. Likewise if you start from the same place, after twelve fifth jumps
(multiplication by 3/2) you reach 12974.634 Hz. The numbers 12800 and 12974.634 are very close and the difference between them is very small. But this illustrates the Pythagorean comma. The question
might be: but what is it actually illustrating?
This illustrates the idea of trying to tune an instrument by ear, using the alleged mathematical purity of intervals as a guide. While the above talks about seven octaves and twelve fifths, any notes
you reach can also be brought down by one or more octaves as well.
To illustrate this, you can bring all the notes down into the same octave. For the sake of simplicity, let’s use that whole tone I mentioned earlier, which you can get by ascending two perfect
fifths, then descending an octave (for example, from C to D). In terms of ratios, this is equivalent to multiplying a frequency by:
(3/2) × (3/2) × (1/2) = (9/8)
The cents I mentioned earlier are actually logarithmic units and thus the same formula can be expressed as an addition:
702 cents + 702 cents – 1200 cents = ~204 cents
Let’s look at the line of fifths. This is more so you can see how the notes are named if music theory isn’t your thing.
Notice how each successive fifth must be named with the letter that’s five letters “higher.” Note for music purists: this is the case even if a given fifth has to be modified via a sharp or flat.
Now you can start to create a scale from whole tones. Since each whole tone is two fifths away, you would be using the note name two places to the right along the line of fifths. Which always will
use the next letter in the alphabetic sequence. We don’t need to calculate any actual frequencies here. Instead we’ll just calculate the frequency ratio and number of cents difference between each
note and the starting note.
• C = (9/8)^0 = +0 cents
• D = (9/8)^1 = +204 cents
• E = (9/8)^2 = +408 cents
• F# = (9/8)^3 = +612 cents
• G# = (9/8)^4 = +816 cents
• A# = (9/8)^5 = +1020 cents
• B# = (9/8)^6 = +1224 cents
This is very approximate in some ways but you can see that the “one-octave” scale actually overshot an octave by one Pythagorean comma. There’s a distinct difference between the sound of a B# and a C
and if you try to substitute one for another you will end up with something that sounds clearly out of tune.
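A quick computation in Python (added here as an illustration, not from the original article) makes the size of that overshoot concrete:

from math import log2

twelve_fifths = (3 / 2) ** 12          # about 129.746
seven_octaves = 2 ** 7                 # 128
comma = twelve_fifths / seven_octaves  # about 1.013643
print(round(1200 * log2(comma), 2))    # 23.46 cents, the Pythagorean comma

The 24-cent figure that falls out of the rounded 204-cent whole tones above is this same comma, just carried through the approximation.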
The Testing of Music
Obviously I wasn’t there when Mei and Vincenzo were working with each other but it’s likely that discrepancies like this were pretty much front and center of Mei’s thinking.
Importantly, however, Mei didn’t just ask Vincenzo to believe. Instead, Vincenzo was encouraged to test all this for himself. The prevailing assumption at the time was that, just as the ratio of
lengths of two identical strings with the same tension and mass per unit length, tuned an octave apart, would be 2:1, the ratio of the tensions of two identical strings of equal length, tuned an
octave apart, would be 2:1 as well.
Hey, that’s testable!
Testing Led to Observations
In one test, Vincenzo used two different lutes, one tuned to the requirements of equal temperament, and the other tuned per the dictates of Zarlino and thus the Pythagorean orthodoxy. Vincenzo also
tested this assumption with a simple experiment involving hanging weights from strings and what he found was that, in fact, the ratio of tensions was 4:1. This provided convincing evidence that,
indeed, consonant sounds were not determined solely by abstract mathematical ratios.
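In modern terms (a note added for context, not part of the original account), the 4:1 result is exactly what the physics of a vibrating string predicts: for fixed length and mass per unit length, frequency grows only with the square root of the tension, so doubling the pitch to reach an octave requires quadrupling the tension. Mersenne only formalized that relationship decades later, which makes Vincenzo's measurement all the more striking.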
Likewise, there were other discrepancies in the established system. For example, sticking with the whole tones I’ve been using, an interval of two whole tones is 9:8 × 9:8 = 81:64, which is almost
but not quite the interval that we now call the major third, 5:4 = 80:64. That’s clearly a small difference but, as with the example above, one that trained musicians and those with sensitive hearing
could most definitely distinguish.
Observations Led to Assessments
Thus Vincenzo Galilei came to the conclusion that what musicians did in practice was not what Zarlino said they did, or should do. Vincenzo argued that musicians, by ear, adopted a scale in which
each of the notes of an octave was separated from the next by the same interval — which meant that none of the intervals were perfect according to Pythagorean accounting.
In 1581, Vincenzo published Dialogo della musica antica et della moderna (“Dialogue on Ancient and Modern Music”), which explicitly countered the ideas of Zarlino and the Pythagorean orthodoxy.
Zarlino, like the Pythagoreans whose message he believed in, essentially were saying it was the philosophy that mattered; it was the actual mathematics that came out of that philosophy. For Zarlino
and like-minder thinkers, music was not the reality of lived experience.
Historically, it seems at least likely that Vincenzo continued and refined these experiments around 1588, which is the exact time when Galileo was living at home and tutoring local students in
mathematics. Historians of science consider it likely that Galileo may have helped his father with the experiments. Thus Vincenzo influenced his son to pursue pragmatic experimentation in his science
as a means of testing hypotheses.
Questioning the Philosophy
Let’s back up a bit here.
Aristoxenus, incidentally, was roundly criticized by Socrates, someone whom we mainly know of only through Plato. The view of Socrates, and thus likely Plato, was that performers who tuned their
instruments by practice rather than according to the principles of philosophy were people who “prefer their ears to their intelligence.” It literally was a case that you should distrust what your
ears tell you is pleasing or “correct” — or that has the right quality — and instead conform to the philosophy.
The Pythagoreans get a little bit of bad press in all this but, interestingly, Plato criticized Pythagoras for being too eager to understand the empirical practices of working musicians rather than
focusing strictly on the elementary numerical ratios. So far as Plato was concerned, those ratios were the only way to truly understand harmony as the Creator intended it. In short, it was Plato —
and not Pythagoras — who really pushed the point of reducing musical harmonies to simple numbers.
Galileo, around the age of twenty, and thus in 1584 or 1585 and after the publication of his father’s book, is thought to have participated with his father in experiments on vibrating strings that
demonstrated the inadequacy of Zarlino’s teachings on harmony. As stated earlier, this may have also been occurring in 1588 as well when Galileo was teaching mathematics.
Early in his life, therefore, Galileo became aware of an incontrovertible case in which the desire of Plato and others to make reality conform to idealized mathematics really just didn’t work. It
could serve as a crude approximation to get people going. But once said people were going, they tended to abandon the philosophy and do something that more comported with actual observation.
Galileo learned that if experiments, as well as the real-life habits of musicians, were at odds with whatever some philosophy said, then you had demonstrable and empirical evidence that the
philosophy was wrong. (Or, again, at best approximate.)
Galileo had to move beyond Plato and Platonic style thinking but, more importantly, he had to move beyond just trusting a non-empirical philosophy and beyond blindly following an orthodoxy.
Thus were two key traits of an experimentalist, and thus a tester, set.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://testerstories.com/2022/11/when-testing-questioned-philosophy/","timestamp":"2024-11-07T10:36:19Z","content_type":"text/html","content_length":"94607","record_id":"<urn:uuid:9a52b3ae-b935-4046-9438-5bfd359b79a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00868.warc.gz"} |
find_cycle(G, source=None, orientation='original')[source]¶
Returns the edges of a cycle found via a directed, depth-first traversal.
Parameters:
• G (graph) – A directed/undirected graph/multigraph.
• source (node, list of nodes) – The node from which the traversal begins. If None, then a source is chosen arbitrarily and repeatedly until all edges from each node in the graph are searched.
• orientation (‘original’ | ‘reverse’ | ‘ignore’) – For directed graphs and directed multigraphs, edge traversals need not respect the original orientation of the edges. When set to ‘reverse’, then every edge will be traversed in the reverse direction. When set to ‘ignore’, then each directed edge is treated as a single undirected edge that can be traversed in either direction. For undirected graphs and undirected multigraphs, this parameter is meaningless and is not consulted by the algorithm.
Returns:
edges – A list of directed edges indicating the path taken for the loop. If no cycle is found, then edges will be an empty list. For graphs, an edge is of the form (u, v) where u and v are the tail and head of the edge as determined by the traversal. For multigraphs, an edge is of the form (u, v, key), where key is the key of the edge. When the graph is directed, then u and v are always in the order of the actual directed edge. If orientation is ‘ignore’, then an edge takes the form (u, v, key, direction) where direction indicates if the edge was followed in the forward (tail to head) or reverse (head to tail) direction. When the direction is forward, the value of direction is ‘forward’. When the direction is reverse, the value of direction is ‘reverse’.
Return type: directed edges
In this example, we construct a DAG and find, in the first call, that there are no directed cycles, and so an exception is raised. In the second call, we ignore edge orientations and find that
there is an undirected cycle. Note that the second call finds a directed cycle while effectively traversing an undirected graph, and so, we found an “undirected cycle”. This means that this DAG
structure does not form a directed tree (which is also known as a polytree).
>>> import networkx as nx
>>> G = nx.DiGraph([(0,1), (0,2), (1,2)])
>>> try:
...     nx.find_cycle(G, orientation='original')
... except nx.NetworkXNoCycle:
...     pass
>>> list(nx.find_cycle(G, orientation='ignore'))
[(0, 1, 'forward'), (1, 2, 'forward'), (0, 2, 'reverse')] | {"url":"https://networkx.org/documentation/networkx-1.10/reference/generated/networkx.algorithms.cycles.find_cycle.html","timestamp":"2024-11-06T21:37:24Z","content_type":"text/html","content_length":"19688","record_id":"<urn:uuid:9685b5dc-256b-44c3-b795-5cb36f396863>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00270.warc.gz"} |
SUPERANNUATION GUARANTEE (ADMINISTRATION) ACT 1992 - SECT 15
Interpretation: maximum contribution base
(1) The maximum contribution base for a quarter in the 2001 - 02 year is $27,510.
(3) The maximum contribution base for a quarter in any later year is the amount worked out using the formula:
(4) Amounts calculated under subsection ( 3) must be rounded to the nearest 10 dollar multiple (rounding 5 dollars upwards).
(5) Despite subsections ( 3) and (4), the maximum contribution base for a quarter in the 2017 - 18 year or any later year is the amount worked out using the following formula, if that amount is less
than the amount worked out under those subsections:
"charge percentage" is the number specified in subsection 19(2) for the quarter.
"concessional contributions cap" is the basic concessional contributions cap, within the meaning of the Income Tax Assessment Act 1997 , for the financial year in which the quarter occurs.
(6) Amounts calculated under subsection ( 5) must be rounded down to the nearest 10 dollar multiple.
Understanding the Math and Equations Behind the Law of Conservation of Momentum
Author: admintanbourit
The law of conservation of momentum is one of the fundamental principles in physics that governs the behavior of objects in motion. It follows directly from Newton's laws of motion and holds
great significance in understanding the physical world around us. From the movement of celestial objects in space to the motion of a moving train on a track, the law of conservation of momentum is at
play. In this article, we will delve into the mathematical equations behind this law and explore its practical applications.
To begin with, let us understand what exactly momentum is. In simple terms, momentum is the measure of an object’s tendency to continue moving in the same direction. Mathematically, it is defined as
the product of an object’s mass and its velocity. This means that an object with a larger mass or a higher velocity will have a greater momentum.
Now, the law of conservation of momentum states that the total momentum of a closed system remains constant, regardless of any external forces acting on the system. In other words, in a closed system
where there is no external influence, the initial momentum and final momentum of the system will be the same. This principle can be represented by the following equation:
P initial = P final
where P represents the momentum of a system.
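Written out for two objects A and B (with u denoting a velocity before the collision and v a velocity after it), the same statement reads:
m_A x u_A + m_B x u_B = m_A x v_A + m_B x v_B
so whatever momentum one object loses, the other object (or the surroundings) must gain.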
To better understand this concept, let us consider an example of a collision between two objects. Let object A have a mass of 4 kg and a velocity of 8 m/s and object B have a mass of 2 kg and a
velocity of 4 m/s. At the moment of collision, the total momentum of the system will be:
P initial = (4 kg x 8 m/s) + (2 kg x 4 m/s)
= 32 kg m/s + 8 kg m/s
= 40 kg m/s
After the collision, let us say object A comes to a complete stop while object B moves off with a velocity of 8 m/s in its original direction. The final momentum of the system will be:
P final = (4 kg x 0 m/s) + (2 kg x 8 m/s)
= 0 kg m/s + 16 kg m/s
= 16 kg m/s
As you can see, the initial momentum (40 kg m/s) and the final momentum (16 kg m/s) of the system are not equal. Since the collision itself is an internal force between A and B, it cannot change
the total momentum of the pair; the mismatch therefore tells us that the two objects did not form a closed system. According to the law of conservation of momentum, the total before and after must
balance, so the missing 24 kg m/s must have been transferred to something outside the pair, for example to the floor through friction or to a wall that one of the objects struck. This is an example
of how the law of conservation of momentum works in practice: it lets us account for every bit of momentum.
The law of conservation of momentum has numerous practical applications. One of the most notable ones is in the field of rocket propulsion. The propulsion of a rocket works on the principle of
conservation of momentum. As the rocket expels exhaust gases at high speed in one direction, an equal and opposite force is applied to the rocket in the opposite direction, propelling it forward.
This is also the reason why astronauts in space need to carefully manage their movements and velocities, as even the tiniest change in momentum can have a significant impact on their direction and speed.
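A simple recoil calculation illustrates the point (the numbers here are purely illustrative): if a 100 kg astronaut throws a 2 kg tool away at 5 m/s, the tool carries 2 kg x 5 m/s = 10 kg m/s of momentum, so the astronaut must drift in the opposite direction at 10 kg m/s / 100 kg = 0.1 m/s.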
Another practical example of this law can be seen in the sport of billiards. When a ball hits another ball, the initial momentum of the cue ball is transferred to the other ball, causing it to move
in a specific direction and with a certain velocity. The momentum transfer is governed by the law of conservation of momentum, and by understanding the initial conditions, one can predict the outcome
of the collision.
In conclusion, the law of conservation of momentum is a fundamental principle in physics that governs the behavior of objects in motion. Its mathematical representation and practical examples help us
understand how momentum is conserved in a closed system. From rocket propulsion to billiards, this law has numerous applications in our daily lives, making it a crucial concept to understand in the
field of physics. | {"url":"https://tanbourit.com/understanding-the-math-and-equations-behind-the-law-of-conservation-of-momentum/","timestamp":"2024-11-06T05:06:04Z","content_type":"text/html","content_length":"113536","record_id":"<urn:uuid:74433cb4-755a-4cb6-a403-3da2c77d2b02>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00659.warc.gz"} |
A Post of Pure Self-Indulgence
I have not been blogging much lately, a state of affairs likely to persist until the end of the semester at the start of May. This is partly a consequence of blogger burn-out; I just flat haven't
felt like blogging. Mainly, though, it is because this semester has been an unusually busy and stressful one. The reason I have not felt like blogging is that I have been inundated with other work,
some self-inflicted, some inflicted from without.
EvolutionBlog will make a triumphant return, but until then I thought I would unburden myself by telling you what I have been up to this term. What, exactly, do professors spend their time doing?
As in every semester, my biggest responsibility is to my students. JMU generally has a three-three teaching load, which means that I teach three courses every semester. This is fairly typical, and
puts us somewhere in the middle of universities generally. Research-focused schools will generally have two-two teaching loads, or perhaps even lighter than that. Of course, such schools also expect
substantial research output, which quickly devours the time savings of the lighter teaching load. On the other hand, many teaching-focused schools, including many large state schools, have three-four
and even four-four teaching loads. This leaves little time for serious research, at least during the semester.
JMU is somewhere in the middle. We are mostly an undergraduate school, and therefore tend to focus on teaching. But we are also expected to be active scholars in our field. Consequently a three-three
load. That's heavy enough to eat up a lot of time, but is not so burdensome that there is no time for other activities.
One of my courses this term is the second semester of a year-long course in real analysis. I have never taught this course before, and have not looked at this material since I passed my analysis
qualifying exam as a first-year graduate student, more than a decade ago. Real analysis is, by its nature, a fairly technical subject. The proofs have lots of deltas and epsiolons and, as one of my
undergraduate professors put it, you have to work your ass off to prove anything. They are not the sort of thing where you can simply remember the main idea of the proof, and then recreate it from
scratch at the chalkboard.
When I teach something like abstract algebra or discrete mathematics (not to mention calculus) I can mostly go in cold and deliver a good lecture. Sadly, not so here. This course has required some
serious preparation time. In previous semesters that time might have been spent blogging.
My other two courses are two sections of calculus. We have several versions of introductory calculus, and I am teaching the version intended for people with weak math backgrounds. This can get a bit
frustrating, since no matter how clear you think you are being you can be certain that it will be a funhouse mirror version of calculus that comes back to you on the quizzes and tests. Inevitably, in
classes such as this, you have a fair number of students who lack motivation. I don't really have a problem with them, as long as they don't come to me the day before the final and blame me for the
giant hole they have dug for themselves. The bigger frustrations are the students who really are working hard, but are still not getting it. Someday I will find the perfect way of explaining
calculus, but until that day the frustration will ensue.
At any rate, since I have not yet mastered the art of shutting my door when I don't have office hours, I have had a steady flow of students coming to talk to me outside of class. There are only so
many u-substitutions you can carry out and trig functions you can differentiate before anything more complex than slinking on home and watching reruns of House is off the table. Explaining calculus
for the better part of an afternoon has its satisfactions, but it does tend to leave you a bit whipped.
The other stress-producer this term was the big Monty Hall book. It is now in the hands of the printer, and should be available for sale in May. I have no doubt that upon receiving the finished book
I shall open it to a random page and immediately spot a typo. Such is life. Anyway, a good part of the term has involved sending drafts back and forth with my editor, making the index, and other
assorted tasks. The last bit of excitement was when I noticed, the day before the book left for the printer, that the dedication had been left out. Ugh! The problem was solved by joining the Preface
and Acknowledgements into one section, thereby freeing up a page for the dedication. Still, more excitement than I wanted.
Having finished the big Monty Hall book, I decided it was time to get started on the big evolution/creationism book. Roughly, the book would be a memoir of my experiences at various creationist
conferences and gatherings (and museums). Inevitably it will have some of the “Creationists say X, but the reality is Y” sort of writing so typical of the genre, but it will also be heavy on
anecdotes and human drama.
Alas, every book begins with a proposal, and since the proposal is supposed to include sample chapters it takes quite a big chunk of time to produce. I wrote the first fifty pages of the book, but
since it takes me forever to write anything this represents a depressingly large chunk of time. I'm one of those writers who can't bear to go on to sentence two until sentence one is just perfect,
which it never is, which leads to many frustrated hours of self-flagellation (think of Nicolas Cage in Adaptation.)
Finally got the proposal finished, and it has now been sent off to my editor at Oxford. It goes without saying that as soon as I submitted it I thought of a hundred things I should have done
differently. Too late now. Will the proposal be accepted? Probably not. (Did I mention that I also tend to be pessimistic by nature?)
Okay, whatever its eventual fate, the proposal is finished. Time to move on. I'm currently working on three papers with my research collaborator, and all have been in my court at various times
during the term. One of them, an amusing little ditty about estimating the Cheeger constants of certain arithmetic, hyperbolic three-manifolds, has now been sent off. Its likely fate is that eight
months from now we will get a three-line referee's report explaining that while the paper is correct and well-written, it just isn't quite up to the general standard of the journal. So sorry.
Paper two is about the isoperimetric numbers of Cayley graphs of matrix groups. Putting the finishing touches on that one has been my main project for this week. Paper three had something to do with
Hamilton cycles and Hadwiger numbers. I don't even remember anymore. Whatever. It still has a way to go before it is ready to be submitted. Guess that will be a summer project.
It's not all bad news. An expository paper about the Monty Hall problem that I wrote with two coauthors was accepted for publication in one of the MAA journals. Yay! The editor wanted some pretty
substantial revisions, though. Boo! But what can you do? The editor gets what the editor wants. Sadly, making revisions takes quite a bit of time, since much is wasted hurling profanity at the
computer screen over the idea of having to make the changes in the first place.
Meanwhile, I am also slowly working on a proposal for a book about the mathematical aspects of Sudoku puzzles, to be written with one of my JMU colleagues. Sudoku puzzles are really her niche, and
she is quite well-known among people interested in this. Oxford University Press has been on her for some time to write a book on the subject. My badgering her about it backfired when she said she
would only do the book if I would cowrite it with her. How could I say no? The trouble is, I don't actually know anything about the mathematics of Sudoku puzzles, though I have now acquired a large
file of papers about them. Working my way through them has been another little project for the term. Who knows? I might find myself writing two books this summer.
Then there have been the other niggling little things. I am serving on a subcommittee of the MAA, a fact that will be featured prominently on my activity report at the end of the year, have no doubt
about that. This has not been a huge responsibility, but it does involve combing through some exceedingly boring documents, while making suggestions for how to revise them.
There are also the normal little stresses that inevitably come up when large numbers of mathematicians must come to an agreement on something. This term the big fracas has been over the calculus text
we should use in our standard introductory courses. Do we go with Early Transcendentals, or Late Transcendentals? In the early version you introduce logarithms and exponentials early in the term,
even though this means treating them, at least initially, in a non-rigorous way. Late transcendentals means, well, not doing that. For example, the natural logarithm function is typically defined as
the integral of one over x. But you can't define it that way until you have introduced integration. But that doesn't happen until you have done a whole pile of other stuff. Which means it gets put
off to the second semester. Which means that people who only take the first semester never get to see them. Quite a void in their lives, I'm sure you will agree. Introducing them early avoids this problem.
Suffice it to say, this is the sort of thing that arouses great passions in many people. Not my passions, mind you, since I think calculus education is largely futile regardless of when you bring up
logarithms. But it seems like every term there has to be some great controversy for the department to fight over.
Which reminds me of a joke. What's a logarithm? It's a birth control method for lumberjacks. Hahahaha!
There have been other things. Visits from job candidates that require large investments of time from the faculty. Stacks of papers in need of grading. Colloquium talks to attend. New episodes of
House that don't get watched on their own. Cats that are always meowing about something.
So, a busy and occasionally frustrating term. I wouldn't have it any other way. (Which doesn't mean I'm not going to seriously enjoy the relative calm of the summer.) Unfortunately, there are only so
many hours in a day, and something had to give. This term it was blogging.
Sorry about that.
well you're blogging now!
or, er, were a little while ago...
its ok. I miss you but Ed B has been taking up the slack...
post some terror stories about calc students worst answers.
Well, I did have a student recently tell me that log_2 (6) over log_2 (3) was equal to 2. The logic was that we can cancel out the log_2 's, which leaves 6/3. Actually, there's something kind of
ingenious about that!
I am curious about this Intro to Calculus you mention, Jason.
Over here in Oz, it's been a good 20 years since I was in high school, but being more or less mathematically enthused (not, obviously, in your league - but calculus doesn't frighten me), I've helped
out several younger friends and family over the years with their high school assignments. The general impression I got was that they were gradually shifting a lot of what I did at Uni (logarithms,
Taylor series, and so on) to year 12 at high school (our last year - I suspect you call them "seniors"?) So you can imagine my confusion when you say that something as basic as integration is left to
second semester of university.
Is this intended as some sort of "bridging" course for students who didn't take mathematics at high school? Or am I misinterpreting your mention of integration to be something far simpler than what
you actually cover?
real analysis doesn't HAVE to be a lot of work for the prof - it certainly wasn't for the prof who "taught" it to me a couple decades ago. he'd come in and literally just copy the proofs in the book
onto the board, more slowly than you can imagine, then give us time to copy them down, which was completely unnecessary since they were all, uh, right in the very book he had copied them down from.
It was the most ridiculous math class I had ever taken. There were occasionally a few interesting homework problems that required work, but he didn't collect them or go over them. The day before the
midterm and final, he would come up with a list about 6 proofs, seemingly chosen at random, and tell us to memorize those because they'd be on the exam. The exam was literally just regurgitating
those 6 proofs - nothing else!
I'm glad that you're putting in the time and effort to make the class worthwhile for your students. I wish you had been my teacher!
Jason - don't give up on blogging forever. Your site was one of my absolute favourites. You may think you're a mathematician, but believe me, you have an ability to write as well.
Oh, and I have two masters degrees in numerate disciplines, but I never came across an epsiolon. Must be way beyond my level.
That'll be $20 for the couch time.
...looking forward to more blogs in the near future.
This term the big fracas has been over the calculus text we should use in our standard introductory courses.
There is only one Introductory Calculus text. Thomas. George B. Thomas. There is no other.
snafu and Matthew -
Thanks for the encouragement! I'll be back.
heddle -
I'll pass your recommendation on to the committee. I'm sure they will be delighted by the extra feedback.
A while back I saw my grandfather's calculus textbook. I thought it was much better than any book on the market today. Light on the fancy graphics, heavy on the clear explanations. The way it should be.
Samantha -
Sorry to hear about your bad experience in real analysis. Frankly, the subject tends to be a little dull all by itself; it definitely doesn't need the professor going out of his way to make it more
boring still.
As it happens, the textbook I am using places a lot of the key material in the exercises. Since there is no solutions manual, I actually have to work out the problems myself. Oy!
Glad to hear about the Monty Hall book!
Real analysis was easily one of the dullest classes I took at university. I didn't have to take it, but I needed a mathematics elective for my physics major. Of all the ones I could have chosen. . .
I'm currently in the planning stage of a mathematics book myself: sketching out what ought to go in each chapter, writing sections here and there to find the right "voice", playing with different
software tools which might let me write chunks of it as blog posts. Serious progress will probably have to wait until the summer, alas.
There is a lovely little book, Counterexamples in Analysis, by Gelbaum and Olmsted. I have always thought that a real analysis course -- where we find out that so many of our intuitions are wrong --
focusing on this book would be very interesting.
An analysis course which explores cases where intuition goes awry would be much more interesting than the course I slogged through: "Hey, let's build ourselves a cumbersome proof which nobody will
remember, motivated by nothing in particular, following a path which nobody would have thought of if they hadn't read it in the textbook first, which will prove that our intuition was fine after
Blake -
What sort of math book would this be? If you get as far as a having a fully fleshed out proposal I would be happy to put in a good word for you with my editor at Oxford.
Jeff -
I know that book! I spent a lot of time going through it in my first year of graduate school. It would be interesting to gear an entire course around it. I'm using the book Understanding Analysis by
Stephen Abbott. It's a Springer UTM book.
I also liked Counterexamples in Topology, which had all sorts of ingenious examples of topological perversities.
Teh Intertubes need you. Someone must respond to Hugo Meynell, retired professor of religious studies, who asks, Is atheism really the most rational option?
Not only is materialism not a necessary consequence of science; but apparently it is not even compatible with it. Why are we right to believe scientists when they tell us about the aspects of
the world in which they specialize? We do so on the assumption that they have been thoroughly rational in relation to the matters in question; that, in Lonergan's terms, they have been
unrestrictedly attentive to the relevant data; intelligent in envisaging an adequate range of possible explanations; and reasonable in preferring as true the judgments which do best explain the
data. If, as a consistent materialist must claim, our thoughts, words, actions and writings are determined only by the physical and chemical laws operating within our brains, then independent
thought regarding the state of the universe becomes unattainable.
Thus scientists who believe in materialism are placed in the odd position of proving by their own principles that science is impossible.
How can you resist crazy like that?
1. Organisms persistently plagued by delusions tend to die.
2. Matter can be arranged to perform computations. A computer built on one material substrate can emulate in software the behaviour of a different type of matter. Meynell's claim about "independent
thought regarding the state of the universe" is a non sequitur.
We could discuss this at great length, but really, is Meynell going to listen?
"If, as a consistent materialist must claim, our thoughts, words, actions and writings are determined only by the physical and chemical laws operating within our brains, then independent thought
regarding the state of the universe becomes unattainable."
Sweet Isis, this is stupid. We have plenty of examples of computer programs thinking independently about the state of the universe. Machine learning, automated theorem provers, etc. Nothing in action
except chemistry, physics and entirely non-magical computation.
I think the best mathematics professor I had was the guy I had for Applied Mathematical Analysis (Green, Gauss and Stokes theorems, mostly).
He really had a talent for explaining things concisely and clearly. The material wasn't terribly difficult, but that was the only math class I got an A in (not so good at math- I'm one of those
people that had to bust my chops just for Bs). I attribute that mostly to him.
And his colored chalk. His diagrams and hand-writing were downright beautiful. I know the aesthetics of diagrams isn't the most important thing, but it did make it more enjoyable. It was even more
impressive because English was not his first language. He was from India, and his nearly perfect handwriting always impressed me. It looked like comic sans.
It made it a lot easier to follow and take notes. Looking back, it doesn't seem so trivial. I still have a lot of my notes. They are five years old now, but I just can't bring myself to throw them
away. When I compare the notes from this class to the ones from the Fourier Transforms class, it is like night and day.
The FT professor's handwriting was so bad you couldn't even read what the hell he was writing half the time. He wrote furiously fast and I remember there were times I just gave up on taking notes and
figured I could at least try to follow along and get something out of it, even if I wouldn't remember it later. It was too hard to puzzle out the chicken scratch and still keep up. Seriously. My
notes look like absolute crap. There are many "WTF??"'s and long, underlined blanks filled with angry question marks. There are some insults and profanity in the margins.
LOL. I can tell there were times I wasn't even looking at the page when I was writing because the lines are all crooked and wavy. They start out on one line and curve downward and finish 2 lines down
or sometimes go right off the page. Jesus christ. No wonder I got a C-. In contrast, my notes from the other class are neat enough that I probably could use them to give a lecture.
So maybe there is something to be said for tidy handwriting and nice diagrams. In any case, your writing style is usually very clear and concise, Jason. You seem to have a pretty tidy mind and I
don't doubt that you are a lot more like my favorite professor than the crappy one. But if your handwriting is unreadably crappy please just give your students handouts!
Also, yay about the second book! I couldn't get too excited about the Monty Hall book, but I am really looking forward to the creationism one.
Jason -
I came across your blog about a month or two ago, while I was sort of practicing my creationist spiel trying to build some right-wing credibility for a personal project I was working on (er... see
Charles Rayney from Familiar Insanity From UD...).
Er... I am still a bit sorry for that one...
That said, I'm glad I stumbled across this blog. It is very informative and thought provoking - at the end of the day, I'm always glad to see mathematics and science promoted. It sounds as though
you have been quite busy this semester but it is also certainly nice to see that you've been able to find the time to update everyone on how you're doing.
Keep up the good work and I look forward to seeing what you write when you actually find some spare time!
Author: Libor Spacek
This crate is written in 100% safe Rust.
Use in your source files any of the following structs, as and when needed:
use Rstats::{RE,RError,Params,TriangMat,MinMax};
and any of the following traits:
use Rstats::{Stats,Vecg,Vecu8,MutVecg,VecVec,VecVecg};
and any of the following auxiliary functions:
use Rstats::{fromop, sumn, tm_stat, unit_matrix, nodata_error, data_error, arith_error, other_error};
or just simply use everything:
use Rstats::*;
The latest (nightly) version is always available in the github repository Rstats. Sometimes it may be (only in some details) a little ahead of the crates.io release versions.
It is highly recommended to read and run tests.rs for examples of usage. To run all the tests, use a single thread in order not to print the results in confusing mixed-up order:
cargo test --release -- --test-threads=1 --nocapture
However, geometric_medians, which compares multithreading performance, should be run separately in multiple threads, as follows:
cargo test -r geometric_medians -- --nocapture
Alternatively, just to get a quick idea of the methods provided and their usage, read the output produced by an automated test run. There are test logs generated for each new push to the github
repository. Click the latest (top) one, then Rstats and then Run cargo test ... The badge at the top of this document lights up green when all the tests have passed and clicking it gets you to these
logs as well.
Any compilation errors arising out of rstats crate indicate most likely that some of the dependencies are out of date. Issuing cargo update command will usually fix this.
Rstats has a small footprint. Only the best methods are implemented, primarily with Data Analysis and Machine Learning in mind. They include multidimensional (nd or 'hyperspace') analysis, i.e.
characterising clouds of n points in space of d dimensions.
Several branches of mathematics: statistics, information theory, set theory and linear algebra are combined in this one consistent crate, based on the abstraction that they all operate on the same
data objects (here Rust Vecs). The only difference being that an ordering of their components is sometimes assumed (in linear algebra, set theory) and sometimes it is not (in statistics, information
theory, set theory).
Rstats begins with basic statistical measures, information measures, vector algebra and linear algebra. These provide self-contained tools for the multidimensional algorithms but they are also useful
in their own right.
Non analytical (non parametric) statistics is preferred, whereby the 'random variables' are replaced by vectors of real data. Probabilities densities and other parameters are in preference obtained
from the real data (pivotal quantity), not from some assumed distributions.
Linear algebra uses generic data structure Vec<Vec<T>> capable of representing irregular matrices.
Struct TriangMat is defined and used for symmetric, anti-symmetric, and triangular matrices, and their transposed versions, saving memory.
Our treatment of multidimensional sets of points is constructed from the first principles. Some original concepts, not found elsewhere, are defined and implemented here (see the next section).
Zero median vectors are generally preferred to commonly used zero mean vectors.
In n dimensions, many authors 'cheat' by using quasi medians (one dimensional (1d) medians along each axis). Quasi medians are a poor start to stable characterisation of multidimensional data. Also,
they are actually slower to compute than our gm ( geometric median), as soon as the number of dimensions exceeds trivial numbers.
Specifically, all such 1d measures are sensitive to the choice of axis and thus are affected by their rotation.
In contrast, our methods based on gm are axis (rotation) independent. Also, they are more stable, as medians have a maximum possible breakdown point.
We compute geometric medians by our method gmedian and its parallel version par_gmedian in trait VecVec and their weighted versions wgmedian and par_wgmedian in trait VecVecg. It is mostly these
efficient algorithms that make our new concepts described below practical.
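For orientation, here is the classic, unmodified Weiszfeld iteration that these methods improve upon. It is a reference sketch only, not the crate's implementation, and it has none of the stability fixes near existing data points mentioned below:

/// Plain Weiszfeld iteration for the geometric median of `points` (illustrative only).
fn weiszfeld(points: &[Vec<f64>], iterations: usize) -> Vec<f64> {
    let d = points[0].len();
    // Start from the centroid.
    let mut gm: Vec<f64> = (0..d)
        .map(|j| points.iter().map(|p| p[j]).sum::<f64>() / points.len() as f64)
        .collect();
    for _ in 0..iterations {
        let mut num = vec![0_f64; d];
        let mut den = 0_f64;
        for p in points {
            let dist = p.iter().zip(&gm).map(|(a, b)| (a - b).powi(2)).sum::<f64>().sqrt();
            if dist == 0_f64 { continue; } // naive guard: exactly the case Weiszfeld handles badly
            let w = 1_f64 / dist;
            for j in 0..d { num[j] += w * p[j]; }
            den += w;
        }
        for j in 0..d { gm[j] = num[j] / den; }
    }
    gm
}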
Additional Documentation
For more detailed comments, plus some examples, see rstats in docs.rs. You may have to go directly to the modules source. These traits are implemented for existing 'out of this crate' rust Vec type
and unfortunately rust docs do not display 'implementations on foreign types' very well.
New Concepts and their Definitions
• zero median points (or vectors) are obtained by moving the origin of the coordinate system to the median (in 1d), or to the gm (in nd). They are our alternative to the commonly used zero mean
points, obtained by moving the origin to the arithmetic mean (in 1d) or to the arithmetic centroid (in nd).
• median correlation between two 1d sets of the same length.
We define this correlation similarly to Pearson, as cosine of an angle between two normalised samples, interpreted as coordinate vectors. Pearson first normalises each set by subtracting its mean
from all components. Whereas we subtract the median (cf. zero median points above). This conceptual clarity is one of the benefits of interpreting a data sample of length d as a single vector in
d dimensional space.
• gmedian, par_gmedian, wgmedian and par_wgmedian
our fast multidimensional geometric median (gm) algorithms.
• madgm (median of distances from gm)
is our generalisation of mad (median of absolute differences from median), to n dimensions. 1d median is replaced in nd by gm. Where mad was a robust measure of 1d data spread, madgm becomes a
robust measure of nd data spread. We define it as: median(|pi-gm|,i=1..n), where p1..pn are a sample of n data points, each of which is now a vector.
• tm_stat
We define our generalized tm_stat of a single scalar observation x as: (x-centre)/spread, with the recommendation to replace mean by median and std by mad, whenever possible. Compare with common
t-stat, defined as (x-mean)/std, where std is the standard deviation.
These are similar to the well known standard z-score, except that the central tendency and spread are obtained from the sample (pivotal quantity), rather than from any old assumed population
• tm_statistic
we now generalize tm_stat from scalar domain to vector domain of any number of dimensions, defining tm_statistic as |p-gm|/madgm, where p is a single observation point in nd space. For sample
central tendency now serves the geometric median gm vector and the spread is the madgm scalar (see above). The error distance of observation p from the median: |p-gm|, is also a scalar. Thus the
co-domain of tm_statistic is a simple positive scalar, regardless of the dimensionality of the vector space in question.
• contribution
one of the key questions of Machine Learning is how to quantify the contribution that each example (typically represented as a member of some large nd set) makes to the recognition concept, or
outcome class, represented by that set. In answer to this, we define the contribution of a point p as the magnitude of displacement of gm, caused by adding p to the set. Generally, outlying
points make greater contributions to the gm but not as much as to the centroid. The contribution depends not only on the radius of p but also on the radii of all other existing set points and on
their number.
• comediance
is similar to covariance. It is a triangular symmetric matrix, obtained by supplying method covar with the geometric median instead of the usual centroid. Thus zero mean vectors are replaced by
zero median vectors in the covariance calculations. The results are similar but more stable with respect to the outliers.
• outer_hull is a subset of all zero median points p, such that no other points lie outside the normal plane through p. The points that do not satisfy this condition are called the internal points.
• inner_hull is a subset of all zero median points p, that do not lie outside the normal plane of any other point. Note that in a highly dimensional space up to all points may belong to both the
inner and the outer hulls, as, for example, when they all lie on the same hypersphere.
• depth is a measure of likelihood of a zero median point p belonging to a data cloud. More specifically, it is the projection onto unit p of a sum of unit vectors that lie outside the normal
through p. For example, all outer hull points have by their definition depth = 0, whereas the inner hull points have high values of depth. This is intended as an improvement on Mahalanobis
distance which has a similar goal but says nothing about how well enclosed p is. Whereas tm_statistic only informs about the probability pertaining to the whole cloud, not to its local shape near
• sigvec (signature vector)
Proportional projections of a cloud of zero median vectors on all hemispheric axis. When a new zero median point p needs to be classified, we can quickly estimate how well populated is its
direction from gm. Similar could be done by projecting all the points directly onto p but this is usually impractically slow, as there are typically very many such points. However,
signature_vector only needs to be precomputed once and is then the only vector to be projected onto p.
Previously Known Concepts and Terminology
• centroid/centre/mean of an nd set.
Is the point, generally non member, that minimises its sum of squares of distances to all member points. The squaring makes it susceptible to outliers. Specifically, it is the d-dimensional
arithmetic mean. It is sometimes called 'the centre of mass'. Centroid can also sometimes mean the member of the set which is the nearest to the Centre. Here we follow the common usage: Centroid
= Centre = Arithmetic Mean.
• quasi/marginal median
is the point minimising sums of distances separately in each dimension (its coordinates are medians along each axis). It is a mistaken concept which we do not recommend using.
• Tukey median
is the point maximising Tukey's Depth, which is the minimum number of (outlying) points found in a hemisphere in any direction. Potentially useful concept but its advantages over the geometric
median are not clear.
• true geometric median (gm)
is the point (generally non member), which minimises the sum of distances to all member points. This is the one we want. It is less susceptible to outliers than the centroid. In addition, unlike
quasi median, gm is rotation independent.
• medoid
is the member of the set with the least sum of distances to all other members. Equivalently, the member which is the nearest to the gm (has the minimum radius).
• outlier
is the member of the set with the greatest sum of distances to all other members. Equivalently, it is the point furthest from the gm (has the maximum radius).
• Mahalanobis distance
is a scaled distance, whereby the scaling is derived from the axis of covariances / comediances of the data points cloud. Distances in the directions in which there are few points are increased
and distances in the directions of significant covariances / comediances are decreased. Requires matrix decomposition. Mahalanobis distance is defined as: m(d) = sqrt(d'inv(C)d) = sqrt(d'inv(LL')
d) = sqrt(d'inv(L')inv(L)d), where inv() denotes matrix inverse, which is never explicitly computed and ' denotes transposition.
Let x = inv(L)d ( and therefore also x' = d'inv(L') ).
Substituting x into the above definition: m(d) = sqrt(x'x) = |x|.
We obtain x by setting Lx = d and solving by forward substitution.
All these calculations are done in the compact triangular form (a plain-matrix sketch of this decomposition and solve follows after this list).
• Cholesky-Banachiewicz matrix decomposition
decomposes any positive definite matrix S (often covariance or comediance matrix) into a product of lower triangular matrix L and its transpose L': S = LL'. The determinant of S can be obtained
from the diagonal of L. We implemented the decomposition on TriangMat for maximum efficiency. It is used mainly by mahalanobis.
• Householder's decomposition
in cases where the precondition (positive definite matrix S) for the Cholesky-Banachiewicz decomposition is not satisfied, Householder's (UR) decomposition is often used as the next best method.
It is implemented here on our efficient struct TriangMat.
• wedge product, geometric product
products of the Grassman and Clifford algebras, respectively. Wedge product is used here to generalize the cross product of two vectors into any number of dimensions, determining the correct sign
(sidedness of their common plane).
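Below is a minimal reference sketch of the Cholesky-Banachiewicz decomposition and the forward-substitution step mentioned under Mahalanobis distance above, written against plain Vec<Vec<f64>> matrices. It is not the crate's code (Rstats works on the compact TriangMat instead) and all names here are illustrative.

/// Textbook Cholesky-Banachiewicz: S = L L' (returns None unless S is positive definite).
fn cholesky(s: &[Vec<f64>]) -> Option<Vec<Vec<f64>>> {
    let n = s.len();
    let mut l = vec![vec![0_f64; n]; n];
    for i in 0..n {
        for j in 0..=i {
            let sum: f64 = (0..j).map(|k| l[i][k] * l[j][k]).sum();
            if i == j {
                let dif = s[i][i] - sum;
                if dif <= 0_f64 { return None; } // needs a positive definite matrix
                l[i][j] = dif.sqrt();
            } else {
                l[i][j] = (s[i][j] - sum) / l[j][j];
            }
        }
    }
    Some(l)
}

/// Solve L x = d for lower triangular L by forward substitution;
/// the Mahalanobis distance of d is then the Euclidean norm of x.
fn forward_substitute(l: &[Vec<f64>], d: &[f64]) -> Vec<f64> {
    let mut x = vec![0_f64; d.len()];
    for i in 0..d.len() {
        let sum: f64 = (0..i).map(|k| l[i][k] * x[k]).sum();
        x[i] = (d[i] - sum) / l[i][i];
    }
    x
}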
Implementation Notes
The main constituent parts of Rstats are its traits. The different traits are determined by the types of objects to be handled. The objects are mostly vectors of arbitrary length/dimensionality (d).
The main traits are implementing methods applicable to:
• Stats: a single vector (of numbers),
• Vecg: methods operating on two vectors, e.g. scalar product,
• Vecu8: some methods specialized for end-type u8,
• MutVecg: some of the above methods, mutating self,
• VecVec: methods operating on n vectors (rows of numbers),
• VecVecg: methods for n vectors, plus another generic argument, typically a vector of n weights, expressing the relative significance of the vectors.
The traits and their methods operate on arguments of their required categories. In classical statistical parlance, the main categories correspond to the number of 'random variables'.
Vec<Vec<T>> type is used for rectangular matrices (could also have irregular rows).
struct TriangMat is used for symmetric / antisymmetric / transposed / triangular matrices and wedge and geometric products. All instances of TriangMat store only n*(n+1)/2 items in a single flat
vector, instead of n*n, thus almost halving the memory requirements. Their transposed versions only set up a flag kind >=3 that is interpreted by software, instead of unnecessarily rewriting the
whole matrix. Thus saving processing of all transposes (a common operation). All this is put to a good use in our implementation of the matrix decomposition methods.
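To see why n*(n+1)/2 items suffice, here is one common flat, row-major layout for a lower triangular matrix. The exact internal ordering used by TriangMat is a detail of the crate and is only assumed here for illustration:

/// Flat index of entry (i, j), j <= i, when rows of lengths 1, 2, 3, ... are concatenated.
fn tri_index(i: usize, j: usize) -> usize {
    debug_assert!(j <= i);
    i * (i + 1) / 2 + j
}
// For n = 3 the nine logical entries collapse to six stored ones:
// (0,0)->0, (1,0)->1, (1,1)->2, (2,0)->3, (2,1)->4, (2,2)->5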
The vectors' end types (of the actual data) are mostly generic: usually some numeric type. Copy trait bounds on these generic input types have been relaxed to Clone, to allow cloning user's own end
data types in any way desired. There is no difference for primitive types.
The computed results end types are usually f64.
Rstats crate produces custom error RError:
pub enum RError<T> where T:Sized+Debug {
    /// Insufficient data
    NoDataError(T),
    /// Wrong kind/size of data
    DataError(T),
    /// Invalid result, such as prevented division by zero
    ArithError(T),
    /// Other error converted to RError
    OtherError(T)
}
Each of its enum variants also carries a generic payload T. Most commonly this will be a String message, giving more helpful explanation, e.g.:
if dif <= 0_f64 {
    return Err(RError::ArithError("cholesky needs a positive definite matrix".to_owned()));
};
format!(...) can be used to insert (debugging) run-time values to the payload String. These errors are returned and can then be automatically converted (with ?) to users' own errors. Some such error
conversions are implemented at the bottom of errors.rs file and used in tests.rs.
There is a type alias shortening return declarations to, e.g.: Result<Vec<f64>,RE>, where
pub type RE = RError<String>;
Convenience functions nodata_error, data_error, arith_error, other_error are used to construct and return these errors. Their message argument can be either literal &str, or String (e.g. constructed
by format!). They return RError<String> already wrapped up as an Err variant of Result, cf.:
if dif <= 0_f64 {
    return arith_error("cholesky needs a positive definite matrix");
};
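As a brief sketch of how these pieces combine in user code: the function name summarise and its check are made up for illustration, and only RE, nodata_error and the Stats method amean (described later in this README) are taken from the crate.

use Rstats::{RE, Stats, nodata_error};

/// Arithmetic mean with an explicit empty-input check, propagating RE upstream with `?`.
fn summarise(v: &[f64]) -> Result<f64, RE> {
    if v.is_empty() {
        return nodata_error("summarise: empty input");
    };
    v.amean() // itself returns a checked Result
}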
struct Params
holds the central tendency of 1d data, e.g. any kind of mean, or median, and any spread measure, e.g. standard deviation or 'mad'.
struct TriangMat
holds triangular matrices of all kinds, as described in Implementation section above. Beyond the expansion to their full matrix forms, a number of (the best) Linear Algebra methods are implemented
directly on TriangMat, in module triangmat.rs, such as:
• Cholesky-Banachiewicz matrix decomposition: S = LL' (where ' denotes the transpose). This decomposition is used by mahalanobis, determinant, etc.
• Mahalanobis Distance for ML recognition tasks.
• Various operations on TriangMats, including mult: matrix multiplication of two triangular or symmetric or antisymmetric matrices in this compact form, without their expansions to full matrices.
Also, some methods, specifically the covariance/comedience calculations in VecVec and VecVecg return TriangMat matrices. These are positive definite, which makes the most efficient
Cholesky-Banachiewicz decomposition applicable to them.
Similarly, Householder UR (M = QR), which is a more general matrix decomposition, also returns TriangMats.
Quantify Functions (Dependency Injection)
Most methods in medians and some in indxvec crates, e.g. find_any and find_all, require explicit closure passed to them, usually to tell them how to quantify input data of any type T into f64.
Variety of different quantifying methods can then be dynamically employed.
For example, in text analysis (&str end type), it can be the word length, or the numerical value of its first few letters, or the numerical value of its consonants, etc. Then we can sort them or find
their means / medians / spreads under all these different measures. We do not necessarily want to explicitly store all such different values, as input data can be voluminous. It is often preferable
to be able to compute any of them on demand, using these closure arguments.
When data is already of the required end-type, use the 'dummy' closure:
|&f| f
When T is a primitive type, such as i64, u64, usize, that can be converted to f64, possibly with some loss of accuracy, use:
|&f| f as f64
When T is convertible by an existing custom From implementation (and f64:From<T>, T:Clone have been duly added everywhere as trait bounds), then simply pass in fromop, defined as:
/// Convenience From quantification invocation
pub fn fromop<T: Clone + Into<f64>>(f: &T) -> f64 {
    f.clone().into()
}
The remaining general cases previously required new manual implementations to be written for the (global) From trait for each new type and for each different quantification method, plus adding its
trait bounds everywhere. Even then, the different implementations of From would conflict with each other. Now we can simply implement all the custom quantifications within the closures. This
generality is obtained at the price of a small inconvenience: having to supply one of the above closures argument for the primitive types as well.
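For example, quantifying closures for a &str end type could look as follows. No particular crate method is called here; the closures simply have the &T -> f64 shape that such methods expect:

// Two possible quantifications of &str data, as discussed above.
let by_length = |s: &&str| s.len() as f64;             // word length
let by_first_byte = |s: &&str| s.as_bytes()[0] as f64; // value of the first letter (panics on "")

let words = ["beta", "alpha", "gamma"];
let lengths: Vec<f64> = words.iter().map(by_length).collect();
let firsts: Vec<f64> = words.iter().map(by_first_byte).collect();
assert_eq!(lengths, vec![4.0, 5.0, 5.0]);
assert_eq!(firsts, vec![98.0, 97.0, 103.0]);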
Auxiliary Functions
• fromop: see above.
• sumn: the sum of the sequence 1..n = n*(n+1)/2. It is also the size of a lower/upper triangular matrix.
• tm_stat: (x-centre)/spread. Generalised t-statistic in one dimension (both this and sumn are sketched after this list).
• unit_matrix: - generates full square unit matrix.
• nodata_error, data_error, arith_error, other_error - construct custom RE errors (see section Errors above).
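The two numeric helpers above are equivalent to the following one-liners (restated for illustration, not copied from the crate):

fn sumn(n: usize) -> usize { n * (n + 1) / 2 }                 // items in a triangular matrix
fn tm_stat(x: f64, centre: f64, spread: f64) -> f64 { (x - centre) / spread }

assert_eq!(sumn(4), 10);                  // 1 + 2 + 3 + 4
assert_eq!(tm_stat(7.0, 5.0, 2.0), 1.0);  // one 'spread' above the centre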
Trait Stats
One dimensional statistical measures implemented for all numeric end types.
Its methods operate on one slice of generic data and take no arguments. For example, s.amean()? returns the arithmetic mean of the data in slice s. These methods are checked and will report RError
(s), such as an empty input. This means you have to apply ? to their results to pass the errors up, or explicitly match them to take recovery actions, depending on the error variant.
Included in this trait are:
• 1d medians (classic, geometric and harmonic) and their spreads
• 1d means (arithmetic, geometric and harmonic) and their spreads
• linearly weighted means (useful for time analysis),
• probability density function (pdf)
• autocorrelation, entropy
• linear transformation to [0,1],
• other measures and basic vector algebra operators
Note that fast implementations of 1d 'classic' medians are, as of version 1.1.0, provided in a separate crate medians.
Trait Vecg
Generic vector algebra operations between two slices &[T], &[U] of any (common) length (dimensions). Note that it may be necessary to invoke some using the 'turbofish' ::<type> syntax to indicate the
type U of the supplied argument, as illustrated below.
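A generic illustration of the turbofish syntax (plain standard-library Rust, not an Rstats call; the actual method and argument type depend on the Vecg method being used):

let xs = [1_u8, 2, 3];
// ::<f64> pins down the otherwise ambiguous output type of the generic sum().
let total = xs.iter().map(|&x| x as f64).sum::<f64>();
assert_eq!(total, 6.0);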
Methods implemented by this trait:
• Vector additions, subtractions and products (scalar, Kronecker, outer),
• Other relationships and measures of difference,
• Pearson's, Spearman's and Kendall's correlations,
• Joint pdf, joint entropy, statistical independence (based on mutual information).
• Contribution measure of a point's impact on the geometric median
Note that our median correlation is implemented in a separate crate medians.
Some simpler methods of this trait may be unchecked (for speed), so some caution with data is advisable.
Trait MutVecg
A select few of the Stats and Vecg methods (e.g. mutable vector addition, subtraction and multiplication) are reimplemented under this trait, so that they can mutate self in-place. This is more
efficient and convenient in some circumstances, such as in vector iterative methods.
However, these methods do not fit in with the functional programming style, as they do not explicitly return anything (their calls are statements with side effects, rather than expressions).
Trait Vecu8
Some vector algebra as above that can be more efficient when the end type happens to be u8 (bytes). These methods have u8 appended to their names to avoid confusion with Vecg methods. These specific
algorithms are different to their generic equivalents in Vecg.
• Frequency count of bytes by their values (histogram, pdf, jointpdf)
• Entropy, jointentropy, independence.
Trait VecVec
Relationships between n vectors in d dimensions. This (hyper-dimensional) data domain is denoted here as (nd). It is in nd where the main original contribution of this library lies. True geometric
median (gm) is found by fast and stable iteration, using improved Weiszfeld's algorithm gmedian. This algorithm solves Weiszfeld's convergence and stability problems in the neighbourhoods of existing
set points. Its variant, par_gmedian, employs multithreading for faster execution and gives otherwise the same result.
• centroid, medoid, outliers, gm
• sums of distances, radius of a point (as its distance from gm)
• characterisation of a set of multidimensional points by the mean, standard deviation, median and MAD of its points' radii. These are useful recognition measures for the set.
• transformation to zero geometric median data,
• multivariate trend (regression) between two sets of nd points,
• covariance and comediance matrices.
• inner and outer hulls
Trait VecVecg
Methods which take an additional generic vector argument, such as a vector of weights for computing weighted geometric medians (where each point has its own significance weight), plus some related matrix operations.
Appendix: Recent Releases
• Version 2.1.12 - Some corrections of Readme.md.
• Version 2.1.11 - Some minor tidying up of code.
• Version 2.1.10 - Added project of a TriangMat to a subspace given by a subspace index.
• Version 2.1.9 - Added multiplications and more tests for TriangMat.
• Version 2.1.8 - Improved TriangMat::diagonal(), restored TriangMat::determinant(), tidied up triangmat test.
• Version 2.1.7 - Removed suspect eigen values/vectors computations. Improved 'householder' test.
• Version 2.1.5 - Added projection to trait VecVecg to project all self vectors to a new basis. This can be used e.g. for Principal Components Analysis data reduction, using some of the
eigenvectors as the new basis.
• Version 2.1.4 - Tidied up some error processing.
• Version 2.1.3 - Added normalize (normalize columns of a matrix and transpose them to rows).
• Version 2.1.2 - Added function project to project a TriangMat to a lower dimensional space of selected dimensions. Removed rows which was a duplicate of dim.
• Version 2.1.0 - Changed the type of mid argument to covariance methods from U -> f64, making the normal expectation for the type of precise geometric medians explicit. Accordingly, moved covar
and serial_covar from trait VecVecg to VecVec. This might potentially require changing some use declarations in your code.
• Version 2.0.12 - added depth_ratio
• Version 2.0.11 - removed not so useful variances. Tidied up error processing in vecvecg.rs. Added to it serial_covar and serial_wcovar for when heavy loading of all the cores may not be wanted.
• Version 2.0.9 - Pruned some rarely used methods, simplified gmparts and gmerror, updated dependencies.
• Version 2.0.8 - Changed initial guess in iterative weighted gm methods to weighted mean. This, being more accurate than plain mean, leads to fewer iterations. Updated some dependencies.
• Version 2.0.7 - Updated to ran 2.0.
• Version 2.0.6 - Added convenience method medmad to Stats trait. It packs median and mad into struct Params, similarly to ameanstd and others. Consequently simplified the printouts in some tests.
• Version 2.0.5 - Corrected wsigvec to also return normalized result. Updated dependency Medians to faster version 3.0.1.
• Version 2.0.4 - Made a corresponding change: winsideness -> wdepth.
• Version 2.0.3 - Improved insideness to be projection of a sum of unit vectors instead of just a simple count. Renamed it to depth to avoid confusion. Also some fixes to hulls.
• Version 2.0.2 - Significantly speeded up insideness and added weighted version winsideness to VecVecg trait.
• Version 2.0.1 - Added TriangMat::dim() and tidied up some comments.
• Version 2.0.0 - Renamed MStats -> Params and its variant dispersion -> spread. This may cause some backwards incompatibilities, hence the new major version. Added 'centre' as an argument to
dfdt,dvdt,wdvdt, so that it does not have to be recomputed.
Infinite Limit: Definition, Questions and Discussion, and History
Infinite Limit – Does Sinaumed's like math? If so, what material is your favorite: algebra or limits? Algebra and limits are different, yes, even though both of them involve variables and
various numbers. And don't think that mathematics is only about calculation; it can also be applied in everyday life, you know… So, what is an infinite limit? Who were the great
figures behind the discovery of limits? What are the concept and properties of a limit? Come on, see the following review!
The Concept and Nature of the Infinite Limit
For a simple picture of what a limit is, Sinaumed's can take the following example. At a shop, try grabbing candy from a jar by the handful. On the first grab, you get 5 pieces of candy.
On the second grab, you get 6. On the third grab, 5. On the fourth grab, 7. Finally, on the fifth grab, 6. So
the average over the five grabs is 5.8 pieces, which is almost, but not quite, 6. Now, that "almost close to" is what is analogous to the concept of a limit.
Infinite Limit is a limit concept involving the symbols ∞ and -∞, that is, if the value of the function f(x) increases or decreases without limit or if x increases or decreases without limit. The
first concept is about the limit of the function f at point c for the function f which is limited to an interval containing c. While the second concept is about the limit of the function f for a
variable x that increases without limit (x→∞) or for a variable x that decreases without limit x→-∞), which is known as an infinite limit.
The properties of the limit at a point, the limit of a composition of functions that have limits, and the squeeze (clamping) principle also apply to limits at infinity. The statements of the theorems are exactly the same, except that x→c is replaced by x→∞ or by x→−∞, and the domain of f is adjusted accordingly.
In short, an infinite limit studies the tendency of a function as the value of its variable is made larger and larger. If we say that x goes to infinity, we write x→∞; that is, the value of x grows bigger and bigger without bound.
Infinite Limit Formula
To calculate the tendency of a function that is made bigger, of course you have to use a certain formula. Reporting from edmodo.id , an infinite limit has its own formula depending on its shape.
Infinite Limit Formula with Polynomial Form
A polynomial in the variable x whose highest power is one, if drawn in a Cartesian diagram, forms a straight line. In general, the limit at infinity of a polynomial depends on its highest-power (leading) term: as x grows without bound, that term dominates the behaviour of f(x).
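The displayed formula was an image in the source; the standard statement it refers to can be written as:

$$\lim_{x\to\infty}\left(a_n x^n + a_{n-1}x^{n-1} + \cdots + a_0\right)=\lim_{x\to\infty} a_n x^n=\begin{cases}+\infty, & a_n>0\\ -\infty, & a_n<0\end{cases}\qquad (n\ge 1).$$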
Infinite Limit Formula in Fractional Form
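The formula under this heading was also an image and did not survive extraction; the standard degree-comparison rule for rational functions, which is what such sections state, is:

$$\lim_{x\to\infty}\frac{a_m x^m + \cdots + a_0}{b_n x^n + \cdots + b_0}=\begin{cases}0, & m<n\\ \dfrac{a_m}{b_n}, & m=n\\ \pm\infty, & m>n.\end{cases}$$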
Infinity Limit Formula in Trigonometry Form
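Likewise, the trigonometric formula here was an image; a commonly quoted example of this type, using the squeeze (clamping) principle mentioned earlier, is:

$$-\frac{1}{x}\le \frac{\sin x}{x}\le \frac{1}{x}\quad (x>0)\qquad\Longrightarrow\qquad \lim_{x\to\infty}\frac{\sin x}{x}=0.$$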
Infinite Limit Questions and Discussion
Example Question 1
Example Problem 2
Example Problem 3
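The three worked problems above were images in the original article and were lost in extraction; a representative problem of the same type (not one of the originals), solved with the fractional-form rule above, is:

$$\lim_{x\to\infty}\frac{3x^2+5x}{2x^2-7}=\lim_{x\to\infty}\frac{3+\tfrac{5}{x}}{2-\tfrac{7}{x^2}}=\frac{3}{2}.$$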
Know the History of Limits
Before discussing what an infinite limit is, it would be nice if Sinaumed's knew the history of limits first. Basically, the limit of a function is a fundamental concept in calculus and analysis describing the behavior of a function near a certain input point. A function maps an output f(x) to each input x. The function has a limit L at the input point p when f(x) approaches L as x approaches p. In other words, f(x) gets closer and closer to L as x gets closer and closer to p.
Put another way, if f is applied to any input close enough to p, the result is an output that is arbitrarily close to L. If, on the other hand, inputs close to p are mapped to very different outputs, then it can be said that the function f has no limit there.
Has Sinaumed’s ever wondered what the history of the existence of limits is? Well, it turns out that this definition of limit has been studied since the 19th century, you know…
It should be noted that the history of the development of calculus can be seen from the period of ancient times, medieval times, and modern times. In the ancient period, some ideas about integral
calculus arose, but unfortunately it was not developed properly and systematically. The reason was simple, because at that time there was a lack of knowledge or references related to calculus.
Calculations of volume and area which are the main functions of integral calculus have been traced to and preserved in an Egyptian Moscow Papyrus (1800 BC). Here’s a little trivia, papyrus is a
manuscript of material that resembles thick paper and is usually used for writing media in ancient times. Well, the Moscow Papyrus stated that the Egyptians had been able to calculate the volume of
the pyramid which was later developed again by Archimedes and at the same time created a heuristic that resembled integral calculus.
Continuing in the Middle Ages, an Indian mathematician named Aryabhata used the concept of infinity in 499 and at the same time expressed matters relating to astronomy in the form of basic
differential equations. This equation was developed again by Bhaskara II in the 12th century to become the initial form of the derivative which represented infinitely small changes. This is also the
initial form of Rolle’s Theorem.
Around 1000, there was an Iraqi mathematician named Ibn al-Haytham, who became the first person to derive a formula for the sum of fourth powers, using a method akin to mathematical induction. Ibn
al-Haytham also developed a method for deriving the general formula from the product of the power of the integral, which of course became important in the development of integral calculus. Continuing
in the 12th century, a Persian mathematician named Sharaf al-Din al-Tusi emerged who managed to find the derivative of cubic functions which became important in differential calculus.
Meanwhile, in modern times, independent discoveries emerged, precisely at the beginning of the 17th century in Japan, which were initiated by a mathematician named Seki Kowa. Different countries,
different mathematicians who sparked their discovery of limits. In Europe, there are several mathematicians who have made breakthroughs in calculus material, for example, there are John Wallis and
Isaac Barrow. James Gregory also contributed, publishing a proof of a special case of the fundamental theorem of calculus in 1668.
Some of the other notable experts who encouraged the discovery of this calculus are Leibniz and Newton. These two experts are considered to be the inventors of calculus separately but at about the
same time. Newton applied calculus in general to physics, while Leibniz developed calculus in everyday life. So, when Leibniz and Newton succeeded in publishing the results of their research for the
first time, a controversy arose which “debated” about which mathematician was more deserving of the award. Newton is considered to have completed his research results first, but Leibniz was the first
to publish them. In fact, Newton accused Leibniz of stealing his ideas through his notes, which at that time he often lent to several members of the Royal Society.
So, to solve this problem, a detailed examination was carried out to show that the two mathematicians were indeed working separately, with Leibniz starting from integrals, while Newton started from
derivatives. After the examination, both Newton and Leibniz were declared mathematicians who played a major role in the field of calculus and received awards. Leibniz is considered to be the person
who gave the name to this branch of mathematics namely “Calculus”, while Newton is considered to be the figure who gave the name “The Science of Fluxions”.
Since then, many mathematicians have contributed to the development of calculus, one of whom is Maria Gaetana Agnesi, who in 1748 published a treatment of calculus covering both finite and infinitesimal analysis. Apart from that, there is also Cauchy, who discussed limits in his Cours d'analyse of 1821, a work that came to be regarded as a standard reference for explaining limits.
In general, early calculus was developed by manipulating very small quantities; the objects involved are numbers. An infinitely small number dx can be greater than 0, yet smaller than every number in the sequence 1, ½, ⅓, and so on, and indeed smaller than any positive real number. Moreover, any multiple of an 'infinitely small' number is still 'infinitely small'. In other words, such infinitesimals do not satisfy the Archimedean property. From this point of view, calculus is a set of techniques for manipulating infinitesimals.
In the 19th century, the concept of ‘infinitely small’ was abandoned as unconvincing and replaced by the concept of limit. Limit describes the value of a function at a certain input value with the
result of the closest input value. It is from this point of view that calculus is a set of techniques for manipulating certain limits.
If we analyze it again, the definition of the limit of a function is: "Given a function f(x) defined on an interval around p, with the possible exception of p itself, we say that the limit of f(x) as x approaches p is L", and the notation for this is:
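The displayed formula was an image in the source; the standard notation it showed is presumably:

$$\lim_{x \to p} f(x) = L.$$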
Scientists Contributing to Limits
1. Augustin-Louis Cauchy
Previously, Sinaumed’s must have realized that Cauchy’s name appeared in the history of Limit’s emergence. Yep, he was born on August 21, 1789 in Paris, France, and then died on May 23, 1857 at the
age of 67, which was known as a French mathematician. Cauchy pursued his education at the Ecole Polytechnique, because his health was not so good, he made a career as a professor at the Ecole
Polytechnique and the College de France.
Although calculus was invented around the 17th century, its basics remained muddled and messy until Cauchy and his colleagues conducted further research.
2. Sir Isaac Newton
Newton, besides being a physicist, was also an English mathematician, astronomer, natural philosopher, and theologian, born on January 4, 1643, who died on March 31, 1727 at the age of 84. He was a follower of the heliocentric school and became one of the most influential scientists in history. Newton is also called the "Father of Classical Physics".
His book Philosophiæ Naturalis Principia Mathematica , published in 1687, is considered the most influential book in the history of science, because it discusses the foundations of classical
mechanics. In the book, Newton helped to describe the law of gravity and the three laws of motion that dominated the scientific view of the universe for three centuries. Newton also managed to show
that the motion of objects on Earth and other celestial bodies is governed by the same set of natural laws. Newton became a figure who was able to prove the relationship between Kepler’s laws of
planetary motion and his theory of gravity.
3. Gottfried Wilhelm Leibniz
Leibniz was a German philosopher born on July 1, 1646, famous for the concept of Theodicy that he created. This doctrine holds that humans live in the best of all possible worlds because this world was created by a perfect God. Apart from being a philosopher, he was also a mathematician, diplomat, physicist, historian, and held a doctorate in church law. His contribution to science was enormous, and many journals and manuscripts were published under his name.
Also Read!
• The formula for the surface area of a block and examples of problems
• Understanding and Steps to Determine Rotational Symmetry in Flat Figures
• Definition of Inverse Matrix and its terms
• Definition, Functions, Formulas, and Examples of Problems from Logarithms
• What is Commutative Properties and Example Problems
• The Distributive Property As A Way Of Solving Equations
• Definition of Constants, Variables, and Terms Accompanied by Example Problems
• Characteristics of Beams and Discussion of Problems
• Formulas for Calculating Volume, Surface Area, and Base Circumference in Cylindrical Constructs
• Who Invented Zero?
• How to Convert Common Fractions to Decimals
• The Characteristics and Properties of Flat Shapes
• How to Calculate the Volume of a Block
• Getting to Know the Types of Angles | {"url":"https://sinaumedia.com/infinite-limit-definition-questions-and-discussion-and-history/","timestamp":"2024-11-10T11:52:26Z","content_type":"text/html","content_length":"151397","record_id":"<urn:uuid:b83a973c-2ff8-45f9-8b33-7d518aea8d64>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00697.warc.gz"} |
Math Quiz: Calculator Required for a Perfect Score
Quiz: Nobody Can Pass This Math Quiz Without Grabbing A Calculator
Nature never uses prime numbers. But mathematicians do. No One Can Pass This Math Quiz Without Grabbing A Calculator
Are you a math whiz who can solve complex equations in your head? Or do you rely on a calculator to do the heavy lifting? Either way, this math quiz is sure to put your skills to the test.
With a mix of algebra, geometry, and trigonometry questions, this quiz is designed to challenge even the most seasoned math enthusiasts. But there's a catch - you're not allowed to use a calculator.
That's right, you'll have to rely on your mental math skills to solve each problem. From finding the area of a triangle to solving for x in an equation, this quiz covers a wide range of topics that
will test your knowledge and problem-solving abilities.
But don't worry, even if you're not a math genius, this quiz is still a great way to brush up on your skills and learn something new. And who knows, you might surprise yourself with how much you
remember from your high school math classes.
So, are you ready to take on the challenge? Grab a pencil and a piece of paper, and let's get started!
1. What is "Nobody This Math Without Grabbing A Calculator"?
"Nobody This Math Without Grabbing A Calculator" is a phrase that emphasizes the importance of using a calculator when doing math. It suggests that attempting to solve complex math problems without a
calculator can be difficult or even impossible.
2. Why is using a calculator important in math?
Using a calculator can help simplify complex calculations and reduce the risk of errors. It can also save time and allow for more efficient problem-solving. However, it's important to note that
calculators should be used as a tool to aid in math, not as a replacement for understanding mathematical concepts.
3. Can anyone do math without a calculator?
Yes, it's possible to do math without a calculator, but it may be more challenging for some individuals. Basic arithmetic and simple calculations can be done mentally or with the use of paper and
pencil. However, for more complex calculations, a calculator can be a helpful tool to ensure accuracy and efficiency. | {"url":"https://quizzino.com/quiz-nobody-can-pass-this-math-quiz-without-grabbing-a-calculator/","timestamp":"2024-11-06T17:25:38Z","content_type":"text/html","content_length":"111367","record_id":"<urn:uuid:285d9c64-439d-4afd-88da-33fac076f767>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00149.warc.gz"} |
16516 KAWR YPR Express Train Time Table
If we closely look at the 16516 train time table, travelling by 16516 KAWR YPR Express gives us a chance to explore the following cities in a quick view as they come along the route.
1. Karwar
It is the 1st station in the train route of 16516 KAWR YPR Express. The station code of Karwar is KAWR. The departure time of train 16516 from Karwar is 05:30. The next stopping station is Ankola at
a distance of 28km.
2. Ankola
It is the 2nd station in the train route of 16516 KAWR YPR Express at a distance of 28 Km from the source station Karwar. The station code of Ankola is ANKL. The arrival time of 16516 at Ankola is
05:48. The departure time of train 16516 from Ankola is 05:50. The total halt time of train 16516 at Ankola is 2 minutes. The previous stopping station, Karwar is 28km away. The next stopping station
is Gokarna Road Halt at a distance of 8km.
3. Gokarna Road Halt
It is the 3rd station in the train route of 16516 KAWR YPR Express at a distance of 36 Km from the source station Karwar. The station code of Gokarna Road Halt is GOK. The arrival time of 16516 at
Gokarna Road Halt is 05:56. The departure time of train 16516 from Gokarna Road Halt is 05:58. The total halt time of train 16516 at Gokarna Road Halt is 2 minutes. The previous stopping station,
Ankola is 8km away. The next stopping station is Kumta at a distance of 19km.
4. Kumta
It is the 4th station in the train route of 16516 KAWR YPR Express at a distance of 55 Km from the source station Karwar. The station code of Kumta is KT. The arrival time of 16516 at Kumta is 06:10.
The departure time of train 16516 from Kumta is 06:12. The total halt time of train 16516 at Kumta is 2 minutes. The previous stopping station, Gokarna Road Halt is 19km away. The next stopping
station is Honnavar at a distance of 14km.
5. Honnavar
It is the 5th station in the train route of 16516 KAWR YPR Express at a distance of 69 Km from the source station Karwar. The station code of Honnavar is HNA. The arrival time of 16516 at Honnavar is
06:26. The departure time of train 16516 from Honnavar is 06:28. The total halt time of train 16516 at Honnavar is 2 minutes. The previous stopping station, Kumta is 14km away. The next stopping
station is Murdeshwar at a distance of 26km.
6. Murdeshwar
It is the 6th station in the train route of 16516 KAWR YPR Express at a distance of 95 Km from the source station Karwar. The station code of Murdeshwar is MRDW. The arrival time of 16516 at
Murdeshwar is 06:50. The departure time of train 16516 from Murdeshwar is 06:52. The total halt time of train 16516 at Murdeshwar is 2 minutes. The previous stopping station, Honnavar is 26km away.
The next stopping station is Bhatkal at a distance of 15km.
7. Bhatkal
It is the 7th station in the train route of 16516 KAWR YPR Express at a distance of 110 Km from the source station Karwar. The station code of Bhatkal is BTJL. The arrival time of 16516 at Bhatkal is
07:06. The departure time of train 16516 from Bhatkal is 07:08. The total halt time of train 16516 at Bhatkal is 2 minutes. The previous stopping station, Murdeshwar is 15km away. The next stopping
station is Byndoor at a distance of 15km.
8. Byndoor
It is the 8th station in the train route of 16516 KAWR YPR Express at a distance of 125 Km from the source station Karwar. The station code of Byndoor is BYNR. The arrival time of 16516 at Byndoor is
07:22. The departure time of train 16516 from Byndoor is 07:24. The total halt time of train 16516 at Byndoor is 2 minutes. The previous stopping station, Bhatkal is 15km away. The next stopping
station is Kundapura at a distance of 34km.
9. Kundapura
It is the 9th station in the train route of 16516 KAWR YPR Express at a distance of 159 Km from the source station Karwar. The station code of Kundapura is KUDA. The arrival time of 16516 at
Kundapura is 08:14. The departure time of train 16516 from Kundapura is 08:16. The total halt time of train 16516 at Kundapura is 2 minutes. The previous stopping station, Byndoor is 34km away. The
next stopping station is Udupi at a distance of 31km.
10. Udupi
It is the 10th station in the train route of 16516 KAWR YPR Express at a distance of 190 Km from the source station Karwar. The station code of Udupi is UD. The arrival time of 16516 at Udupi is
08:38. The departure time of train 16516 from Udupi is 08:40. The total halt time of train 16516 at Udupi is 2 minutes. The previous stopping station, Kundapura is 31km away. The next stopping
station is Surathkal at a distance of 43km.
11. Surathkal
It is the 11th station in the train route of 16516 KAWR YPR Express at a distance of 233 Km from the source station Karwar. The station code of Surathkal is SL. The arrival time of 16516 at Surathkal
is 09:40. The departure time of train 16516 from Surathkal is 09:42. The total halt time of train 16516 at Surathkal is 2 minutes. The previous stopping station, Udupi is 43km away. The next stopping
station is Mangalore Junction at a distance of 23km.
12. Mangalore Junction
It is the 12th station in the train route of 16516 KAWR YPR Express at a distance of 256 Km from the source station Karwar. The station code of Mangalore Junction is MAJN. The arrival time of 16516
at Mangalore Junction is 11:10. The departure time of train 16516 from Mangalore Junction is 11:30. The total halt time of train 16516 at Mangalore Junction is 20 minutes. The previous stopping
station, Surathkal is 23km away. The next stopping station is Bantawala at a distance of 20km.
13. Bantawala
It is the 13th station in the train route of 16516 KAWR YPR Express at a distance of 276 Km from the source station Karwar. The station code of Bantawala is BNTL. The arrival time of 16516 at
Bantawala is 11:59. The departure time of train 16516 from Bantawala is 12:00. The total halt time of train 16516 at Bantawala is 1 minutes. The previous stopping station, Mangalore Junction is 20km
away. The next stopping station is Kabaka Puttur at a distance of 24km.
14. Kabaka Puttur
It is the 14th station in the train route of 16516 KAWR YPR Express at a distance of 300 Km from the source station Karwar. The station code of Kabaka Puttur is KBPR. The arrival time of 16516 at
Kabaka Puttur is 12:25. The departure time of train 16516 from Kabaka Puttur is 12:27. The total halt time of train 16516 at Kabaka Puttur is 2 minutes. The previous stopping station, Bantawala is
24km away. The next stopping station is Subrahmanya Road at a distance of 42km.
15. Subrahmanya Road
It is the 15th station in the train route of 16516 KAWR YPR Express at a distance of 342 Km from the source station Karwar. The station code of Subrahmanya Road is SBHR. The arrival time of 16516 at
Subrahmanya Road is 13:05. The departure time of train 16516 from Subrahmanya Road is 13:15. The total halt time of train 16516 at Subrahmanya Road is 10 minutes. The previous stopping station,
Kabaka Puttur is 42km away. The next stopping station is Sakleshpur at a distance of 56km.
16. Sakleshpur
It is the 16th station in the train route of 16516 KAWR YPR Express at a distance of 398 Km from the source station Karwar. The station code of Sakleshpur is SKLR. The arrival time of 16516 at
Sakleshpur is 15:35. The departure time of train 16516 from Sakleshpur is 15:45. The total halt time of train 16516 at Sakleshpur is 10 minutes. The previous stopping station, Subrahmanya Road is
56km away. The next stopping station is Hassan at a distance of 42km.
17. Hassan
It is the 17th station in the train route of 16516 KAWR YPR Express at a distance of 440 Km from the source station Karwar. The station code of Hassan is HAS. The arrival time of 16516 at Hassan is
16:45. The departure time of train 16516 from Hassan is 16:50. The total halt time of train 16516 at Hassan is 5 minutes. The previous stopping station, Sakleshpur is 42km away. The next stopping
station is Channarayapatna at a distance of 33km.
18. Channarayapatna
It is the 18th station in the train route of 16516 KAWR YPR Express at a distance of 473 Km from the source station Karwar. The station code of Channarayapatna is CNPA. The arrival time of 16516 at
Channarayapatna is 17:24. The departure time of train 16516 from Channarayapatna is 17:25. The total halt time of train 16516 at Channarayapatna is 1 minutes. The previous stopping station, Hassan is
33km away. The next stopping station is Shravanabelagola at a distance of 8km.
19. Shravanabelagola
It is the 19th station in the train route of 16516 KAWR YPR Express at a distance of 481 Km from the source station Karwar. The station code of Shravanabelagola is SBGA. The arrival time of 16516 at
Shravanabelagola is 17:34. The departure time of train 16516 from Shravanabelagola is 17:35. The total halt time of train 16516 at Shravanabelagola is 1 minutes. The previous stopping station,
Channarayapatna is 8km away. The next stopping station is B.g.nagar at a distance of 34km.
20. B.g.nagar
It is the 20th station in the train route of 16516 KAWR YPR Express at a distance of 515 Km from the source station Karwar. The station code of B.g.nagar is BGNR. The arrival time of 16516 at
B.g.nagar is 18:04. The departure time of train 16516 from B.g.nagar is 18:05. The total halt time of train 16516 at B.g.nagar is 1 minutes. The previous stopping station, Shravanabelagola is 34km
away. The next stopping station is Yediyuru at a distance of 15km.
21. Yediyuru
It is the 21st station in the train route of 16516 KAWR YPR Express at a distance of 530 Km from the source station Karwar. The station code of Yediyuru is YY. The arrival time of 16516 at Yediyuru
is 18:24. The departure time of train 16516 from Yediyuru is 18:25. The total halt time of train 16516 at Yediyuru is 1 minutes. The previous stopping station, B.g.nagar is 15km away. The next
stopping station is Kunigal at a distance of 17km.
22. Kunigal
It is the 22nd station in the train route of 16516 KAWR YPR Express at a distance of 547 Km from the source station Karwar. The station code of Kunigal is KIGL. The arrival time of 16516 at Kunigal
is 18:39. The departure time of train 16516 from Kunigal is 18:40. The total halt time of train 16516 at Kunigal is 1 minutes. The previous stopping station, Yediyuru is 17km away. The next stopping
station is Nelemangala at a distance of 45km.
23. Nelemangala
It is the 23rd station in the train route of 16516 KAWR YPR Express at a distance of 592 Km from the source station Karwar. The station code of Nelemangala is NMGA. The arrival time of 16516 at
Nelemangala is 19:13. The departure time of train 16516 from Nelemangala is 19:14. The total halt time of train 16516 at Nelemangala is 1 minutes. The previous stopping station, Kunigal is 45km away.
The next stopping station is Chik Banavar at a distance of 13km.
24. Chik Banavar
It is the 24th station in the train route of 16516 KAWR YPR Express at a distance of 605 Km from the source station Karwar. The station code of Chik Banavar is BAW. The arrival time of 16516 at Chik
Banavar is 19:29. The departure time of train 16516 from Chik Banavar is 19:30. The total halt time of train 16516 at Chik Banavar is 1 minutes. The previous stopping station, Nelemangala is 13km
away. The next stopping station is Yasvantpur Jn at a distance of 8km.
25. Yasvantpur Jn
It is the 25th station in the train route of 16516 KAWR YPR Express at a distance of 613 Km from the source station Karwar. The station code of Yasvantpur Jn is YPR. The arrival time of 16516 at
Yasvantpur Jn is 20:45. The previous stopping station, Chik Banavar is 8km away.
Trainspnrstatus is one of the best websites for checking train running status. You can find the 16516 KAWR YPR Express running status here.
Mangalore Junction, Subrahmanya Road, Sakleshpur and Hassan are major halts where KAWR YPR Express stops for more than five minutes. Getting hotel accommodation and cab facilities in these cities is easy.
Trainspnrstatus is a one-stop portal for checking PNR status. You can find the 16516 KAWR YPR Express IRCTC and Indian Railways PNR status here. All you have to do is to enter your 10 digit PNR
number in the form. PNR number is printed on the IRCTC ticket.
Train number of KAWR YPR Express is 16516. You can check the entire KAWR YPR Express train schedule here, with important details like arrival and departure times.
16516 train schedule KAWR YPR Express train time table KAWR YPR Express ka time table KAWR YPR Express kitne baje hai KAWR YPR Express ka number16516 train time table | {"url":"https://www.trainspnrstatus.com/train-schedule/16516","timestamp":"2024-11-03T14:05:38Z","content_type":"text/html","content_length":"48942","record_id":"<urn:uuid:9391019d-0599-4db6-bcee-89dca6ccc29d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00798.warc.gz"} |
The Stacks project
Lemma 32.11.4. Let $i : Z \to X$ be a closed immersion of schemes inducing a homeomorphism of underlying topological spaces. Let $\mathcal{L}$ be an invertible sheaf on $X$. Then $i^*\mathcal{L}$ is
ample on $Z$, if and only if $\mathcal{L}$ is ample on $X$.
Comments (2)
Comment #6733 by hao on
A typo in the third paragraph of proof 09MW: $Z_i$ should be the "image" of the morphism $Z\to X_i$ instead of $Z\to X$.
Comment #6923 by Johan on
Thanks and fixed here.
{"url":"https://stacks.math.columbia.edu/tag/09MW","timestamp":"2024-11-07T03:09:05Z","content_type":"text/html","content_length":"17734","record_id":"<urn:uuid:77938250-4d92-4bc5-b41a-766cb80de958>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00574.warc.gz"}
Dynamic Connectivity in Disk Graphs
Let S ⊆ R^2 be a set of n planar sites, such that each s ∈ S has an associated radius r_s > 0. Let D(S) be the disk intersection graph for S. It has vertex set S and an edge between two distinct sites s, t ∈ S if and only if the disks with centers s, t and radii r_s, r_t intersect. Our goal is to design data structures that maintain the connectivity structure of D(S) as sites are inserted and/or deleted. First, we consider unit disk graphs, i.e., r_s = 1 for all s ∈ S. We describe a data structure that has O(log^2 n) amortized update and O(log n / log log n) amortized query time. Second, we look at disk graphs with bounded radius ratio Ψ, i.e., for all s ∈ S, we have 1 ≤ r_s ≤ Ψ, for a Ψ ≥ 1 known in advance. In the fully dynamic case, we achieve amortized update time O(Ψ λ_6(log n) log^7 n) and query time O(log n / log log n), where λ_s(n) is the maximum length of a Davenport-Schinzel sequence of order s on n symbols. In the incremental case, where only insertions are allowed, we get logarithmic dependency on Ψ, with O(α(n)) query time and O(log Ψ λ_6(log n) log^7 n) update time. For the decremental setting, where only deletions are allowed, we first develop an efficient disk revealing structure: given two sets R and B of disks, we can delete disks from R, and upon each deletion, we receive a list of all disks in B that no longer intersect the union of R. Using this, we get decremental data structures with amortized query time O(log n / log log n) that support m deletions in O((n log^5 n + m log^7 n) λ_6(log n) + n log Ψ log^4 n) overall time for bounded radius ratio Ψ and O((n log^6 n + m log^8 n) λ_6(log n)) for arbitrary radii.
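The paper's data structures are far more involved; as a point of reference only, a minimal quadratic-time baseline for the static problem builds D(S) by pairwise disk-intersection tests and answers connectivity queries with a union-find. The names and the use of Python below are illustrative and not taken from the paper.

```python
# Naive baseline: static disk-graph connectivity via union-find.
# Two disks (centers p, q; radii rp, rq) intersect iff |p - q| <= rp + rq.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def disk_graph_components(sites):
    """sites: list of (x, y, r) with r > 0."""
    n = len(sites)
    uf = UnionFind(n)
    for i in range(n):
        xi, yi, ri = sites[i]
        for j in range(i + 1, n):
            xj, yj, rj = sites[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= (ri + rj) ** 2:
                uf.union(i, j)
    return uf

# Example: three unit disks, the first two touching, the third far away.
uf = disk_graph_components([(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (10.0, 0.0, 1.0)])
print(uf.find(0) == uf.find(1), uf.find(0) == uf.find(2))  # True False
```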
Publication series
Name Leibniz International Proceedings in Informatics, LIPIcs
Volume 224
ISSN (Print) 1868-8969
Conference 38th International Symposium on Computational Geometry, SoCG 2022
Country/Territory Germany
City Berlin
Period 7/06/22 → 10/06/22
Bibliographical note
Publisher Copyright:
© Haim Kaplan, Alexander Kauer, Katharina Klost, Kristin Knorr, Wolfgang Mulzer, Liam Roditty, and Paul Seiferth; licensed under Creative Commons License CC-BY 4.0
Funding Supported in part by grant 1367/2016 from the German-Israeli Science Foundation (GIF). Haim Kaplan: Partially supported by ISF grant 1595/19 and by the Blavatnik research foundation.
Alexander Kauer: Supported in part by grant 1367/2016 from the German-Israeli Science Foundation (GIF), by the German Research Foundation within the collaborative DACH project Arrangements and
Drawings as DFG Project MU 3501/3-1, and by ERC StG 757609. Kristin Knorr: Supported by the German Science Foundation within the research training group “Facets of Complexity” (GRK 2434). Wolfgang
Mulzer: Supported in part by ERC StG 757609.
Funders Funder number
Blavatnik research foundation
German–Israeli Science Foundation
Horizon 2020 Framework Programme 757609
European Commission
Deutsche Forschungsgemeinschaft MU 3501/3-1, GRK 2434
Israel Science Foundation 1595/19
• Connectivity
• Disk Graphs
• Lower Envelopes
Dive into the research topics of 'Dynamic Connectivity in Disk Graphs'. Together they form a unique fingerprint. | {"url":"https://cris.biu.ac.il/en/publications/dynamic-connectivity-in-disk-graphs","timestamp":"2024-11-08T22:03:14Z","content_type":"text/html","content_length":"62835","record_id":"<urn:uuid:4d8fe2d8-1881-4d5a-a74a-7c2a68d4aa14>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00770.warc.gz"} |
PDF Cylindrical Quantum Dots, Spin Qubits, and Decoherence - ERNET
We show that this can be achieved by manipulating only the strength of the external magnetic field. We also see how the decoherence process depends on the nature of the interaction included in the model. [Fragment of the list of figures: Fig. 2.2, L.H.S. and R.H.S. of Eq. (2.19) as a function of a_1; Fig. 2.3, variation of the electron probability density with magnetic field strength; Fig. 2.4, variation of radial energy levels with magnetic field strength.]
Quantum States
Since the position basis is not special as far as the LVS of a quantum system is concerned, similar superpositions also exist in other basis representations of the state vector. Therefore, if the
quantum state is prepared in a linear superposition of two energy eigenstates, for example |1⟩ and |2⟩, the time evolution of the state can be written as:
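The equation itself did not survive extraction; the standard form implied here, for amplitudes a, b and eigenenergies E_1, E_2, is:

$$|\psi(t)\rangle = a\,e^{-iE_1 t/\hbar}\,|1\rangle + b\,e^{-iE_2 t/\hbar}\,|2\rangle.$$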
Two-electron System
Because the Schrödinger equation is linear in the wave function, the state of a two-electron system can be represented by a linear combination of product states. For two-electron states, the required antisymmetry of the total wave function means that the orbital and spin parts must have definite but opposite symmetries.
Quantum Information Technology
It was Feynman who pointed out that simulating quantum mechanics on a classical computer can be computationally expensive. On the other hand, Benioff showed how a classical Turing machine could be simulated
by the reversible unitary evolution of a quantum process [7].
Semiconductor Quantum Dots
Loss-DiVincenzo Proposal
One of the proposals was to use vertically tunneled quantum dots in the presence of external electric and magnetic fields [41]. An experimental demonstration of the coherent control over such an
encoded qubit is also reported in the literature [43].
Since the microscopic state of the environment is inaccessible to us, the phenomenon of decoherence is irreversible. Decoherence is an extremely fast process for macroscopic objects since the number
of degrees of freedom of the environment interacting with such objects is very high.
Outline of the Thesis
We show how the states and energy levels depend on the strength of the magnetic field for different quantum numbers. In Chapter 3 we study the confinement of a single electron in the hydrogen
impurity potential in the presence of a magnetic field.
Some of such explorations using cylindrical quantum dot structure in the presence of a magnetic field are listed in the references. The value of quantum number n corresponds to the number of nodes in
the radial part of the wave function.
Figure 2.1: Cylindrical quantum dot potential
Results and Discussion
Here, ωc is the cyclotron frequency and is directly proportional to the magnetic field strength. In the case of a large diameter dot, the merging of energy levels occurs immediately with an increase
in the strength of the magnetic field.
Figure 2.2: Graphical plot of L.H.S. and R.H.S. of Eq. (2.19) as a function of a_1
In some of the cases, the system can be treated approximately as a two-dimensional problem. A hydrogen impurity will lead to a logarithmic potential in a strictly two-dimensional world due to Gauss's
law in two dimensions.
Theory and Numerical Procedure
Matrix Method
Here rmax is a large enough value of r that the wave function can for all practical purposes be taken to be zero. The matrix method with higher order accuracy in approximating the second derivative
term is also considered in our research. Thus, we look for a method which can include a non-uniform grid with more samples in the region near r = 0.
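As an illustration of the matrix method described above (this is not the thesis's code, and the potential, grid size and units are assumed for the example), one can discretize the second derivative on a uniform grid and diagonalize the resulting tridiagonal Hamiltonian:

```python
import numpy as np

def radial_eigenstates(V, r_max=20.0, n=1500, n_states=3):
    """Finite-difference (matrix) method for u''(r) = 2[V(r) - E] u(r),
    with u(0) = u(r_max) = 0 (atomic-like units assumed)."""
    r = np.linspace(0.0, r_max, n + 2)[1:-1]      # interior grid points
    h = r[1] - r[0]
    # Tridiagonal kinetic term: -(1/2) d^2/dr^2 with a 3-point stencil.
    main = 1.0 / h**2 + V(r)
    off = -0.5 / h**2 * np.ones(n - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    E, U = np.linalg.eigh(H)
    return E[:n_states], U[:, :n_states], r

# Example with a harmonic confinement V(r) = r^2 / 2 (illustrative only).
E, U, r = radial_eigenstates(lambda r: 0.5 * r**2)
print(E)   # lowest few radial eigenvalues
```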
Figure 3.1: Comparison of analytical (dashed) and numerical (continuous) solu- solu-tion based on matrix method
Numerov’s Shooting Method
It should also be noted that the greater the number of nodes in a wave function, the greater the corresponding eigenenergy will be. The number of nodes in the function fL(r) is counted by tracking
the number of times the function changes sign until it reaches the intermediate point r_c that we have identified. It is noticed that this solution agrees less closely with the exact solution than the one obtained with the matrix method of the previous subsection.
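The Numerov recursion used in such shooting methods is standard; a minimal sketch (illustrative, not the thesis's code) for advancing u''(r) = f(r) u(r) on a uniform grid is:

```python
def numerov_step(u_prev, u_curr, f_prev, f_curr, f_next, h):
    """One Numerov step for u''(r) = f(r) u(r) on a uniform grid of spacing h.
    The local truncation error is O(h^6)."""
    c_prev = 1.0 - h**2 * f_prev / 12.0
    c_curr = 1.0 + 5.0 * h**2 * f_curr / 12.0
    c_next = 1.0 - h**2 * f_next / 12.0
    return (2.0 * u_curr * c_curr - u_prev * c_prev) / c_next
```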
Logarithmic Grid
Figure 3.5 shows the radial plot of the ground state wave function for different external magnetic flux densities. This is because the rate of increase in the energy of a state depends on the value
of l. The variation of the excited state wave functions with respect to the magnetic field is shown in Fig.
Figure 3.3: Ground state solution for Eq. 3.17. The plot is in arbitrary units.
For a dot with a very small height-to-diameter ratio, we can safely assume that electrons are confined to the ground-state subband in the z direction and thus the problem reduces to the case of a
two-dimensional circular disk. In the present work, we use linear variational analysis where the trial wavefunction is constructed from single-electron harmonic oscillator wavefunctions.
Theory and Procedure
For each case, the linear variational analysis reduces to solving an eigenvalue problem of the usual generalized form. Now the matrix [Z] is diagonal due to the orthonormality of the orbital part of the basis wave functions. Similarly, many of the elements in [W] are zeros due to the orthogonality condition when m1 ≠ m3 or m2 ≠ m4.
Results and Discussion
The energy spectrum obtained for different quantum numbers is plotted as a function of the external magnetic field strength in Figure. Similarly, singlet-triplet crossing is observed for the ground
state, with the increase of external magnetic field. We also mapped the radial electron densities and pair correlation functions for different quantum numbers (M, s) and magnetic field strengths.
Table 4.1: Dimensions of two-electron basis set for different M values where V 0 is the step-size and k is a dimensionless number whose value we need to determine
Due to the inherent antisymmetry in the orbital wave function of the triplet states, f_pc(0) = 0, because no two electrons in such states can be at the same place. This can be considered a very good approximation, as the changes in effective mass and permittivity are known to be very small with respect to material composition. In the case of self-assembled quantum dots, experimental results
indicate a violation of the general Kohn theorem, which indicates discontinuities in the confinement potential [124; 125].
Figure 4.3: Left column: Radial electron density (ED), η(r, φ = 0); Right column:
Heitler-London Approximation
If ϕ_L(~r) is the ground state of the left QD and ϕ_R(~r) is the ground state of the right QD, the orbital part of the two-electron wave function is written as a symmetric or antisymmetric combination of the two (sketched below). In the above expression, the single-electron ground-state energy of one dot appears, and V_r(r) = V(r) − V_L(r) and V_l(r) = V(r) − V_R(r) are the residual potentials. Shifting the ground-state orbital of one particle left and right by the transformation (x, y) → (x ± a, y) will also change the scale.
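The combination referred to is, in the standard Heitler-London form (the thesis's exact normalization and notation may differ):

$$\Psi_{\pm}(\vec r_1,\vec r_2)=\frac{\varphi_L(\vec r_1)\,\varphi_R(\vec r_2)\pm\varphi_R(\vec r_1)\,\varphi_L(\vec r_2)}{\sqrt{2\,(1\pm|S|^2)}},\qquad S=\langle\varphi_L|\varphi_R\rangle,$$

with the plus sign giving the orbital part of the spin singlet and the minus sign that of the spin triplet.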
Weinbaum Approximation
{C} is a column matrix corresponding to the linear variational parameters to be determined, and E is the unknown energy eigenvalue. [G] is the overlap matrix written in the same basis. We expect that introducing doubly occupied states into the analysis improves the value of the exchange energy.
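The eigenvalue problem referred to (its explicit form was lost in extraction) is presumably the standard generalized one,

$$[H]\{C\}=E\,[G]\{C\},$$

solved for the energies E and the coefficient vectors {C}.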
Results and Discussion
The separation between dots 2a = 1.4 r_0 in the H-L method (dotted) and the Weinbaum method (solid) is taken into account here. Here, the dot-to-dot separation of 2a = 1.6 r_B is considered for the
H-L method (dotted) and the Weinbaum method (solid). It can also be seen that the magnetic field at which the singlet-triplet crossing occurs is also a function of the dot separation.
Figure 5.2: The variation of singlet (continuous) and triplet (dashed) energy levels [left] and exchange interaction coefficient [right] for coupled heterostructured QD as a function of magnetic field
Application in Quantum Computation
But for smaller QD radius, the singlet-triplet transition occurs at larger values of external magnetic field. This can be done by operating the system at a magnetic field strength, Bc, where
singlet and triplet states have the same energy, then switching the field to a non-zero exchange interaction energy, and then allowing the system to evolve for a sufficient time, τg. The different
graphs correspond to the exchange interaction between the lowest singlet state and the three non-degenerate triplet states, viz. T_+, T_0 and T_−.
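For context, the standard Loss-DiVincenzo relation (not an equation quoted from this thesis) for how such an exchange pulse acts on the two spins is:

$$U=\exp\!\left(-\frac{i}{\hbar}\int J(t)\,\vec S_1\cdot\vec S_2\,dt\right),$$

which is a full SWAP of the two spin states (up to a phase) when $\int J(t)\,dt/\hbar=\pi$ (mod $2\pi$), and the entangling $\sqrt{\mathrm{SWAP}}$ gate for half that pulse area.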
Figure 5.6: The variation of exchange interaction coefficient for coupled het- het-erostructured QDs as a function of magnetic field strength
In most applications we are only interested in the quantum state of the electrons and not in the state of the environment around them. Here, the first term represents the Hamiltonian of the central
spin in the absence of any interaction with the spin bath. The second term represents the contribution to the Hamiltonian due to the interaction with the nuclear spins in the spin bath, where J_k indicates the strength of the hyperfine interaction between the central electron spin and the kth nuclear spin of the bath.
Here a and b are the complex probability amplitudes for the electron spin in the up (|0⟩) and down (|1⟩) states, and c_n is the probability amplitude of the product state |n⟩ of the nuclear spin pool containing N nuclear spins. The square of the total standard deviation is equal to the sum of the squares of the individual standard deviations, as given in the corresponding equation. In the following, we consider three distinct quantum dot geometries and analyze how Λ^2 depends on the dimension of the nuclear spin bath.
Results and Discussion
This is due to quantum confinement effects, as can be directly verified from the expressions for J(~r) in each case. The solid-thick, dashed, dotted, and continuous-thin curves correspond to u = 0, u = 0.2, u = 1, and u = 5, respectively. One notices complete decoherence, since the initial state is not an eigenstate of the system Hamiltonian, nor of the interaction term. This is because our current initial state is the eigenstate of the interaction Hamiltonian, which is the only term governing the evolution of the system when u = 0.
Figure 6.2: The time development of decoherence for the initial state |ψ_S(0)⟩ (caption truncated)
The main physical mechanisms that limit the coherence time of spin qubits are spin-orbit interactions and interactions with nuclear spins. As a final problem in this work, we study the decoherence of
two electron spin qubits in a coupled quantum dot due to their interaction with the surrounding nuclear spin bath. We use an extended version of the simplified model discussed in the previous chapter
on single-electron decoherence.
Characteristics of two-qubit density matrices
From the two-qubit density matrix one can evaluate quantitative measures such as the purity ς(t) = Tr ρ_AB^2(t) and the concurrence. If the elements of the matrix are functions of time, then the polarization components, as well as these quantitative measures, are functions of time too. The plot of ς(t) with respect to time can tell us how the quantum coherence of the two-qubit state is lost as a function of time.
Continuum Limit
Therefore, if we assume that all individual nuclei in the nuclear bath possess the same statistics, the qubit-bath interaction energies of the two electrons follow the same distribution. In such cases, the qubit-bath interaction energies are completely correlated, that is, we can define A = B. Now we can replace the summation in Eq. (7.7) with an integration over A and B, the qubit-bath interaction energy variables for electron A and electron B. The limits of integration run over the allowed range of these interaction energies. Thus, the polarization components of the two-qubit density matrix at time t follow as in Eq. (7.17), where 2^N f(A, B, t) dA dB is the number of bath product states with the first qubit-bath interaction energy between A and A+dA and the second qubit-bath interaction energy between B and B+dB.
Results and Discussion
When J_ex = 0, the qubit-bath interaction energies A and B can be considered to be independent of each other. Since this is a linear combination of two triplet states, we have only one variable of the
qubit-bath interaction energy. This is because the value of Λ, the standard deviation of the qubit-bath interaction energy distribution function, depends on the spread of the wave function.
Figure 7.2: Concurrence as a function of time when J_ex = 0.5 and the initial state is |↑↓⟩.
We also plotted the variation of energy levels with the flux density of the applied magnetic field. In the last part of the thesis, we studied the decoherence of spin qubits in quantum dots due to
their hyperfine interaction with the surrounding nuclear bath environment. The decoherence process therefore depends on the initial state of the system as well as the relative strengths of different
terms in the Hamiltonian. | {"url":"https://azpdf.net/in/docs/pdf-cylindrical-quantum-dots-spin-qubits-decoherence-ernet.10432214","timestamp":"2024-11-10T06:04:42Z","content_type":"text/html","content_length":"160845","record_id":"<urn:uuid:33a29bf5-5ee3-4649-8268-16019d91f629>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00067.warc.gz"} |
How to draw an ellipseHow to draw an ellipse 🚩 the correct ellipse 🚩 Math.
The first and easiest method of drawing the ellipse.
Spend two mutually perpendicular lines. From the point of intersection the compass, draw two different-sized circles: the diameter of the smaller circle equal to a given width of the ellipse or the
shorter axis, the diameter of the majority of the length of the ellipse major axis.
Divide a large circle into twelve equal parts.
Draw lines straight through the center, joining pairs of tick marks that lie opposite each other. The smaller circle is also divided into 12 equal parts.
Number points clockwise so that the point 1 was the highest point in the circumference.
From each tick mark on the larger circle, except points 1, 4, 7 and 10, draw a vertical line. From the corresponding points on the smaller circle, draw a horizontal line to meet the vertical one; i.e. the vertical line from point 2 of the larger circle should intersect the horizontal line from point 2 of the smaller circle.
Connect with a smooth curve the points of intersection of the vertical and horizontal lines, together with points 1 and 7 of the smaller circle and points 4 and 10 of the larger circle. The ellipse is constructed.
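A short justification of this construction (not in the original article): if the larger circle has radius a and the smaller circle radius b, the point built from the tick marks at angle θ is

$$(x, y) = (a\cos\theta,\; b\sin\theta),$$

which satisfies $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, the equation of an ellipse with semi-axes a and b.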
For another way of drawing the ellipse, you will need a compass, 3 pins and strong flax thread.
First, draw a rectangle whose height and width equal the height and width of the ellipse. Two lines through the middle divide the rectangle into 4 equal parts.
With a compass, draw a circle that crosses the longer middle line. The compass point must be placed at the midpoint of one of the longer sides of the rectangle, and the radius of the circle is half the length of the longer side of the rectangle.
Mark the points where the circle crosses the middle vertical line.
Push a pin into each of these two points. Push the third pin into one end of the middle line. Tie the linen thread in a loop around all three pins.
Remove the third pin and instead use a pencil. Using uniform tension of the thread, draw the curve. If done correctly, you should get an ellipse. | {"url":"https://eng.kakprosto.ru/how-7482-how-to-draw-an-ellipse","timestamp":"2024-11-13T01:39:33Z","content_type":"text/html","content_length":"32806","record_id":"<urn:uuid:6ca0feab-c589-404e-87d2-660a2dfb66e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00307.warc.gz"} |
Binary to Hex Converter: Convert Binary Numbers
Binary to Hex
Convert binary numbers to hexadecimal effortlessly with our Binary to Hex Converter. Perfect for programming and data encoding, ensuring precise conversions.
Binary Number:A binary number is a numeral system based on base-two, using only two digits: 0 and 1. It's commonly used in computing and digital electronics.
Binary Number Example:An example of a binary number is 110101. Each digit represents a power of 2, with the rightmost digit representing 2^0, the next representing 2^1, and so on.
Hexadecimal (Hex) Number:Hexadecimal is a base-sixteen numeral system that uses sixteen symbols: 0-9 and A-F, where A represents 10, B represents 11, and so forth up to F representing 15. It's widely
used in computing due to its compact representation of binary data.
Hexadecimal Number Example:An example of a hexadecimal number is 2F8A. Each digit in a hexadecimal number represents a power of 16, with the rightmost digit representing 16^0, the next representing
16^1, and so on.
Conversion Process: Binary to Hexadecimal
1. Group Binary Digits in Sets of Four: Start by grouping the binary digits in sets of four from right to left. If the last group has fewer than four digits, add zeros to the left to make it four
2. Convert Each Group to Hexadecimal: Convert each four-digit binary group to its hexadecimal equivalent.
3. Combine Hexadecimal Digits: Combine the hexadecimal digits obtained from each group to form the final hexadecimal number.
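A small script (illustrative; this site's converter is a web tool, and the code below is not taken from it) that performs the same conversion:

```python
def binary_to_hex(bits: str) -> str:
    """Convert a binary string to hexadecimal by grouping bits in fours
    from the right, exactly as described in the steps above."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad on the left to a multiple of 4
    digits = "0123456789ABCDEF"
    return "".join(digits[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

print(binary_to_hex("1011011101"))  # -> 2DD
```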
Example: Convert Binary 1011011101 to Hexadecimal
Binary Number: 1011011101
Group in Sets of Four (from the right):
0010 1101 1101 (two zeros are added on the left to complete the leftmost group)
Convert Each Group to Hexadecimal:
0010 (2 in hex)
1101 (D in hex)
1101 (D in hex)
Combine Hexadecimal Digits:
Therefore, the hexadecimal representation of the binary number 1011011101 is 2DD. | {"url":"https://toolsfairy.com/number-utilities/binary-to-hex","timestamp":"2024-11-11T02:18:28Z","content_type":"text/html","content_length":"26085","record_id":"<urn:uuid:8ed26c66-186a-436d-a1fd-9f56467efe8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00642.warc.gz"}
Maths/Matamaitic | crimlin
top of page
Crimlin, Castlebar, Co. Mayo, F23 V008
Ph: O94 9025432 / (083) 0839038
It is best to establish a routine and follow it each time you approach a sum.
1. Look at the number on the top row. Say aloud: "Eighty six".
2 Look at the number on the bottom row. Say aloud: "Fifty seven".
3. Say aloud: "Eighty six take away fifty seven".
4. In your head, do a quick estimate of what the answer will be, like this: round 86 to 90, then round 57 to 60. Subtract 60 from 90; that's 30. So the correct answer should be around 30.
5. Look at the units column. Can you take 7 from 6? No. So you must regroup eighty six.
At the moment, it's written as 8 tens and 6 units. Change it to 7 tens and 16 units.
6. Now take 7 units from 16 units, and take 5 tens from 7 tens.
8. Say "Eighty six take away fifty seven equals twenty nine."
9. Think back to your estimate, which was 30... very near to 29.
In this second example, you can follow the same routine as above.
When it comes to rounding 176, look at the tens and units. Round 76 to 80, and say "one hundred and eighty". Round 149 to 150. Now it's easy to take 150 from 180.
The correct answer should be around 30.
Note: When subtracting, always work on the units column first, then work on the tens column and finish with the hundreds column.
Subtraction (continued)
In this example, we see that we must regroup €1.35. (Because 8 units cannot be taken from 5 units.)
Before you do any regrouping, look at the tens column.
You'll be taking a big number on the bottom from a smaller number on the top....
So when you regroup €1.35, say "Instead of 13 tens and 5 units, I'll group it as 12 tens and 15 units".
( 1 hundred is 10 tens. 13 tens is really 1 hundred and 3 tens. )
Draw a line straight through 13 and write 12 in the tens column.
Write 1 beside 5.
First, do the subtracting on the units column.
Then on the tens column, take 6 from 12.
Lastly, on the hundreds column, 1 is no longer there. So you're taking 0 from 0.
Keep the decimal points in a column, one directly under the other. (They're not really under each other in the picture.....sorry.)
Put the Euro sign in to show that you're working with Euros.
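Putting the steps together (the second amount, €0.68, is the one implied by the digits used above: 8 units, 6 tens and nothing in the euros column), the finished sum looks like this:

  €1.35
- €0.68
-------
  €0.67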
Take a Cornflakes box.
Stand it upright. Measure the sides and write the measurements in your copy.
Look at the opening at the top, to get an idea of how big it is.
Now look at the space inside the box, to get an idea of how big that is.
Think about the things that you measured yesterday. Look at the sketches that you made in your copy.
Try to guess how many of them would fit in the Cornflakes box.
Write in your copy: I think .... and..... and...... and....would fit in the Cornflakes box.
Now pack all those things into the Cornflakes box. Do they fit?
(You could take them out and try putting them in again in a different way. They might fit better.)
Now write in your copy: The ....and the..... and the...and the... fit in the Cornflakes box.
bottom of page | {"url":"https://www.crimlinschool.com/copy-of-music-ceol","timestamp":"2024-11-12T15:35:18Z","content_type":"text/html","content_length":"589074","record_id":"<urn:uuid:3d22ba90-959f-45e0-8a27-1b440769355b>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00758.warc.gz"} |
Bhaskaracharya Pratishthana Mathematics Training Programme 2023 - 2024
Bhaskaracharya Pratishthana Mathematics Training Programme 2023-2024
For the students of 5th Standard to 9th Standard
Academic Year 2023-2024
(BPMTP 23-24)
Use the links on the left hand side for more information about the courses.
Bhaskaracharya Pratishthana (BP), Pune is a premier institute devoted to carrying out research in Mathematics and imparting high quality education in Mathematics at school as well as college and
university levels. The Institute was founded in 1976 by the renowned Mathematician Prof. Shreeram Abhyankar for conducting research in higher Mathematics.
Since 1992, BP has also been a recognized center for conducting Regional Mathematical Olympiad (RMO) and Indian National Mathematical Olympiad (INMO) on behalf of the National Board for Higher
Mathematics (NBHM) for Maharashtra and Goa Region.
Over the years BP has been consistently involved in conducting Mathematics foundation programmes for school students, Olympiad training programmes, undergraduate and postgraduate training programmes
as well Annual Foundation Schools and similar Teachers’ Training programmes at regional and national levels. The main aim of all these programmes is to promote teaching-learning methods in
Mathematics and create keen interest and liking for Mathematics among students. The methods adopted are well tested over the years, and teachers encourage self-learning and independent thinking. As a
consequence, several students of BP have pursued their career in Mathematics and many have applied Mathematics in their own areas of interest in a substantial way. The strong foundation developed in
the school programmes enable students to take the challenge of participating in Olympiad competitions. Many students of BP have not only cleared regional level competition, but also achieved high
success in national and international competitions. We are happy and proud to mention that students trained at Bhaskaracharya Pratishthana have bagged medals for INDIA about 50 times in the
International Mathematical Olympiad. (https://www.bprim.org/imo-awardees). A large number of past students of BP are doing excellent research work in Mathematics/Computer Science at major research
Institutions /Universities all over the world or working as top-class professionals in the best Industries. Many are also faculty members at well-known colleges/universities in India or abroad.
This year BP announces OFFLINE and ONLINE programmes for school students. The details of the various programmes can be found here on this website.
The faculty for all the programmes is extremely dedicated, very highly qualified and can be considered veterans in School and Olympiad training programmes. Many of the faculty members training in the
BP Olympiad programme have been involved in the training of students at Regional, National as well as International Mathematical Olympiad for last many years.
The lectures for 5th standard, 7th standard, 8th standard (Foundation of Mathematics batch) and 9th standard (Olympiad Level 1 batch) will be conducted twice a week throughout the year – while
those for 6th standard and Olympiad Level 2 will be conducted once a week throughout the year.
The programmes will be extremely interactive in nature. A special feature for every programme will be project-based activities and special lectures by expert faculty from India or abroad.
8th standard (Foundation of Mathematics batch) and 9th standard (Olympiad Level 1 batch) programmes will start in the first week of May 2023 – while the remaining programmes will commence from the
week of 12th June 2023. Most programmes will continue till end of February 2024/beginning of March 2024.
Eligibility for the programmes: Interest in Mathematics, a will to learn independently and more than 75% marks (or equivalent grade) in Mathematics in earlier class.
• There is an ONLINE as well as OFFLINE batch offered for 5th, 6th and 7th standard.
• Foundation of Mathematics and Introduction to Olympiad Maths and Olympiad Level 1 Training Programme will be in offline mode only.
• The students outside Pune will be given the facility of recorded lectures from 1st July 2023.
• Olympiad Level 2 batch will be in ONLINE mode only.
Students can select the batch preferable to them. The admission for these batches will be done on first come first serve basis for eligible students for the given batch capacity.
Students choosing online option should ensure good internet connectivity at their end. Additionally, students in online batches are expected to actively interact with the faculty during lectures as
well as to participate enthusiastically in discussions during the online sessions. Hence, they should ensure that they have a working microphone in their audio device.
In case the number of students for the online and offline batches together is not sufficient for a certain programme, only the Online programme will run for that class.
For the Level 1 Olympiad training programme, students will be expected to be familiar with topics from the Foundation of Mathematics Programme independent of whether they have attended the programme
or not. For this they should see the syllabus at www.bprim.org
For every programme BP gives full/partial scholarship to deserving students with a weak financial background. The student should make an application for the same (by email to bhaskaraprim@gmail.com)
and should also ask a Mathematics teacher knowing him/her well to recommend his/her name for the same by sending email to bhaskaraprim@gmail.com. The applications will be screened by a committee at
BP and the selected students will be informed. | {"url":"http://www.bprim.org/index.php/archive/programme/training/2023/mtp/bhaskaracharya-pratishthana-mathematics-training-programme-2023","timestamp":"2024-11-08T21:23:27Z","content_type":"text/html","content_length":"49235","record_id":"<urn:uuid:22592077-e9be-48d0-8ff6-7b366923be29>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00203.warc.gz"} |
Traveling Salesman Problem: Are solutions normally distributed?
I recognize that finding the optimum solution to the Traveling Salesman Problem is hard (NP-hard, to be precise). My question, then, has to do with the distribution of the values in the solution set
for any particular instance of the Problem. Are such solutions normally distributed?
I know I can’t be the first, and probably not even the millionth to wonder about this, but still, let me ask - if the solutions are normally distributed, wouldn’t it be possible to compute a
“representative” sample of solutions (requiring much less effort and time than computing them all) and, after, considering the distribution curve that those solutions generate, be able to determine a
“nearly optimum” solution", i.e. one within x% of the optimum, where ‘x’ is whatever you want it to be?
As always, thanks!
Yes, having experienced this in undergrad as well as grad school, there are indeed nearly optimal solutions to this.
But, and I am talking out of my butt here, when one goes to x% of optimal, then one has to compare this against the optimal solution.
Therein lies the rub, AFAIK.
Hard question to answer.
I would suspect that for the TSP problem in general, you cannot make assumptions about whether the solutions are normally distributed or not… based partly on how the individual graph-edges (costs to
travel between any particular pair of cities) are distributed.
In general, I would think that calculating a representative sample and taking the best available would still be a good real-life algorithm to approximate a solution, but it would be hard to tell just
how good of an approximation it is without solving for the optimal solution.
It would be an interesting exercise to generate TSP graphs that are as large as possible while still being solvable with brute-force in a decent amount of time… and see… hmm – that’d be the problem
actually. If you solve TSP on 12 cities, you have 12! possible solutions, about half a billion, which would be many too many to graph. You might be able to sort them all and then take every millionth
solution as a graphing sample.
It might be possible, but for any practical purpose, it’s easy enough to get a really good solution through genetic programming.
Not necessarily. You can place error bound on most approximation algorithms out there deductively.
The OP lacks key information. In particular, what does “normally distributed” range over? So the following might not address the question.
It is trivial to show that approximate TSP is NP-hard. I.e., to find a solution that is only half (or whatever) as good as the optimal is just as hard as the original problem. It is so easy to prove
that I have given this as a homework problem in algorithms classes.
This is unlike some other NP-Complete problems like bin packing where easy polytime approximation algorithms are known. So this sort of thing varies from problem to problem.
Note that being NP-hard does not mean that the problem is truly difficult. It may turn out to be polytime. (Which would of course surprise most people.)
Note that just knowing that something’s approximately normally distributed cannot tell you the bound, because normal distributions don’t have bounds.
okay, I admit there is no chance whatsoever that I will be able to contribute, but would someone please clue me in as to what the traveling salesman problem is so perhaps I might learn something?
There’s a nice write-up here.
Indeed, I actually did know that, but expressed myself very poorly.
What I meant to say, and what I should have said, was that if you you know that the solutions are normally distributed, you should be able to come up with a solution that is “better” than x% of the
entire solution set, with ‘x’ being whatever you want it to be.
And, thanks to everyone for their input!
I have a hard time believing this problem could be normally distributed or even could be represented by a normal distribution. There can be only one (possibly with a few ties) optimal solution. If
there’s a normal distribution, this would imply that there are some solutions on both sides of the solution, which violates the optimality of the solution.
There are much simpler ways to get good bounds on a TSP’s optimal solution. For instance, a good lower bound is the MST, which can be calculated in O(m) where m is the number of edges. You can also
get a reasonable upper bound guaranteed to be no more than 150% of the optimal solution in O(m). There are better quick upper bounds, but they require more than linear time, and I’d have to go back and look at
my Combinatorial Optimizations text to remember.
But there are solutions on both sides of the average solution.
Let me present my interpretation of what you’re trying to say here. I have a collection of cities, and I know the distances between every pair of these cities. I look at every possible route that
goes through all cities without repetition and calculate the length of every route. You’re asking whether these lengths fall into a normal distribution.
In the general case, they do not. I can provide a counterexample.
According to the official definition of the TSP, we may present an instance of the problem with any distances we choose. We may even choose distances that are entirely against intuition. For
instance, suppose we take these cities.
El Paso
I will create an instance of the TSP as follows: the distance from Atlanta to Boston is 500 miles. The distance between any other pair of cities is 100 miles.
In that case, what does the distribution of lengths of routes look like? Given any route, the only relevant question is whether it includes that length from Atlanta to Boston. For any route
including the Atlanta-Boston leg, the total length is 500 + 100 + 100 + 100 + 100 + 100 = 1000. For any route not including the Atlanta-Boston leg, the total length is 100 + 100 + 100 + 100 + 100 +
100 = 600. Thus, the distribution of solutions contains only two points, one at 1000 and one at 600.
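Just to make that concrete, here’s a quick brute-force sketch (my own illustration, assuming six cities labeled A–F where only the A–B leg costs 500 miles and every other leg costs 100):

from itertools import permutations

cities = "ABCDEF"
def dist(u, v):
    return 500 if {u, v} == {"A", "B"} else 100

lengths = set()
for rest in permutations(cities[1:]):      # fix city A as the start of the cycle
    tour = ("A",) + rest + ("A",)
    lengths.add(sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1)))

print(sorted(lengths))                     # [600, 1000] -- only two distinct tour lengths

So no bell curve there — just two spikes.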
This is an extreme case. One might look at restricted cases. For instance, if I declared that all distances on the map would be chosen randomly from between 100 and 200 miles, then the distribution
of solutions might well be normal.
True. But what good is an average solution in this case in terms of locating an optimal solution? We have no way of knowing how much worse than optimal this average solution is. It would still take O
(m) to calculate an average solution, but we have no idea how close to optimal it is. Further, why would we care? If we calculated 100 different Hamiltonian tours to find a good guess at the average,
clearly the minimum of those 100 tours would be a better guess at the optimum than the average of those 100 tours.
It’s obvious that the solution space is never normally distributed, because it’s a finite space, and the normal distribution is continuous. The question is whether the normal distribution is a
reasonable approximation to the distribution of solutions. If it is, the degree to which it’s a good approximation should depend on the number of solution costs. It might be worthwhile to consider
TSPs over a (finite) metric space.
It’s not. As you’ll notice, I mentioned in post #4 that there are relatively easy ways to get good solutions. It’s still an interesting theoretical question.
1. The current best MST algorithm is not O(m) but almost linear.
2. What does MST have to do with this??? Completely different problems.
3. See my earlier post. Approximate TSP is NP-hard.
No, the fastest IS linear. From the Wiki article you just cited:
As this states, there’s a well known linear algorithm for integer edge weights.
I just read this paper 2-3 weeks ago. Tarjan et al found a linear algorithm with [VERY] high probability for Real edge weights; this was in 1994. According to my professor (sorry, no cite), some time
since then they’ve improved upon this result by making it non-randomized, and thus O(m) with no caveat for Real edge weights.
Yes, they are different problems; however, an MST is a fairly tight lower bound on the TSP (MST has n-1 edges, TSP has n edges). Also, MST is used in a lot of poly-time estimations.
As is normally defined, yes. However, if you’re okay with a larger error, you may not need to go through all that work. There ARE linear and higher-order poly-time TSP approximating algorithms that
guarantee you to be no more than x% greater than the optimal. I know of at least one off the top of my head that guarantees a solution no more than 150% of the optimal TSP and can calculate that in O(m)
and it requires finding the MST. If <=150% is good enough, why break out the big guns and tackle the harder problem?
Thought I’d bring this thread back up just to note that I managed to work out every possible solution to a sample randomly generated TSP problem… obviously, the premises that I was working with to
generate my problem could have biased the results, but I did generate a cost versus frequency graph to see what it looked like.
It was triangle-shaped. Pretty clearly, as far as I could tell. A sharp incline from the worst possible solutions up to the average, median solutions, and then a sharp decline from those down to the
very best ones. Very little trace of curvature in the inclines, though on the ‘up’ slope there was maybe a very little hint of a bell curve. Might have been imagining it though.
How many vertices?
Twelve, if you mean the cities in my sample. | {"url":"https://boards.straightdope.com/t/traveling-salesman-problem-are-solutions-normally-distributed/357630","timestamp":"2024-11-14T05:38:03Z","content_type":"text/html","content_length":"67367","record_id":"<urn:uuid:6ead3b0b-19e4-404a-a9ff-cf35ffa6c355>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00043.warc.gz"} |
Which Type of ZSM is the Best?
Last modified by Microchip on 2024/01/12 12:21
Here are some of the reasons the type of Zero Sequence Modulation (ZSM) matters:
• Realizability: the most important thing is to achieve a desired average output voltage. Any of these approaches (except "no shift") meets this criterion.
• Switching losses: the top/bottom/top-and-bottom clamp methods reduce switching losses.
• Conduction losses: techniques that keep the per-phase duty cycle nearer to 50% and away from the extremes (0%, 100%) distribute conduction losses more evenly between upper and lower switches, and
reduce the conduction losses very slightly due to the effect on ripple current.
• Ripple current: here's where things get a little tricky. The ripple current caused by the switching frequency and motor inductance is difficult to analyze in general. For any given operating
point (specific duty cycle values on phases A, B, and C) it's possible to determine the ripple current waveform and its harmonics, although it takes a bit of number-crunching. The good news is
that in most cases, the motor inductance is high enough that it doesn't matter much.
• Ease of computation: CPU time limitations make it desirable to calculate the zero-sequence offset with a simple algorithm. Some of these ZSM techniques can be difficult to compute efficiently.
More on this below:
The Conventional Space Vector PWM (CSVPWM) technique is fairly easy to compute. Given three duty cycles D[a0], D[b0], D[c0], calculate:

shift = 1/2 − (max(D[a0], D[b0], D[c0]) + min(D[a0], D[b0], D[c0])) / 2

and add this number to each of the duty cycles.
This effectively computes the center of the largest and smallest duty cycle and adjusts it up or down so that the center is moved to 1/2. In other words, we clamp the midpoint of the three-phase set
of duty cycles to 50%, hence the name "midpoint clamp" used here.
This also has the effect of causing an equal margin at the ends of the range: for a set of three-phase duty cycles, if the span is defined as span = max(D[a0], D[b0], D[c0]) − min(D[a0], D[b0], D[c0]), then after the shift the margins from 0% and from 100% duty cycle are both equal to (1 − span)/2.
It also produces the exact same result as computing on-times by first determining which sector the operating point is in, as described in the paper by van der Broeck et al.
Just as a practical example: suppose the output of the Clarke transform yields (D[a0], D[b0], D[c0]) = (0.423, -0.345, -0.078). For this set, the minimum value is -0.345 and the maximum value is
0.423; the average of these is 0.039, implying a zero-sequence shift of 0.5 - 0.039 = 0.461; if we add 0.461 to each duty cycle we get (D[a], D[b], D[c]) = (0.884, 0.116, 0.383). You'll notice this
yields an 11.6% margin from both 0% and 100% duty cycles.
The top and bottom clamp computations are very similar. For top-clamp, add the following to each duty cycle:

shift = 1 − max(D[a0], D[b0], D[c0])

For bottom-clamp, add the following to each duty cycle:

shift = 0 − min(D[a0], D[b0], D[c0])
All three of these can be generalized to:

shift = k·(1 − max(D[a0], D[b0], D[c0])) − (1 − k)·min(D[a0], D[b0], D[c0])

with k = 1 for top-clamp, k = 0 for bottom-clamp, and k = ½ for CSVPWM.
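As a rough illustration (this is not Microchip reference code, and the function names are ours), the three clamp variants reduce to a couple of lines:

def zsm_shift(d, k=0.5):
    # d = (Da0, Db0, Dc0); k = 0.5 -> midpoint clamp (CSVPWM), k = 1 -> top clamp, k = 0 -> bottom clamp
    return k * (1.0 - max(d)) - (1.0 - k) * min(d)

def apply_shift(d, shift):
    return tuple(x + shift for x in d)

# Worked example from the text: Clarke-transform output (0.423, -0.345, -0.078)
d0 = (0.423, -0.345, -0.078)
print(apply_shift(d0, zsm_shift(d0, k=0.5)))   # about (0.884, 0.116, 0.383): 11.6% margins
print(apply_shift(d0, zsm_shift(d0, k=1.0)))   # top clamp: largest phase pinned at 100%
print(apply_shift(d0, zsm_shift(d0, k=0.0)))   # bottom clamp: smallest phase pinned at 0%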
The top-and-bottom clamp technique requires choosing whether to use the top-clamp or the bottom-clamp technique at different points in the commutation cycle. Probably the easiest method to use is to
take the initial three-phase set (D[a0], D[b0], D[c0]), and check how many of these values are greater than their mean, which is zero if using the output of the Clarke transform. If two values are
greater than the mean, then the largest amplitude duty cycle is negative and the bottom-clamp method is used. If only one value is greater than the mean, then the largest amplitude duty cycle is
positive and the top-clamp method is used. For a three-phase set with a mean value of zero, this decision can be made by taking a bitwise XOR of the three values, and choosing the top-clamp method if
the resulting sign bit is 0, or the bottom-clamp method if the resulting sign bit is 1.
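The top-and-bottom clamp decision described above might look like this (again only a sketch, reusing zsm_shift from the previous snippet):

def top_and_bottom_shift(d):
    # The Clarke-transform output has zero mean, so count values above zero:
    # one value above the mean  -> largest-magnitude phase is positive -> top clamp
    # two values above the mean -> largest-magnitude phase is negative -> bottom clamp
    above = sum(1 for x in d if x > 0)
    return zsm_shift(d, k=1.0) if above == 1 else zsm_shift(d, k=0.0)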
Methods that involve unnecessary computation
The other three methods (third-harmonic, minimum shift, and nip & tuck) are not particularly practical to compute. Here's why:
Suppose you have a three-phase set of sine waves of constant frequency and known amplitude and phase:
Then it turns out that the third-harmonic and nip & tuck methods have a zero-sequence shift which is fairly simple to calculate, given the amplitude A and phase angle θ:
For 3rd harmonic,
For nip & tuck:
which looks complicated, but it's not that hard to compute.
Here's the problem: in real motor drives, we don't have direct access to θ — at least not to the value of θ used in the above equations. There is a phase shift between the angle of the output
voltage, the electrical angle, and this phase shift changes with time. The angle we know in a FOC motor controller is the electrical angle θ[e]. What we typically use for the a, b, and c phase
voltage is:
where V[d] and V[q] are voltage vector outputs of a control loop in the synchronous frame. The relationship between the phase angles θ and θ[e] is:
and the relationship between the amplitude A and the d and q phase voltages is
The minimum shift isn't quite as bad, but it does involve looking at each of the three-phase voltages and comparing them with the positive and negative voltage limits to see which phases would exceed
the limits. This requires six comparisons. If the amplitude is small, none of the phases would exceed the limits. If the line-to-line amplitude is between 3√2 and 1.0 of the voltage capability, then
only one of the six comparisons will be true. At most, one voltage is outside the voltage capability without a zero-sequence shift, and adding the appropriate shift will make all three phases
realizable. But in the overmodulation region, where the line-to-line amplitude is greater than the voltage capability, there are times when at least two of the unshifted phase voltages are outside
the voltage capability, and this method does not provide a means of determining the best output voltage shift.
So which method should you use? CSVPWM is the best method in general, and you're probably already using it. If you've got a high-power or high-voltage system where switching losses are of concern,
use the top-&-bottom clamp method if you have gate drives that can provide a sustained on-state for the top transistors, or the bottom-clamp method if the gate drive you are using cannot provide 100%
duty cycle on the top transistors.
Thanks for reading! Have fun exploring our ZSM viewer, and best of luck on your next motor drive project!
Syracuse University
2022-2023 Graduate Course Catalog [ARCHIVED CATALOG]
Mathematics, MS
Department Chair: Graham J. Leuschke, 215 Carnegie Building,
, 315-443-1478
Associate Chair for Graduate Studies: William Wylie, 306C Carnegie Building,
, 315-443-1556
Uday Banerjee, Pinyuen Chen, Dan Coman, J. Theodore Cox, Steven Diaz, Nicole L. Fonger, Jack E. Graver, Duane Graysay, Tadeusz Iwaniec, Lee Kennard, Hyune-Ju Kim, Mark Kleiner, Leonid
Kovalev, Loredana Lanzani, Graham J. Leuschke, Wei Li, Jianxuan Liu, Adam Lutoborski, Joanna O. Masingila, Claudia Miller, Jani Onninen, Declan Quinn, Minghao Rostami, Lixin Shen, Gregory
Verchota, Andrew Vogel, Stephan Wehrli, William Wylie, Yuan Yuan, Dan Zacharia, Yiming Zhao
The Department of Mathematics has 31 faculty members, with research interests in several areas of mathematics, statistics, and mathematics education, and approximately 55 graduate students.
The department is housed in the recently renovated Carnegie Library building on the main campus quadrangle. Programs of study include those for M.S. and Ph.D. degrees in Mathematics, with or
without a concentration in Statistics, and for M.S. and Ph.D. degrees in Mathematics Education.
Student Learning Outcomes
1. Demonstrate competency beyond the undergraduate level in the core areas of algebra and analysis by solving problems using advanced techniques
2. Demonstrate competency beyond the undergraduate level in an area of applicable mathematics by solving problems using advanced techniques
3. Read and construct rigorous proofs
4. Effectively communicate mathematical ideas
M.S. in Mathematics
The Department of Mathematics offers two programs leading to the Master’s of Science in Mathematics degree. The programs are (1) Mathematics (including pure and applied mathematics) and (2)
Statistics. Master’s programs share MAT 601 - Fundamentals of Analysis I and MAT 631 - Introduction to Algebra I as common foundations, and there is additional overlap between them.
Thirty credits of graduate work are required, of which at least 18 must be at the 600-level or above, and at least 15 of those 18 credits must be in the mathematics department. In the
mathematics option the student must also complete MAT 602 - Fundamentals of Analysis II , MAT 632 - Introduction to Algebra II , and a sequence in applied mathematics from an approved list
of sequences. In the statistics option several particular courses are required.
Students must have at least a B average in the 15 credits of 600-level or above mathematics department courses and at least a B average in the 30 credits of coursework comprising the degree
program. No master’s thesis is required.
Research Areas
The department’s Colloquium series features weekly lectures by mathematicians from all over the United States and abroad in many of the areas of mathematical research represented in the
department. Furthermore several of the research groups organize regular research seminars. Colloquia and seminar schedules, along with other information about our programs, courses, and
events, can be found at thecollege.syr.edu/mathematics/.
The following research groups are currently represented in the department.
Algebraic geometry (moduli spaces of curves, equations defining finite sets of points), commutative algebra (homological algebra, Cohen-Macaulay modules, characteristic p), non-commutative
algebra (representations of finite-dimensional algebras, homological algebra, group actions on non-commutative rings, Hopf algebras, enveloping algebras, non-commutative algebraic geometry).
Faculty: Diaz, Kleiner, Leuschke, Miller, Quinn, Zacharia
Complex analysis (several complex variables, pluripotential theory, complex dynamics, invariant metrics, holomorphic currents, Kähler geometry, rigidity problems), geometric analysis (PDE on
manifolds, geometric flows), harmonic analysis, partial differential equations (linear and nonlinear elliptic PDE, boundary value problems on nonsmooth domains), geometric function theory
(quasiconformal mappings, analysis on metric spaces). Faculty: Coman, Iwaniec, Kovalev, Lanzani, Onninen, Verchota, Vogel, Wylie, Yuan, Zhao
Applied Mathematics
Numerical analysis (approximate solutions of elliptic PDE, generalized finite element methods and meshless methods), nonlinear variational problems (microstructure in nonlinear elasticity),
applied and computational harmonic analysis (wavelets, digital image processing), numerical linear algebra, computational fluid dynamics. Faculty: Banerjee, Lutoborski, Rostami, Shen
Combinatorics, graph theory, rigidity theory, symmetries of planar graphs, automorphism groups of graphs. Faculty: Graver
Low-dimensional topology and knot theory (knot concordance, Heegaard Floer homology, homology theories for knots and links), Riemannian/Kähler geometry (curvature and topology, symmetry,
special metrics, geometric flows, rigidity problems), convex geometry (Minkowski problems, sharp isoperimetric inequalities). Faculty: Kennard, Wehrli, Wylie, Yuan, Zhao
Mathematics Education
Secondary mathematics education, teacher learning, mathematical representations, out-of-school mathematics practice, teacher development. Faculty: Fonger, Graysay, Masingila
Interacting particle systems, Brownian motion, random walks, probabilistic methods in mathematical finance, martingales. Faculty: Cox
Ranking and selection theory with applications in signal processing and multistage clinical trials, change-point problems with applications in cancer trend analysis, sequential analysis,
nonparametric and semiparametric statistics, Bayesian inference, causal inference, measurement error models, high-dimensional data analysis. Faculty: Chen, Kim, Li, Liu
Graduate Awards
Figures for graduate appointments represent 2022-2023 stipends.
Graduate Scholarships:
Support graduate study for students with superior qualifications; provide, in most cases, full tuition for the academic year.
Graduate Assistantships:
Offered to most Graduate Scholarship recipients; no more than an average of 20 hours of work per week; nine months; stipend ranging from $20,961 to $24,584 in addition to tuition scholarship
for 24 credits per year. Additional summer support is generally available.
Syracuse University Graduate Fellowships:
Tax-free stipends are $25,290 for nine months of full-time study; tuition scholarship for 15 credits per semester for a total of 30 credits during the academic year.
The mathematics collection is held within the Carnegie Library and supports mathematical research over a broad range of pure and applied mathematics, as well as mathematics education,
mathematical statistics, and interdisciplinary areas. Most of the non-book resources are online and includes an extensive collection of databases and journals supporting the mathematical
sciences. In addition, the library provides a growing collection of ebooks.
Students may borrow course reserved textbooks, laptops, TI graphing calculators, and geometry kits from the Carnegie Library service desk. Students may also reserve one of three group study
rooms located on the first floor of the library. A computer lab in the library provides software for programming, statistical and data analysis, video and multimedia, and access to
Carnegie Library is home to collections in the sciences, including engineering and computer science, the life sciences, and the physical sciences and hosts a strong collection of databases,
journals, and ebooks supporting all disciplines. The historic Reading Room gives the library a distinctive ambience and provides a quiet place for students to study. | {"url":"https://courses.syracuse.edu/preview_program.php?catoid=31&poid=15888&returnto=4007","timestamp":"2024-11-13T19:29:59Z","content_type":"text/html","content_length":"52078","record_id":"<urn:uuid:5cabe8be-bb0d-48cf-bb08-b47c924cde6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00775.warc.gz"} |
EMI Full Form And How To Calculate EMI?
The full form of EMI is Equated Monthly Installment. An EMI is the amount payable to the bank or other lender at regular intervals on a specified date. Many customers take loans to fulfil their
dreams, such as buying a car, buying property, or funding education.
The loan is repaid in equal monthly installments. The EMI is calculated on the basis of the principal, the interest rate, and the tenure. If you plan to take a loan, approach a lender for an EMI that is
convenient for you.
Calculating an EMI requires the loan amount, the tenure, and the interest rate. If you have all of these, you can use the EMI calculator on the lender's official website. An EMI has two components:
principal and interest. In the initial phase, interest makes up most of each payment; towards the end, the principal portion grows as the outstanding balance — and hence the interest charged on it — reduces.
An EMI calculator is a digital tool that helps you work out your equal monthly installments. It tells borrowers the amount to be paid every month based on the loan amount, the loan tenure, the
interest rate, and so on. There are various calculators available.
In a fixed rate system, the interest amount remains the same for the entire term of your loan. The principal and interest you pay in monthly installments remain the same. The total interest is
calculated on the initial loan amount only.
EMI is calculated using the following formula:
EMI = ((P x R x N) + P) / (N x 12)
P is the amount borrowed,
R is the annual rate of interest (as a fraction),
N is the loan period in years
In the reducing balance method, interest is calculated on the remaining principal balance after every payment you make. Thus, after each month's installment, your principal balance is reduced, which
in turn reduces the interest charged. This pattern continues for the entire loan tenure.
EMI is calculated using the following formula:
EMI = [P x R x (1 + R)^N] / [(1 + R)^N - 1]
P is the amount borrowed,
R is the monthly interest rate (as a fraction),
N is the number of monthly installments (the loan tenure in months)
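To make the two methods concrete, here is a small illustrative calculator (a sketch only, with made-up example figures — always confirm the numbers with your lender):

def flat_rate_emi(p, annual_rate, years):
    # Flat rate: interest is charged on the full principal for the whole term
    total_payable = p + p * annual_rate * years
    return total_payable / (years * 12)

def reducing_balance_emi(p, annual_rate, years):
    # Reducing balance: interest accrues only on the outstanding principal
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of monthly installments
    return p * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Example: 10,00,000 borrowed at 10% per year for 5 years
print(round(flat_rate_emi(1_000_000, 0.10, 5)))         # about 25,000 per month
print(round(reducing_balance_emi(1_000_000, 0.10, 5)))  # about 21,247 per month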
Now I Understand: What is Independent and Identically Distributed (IID) Random Variable
I was working with my friend on a Random Number Generator (RNG) research paper. We need to evaluate the quality of our random bitstream result using the NIST SP800-90B entropy test.
Since this is my first time working in the RNG field, I tried to gather all information about evaluating the quality of an RNG. The SP800-90B standard is the starting point. I tried to read it again
and again since there is so much jargon, one of them that took me a long time to understand is the Independent and Identically Distributed (IID). I got the concept right but each article seems to
explain different complicated cases and examples which made me confuse even more. Eventually, I end up with this explanation:
Independent is when a value is not affected by the other value.
For example, if you roll two dice, the result of one does not depend on the result of the other.
Identically Distributed is when the probability of any specific outcome is the same.
For example, if you flip a coin, you have a 50-50 chance of getting heads or tails. That probability doesn’t change from flip to flip. On the other hand, if you have a collection of weighted
coins, where each coin has a different probability of heads or tails, that would not be identically distributed.
In machine learning, IID often implies that all of the data from the training set comes from the same process and that the data points are not related to each other.
Terms: Independent and Identically Distributed (IID) by IntuitiveML (https://www.youtube.com/watch?v=EGKbPww2_rc)
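To convince myself, I also sketched a tiny simulation (purely illustrative — it has nothing to do with the NIST SP800-90B tooling itself):

import random

# IID: every bit comes from the same fair-coin process, independent of the others
iid_bits = [random.randint(0, 1) for _ in range(10)]

# Not identically distributed: each position uses a coin with a different bias
biases = [0.1, 0.3, 0.5, 0.7, 0.9]
non_identical_bits = [1 if random.random() < p else 0 for p in biases]

# Not independent: every bit is just a copy of the first one
dependent_bits = [iid_bits[0]] * 10

print(iid_bits, non_identical_bits, dependent_bits)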
Now I can sleep well. | {"url":"https://derrylab.com/index.php/2021/08/10/now-i-understand-what-is-identically-distributed-iid-random-variable/","timestamp":"2024-11-10T16:02:49Z","content_type":"text/html","content_length":"61440","record_id":"<urn:uuid:389bfe7b-0bb9-4ea6-ba37-a24445db6e3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00368.warc.gz"} |
[Mathematics] An Intuitive Guide to Linear Algebra
Despite two linear algebra classes, my knowledge consisted of “Matrices, determinants, eigen something something”.
Why? Well, let’s try this course format:
• Name the course Linear Algebra but focus on things called matrices and vectors
• Teach concepts like Row/Column order with mnemonics instead of explaining the reasoning
• Favor abstract examples (2d vectors! 3d vectors!) and avoid real-world topics until the final week
The survivors are physicists, graphics programmers and other masochists. We missed the key insight:
Linear algebra gives you mini-spreadsheets for your math equations.
We can take a table of data (a matrix) and create updated tables from the original. It’s the power of a spreadsheet written as an equation.
Here’s the linear algebra introduction I wish I had, with a real-world stock market example.
What’s in a name?
“Algebra” means, roughly, “relationships”. Grade-school algebra explores the relationship between unknown numbers. Without knowing x and y, we can still work out that $(x + y)^2 = x^2 + 2xy + y^2$.
“Linear Algebra” means, roughly, “line-like relationships”. Let’s clarify a bit.
Straight lines are predictable. Imagine a rooftop: move forward 3 horizontal feet (relative to the ground) and you might rise 1 foot in elevation (The slope! Rise/run = 1/3). Move forward 6 feet, and
you’d expect a rise of 2 feet. Contrast this with climbing a dome: each horizontal foot forward raises you a different amount.
Lines are nice and predictable:
• If 3 feet forward has a 1-foot rise, then going 10x as far should give a 10x rise (30 feet forward is a 10-foot rise)
• If 3 feet forward has a 1-foot rise, and 6 feet has a 2-foot rise, then (3 + 6) feet should have a (1 + 2) foot rise
In math terms, an operation F is linear if scaling inputs scales the output, and adding inputs adds the outputs:
\begin{aligned} F(ax) &= a \cdot F(x) \\ F(x + y) &= F(x) + F(y) \end{aligned}
In our example, $F(x)$ calculates the rise when moving forward x feet, and the properties hold:
$\displaystyle{F(10 \cdot 3) = 10 \cdot F(3) = 10}$
$\displaystyle{F(3+6) = F(3) + F(6) = 3}$
Linear Operations
An operation is a calculation based on some inputs. Which operations are linear and predictable? Multiplication, it seems.
Exponents ($F(x) = x^2$) aren’t predictable: $10^2$ is 100, but $20^2$ is 400. We doubled the input but quadrupled the output.
Surprisingly, regular addition isn’t linear either. Consider the “add three” function $F(x) = x + 3$:
\begin{aligned} F(10) &= 13 \\ F(20) &= 23 \end{aligned}
We doubled the input and did not double the output. (Yes, $F(x) = x + 3$ happens to be the equation for an offset line, but it’s still not “linear” because $F(10) \neq 10 \cdot F(1)$. Fun.)
So, what types of functions are actually linear? Plain-old scaling by a constant, or functions that look like: $F(x) = ax$. In our roof example, $a = 1/3$.
But life isn’t too boring. We can still combine multiple linear functions ($A(x) = ax, B(x) = bx, C(x)=cx$) into a larger one, $G$:
$\displaystyle{G(x,y,z) = A(x) + B(y) + C(z) = ax + by + cz }$
$G$ is still linear, since doubling the input continues to double the output:
$\displaystyle{G(2x, 2y, 2z) = a(2x) + b(2y) + c(2z) = 2(ax + by + cz) = 2 \cdot G(x, y, z)}$
We have “mini arithmetic”: multiply inputs by a constant, and add the results. It’s actually useful because we can split inputs apart, analyze them individually, and combine the results:
$\displaystyle{G(x,y,z) = G(x,0,0) + G(0,y,0) + G(0,0,z)}$
If we allowed non-linear operations (like $x^2$) we couldn’t split our work and combine the results, since $(a+b)^2 \neq a^2 + b^2$. Limiting ourselves to linear operations has its advantages.
Organizing Inputs and Operations
Most courses hit you in the face with the details of a matrix. “Ok kids, let’s learn to speak. Select a subject, verb and object. Next, conjugate the verb. Then, add the prepositions…”
No! Grammar is not the focus. What’s the key idea?
• We have a bunch of inputs to track
• We have predictable, linear operations to perform (our “mini-arithmetic”)
• We generate a result, perhaps transforming it again
Ok. First, how should we track a bunch of inputs? How about a list:
Not bad. We could write it (x, y, z) too — hang onto that thought.
Next, how should we track our operations? Remember, we only have “mini arithmetic”: multiplications by a constant, with a final addition. If our operation $F$ behaves like this:
$\displaystyle{F(x, y, z) = 3x + 4y + 5z}$
We could abbreviate the entire function as (3, 4, 5). We know to multiply the first input by the first value, the second input by the second value, the third input by the third value, and add the
Only need the first input?
$\displaystyle{G(x, y, z) = 3x + 0y + 0z = (3, 0, 0)}$
Let’s spice it up: how should we handle multiple sets of inputs? Let’s say we want to run operation F on both (a, b, c) and (x, y, z). We could try this:
$\displaystyle{F(a, b, c, x, y, z) = ?}$
But it won’t work: F expects 3 inputs, not 6. We should separate the inputs into groups:
1st Input 2nd Input
a x
b y
c z
Much neater.
And how could we run the same input through several operations? Have a row for each operation:
F: 3 4 5
G: 3 0 0
Neat. We’re getting organized: inputs in vertical columns, operations in horizontal rows.
Visualizing The Matrix
Words aren’t enough. Here’s how I visualize inputs, operations, and outputs:
Imagine “pouring” each input through each operation:
As an input passes an operation, it creates an output item. In our example, the input (a, b, c) goes against operation F and outputs 3a + 4b + 5c. It goes against operation G and outputs 3a + 0 + 0.
Time for the red pill. A matrix is a shorthand for our diagrams:
$\text{Inputs} = A = \begin{bmatrix} \text{input1}&\text{input2}\end{bmatrix} = \begin{bmatrix}a & x\\b & y\\c & z\end{bmatrix}$
$\text{Operations} = M = \begin{bmatrix}\text{operation1}\\ \text{operation2}\end{bmatrix} = \begin{bmatrix}3 & 4 & 5\\3 & 0 & 0\end{bmatrix}$
A matrix is a single variable representing a spreadsheet of inputs or operations.
Trickiness #1: The reading order
Instead of an input => matrix => output flow, we use function notation, like y = f(x) or f(x) = y. We usually write a matrix with a capital letter (F), and a single input column with lowercase (x).
Because we have several inputs (A) and outputs (B), they’re considered matrices too:
$\displaystyle{MA = B}$
$\begin{bmatrix}3 & 4 & 5\\3 & 0 & 0\end{bmatrix} \begin{bmatrix}a & x\\b & y\\c & z\end{bmatrix} = \begin{bmatrix}3a + 4b + 5c & 3x + 4y + 5z\\ 3a & 3x\end{bmatrix}$
Trickiness #2: The numbering
Matrix size is measured as RxC: row count, then column count, and abbreviated “m x n” (I hear ya, “r x c” would be easier to remember). Items in the matrix are referenced the same way: a[ij] is the
ith row and jth column (I hear ya, “i” and “j” are easily confused on a chalkboard). Mnemonics are ok with context, and here’s what I use:
• RC, like Roman Centurion or RC Cola
• Use an “L” shape. Count down the L, then across
Why does RC ordering make sense? Our operations matrix is 2×3 and our input matrix is 3×2. Writing them together:
[Operation Matrix] [Input Matrix]
[operation count x operation size] [input size x input count]
[m x n] [p x q] = [m x q]
[2 x 3] [3 x 2] = [2 x 2]
Notice the matrices touch at the “size of operation” and “size of input” (n = p). They should match! If our inputs have 3 components, our operations should expect 3 items. In fact, we can only
multiply matrices when n = p.
The output matrix has m operation rows for each input, and q inputs, giving a “m x q” matrix.
Fancier Operations
Let’s get comfortable with operations. Assuming 3 inputs, we can whip up a few 1-operation matrices:
• Adder: [1 1 1]
• Averager: [1/3 1/3 1/3]
The “Adder” is just a + b + c. The “Averager” is similar: (a + b + c)/3 = a/3 + b/3 + c/3.
Try these 1-liners:
• First-input only: [1 0 0]
• Second-input only: [0 1 0]
• Third-input only: [0 0 1]
And if we merge them into a single matrix:
[1 0 0]
[0 1 0]
[0 0 1]
Whoa — it’s the “identity matrix”, which copies 3 inputs to 3 outputs, unchanged. How about this guy?
[1 0 0]
[0 0 1]
[0 1 0]
He reorders the inputs: (x, y, z) becomes (x, z, y).
And this one?
[2 0 0]
[0 2 0]
[0 0 2]
He’s an input doubler. We could rewrite him to 2*I (the identity matrix) if we were so inclined.
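If you want to try these mini-operators on a computer, here’s a rough NumPy sketch (NumPy is just a convenient choice here; nothing below depends on it):

import numpy as np

v = np.array([1, 2, 3])             # an input (x, y, z)

identity = np.eye(3)                # copies the input unchanged
swap_yz  = np.array([[1, 0, 0],     # reorders (x, y, z) into (x, z, y)
                     [0, 0, 1],
                     [0, 1, 0]])
doubler  = 2 * np.eye(3)            # doubles every input

print(identity @ v)                 # [1. 2. 3.]
print(swap_yz @ v)                  # [1 3 2]
print(doubler @ v)                  # [2. 4. 6.]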
And yes, when we decide to treat inputs as vector coordinates, the operations matrix will transform our vectors. Here’s a few examples:
• Scale: make all inputs bigger/smaller
• Skew: make certain inputs bigger/smaller
• Flip: make inputs negative
• Rotate: make new coordinates based on old ones (East becomes North, North becomes West, etc.)
These are geometric interpretations of multiplication, and how to warp a vector space. Just remember that vectors are examples of data to modify.
A Non-Vector Example: Stock Market Portfolios
Let’s practice linear algebra in the real world:
• Input data: stock portfolios with dollars in Apple, Google and Microsoft stock
• Operations: the changes in company values after a news event
• Output: updated portfolios
And a bonus output: let’s make a new portfolio listing the net profit/loss from the event.
Normally, we’d track this in a spreadsheet. Let’s learn to think with linear algebra:
• The input vector could be ($Apple, $Google, $Microsoft), showing the dollars in each stock. (Oh! These dollar values could come from another matrix that multiplied the number of shares by
their price. Fancy that!)
• The 4 output operations should be: Update Apple value, Update Google value, Update Microsoft value, Compute Profit
Visualize the problem. Imagine running through each operation:
The key is understanding why we’re setting up the matrix like this, not blindly crunching numbers.
Got it? Let’s introduce the scenario.
Suppose a secret iDevice is launched: Apple jumps 20%, Google drops 5%, and Microsoft stays the same. We want to adjust each stock value, using something similar to the identity matrix:
New Apple [1.2 0 0]
New Google [0 0.95 0]
New Microsoft [0 0 1]
The new Apple value is the original, increased by 20% (Google = 5% decrease, Microsoft = no change).
Oh wait! We need the overall profit:
Total change = (.20 * Apple) + (-.05 * Google) + (0 * Microsoft)
Our final operations matrix:
New Apple [1.2 0 0]
New Google [0 0.95 0]
New Microsoft [0 0 1]
Total Profit [.20 -.05 0]
Making sense? Three inputs enter, four outputs leave. The first three operations are a “modified copy” and the last brings the changes together.
Now let’s feed in the portfolios for Alice ($1000, $1000, $1000) and Bob ($500, $2000, $500). We can crunch the numbers by hand, or use Wolfram Alpha (calculation):
(Note: Inputs should be in columns, but it’s easier to type rows. The Transpose operation, indicated by t (tau), converts rows to columns.)
The final numbers: Alice has $1200 in AAPL, $950 in GOOG, $1000 in MSFT, with a net profit of $150. Bob has $600 in AAPL, $1900 in GOOG, and $500 in MSFT, with a net profit of $0.
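Here’s the same calculation as a rough NumPy sketch, in case you’d like to check the numbers yourself:

import numpy as np

# Rows are operations: new Apple, new Google, new Microsoft, total profit
M = np.array([[1.2,  0.0,  0.0],
              [0.0,  0.95, 0.0],
              [0.0,  0.0,  1.0],
              [0.2, -0.05, 0.0]])

# Columns are inputs: Alice's and Bob's portfolios (Apple, Google, Microsoft dollars)
A = np.array([[1000,  500],
              [1000, 2000],
              [1000,  500]])

print(M @ A)
# [[1200.  600.]
#  [ 950. 1900.]
#  [1000.  500.]
#  [ 150.    0.]]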
What’s happening? We’re doing math with our own spreadsheet. Linear algebra emerged in the 1800s yet spreadsheets were invented in the 1980s. I blame the gap on poor linear algebra education.
Historical Notes: Solving Simultaneous equations
An early use of tables of numbers (not yet a “matrix”) was bookkeeping for linear systems:
\begin{aligned} x + 2y + 3z &= 3 \\ 2x + 3y + 1z &= -10 \\ 5x + -y + 2z &= 14 \end{aligned}
$\begin{bmatrix}1 & 2 & 3\\2 & 3 & 1\\5 & -1 & 2\end{bmatrix} \begin{bmatrix}x \\y \\ z \end{bmatrix} = \begin{bmatrix}3 \\ -10 \\ 14 \end{bmatrix}$
We can avoid hand cramps by adding/subtracting rows in the matrix and output, vs. rewriting the full equations. As the matrix evolves into the identity matrix, the values of x, y and z are revealed
on the output side.
This process, called Gauss-Jordan elimination, saves time. However, linear algebra is mainly about matrix transformations, not solving large sets of equations (it’d be like using Excel for your
shopping list).
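(If you’d rather hand the bookkeeping to a library, a quick sketch — the solver returns the same x, y and z that Gauss-Jordan elimination would:)

import numpy as np

A = np.array([[1,  2, 3],
              [2,  3, 1],
              [5, -1, 2]])
b = np.array([3, -10, 14])

print(np.linalg.solve(A, b))   # approximately [ 0.167 -4.833  4.167]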
Terminology, Determinants, and Eigenstuff
Words have technical categories to describe their use (nouns, verbs, adjectives). Matrices can be similarly subdivided.
Descriptions like “upper-triangular”, “symmetric”, “diagonal” are the shape of the matrix, and influence their transformations.
The determinant is the “size” of the output transformation. If the input was a unit vector (representing area or volume of 1), the determinant is the size of the transformed area or volume. A
determinant of 0 means matrix is “destructive” and cannot be reversed (similar to multiplying by zero: information was lost).
The eigenvector and eigenvalue represent the “axes” of the transformation.
Consider spinning a globe: every location faces a new direction, except the poles.
An “eigenvector” is an input that doesn’t change direction when it’s run through the matrix (it points “along the axis”). And although the direction doesn’t change, the size might. The eigenvalue is
the amount the eigenvector is scaled up or down when going through the matrix.
(My intuition here is weak, and I’d like to explore more. Here’s a nice diagram and video.)
Matrices As Inputs
A funky thought: we can treat the operations matrix as inputs!
Think of a recipe as a list of commands (Add 2 cups of sugar, 3 cups of flour…).
What if we want the metric version? Take the instructions, treat them like text, and convert the units. The recipe is “input” to modify. When we’re done, we can follow the instructions again.
An operations matrix is similar: commands to modify. Applying one operations matrix to another gives a new operations matrix that applies both transformations, in order.
If N is “adjust for portfolio for news” and T is “adjust portfolio for taxes” then applying both:
TN = X
means “Create matrix X, which first adjusts for news, and then adjusts for taxes”. Whoa! We didn’t need an input portfolio, we applied one matrix directly to the other.
The beauty of linear algebra is representing an entire spreadsheet calculation with a single letter. Want to apply the same transformation a few times? Use $N^2$ or $N^3$.
Can We Use Regular Addition, Please?
Yes, because you asked nicely. Our “mini arithmetic” seems limiting: multiplications, but no addition? Time to expand our brains.
Imagine adding a dummy entry of 1 to our input: (x, y, z) becomes (x, y, z, 1).
Now our operations matrix has an extra, known value to play with! If we want x + 1 we can write:
[1 0 0 1]
And x + y - 3 would be:
[1 1 0 -3]
Want the geeky explanation? We’re pretending our input exists in a 1-higher dimension, and put a “1” in that dimension. We skew that higher dimension, which looks like a slide in the current one. For
example: take input (x, y, z, 1) and run it through:
[1 0 0 1]
[0 1 0 1]
[0 0 1 1]
[0 0 0 1]
The result is (x + 1, y + 1, z + 1, 1). Ignoring the 4th dimension, every input got a +1. We keep the dummy entry, and can do more slides later.
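A rough NumPy sketch of the dummy-entry trick:

import numpy as np

shift = np.array([[1, 0, 0, 1],
                  [0, 1, 0, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]])

v = np.array([10, 20, 30, 1])   # (x, y, z) with the dummy 1 appended

print(shift @ v)                # [11 21 31  1] -- every real input got a +1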
Mini-arithmetic isn’t so limited after all.
I’ve overlooked some linear algebra subtleties, and I’m not too concerned. Why?
These metaphors are helping me think with matrices, more than the classes I “aced”. I can finally respond to “Why is linear algebra useful?” with “Why are spreadsheets useful?”
They’re not, unless you want a tool used to attack nearly every real-world problem. Ask a businessman if they’d rather donate a kidney or be banned from Excel forever. That’s the impact of linear
algebra we’ve overlooked: efficient notation to bring spreadsheets into our math equations.
Happy math. | {"url":"https://tongyici.net:5555/forum.php?mod=viewthread&tid=1142","timestamp":"2024-11-02T06:14:01Z","content_type":"application/xhtml+xml","content_length":"44043","record_id":"<urn:uuid:a5d7db09-6f83-435c-aa7b-c9675a427997>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00338.warc.gz"} |
In today’s post from Programming Praxis, the goal is to check if two cyclic lists are equal. So if you have the cycles ↻(1 2 3 4 5) and ↻(3 4 5 1 2), they’re equal. Likewise, ↻(1 2 2 1) and ↻(2 1 1
2) are equal. But ↻(1 2 3 4) and ↻(1 2 3 5) are not since they have different elements while ↻(1 1 1) and ↻(1 1 1 1) aren’t since they have different lengths.
Basically, there are two ways that you can solve this problem. First, you actually use the cyclic structure and recursively check each start in one list for a matching cycle in the other.
Alternatively, so long as the lengths are equal you can just double one list and search for the other as a subset. We’ll go ahead and code up both.
First, we want to write a semi-straight forward comparison. The function will take two lists. It will recur across each in both for a start and loop in the second until either a match is confirmed or
not. One thing that I want to do is make a cycle structure. We could use mutation to set the last cdr/tail of the list to the head, but instead I’ll make the following structure:
; Store a cycle as the current head and original (reset) head
(define-struct cycle (current original))
; Convert a list to a cycle
(define (list->cycle ls)
(make-cycle ls ls))
; Convert a cycle to a list
(define (cycle->list c)
(cycle-take (cycle-length c) c))
; Return the first item of a cycle
(define (cycle-head c)
(if (null? (cycle-current c))
(car (cycle-original c))
(car (cycle-current c))))
; Return all but the first item of a cycle
(define (cycle-tail c)
(if (null? (cycle-current c))
(make-cycle (cdr (cycle-original c)) (cycle-original c))
(make-cycle (cdr (cycle-current c)) (cycle-original c))))
; Get the length of a cycle
(define (cycle-length c)
(length (cycle-original c)))
; Take the first n items from a cycle
(define (cycle-take n c)
  (let loop ([i 0] [c c])
    (if (= i n)
        '()
        (cons (cycle-head c) (loop (+ i 1) (cycle-tail c))))))
; Test if a cycle is about to reset
(define (cycle-reset? c)
(null? (cycle-current c)))
Essentially, we’ll keep a pointer to the original list and reset when the current pointer runs out. All of this is of course transparent to anyone using the API, so we could switch it out for another
(using a vector and a current pointer for example) if we wanted. The most useful function yet potentially non-standard function is cycle-reset?. Essentially, it fills what would have been
cycle-null?, except a cycle will never be null. This tests when we’re about to reset to the beginning of the cycle.
There are a bunch of unit tests in the source on GitHub, but reset assured it works.
Now that we have that, the function it relatively straight forward:
; Test if two cycles are equal
(define (cycle-equal? c1 c2)
  ; Check the lengths first
  (define len (cycle-length c1))
  (and (= len (cycle-length c2))
       (let loop ([ci1 c1] [ci2 c2])
         (cond
           ; No matches found
           [(cycle-reset? ci1)
            #f]
           ; No match found for this start in c1
           ; Advance c1, reset c2
           [(cycle-reset? ci2)
            (loop (cycle-tail ci1) c2)]
           ; Match found at the current element!
           [(equal? (cycle-take len ci1)
                    (cycle-take len ci2))
            #t]
           ; Otherwise, no match, advance c2
           [else
            (loop ci1 (cycle-tail ci2))]))))
Theoretically, the comments should be pretty straight forward. For each starting pair, test if we have matching cycles using cycle-take. That could bail out early to make the code more efficient, but
at the cost of being rather less clean. Really, if you wanted to make this code efficient you’d most likely use a vector and a head pointer anyways.
And here we have a few tests:
> (cycle-equal? (list->cycle '(1 2 3 4 5)) (list->cycle '(1 2 3 4 5)))
#t
> (cycle-equal? (list->cycle '(1 2 3 4 5)) (list->cycle '(3 4 5 1 2)))
#t
> (cycle-equal? (list->cycle '(1 2 2 1)) (list->cycle '(2 1 1 2)))
#t
> (cycle-equal? (list->cycle '(1 1)) (list->cycle '(1 1 1 1)))
#f
> (cycle-equal? (list->cycle '(1 2 3 4)) (list->cycle '(1 2 3 5)))
#f
The next solution is a bit more straight forward if not quite as efficient. Essentially, double one of the lists and then check if the other is in it. For equal cycles, this will be equal but not
others. You do have to check the length first though.
First, we need to write code to check if one given list is a subset anywhere in another. Here’s one way to do that:
; Check if p is a prefix of ls
(define (prefix? ls p)
  (or (null? p)
      (and (not (null? ls)) ; guard so we never take the car of an empty list
           (equal? (car ls) (car p))
           (prefix? (cdr ls) (cdr p)))))
; Check if a list needle is in the list haystack
(define (contains? haystack needle)
(and (not (null? haystack))
(or (prefix? haystack needle)
(contains? (cdr haystack) needle))))
And with that, checking for equal is a rather minimal function (we’re taking the cycles as lists this time):
; Check if two cycles (as lists) are equal by doubling one
(define (list-cycle-equal? lsc1 lsc2)
(and (= (length lsc1) (length lsc2))
(contains? (append lsc1 lsc1) lsc2)))
And to check that we can use the same tests. We just don’t convert to cycles first:
> (list-cycle-equal? '(1 2 3 4 5) '(1 2 3 4 5))
#t
> (list-cycle-equal? '(1 2 3 4 5) '(3 4 5 1 2))
#t
> (list-cycle-equal? '(1 2 2 1) '(2 1 1 2))
#t
> (list-cycle-equal? '(1 1) '(1 1 1 1))
#f
> (list-cycle-equal? '(1 2 3 4) '(1 2 3 5))
#f
And that’s it. If you’d like, you can see the entire code on GitHub (cycle equality source). All of the functions are already in this post, but there are a bunch of unit tests that might be of
Edit 9 April 2013: A comment from Maurits on the Programming Praxis post got me wondering if it could be done in O(m + n)^1. Basically, their idea was to lexically order both cycles and then check if
they are equal as lists.
To lexically order them, we want to advance the cycle so that the smallest element in the cycle is first. If there is a tie, break it with the element right after each smallest and so on. Something
; Advance a cycle to the lexically minimum position
(define (cycle-lexical-min c [< <] [= =])
; Check if one cycle is less than another
(define (cycle-< c1 c2)
(let loop ([c1 c1] [c1-cnt (cycle-length c1)]
[c2 c2] [c2-cnt (cycle-length c2)])
(and (> c1-cnt 0)
(> c2-cnt 0)
(or (< (cycle-head c1) (cycle-head c2))
(and (= (cycle-head c1) (cycle-head c2))
(loop (cycle-tail c1) (- c1-cnt 1)
(cycle-tail c2) (- c2-cnt 1)))))))
; Lexically sort by storing minimum
(let loop ([min c] [c (cycle-tail c)])
  (cond
    [(cycle-reset? c) min]
    [(cycle-< c min) (loop c (cycle-tail c))]
    [else (loop min (cycle-tail c))])))
Note: This code uses an updated version of cycle-length that is amortized O(1) (it caches the length). You can see the code for that on GitHub.
One you have the sort, the actual comparison is easy:
; Compare cycles by lexical comparison
(define (lexical-cycle-equal? c1 c2 [< <] [= =])
(equal? (cycle-take (cycle-length c1) (cycle-lexical-min c1 < =))
(cycle-take (cycle-length c2) (cycle-lexical-min c2 < =))))
I’m not completely sure about the runtime of finding the lexical minimum. In the general case (with few duplicates), it’ll be O(n) though. Then there’s another O(n + n) for the cycle-length and
cycle-take, plus a final additional O(max(m, n)) for the equal?. So overall it would be O(3m + 3n + max(m, n)) which is O(m + n). The constant could be improved with a better abstraction, but not the
big-O time.
And of course all of the previous tests still work:
> (lexical-cycle-equal? (list->cycle '(1 2 3 4 5)) (list->cycle '(1 2 3 4 5)) < =)
#t
> (lexical-cycle-equal? (list->cycle '(1 2 3 4 5)) (list->cycle '(3 4 5 1 2)) < =)
#t
> (lexical-cycle-equal? (list->cycle '(1 2 2 1)) (list->cycle '(2 1 1 2)) < =)
#t
> (lexical-cycle-equal? (list->cycle '(1 1)) (list->cycle '(1 1 1 1)) < =)
#f
> (lexical-cycle-equal? (list->cycle '(1 2 3 4)) (list->cycle '(1 2 3 5)) < =)
#f
1. The previous two solutions are O(mn) because they have to compare each starting point pairwise ↩︎ | {"url":"https://blog.jverkamp.com/2013/04/09/cyclic-equality/","timestamp":"2024-11-07T03:19:36Z","content_type":"text/html","content_length":"38093","record_id":"<urn:uuid:2e05af38-6d05-4a6b-aecc-2d2773cd3012>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00761.warc.gz"} |
Bodleian Archives & Manuscripts
Miscellaneous manuscript drafts for paper with this title, perhaps early form of that later published with J.G.F. Francis, Computer Journal, 4. Includes drafts for 'Preliminaries', 'Special
Codiagonal Forms', 'Derivation Form', 'Matrix interpretation of analogue of Bairstow process for Codiagonal form', 'Real roots of an unsymmetric matrix', and a heavily corrected typed version. Folder
includes 8-page manuscript note in another hand 'On the…
Shelfmark: MS. Eng. misc. b. 265/C.112
Extents: 1 file
Dates: n.d. | {"url":"https://archives.bodleian.ox.ac.uk/repositories/2/top_containers/140260","timestamp":"2024-11-04T21:37:38Z","content_type":"text/html","content_length":"33497","record_id":"<urn:uuid:f8143a21-d793-4963-a55a-bf14e9ba92a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00629.warc.gz"} |
The Math Behind the Card Game Dobble
Recently, at a company gathering fun day, I found myself rediscovering an old favorite: Dobble. While the quick pace and competitive spirit made for an enjoyable break, I started thinking about what
makes this game tick. As we raced to spot matching symbols, I realized there must be more to the game’s design than meets the eye. This renewed my curiosity, leading me to explore the mathematical
principles behind the game that ensure every pair of cards always has exactly one symbol in common. Here’s a peek into the fascinating math behind Dobble.
How Dobble Works
Dobble is a card game that challenges players to find a matching symbol between any two cards. The clever design ensures that each pair of cards shares exactly one symbol, no matter which cards are
chosen. This seemingly simple setup is based on the principles of finite geometry and combinatorics, specifically projective planes.
In Dobble, each card features a set of symbols, and the key to the game's design is that any two cards share exactly one symbol in common. At first glance, this may seem like a coincidence, but in fact
it is the result of some careful and beautiful mathematical construction.
How Dobble Really Works
The underlying math in Dobble comes from the field of finite projective planes. A projective plane is a structure in which:
• There are a set number of points (symbols) and lines (cards),
• Every line contains the same number of points,
• Any two lines share exactly one point.
A projective plane of order \(n\) is a special arrangement where:
• There are \(n^2 + n + 1\) points (symbols),
• There are \(n^2 + n + 1\) lines (cards),
• Each line (card) contains exactly \(n + 1\) points (symbols),
• Each point (symbol) appears on \(n + 1\) lines (cards),
• Any two lines (cards) intersect at exactly one point (symbol).
In Dobble, the order \(n = 7\), and this order determines the number of cards, symbols, and the rule that any two cards share exactly one symbol.
This arrangement ensures that for any two cards, there will always be exactly one shared symbol. The particular projective plane used for Dobble is often based on order 7 geometry.
Construction of Dobble Cards
The cards in Dobble are constructed using a projective plane of order 7, which results in the following:
• There are \(7^2 + 7 + 1 = 57\) unique cards,
• Each card contains \(7 + 1 = 8\) symbols,
• Every pair of cards shares exactly 1 symbol.
The image shows a set of Dobble cards, each featuring multiple symbols. The key mathematical feature is that any two cards share exactly one symbol, a property based on the finite projective plane of
order 7 used in the game’s design.
This arrangement is what makes the game work so seamlessly. Without this mathematical structure, it would be extremely difficult to ensure that all cards share one, and only one, symbol with every
other card.
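To make the construction concrete, here is a minimal Python sketch (not from the game's designers; the function name and symbol numbering are arbitrary choices) that builds such a deck for any prime order n and checks the one-shared-symbol property:

from itertools import combinations

def build_deck(n=7):
    # Affine point (x, y) gets symbol x*n + y; symbols n*n .. n*n + n are the points at infinity.
    deck = []
    for m in range(n):                # lines y = m*x + b (mod n), plus the "slope m" point at infinity
        for b in range(n):
            deck.append([x * n + (m * x + b) % n for x in range(n)] + [n * n + m])
    for c in range(n):                # vertical lines x = c, plus the "vertical" point at infinity
        deck.append([c * n + y for y in range(n)] + [n * n + n])
    deck.append([n * n + i for i in range(n + 1)])   # the line at infinity
    return deck

deck = build_deck(7)
assert len(deck) == 57 and all(len(card) == 8 for card in deck)
assert all(len(set(a) & set(b)) == 1 for a, b in combinations(deck, 2))

For n = 7 this reproduces the counts above: 57 cards, 8 symbols per card, and exactly one shared symbol between any two cards.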
Dobble is more than just a fun and fast-paced card game—it’s a brilliant example of how mathematics can be applied to create engaging and entertaining experiences. The game’s unique structure is a
direct result of the beautiful interplay between finite geometry and combinatorics. So, the next time you sit down to play Dobble, remember that the seemingly simple task of spotting matching symbols
is backed by such a fascinating and intricate mathematical design!
In the next part of this series, we’ll delve into the mechaincs of another popular card game - Set. Stay tuned! | {"url":"http://blog.liorp.dev/blog/games/math/math-behind-children-games-part-1/","timestamp":"2024-11-05T11:03:37Z","content_type":"text/html","content_length":"24098","record_id":"<urn:uuid:138d2ef0-faf1-441c-b01b-0c4979fcfe41>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00059.warc.gz"} |
Conservation of Energy (1.3.1) | IB DP Physics 2025 SL Notes | TutorChase
Understanding the Principle
The conservation of energy is an enduring law of physics, asserting that energy within a closed system remains constant. Energy is not created or annihilated but undergoes transformation from one
form to another. This immutable principle aids in demystifying complex physical systems and forecasting diverse processes and reactions.
Energy Transformations
Energy can exist in numerous forms and can transform from one state to another while abiding by the conservation law.
• Work-Energy Theorem: This theorem is a concrete instance of energy conservation. It posits that work done on an object is tantamount to the change in its kinetic energy, illuminating the direct
correlation between energy input through work and the ensuing energy transformation within a system.
Work-energy theorem
Image Courtesy Geeksforgeeks
• Energy States: Physical systems can adopt varied energy states. Each transition, whether embodying heat, light, or motion, adheres unfailingly to the conservation axiom, affirming the constancy
of total energy even amidst continual transformations.
Work Done by a Force
In the realm of physics, when a force acts to displace an object, work ensues. Here, energy is seamlessly transferred from the applying force to the object, instigating a change in the object’s
energy state. This mechanism illuminates the conservation of energy principle in action.
Calculations and Considerations
• Formula: Work done is mathematically expressed as the product of the force applied and the displacement effected, encapsulated in the equation W=F×d.
• Energy Transfer: Every joule of work translates to a joule of energy imparted to the object. This mathematical and conceptual parallelism is instrumental in comprehending energy flow within
Work done
Image Courtesy OnlineMathLearning.com
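As a quick numerical illustration (the force, displacement and mass below are invented values for this sketch, not taken from the notes), the work done can be fed straight into the work-energy theorem:

F = 25.0    # N, assumed applied force
d = 4.0     # m, assumed displacement along the force
m = 10.0    # kg, assumed mass, starting from rest
W = F * d                   # work done = 100 J, transferred to the object
v = (2 * W / m) ** 0.5      # work-energy theorem: W = (1/2) m v^2, so v is about 4.47 m/s
print(W, v)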
Sankey Diagrams
A Sankey diagram is a graphic depiction of energy transfers, showcasing the distribution and dissipation of energy within systems. It underscores the energy conservation principle by elucidating the
split between utilised and lost energy.
Key Components
• Arrows: The arrows, varying in size, symbolise the amounts of energy in respective forms. The arrow’s breadth is proportionate to the energy quantity it denotes.
• Energy Inputs and Outputs: The diagram marks a clear dichotomy between energy inputs, useful outputs, and energy losses, presenting a panoramic view of energy distribution within systems.
Sankey diagram
Image Courtesy Siyavula
Creating a Sankey Diagram
• 1. Identify Energy Inputs: Ascertain the total input energy channelled into the system.
• 2. Determine Energy Outputs: Categorise the energy as useful output or as losses – stemming from inefficiency, dissipation, or other systemic limitations.
• 3. Draw Arrows Proportionally: Arrows should be drawn to scale to represent energy quantities with verisimilitude, offering a visual, intuitive grasp of energy allocation within the system.
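Steps 2 and 3 can be sketched in a few lines of code (the input and output figures here are invented for illustration): every joule of input must be accounted for, and each arrow's width follows the percentage printed below.

energy_in = 500.0                                                     # J, assumed input
outputs = {"useful light": 50.0, "heat loss": 430.0, "sound": 20.0}   # J, assumed split
assert abs(sum(outputs.values()) - energy_in) < 1e-9                  # conservation: the arrows must balance
for name, e in outputs.items():
    print(f"{name:>12}: {e:6.1f} J  ({100 * e / energy_in:.0f}% of input -> relative arrow width)")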
Mechanical Energy Conservation
Mechanical energy conservation manifests where friction and resistive forces are absent. Here, mechanical energy, comprising kinetic and potential energy, remains unvaried, echoing the conservation
of energy principle.
In the Absence of Friction
• Kinetic and Potential Energy: Kinetic energy is ascribed to motion, and potential energy to position or state. Both energy forms are intrinsic to mechanical energy conservation.
• Energy Exchange: In environments devoid of friction, energy interchanges freely between kinetic and potential energy, while the aggregate mechanical energy is retained.
• Perpetual Motion: Ideal, frictionless settings would witness incessant motion. However, real-world scenarios invariably entail energy losses to friction, air resistance, and other forces.
• Energy Analysis: Grasping this concept is foundational in evaluating mechanical systems. It imparts insights into energy flux and transformation while accounting for actual systemic
Application in Physics Problems
Example: A Pendulum
Visualise a pendulum oscillating to and fro. At its apogee, it is endowed with maximal potential energy and minimal kinetic energy. As it descends, potential energy is transmuted into kinetic energy,
culminating in peak kinetic energy at the nadir of the swing. In an environment purged of air resistance and friction at the pivot, oscillation would be endless, epitomising the conservation of
mechanical energy.
Conservation of momentum in pendulum
Image Courtesy tang90246
• Energy Transformation: The energy oscillation between potential and kinetic forms is unceasing.
• Constant Total Energy: The summative energy (potential plus kinetic) remains unaltered at every point of oscillation, a real-time testament to the conservation of energy.
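A short numerical check of this exchange (a sketch assuming g = 9.8 m/s², an arbitrary 0.5 kg bob on a 2 m string released from 30°, and no friction) shows the total staying constant while potential and kinetic energy trade off:

import math

g, m, L = 9.8, 0.5, 2.0                                   # assumed values
E_total = m * g * L * (1 - math.cos(math.radians(30)))    # all potential energy at release
for deg in (30, 20, 10, 0):
    h = L * (1 - math.cos(math.radians(deg)))             # height above the lowest point
    pe = m * g * h
    ke = E_total - pe                                     # conservation of mechanical energy
    print(f"{deg:>2} deg: PE={pe:.3f} J, KE={ke:.3f} J, total={pe + ke:.3f} J")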
The principle of the conservation of energy is an elemental doctrine in physics. By delineating the invariant nature of energy within closed systems and underscoring the laws governing energy
transitions, it empowers students to pierce the complexity of physical processes. Every energy conversion, from the microscopic reactions within atoms to the macroscopic movements of celestial
bodies, is governed by this unyielding principle.
The nuanced understanding of work done by a force and the conservation of mechanical energy demystifies intricate phenomena. It unveils the dynamics of energy flow, offering tools for precise
analysis and predictions. The Sankey diagram, serving as a visual envoy, makes abstract concepts tangible, bridging the gap between theoretical postulates and empirical observations.
As students traverse the landscape of physics, the conservation of energy stands as a beacon, illuminating pathways, unravelling complexities, and connecting disparate concepts into a coherent,
intelligible whole. Each law, equation, and principle is a thread woven into the intricate tapestry of energy dynamics, unveiling the harmonious dance of forces and motions that animate the universe.
In the design of machines and industrial equipment, the conservation of energy principle is pivotal. Engineers and designers utilise this principle to optimise energy efficiency and minimise waste.
By understanding that energy cannot be destroyed but only transformed, designers aim to maximise the useful energy output for a given input, mitigating energy losses through friction, heat
dissipation, and other inefficiencies. This often involves the incorporation of energy-saving technologies and materials that reduce resistance, friction, and other energy losses, thereby enhancing
the performance, efficiency, and sustainability of machines and industrial systems.
The conservation of energy principle can be applied to both open and closed systems with appropriate modifications. In closed systems, the total energy remains constant. In open systems, energy can
cross the boundaries of the system. However, the principle still applies; the total energy of the system and its surroundings remains constant. In open systems, the conservation of energy takes into
account the energy entering and leaving the system, alongside the energy transformations occurring within the system, ensuring a comprehensive and balanced energy accounting that aligns with the
universal conservation principle.
In environmental science and sustainability, the conservation of energy principle underscores the finite nature of Earth's energy resources. While energy cannot be created or destroyed, it can be
converted into forms that are challenging to utilise efficiently. Understanding this principle aids in the efficient management and utilisation of energy resources, reducing waste, and mitigating
environmental impacts. It prompts innovations in energy-efficient technologies, renewable energy sources, and sustainable practices that optimise energy use, minimise losses, and mitigate the
transformation of energy into less usable and environmentally detrimental forms.
The conservation of energy principle is closely related to the First Law of Thermodynamics, which states that energy cannot be created or destroyed in an isolated system. The total amount of energy
remains constant, though it can change from one form to another. Essentially, the First Law of Thermodynamics is a specific application of the energy conservation principle, applied to thermodynamic
processes. It incorporates the concept of internal energy, and considers the transfer of energy as heat and work, offering a comprehensive framework for analysing energy transformations in thermal
processes while adhering to the overarching conservation principle.
Energy losses in real-world systems, such as friction, air resistance, and thermal dissipation, are accounted for by broadening the scope of the energy conservation principle. While the total energy
within a closed system remains constant, in practical scenarios, some energy invariably escapes the system or is transformed into less useful forms. Engineers and scientists calculate and analyse
these energy losses to optimise system performance. For instance, in mechanical systems, energy efficiency calculations consider energy losses to provide a realistic measure of the usable energy
output relative to the input, ensuring alignment with the conservation of energy principle.
Practice Questions
A 5kg object is lifted 10 meters above the ground. Explain how the conservation of energy principle applies in this scenario. Include a calculation of the work done and discuss the energy transfers
that occur.
The conservation of energy principle is evident in this scenario because energy is neither created nor destroyed, only transformed. As the object is lifted, the work done against gravity is converted into gravitational potential energy. The work done can be calculated using the formula W = mgh, which gives W = 5 kg × 10 m/s² × 10 m = 500 J (taking g = 10 m/s²). So, 500 joules of
work are done to lift the object, and this energy is stored as gravitational potential energy in the object, showcasing an energy transfer consistent with the conservation of energy.
A car of mass 1200kg is moving at a speed of 20m/s when the brakes are applied, bringing it to a stop. Using the principle of the conservation of energy, explain the energy transfers that take place
during this process.
In this case, the car possesses kinetic energy initially, calculated by Ek = ½mv² = ½ × 1200 kg × (20 m/s)² = 240,000 J. When the brakes are applied, this kinetic energy is transferred and
dissipated as thermal energy due to friction between the brake pads and wheels, and sound energy due to the noise produced during braking. The total energy before applying the brakes is equal to the
total energy after the car stops, demonstrating the conservation of energy principle, where energy is not lost but transformed into other forms. | {"url":"https://www.tutorchase.com/notes/ib/physics-2025/1-3-1-conservation-of-energy","timestamp":"2024-11-10T19:20:48Z","content_type":"text/html","content_length":"1049132","record_id":"<urn:uuid:2afc18d8-2182-49bb-913b-149ca4b3eeda>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00441.warc.gz"} |
Roll ‘n’ Code - The Robertson Program for Inquiry-based Teaching in Mathematics and Science
Part 1: Quilt Square Creations
• Project Appendix A and engage students in a discussion about which code matches the design on the left. The code for the example starts in the left-most box. Remind students that codes are read
and written from left to right, top to bottom.
• Each student will work with their own coding square (Appendix B).
□ Teachers should choose the coding square option most appropriate for their students.
□ Coding squares with more tiles will increase difficulty for students.
• Show Slide 3 of Appendix A. Use the footnotes or the lesson plan to instruct students how to complete the activity.
• Have students start by rolling the die to determine how many squares to colour in. The number on the die represents the number of squares the student will colour. Have students start with blue,
continue with red, then green, then yellow.
• The student will continue rolling the die until all the squares are filled by a colour.
□ If students are left with blank squares, have them continue rolling the die until all squares are coloured. Use the same order of colours as before (blue, red, green, then yellow.)
• Advance the slideshow to slide 4.
• When the squares are coloured in, have students design a code on the command sheet (Appendix C) to provide instructions on how to replicate their coding square. Students can add boxes and numbers
to their command sheet as necessary. Have students begin their code in a designated area (i.e., top left-hand square.)
□ An example of coding language that can be used is found in Appendix C.
□ The programmer can write the code in two ways, yet keep the same meaning:
☆ Use only symbols (e.g., →|→, to signify two squares to the right).
☆ Use numbers and symbols (e.g., 2 →, to signify two squares to the right)
□ Encourage the programmer to find the simplest way to write their code.
☆ Ask students to share strategies on the simplest or most efficient way to write code.
□ When the code is complete, have students write their name or an identifiable symbol on the back of their code and quilting square.
Part 2: Quilt Square Recreation
• Students should be provided with a new, blank coding square.
• Students will pair up and exchange coding instructions.
• Using the coding instructions from their partner, students will recreate their partner’s coding square.
• When both partners have finished recreating their partner’s coding square, students will bring out the originals to compare.
□ Encourage students to work cooperatively using coding language to recreate the correct coding square from the code. Emphasize it is not a competition, but the task is to master creating and
interpreting the correct code to excel in the challenge.
Part 3: Mix and Match
• Instruct students to hand in their coding squares and coding instructions. Place the coding squares and coding instructions into groups of 6, ensuring that the matching code and square remain in
the same group.
• Arrange students into groups of 6.
• Place six corresponding coding squares and instructions, face up on a surface (i.e., a table, the floor, around the room, etc.)
□ The coding squares may need to be taped down so students cannot flip the square to find the matching name or symbol to the code.
• Once each group has matched the code with its corresponding coding square, have students circulate tables to see if they can match another group of six codes.
□ Encourage collaboration and discussion between students.
• Create a large class display of coding squares using the smaller coding squares, organized by a mathematical concept of choice (i.e., biggest area covered by a colour, fraction of coding square
coloured in blue, etc.)
• Once the coding display is created, challenge groups of students to create code to represent the class’ large quilt. | {"url":"https://wordpress.oise.utoronto.ca/robertson/portfolio-item/roll-n-code/","timestamp":"2024-11-12T15:12:47Z","content_type":"text/html","content_length":"109397","record_id":"<urn:uuid:9420a754-d276-4aaf-8955-6421e9bdf9f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00387.warc.gz"} |
calculate effective energy ball milling
Owing to its inexpensive nature and tunable physicochemical properties, biochar has proved to be a promising adsorbent for a wide range of contaminants including metals/metalloids ( Yang et al.,
2018, Rajapaksha et al., 2018, Sun et al., 2020 ), anions/oxyanions ( Sun et al., 2019, Yang et al., 2019a; Ouyang et al., 2019 ), cosmetics, pharmaceut...
Amorphization via high energy ball milling is a solid state processing technique in which continuously moving balls transfer kinetic/impact energy to the powder particles. It leads to welding,
fracturing, fragmentation and rewelding of the particles. During ball milling, nonequilibrium phases such as supersaturated solid solution, metastable phases, quasicrystalline phases and
nanostructure ...
The numerical model of the highenergy ball milling attritor is built using Solidworks, whereas EDEM, a commercially available DEM code by DEM Solutions Inc.,[] is used for different models are
considered in the simulation, as shown in Figure first model (Figure 1(a)) contains a central shaft with 5layer impellers. The canister has a cylindrical shape with 82 mm diameter ...
Milling constraints include time duration of milling, ball size, the balltosample content proportion, rotation speed, and energy that took part in a vital part of the structureproperty ...
The milling process definitions Cutting speed,v c Indicates the surface speed at which the cutting edge machines the workpiece. Effective or true cutting speed, v e Indicates the surface speed at
the effective diameter (DC ap).This value is necessary for determining the true cutting data at the actual depth of cut (a p).This is a particularly important value when using round insert
cutters ...
To calculate material removal for drilling: Find the diameter of drill bit, cutting speed, and feed rate. Multiply the feed rate in mm/revolution by the cutting speed in mm/min. Multiply the
product with the diameter of the drill bit in mm. Divide the product by 4 to obtain the material removal rate.
Alexis Yovanovic. Indeed. Specific energy consumption (W, kWh/t) refers to the simple quotient of the "net" power applied by the mill (kW) divided by the feed rate (t/h). A mill can have a ...
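For illustration only (the power and feed figures below are assumptions, not taken from any of the quoted studies), that quotient is simply:

power_kw = 3200.0                      # kW, assumed net power drawn by the mill
feed_tph = 250.0                       # t/h, assumed fresh feed rate
specific_energy = power_kw / feed_tph  # kWh per tonne of ore
print(specific_energy)                 # 12.8 kWh/t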
It was found that the ball mill consumed kWh/t energy to reduce the F 80 feed size of µm to P 80 product size of µm while stirred mill consumed kWh/t of energy to produce ...
feeding (, making metering into the mill difficult), grinding (, plugging the hammer mill screen or blocking the air classifier of a jet mill), and collection (, plugging the bag filters). There
are two ways to grind sticky materials. The first solution is to dry the material prior to grinding, or dry and
The numerical model is shown to be a promising tool for the knowledge of dry milling in a planetary ball mill. KeyWords: Highenergy ball milling, Simulation, Optimization, Parameters, Modeling.
Received: June 5, 2021. Revised: March 11, 2022. Accepted: April 13, 2022. Published: June 29, 2022. 1 Introduction
Calculating the energy consumption of a ball mill involves measuring the input power, the mass of the grinding media, and the speed at which it rotates. You'll need to use the basic laws of...
The result revealed that the energy required by a ball mill, highpressure homogenizer and twin screw extruder were,, and 5 kWh/kg of biomass, respectively . Kim et al. showed that a large amount
of energy was needed by the planetary ball mill for grinding rice straw compared to the attrition mill.
The ball mill. Ball milling is a mechanical technique widely used to grind powders into fine particles and blend them. Being an environmentally friendly, cost-effective technique, it has found wide
application in industry all over the world. Since this minireview mainly focuses on the conditions applied for the prep
Volume 291, April 2016, Pages 713. Optimization of the high energy ball-milling: Modeling and parametric study. Hamid Ghayour, Majid Abdellahi, Maryam Bahmanpour.
The minimal magnitude of ball size is calculated in millimeters from the equation: where is the maximum size of feed (mm); σ is compression strength (MPa); E is modulus of elasticity (MPa); ρb is
density of material of balls (kg/m 3 ); D is inner diameter of the mill body (m).
For typical circuits that involve AG/SAG, HPGR or ball milling, the generation of 75 µm material (denoted as SSE75) is a suitable marker size with which to benchmark performance because it
contains 80% of the surface area generation 1 and energy is proportional to surface area generation 2. As shown in Figure 1, whereas the Bond method uses ...
Skyspring Nanomaterials, Inc. It is a ball milling process where a powder mixture placed in the ball mill is subjected to highenergy collision from the balls. This process was developed by
Benjamin and his coworkers at the International Nickel Company in the late of 1960. It was found that this method, termed mechanical alloying, could ...
Ball Milling. Ball milling is suggested as a novel method for enhancing filler dispersion in different matrices that is environmentally and economically sustainable [85]. It is a solidstate
mixing of powders, usually performed with ball mills, which enables intimate mixing of the constituents through repetitive plastic deformation. By ...
Currently, nanostructured materials proceed to receive increasing attention due to their potential physical, chemical, and mechanical properties compared to their conventional counterparts [1,
2].Between processing methods, highenergy ball milling (HEBM) is the most used for its simplicity, low cost, and production capacity [3,4,5].In this process, the powder is quickly and effectively
This article provides a comprehensive review of the milling techniques that can be used to improve the solubility of poorly watersoluble drugs, which are a major challenge for pharmaceutical
development. The article covers the mechanisms, advantages, limitations and applications of various milling techniques, such as wet media milling, dry media milling, highpressure homogenization,
and ...
Three SAG/ball mill circuits were surveyed revealing that on average, 91% of the supplied energy results in he at losse s, leavin g on ly 9 % for o re bre akage. The slurry absorbs most of the
Since the surface free energy of solids is in the order of 10 2 to 10 3 mJ/m 2, Ocepec et al. (1986) pointed out that producing material having a specific surface of about m 2 /g by milling
should require a specific energy of only about kWh/t.
Highenergy ball milling (HEBM) is one of the most efficient ways to produce tungstenbased nanopowders [46,47,48,49].Compared with other production methods such as freezedrying, [], wet chemical
processes [], hydrogen annealing of tungsten oxides [], spray drying [], and DC arc plasma chemical synthesis [19,23], HEBM is the most efficient and productive process due to the simplicity of
its ...
Step 1: Calculate effective diameter. The thing about a ball nose if if your cut depth is less than the radius of the ball, they your effective diameter varies from the shank diameter of the
cutter. It will be smaller. Simple geometry will tell you what the size is. Here is the basic formula:
The grindingproduct size, P, in a Bond ball mill, which is given by the aperture size which passes 80% of the grinding product as a function of the aperture size of the test screen Pk, can be
expressed by the formula P= P k K 2. These functions for G and P enable us to calculate the value of Wi for any other size of grinding product if we know ...
Planetary Ball Mill PM 100 RETSCH highest fineness. Planetary Ball Mills are used wherever the highest degree of fineness is addition to wellproven mixing and size reduction processes, these
mills also meet all technical requirements for colloidal grinding and provide the energy input necessary for mechanical extremely high centrifugal forces of a planetary ball ...
Milling was then performed in 80% ethanol for 30120 minutes using a highenergy ball mill. The mechanical treatment resulted in a reduction of the fibre length and diameter, probably due to
degradation of the cellulose amorphous regions. ... Ball milling is a simple, fast, costeffective green technology with enormous potential. The studies ...
This technique belongs to the comminution or attrition approach introduced in Chapter 1. In the highenergy ball milling process, coarsegrained structures undergo disassociation as the result of
severe cyclic deformation induced by milling with stiff balls in a highenergy shaker mill [ 8, 9 ].
WhatsApp: +86 18838072829 | {"url":"https://alataverne-merville.fr/2023_Jun_09/9203.html","timestamp":"2024-11-07T22:43:42Z","content_type":"application/xhtml+xml","content_length":"26453","record_id":"<urn:uuid:b4f4ce9d-4491-45c3-ac77-bac8dcf2c645>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00700.warc.gz"} |
Econ 745
Simon Gilchrist
Problem Set 3 - Due Tuesday, Oct 8th
This problem set is empirical. You will have to download an excel data file from the course
dropbox. This excel file has series for annual consumption (a real index), an index of US stock
excess returns (RMRF: market return less risk-free rate), a risk-free rate series, and six returns
on portfolios of stocks, sorted by size (small vs. large) and book-to-market (low b/m, medium
b/m, and high b/m).
Question 1: Preliminaries and cross-section of stocks
(1) Compute the consumption growth: Ct /Ct−1 . Note that returns are from Jan 1 to Dec 31.
You should match the consumption growth between 1929 and 1930 with the 1929 stock market
return. (Some researchers match it with the 1930 stock market return, but I want everyone in
class to do the same thing for these results.) Similarly compute inflation, Pt /Pt−1 . Compute
the market return by adding the risk-free rate to RMRF. Subtract inflation from the risk-free
rate, the market return, and the 6 portfolio returns, to obtain real returns (Match the inflation
between 1929 and 1930 with the 1929 stock return, just like for consumption). (Note: the
risk-free rate = short-term US government bonds.)
(2) Report the mean, standard deviation, and autocorrelation of consumption growth, the
risk-free rate, the market real return, and the 6 real portfolio returns.
(3) Report the mean and standard deviation of the log returns as well. (These may be more
relevant for long-run investments, since the long-horizon stock return is $R_{t \to t+T} = \prod_{s=0}^{T} R_{t+s}$.)
(4) Compute the β of each portfolio return on the market return. Produce a plot with, on
the x-axis, the market β, and the y-axis, the mean excess return. What is the prediction of the
CAPM? What does the data say?
(5) Compute the β of each portfolio return on consumption growth, as well as the consumption β of the stock market. Produce a plot with, on the x-axis, the consumption β, and the
y-axis, the mean excess return. What is the prediction of the Consumption CAPM? What does
the data say?
Question 2: Hansen-Jagannathan bounds
In class we concentrated on the simple H-J inequality with one excess return:
$$\frac{\sigma(M)}{E(M)} \geq \frac{E(R^e - R^f)}{\sigma(R^e - R^f)}.$$
However we can in practice find tighter bounds if we use the model for multiple returns Ri. The
question is, what is the minimal σ(M ) given E(M ) and given that E(M Ri ) = 1, for i = 1...N.
Here is one derivation of the bounds (for more [not required], see the Hansen-Jagannathan JPE
1991 paper).
Write a linear regression of M on demeaned returns:
$$M = a + \sum_{i=1}^{N} b_i \,(R_i - E(R_i)) + \varepsilon,$$
$$M - E(M) = R'b + \varepsilon,$$
where R is the vector of stacked, demeaned returns. We have
$$b = \mathrm{Var}(R)^{-1}\,\mathrm{Cov}(M, R).$$
It follows that
$$\mathrm{Var}(M) \geq b'\,\mathrm{Var}(R)\,b = \mathrm{Cov}(M, R)'\,\mathrm{Var}(R)^{-1}\,\mathrm{Cov}(M, R).$$
From the asset pricing equation,
$$E(M R_i) = 1 \quad \forall\, i,$$
$$\mathrm{Cov}(M, R) = E(MR) - E(M)E(R) = 1 - E(M)E(R),$$
where again R is the vector of returns.
Conclusion: (this is the only thing you need!)
$$\mathrm{Var}(M) = \sigma(M)^2 \;\geq\; \bigl(1 - E(M)E(R)\bigr)'\,\mathrm{Var}(R)^{-1}\,\bigl(1 - E(M)E(R)\bigr).$$
Note that 1 is a vector of 1.
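A minimal NumPy sketch of this bound may help (variable names and the data-loading step are assumptions; R is the T×N matrix of gross real returns built in Question 1):

import numpy as np

def hj_bound(R, EM_grid):
    ER = R.mean(axis=0)                              # sample E(R)
    Sinv = np.linalg.inv(np.cov(R, rowvar=False))    # sample Var(R)^{-1}
    out = []
    for EM in EM_grid:
        g = 1.0 - EM * ER                            # 1 - E(M) E(R)
        out.append(np.sqrt(g @ Sinv @ g))            # smallest sigma(M) consistent with the bound
    return np.array(out)

# e.g. EM_grid = np.linspace(0.9, 1.05, 100); plot EM_grid against hj_bound(R, EM_grid)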
(1) Create a figure showing, as a function of E(M ), the required σ(M ) to make the inequality
hold with equality. First use two assets, the real risk-free rate and the real market return. Then,
use 8 assets (adding the 6 portfolios). Plot the two frontiers on the same picture. x-axis=E(M ),
y-axis=σ(M ). Choose the scale!
(2) Using the H-J bounds. Consider the standard CRRA utility function $u(c_t) = c_t^{1-\gamma}/(1-\gamma)$.
Compute the stochastic discount factor. Given the consumption data, compute E(M ) and
σ(M ) for various values of γ = .5, 1, 2, 4, 6, 8, 10, 15, 20, 30, 40, .., 200. Plot the resulting pairs
E(M ), σ(M ) on the figure of question 1. Is there a value of γ such that the SDF satisfies the
H-J bounds?
(3) Now consider the following utility function, with an external habit:
$$u(c_t) = \frac{(c_t - \theta h_t)^{1-\gamma}}{1-\gamma}, \qquad h_t = (1-\delta)h_{t-1} + \delta c_{t-1}.$$
Redo the exercise of question (2), trying different values for δ (e.g. .1, .01), and for θ (e.g. .9,
.95) as well as γ = 2, 4. Are there parameter values such that the SDF satisfies the H-J bounds?
Question 3: GMM estimation
This question asks you to estimate by GMM the standard consumption-based model:
$$E_t\!\left[\beta \left(\frac{C_{t+1}}{C_t}\right)^{-\gamma} R_{t+1}\right] = 1.$$
Set β = .998, so you only have to estimate γ. For each question below, produce a table with
the first stage and second stage estimates, the J test of overidentification (if possible) and the
associated p-value. (Use Cochrane’s book, chapters 10 and 11, for reference on GMM.)
Overall, we want to know
(a) What is the right value of γ?
(b) Does this model work well or not? With this in mind, make sense of the estimates,
standard errors and J-statistic. Try to explain intuitively the results.
Advice: You can use Matlab’s minimizer (fminsearch) to solve the minimum of the objective
function. It may be useful to plot as a function of γ, the objective function that you try to
minimize. It may be useful to report the average error in the Euler equations, at the estimated
parameter value, i.e. $\frac{1}{T}\sum_{t=1}^{T} 0.998\,\bigl(\tfrac{C_{t+1}}{C_t}\bigr)^{-\hat{\gamma}}\, R_{t+1} - 1$. This is an error in annual returns, e.g. .02
means that the model misses the mean returns by 200 basis points per year. The second stage
of GMM picks an optimal combination of moments to set equal to zero. It may be useful to look
which combination GMM is picking (i.e. the weighting matrix W ) since in some cases, GMM
“discards” some moments because they are not very informative. Please format your results
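If you work in Python rather than Matlab, a minimal sketch of the moment conditions and the first-stage objective looks as follows (variable names and the data setup are assumptions; g is consumption growth C_{t+1}/C_t and R is the T×N matrix of gross returns):

import numpy as np
from scipy.optimize import minimize_scalar

beta = 0.998

def moments(gamma, g, R):
    m = beta * g ** (-gamma)                       # SDF realizations
    return (m[:, None] * R - 1.0).mean(axis=0)     # average Euler-equation errors

def gmm_objective(gamma, g, R, W):
    gbar = moments(gamma, g, R)
    return gbar @ W @ gbar

# First stage with the identity weighting matrix:
# res = minimize_scalar(gmm_objective, bounds=(0.1, 300), args=(g, R, np.eye(R.shape[1])), method="bounded")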
(a) Estimate the model using only the excess return on the stock market (market return less
risk free rate). 1 moment, 1 parameter!
(b) Estimate the model using only the real return on the stock market.
(c) Estimate the model using only the real risk-free return.
(d) Estimate the model using the real risk-free return and the real market return.
(e) Estimate the model using the real risk-free return and the real excess market return.
(f) Estimate the model using the 8 assets. | {"url":"https://studylib.net/doc/11787289/econ-745-simon-gilchrist-problem-set-3---due-tuesday--oct...","timestamp":"2024-11-10T19:37:10Z","content_type":"text/html","content_length":"57736","record_id":"<urn:uuid:23c75144-24c9-4b03-b538-54268c283571>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00560.warc.gz"} |
Articles about modern variational inference series – B.log
Previously we covered Variational Autoencoders (VAE) — a popular inference tool based on neural networks. In this post we'll consider a follow-up work from Toronto by Y. Burda, R. Grosse and R. Salakhutdinov: Importance Weighted Autoencoders (IWAE). The crucial contribution of this work is the introduction of a new lower bound on the marginal log-likelihood $\log p(x)$ which generalizes the ELBO, but also allows one to use less accurate approximate posteriors $q(z \mid x, \Lambda)$.
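As a rough sketch of that bound (my own illustration, not code from the papers): given K samples z_1..z_K from q and their log importance weights log p(x, z_k) − log q(z_k | x), the IWAE objective is a log-mean-exp of the weights, computed stably as

import numpy as np

def iwae_bound(log_w):
    # log_w: array of shape (K, batch) holding log p(x, z_k) - log q(z_k | x)
    K = log_w.shape[0]
    m = log_w.max(axis=0)
    return m + np.log(np.exp(log_w - m).sum(axis=0)) - np.log(K)   # log (1/K) sum_k w_k

For K = 1 this reduces to the usual single-sample ELBO estimate.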
For dessert we'll discuss another paper, Variational inference for Monte Carlo objectives by A. Mnih and D. Rezende, which aims to broaden the applicability of this approach to models where
reparametrization trick can not be used (e.g. for discrete variables). | {"url":"https://artem.sobolev.name/tags/modern-variational-inference-series.html","timestamp":"2024-11-09T17:15:59Z","content_type":"text/html","content_length":"9020","record_id":"<urn:uuid:0378a28f-3a34-4ade-8711-cdcd0c4f3814>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00196.warc.gz"} |
Electron-volts to volts calculator
Energy in electron-volts (eV) to electrical voltage in volts (V) calculator.
Enter the energy in electron-volts, charge in elementary charge or coulombs and press the Calculate button:
eV to volts calculation with elementary charge
The voltage V in volts (V) is equal to the energy E in electron-volts (eV), divided by the electric charge Q in elementary charge or proton/electron charge (e):
V(V) = E(eV) / Q(e)
eV to volts calculation with coulombs
The voltage V in volts (V) is equal to 1.602176565×10^-19 times the energy E in electron-volts (eV), divided by the electrical charge Q in coulombs (C):
V(V) = 1.602176565×10^-19 × E(eV) / Q(C)
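The two formulas above translate directly into code; for example (a small illustrative sketch, not part of the calculator itself):

ELEMENTARY_CHARGE = 1.602176565e-19  # coulombs per elementary charge, the value used above

def ev_to_volts_elementary(energy_ev, charge_e):
    return energy_ev / charge_e                       # V(V) = E(eV) / Q(e)

def ev_to_volts_coulombs(energy_ev, charge_c):
    return ELEMENTARY_CHARGE * energy_ev / charge_c   # V(V) = 1.602...e-19 * E(eV) / Q(C)

print(ev_to_volts_elementary(800, 40))    # 20.0 V
print(ev_to_volts_coulombs(800, 2e-18))   # about 64.1 V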
See also | {"url":"https://jobsvacancy.in/calc/electric/ev-to-volt-calculator.html","timestamp":"2024-11-07T06:21:00Z","content_type":"text/html","content_length":"10621","record_id":"<urn:uuid:7cbf0e4c-a0a2-47fc-869f-1be34887cf17>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00014.warc.gz"} |
MadMath Manifesto
"Look at it this way. When I read a math paper it's no different than a musician reading a score. In each case the pleasure comes from the play of patterns, the harmonics and contrasts... The
essential thing about mathematics is that it gives esthetic pleasure without coming through the senses." (Rudy Rucker, A New Golden Age)
"'I find herein a wonderful beauty,' he told Pandelume. 'This is no science, this is art, where equations fall away to elements like resolving chords, and where always prevails a symmetry either
explicit or multiplex, but always of a crystalline serenity.'" (Jack Vance, The Dying Earth)
The preceding dialogues are both from works of fiction. That being said, they may in fact truly represent how the majority of mathematicians experience their work. For example, Rudy Rucker is himself
a retired professor of mathematics and computers (as well as a science fiction author). My own instructor of advanced statistics would end every proof with the heartfelt words, "And that's the
I've heard that kind of sentiment a lot. But I never experienced mathematics that way. I now have a graduate degree in mathematics and statistics, and currently teach full-time as a lecturer of
college mathematics, and these kinds of declarations still mystify me. Math has never felt "beautiful" or "poetic". I would never in a million years think to describe math as "pleasurable" or
Math drives me
My experience of mathematics is this: Math is a battle. It may be necessary, it may be demanding, it may even be heroic. But the existential reality is that if you're doing math, you've got a
problem. You very literally have a problem, something that is bringing your useful work to a halt, a problem that needs solving. And personally, I don't like problems; I am not fond of them; I wish
they were not there. I want them to be gone, eradicated, and out of my way. I don't like puzzles; I want solutions. And once you have a solution, then you're not doing math anymore. So the process of
mathematics is an experience in infuriation.
So, again: Math is a battle. It is a battle that feels like it must be fought. It can feel like a violent addiction; hours and days and nights disappearing into a mental blackness, unable to track
the time or bodily needs. Becoming aware again at the very edge of exhaustion, hunger, filth, and collapse.
At worst, math can feel like a horrible life-or-death struggle, clawing messily in the midst of muddy, bloody, poisonous trenches. At best, it may feel like an elegant martial-arts move, managing to
use the enemy's weight against itself, to its destruction.
I love seeing a powerful new mathematical theorem. But not because it "gives esthetic pleasure"; I have yet to see that. Rather, because a powerful theorem is the mathematical equivalent to "Nuke it
from orbit – It's the only way to be sure". A compelling philosophy.
On the day that you really need math it will be a high-explosive, demolishing the barrier between you and where you want to go. Is there a pleasure in that? Perhaps, but not from the "play of
patterns, the harmonics and contrasts". Rather, it's because blowing up things is cool. Like at a monster-truck rally, crushing cars is cool. Math may not be beautiful or fun for us, but it is
, and that's what we need from it.
Of course, I also don't know how to read a music score, so I'm similarly mystified if that's the operating analogy for most mathematicians. Perhaps I'm missing something essential, but I have to
stay true to my own experience. If math is going to be useful or worthwhile then it must literally
rock you
in some way, relieve an unbearable tension, and change your perception of what is possible.
And so, the battle continues.
4 comments:
1. You're dead on about math being the enemy when you actually, in real life, need it.
To me though in school, math seemed like a joke. I mean a funny joke, or a magic trick, or something absurd. When I had to do proofs on the board I would often laugh out loud as some horrible
mess resolved itself cleanly, or herculean efforts were made to cross a tiny distance. Hell, an indirect proof is practically a shaggy dog story.
2. I see the beauty not in solving things, but in understanding the concepts. That understanding makes proving theorems easier, but for me, the understanding is the goal, not solving the problems.
More precisely, the moment when you just understand what some particular notation or theorem actually says. (Latest such moment: The construction of integers by using natural numbers.)
This may change. I'm just an undergraduate at this point.
3. First we feast, then we MATH!
Delta, you just amped up the awesome by a factor of ten. Bravo!
4. As a theoretical computer scientist with a PhD in mathematics and statistics (stochastic processes specialty) I can assure you that my view of mathematics has evolved with time. It was definitely
a battle at the beginning, but with the passing of time and the gaining of understanding, things started to become more "rarefied". You get the feeling of being adrift on a boat, along a river
which becomes wider and wider, until you lose sight of the boundaries. The more things you learn, the more things you see that you need learning to progress. And you discover that you MUST
specialise in some way. | {"url":"http://www.madmath.com/2009/01/angrymath-manifesto.html","timestamp":"2024-11-11T04:43:42Z","content_type":"application/xhtml+xml","content_length":"85702","record_id":"<urn:uuid:8fb0d63f-add9-4dd2-add2-fd758b7571e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00143.warc.gz"} |
Math Is Fun Forum
Registered: 2005-06-28
Posts: 48,354
Women Jokes - I
Q: What did one female firefly say to the other?
A: You glow girl!
* * *
Q: What book do women like the most?
A: "Their husbands checkbook!".
* * *
Q: What do girls and noodles have in common?
A: They both wiggle when you eat them.
* * *
Q: What is the definition of eternity?
A: The time that elapses from when you come till she goes.
* * *
Q: What's the difference between a knife and a woman arguing?
A: A knife has a point.
* * *
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking. | {"url":"https://mathisfunforum.com/viewtopic.php?id=32070","timestamp":"2024-11-10T12:53:23Z","content_type":"application/xhtml+xml","content_length":"7270","record_id":"<urn:uuid:f39415b4-bffe-4308-b1c4-6bba19661031>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00611.warc.gz"} |
Latest on abc
I’ve seen reports today (see here and here) that indicate that Mochizui’s IUT papers, which are supposed to contain a proof of the abc conjecture, have been accepted by the journal Publications of
the RIMS. Some of the sources for this are in Japanese (e.g. this and this) and Google Translate has its limitations, so perhaps Japanese speaking readers can let us know if this is a
If this is true, I think we’ll be seeing something historically unparalleled in mathematics: a claim by a well-respected journal that they have vetted the proof of an extremely well-known conjecture,
while most experts in the field who have looked into this have been unable to understand the proof. For background on this story, see my last long blog posting about this (and an earlier one here).
What follows is my very much non-expert understanding of what the current situation of this proof is. It seems likely that there will soon be more stories in the press, and I hope we’ll be hearing
from those who best understand the mathematics.
The papers at issue are Inter-universal Teichmuller Theory I, II, III, IV, available in preprint form since September 2012 (I blogged about them first here). Evidently they were submitted to the
journal around that time, and it has taken over 5 years to referee them. During this 5 year period Mochizuki has logged the changes he has made to the papers here. Mochizuki has written survey
articles here and here, and Go Yamashita has written up his own version of the proof, a 400 page document that is available here.
My understanding is that the crucial result needed for abc is the inequality in Corollary 3.12 of IUT III, which is a corollary of Theorem 3.11, the statement of which covers five and a half pages.
The proof of Theorem 3.11 essentially just says “The various assertions of Theorem 3.11 follow immediately from the definitions and the references quoted in the statements of these assertions”. In
Yamashita’s version, this is Theorem 13.12, listed as the “main theorem” of IUT. There its statement takes 6 pages and the proof, in toto, is “Theorem follows from the definitions.” Anyone trying to
understand Mochizuki’s proof thus needs to make their way through either 350 pages of Yamashita’s version, or IUT I, IUT II and the first 125 pages of IUT III (a total of nearly 500 pages). In
addition, Yamashita explains that the IUT papers are mostly “trivial”, what they do is interpret and combine results from two preparatory papers (this one from 2008, and this one from 2015, last of a
three part series.):
in summary, it seems to the author that, if one ignores the delicate considerations that occur in the course of interpreting and combining the main results of the preparatory papers, together
with the ideas and insights that underlie the theory of these preparatory papers, then, in some sense, the only nontrivial mathematical ingredient in inter-universal Teichmueller theory is the
classical result [pGC], which was already known in the last century!
Looking at these documents, the daunting task facing experts trying to understand and check this proof is quite clear. I don’t know of any other sources where details are written down (there are two
survey articles in Japanese by Yuichiro Hoshi available here).
As far as I know, the current situation of understanding of the proof has not changed significantly since last year, with this seminar in Nottingham the only event bringing people together for talks
on the subject. A small number of those close to Mochizuki claim to understand the proof, but they have had little success in explaining their understanding to others. The usual mechanisms by which
understanding of new ideas in mathematics gets transmitted to others seem to have failed completely in this case.
The news that the papers have gone through a confidential refereeing process I think does nothing at all to change this situation (and the fact that it is being published in a journal whose
editor-in-chief is Mochizuki himself doesn’t help). Until there are either mathematicians who both understand the proof and are able to explain it to others, or a more accessible written version of
the proof, I don’t think this proof will be accepted by the larger math community. Those designing rules for the Millennium prizes (abc could easily have been chosen as on the prize list) faced this
question of what it takes to be sure a proof is correct. You can read their rules here. A journal publication just starts the process. The next step is a waiting period, such that the proof must
“have general acceptance in the mathematics community two years after” publication. Only then does a prize committee take up the question. Unfortunately I think we’re still a long ways from meeting
the “general acceptance” criterion in this case.
One problem with following this story for most of us is the extent to which relevant information is sometimes only available in Japanese. For instance, it appears that Mochizuki has been maintaining
a diary/blog in Japanese, available here. Perhaps those who read the language can help inform the rest of us about this Japanese-only material. As usual, comments from those well-informed about the
topic are welcome, comments from those who want to discuss/argue about issues they’re not well-informed about are discouraged.
Update: Frank Calegari has a long blog post about this here, which I think reflects accurately the point of view of most experts (some of whom chime in at his comment section).
New Scientist has a story here. There’s still a lack of clarity about the status of the paper, whether it is “accepted” or “expected to be accepted”, see the exchange here.
Update: It occurred to me that I hadn’t linked here to the best source for anyone trying to appreciate why experts are having trouble understanding this material, Brian Conrad’s 2015 report on the
Oxford IUT workshop.
Update: Curiouser and curiouser. Davide Castelvecchi of Nature writes here in a comment:
Got an email from the journal PRIMS : “The papers of Prof. Motizuki on inter-universal Teichmuller theory have not yet been accepted in a journal, and so we are sorry but RIMS have no comment on
Update: Peter Scholze has posted a comment on Frank Calegari’s blog, agreeing that the Mochizuki papers do not yet provide a proof of abc. In addition, he identifies a particular point in the proof
of Conjecture 3.12 of IUT III where he is “entirely unable to follow the logic”, despite having asked other experts about it. Others have told him either that they don’t understand this either, or if
they do claim to understand it, have been unable to explain it/unwilling to acknowledge that more explanation is necessary. Interestingly, he notes that he has no problem with the many proofs listed
as “follows trivially from the definitions” since the needed arguments are trivial. It is in the proof of Corollary 3.12, which is non-trivial and supposedly given in detail, that he identifies a
potential problem.
Update: Ivan Fesenko has posted on Facebook an email to Peter Scholze complaining about his criticism of the Mochizuki proof. I suppose this makes clear why the refereeing process for dealing with
evaluating a paper and its arguments is usually a confidential one.
62 Responses to Latest on abc
1. BCnrd,
fair enough :-). Thanks for all your input both here and there.
2. David Roberts makes a good point in one way or another: it would have reflected much more positively on the mathematical community at large considering the media circus all about this work, if we
had something to hold up publicly and say “yes well these manuscripts are all good and well, but part XYZ doesn’t make sense and must be addressed by the author”. Articles like “mathematicians
tormented by proof” may have been prevented etc. (although I suppose one shouldn’t underestimate mass-media’s ability to misunderstand/misrepresent research-level mathematics)
3. The answer to that point is Remark 3.12.2 (ii) of newest version of IUT-III
4. As I understand it, Corollary 3.12 in IUT 3 (“main inequality”) is central to the whole proof of the abc conjecture. Previous sections of IUT (totalling hundreds of pages) just set up all the
definitions and trivial proofs needed to prove Corollary 3.12, and when the corollary is proven the abc conjecture is pretty trivial to prove based on it. So what is needed is for Mochizuki to
explain in more detail how one arrives at the "main inequality".
Is this correct understanding?
5. mJ:
Thank you for your comment. It came to my attention this morning from someone else via email that Remark 3.12.2 has been very much expanded since earlier versions (I don’t know when that change
occurred), and that this Remark (not just its part (ii)) should address some aspects of how 3.12 follows from 3.11. The wider awareness about this due to the discussion on Frank Calegari’s blog
and this one has helped in this direction. I immediately brought this to the attention of several other people who have looked a lot into the IUT papers; hopefully it will clarify things. But the
coming days are a time of travel and vacation for many people, so don’t hold your breath. 🙂
6. All,
This blog entry isn’t the place for trying to resolve the mathematical questions raised about the proof. I’m completely incompetent to moderate such a discussion and the set of people interested
in participating in this on a blog may have zero intersection with the small set of people competent to engage productively in it. Mathoverflow is a site more set up for this sort of thing, it’s
an interesting question whether it would work over there.
For settling mathematical issues like this one, there really are very few people in the world with the necessary expertise and talent, capable of telling what claims are unproblematic,
identifying where possible problems lie. What is supposed to happen is discussion between the author and this small group of experts, until they understand the proof and the problems they find
are resolved. This has failed to happen in this case, for a complicated set of reasons, and anyone who feels like it can try and blame participants X, Y, and Z for not doing more. I think the
point Brian Conrad is trying to make is that the math community puts the main responsibility in a case like this on the refereeing process: it is the job of editors to pick referees up to the
task (an extremely difficult and important one in this case), and it is the job of those referees to do everything they can to ensure proofs are solid and written in an understandable way. For
good reasons this process is conventionally done confidentially. Unfortunately, in this case there is evidence of a failed refereeing process (claims the papers have been refereed, while an
expert is pointing to problems that have not been addressed, and many experts have trouble with readability of the papers), and confidentiality makes it impossible for other than a small number
of people to know what went wrong.
Addendum: I just saw the latest comment indicating that maybe the comment from “mJ” is highly relevant, and possibly it is blog discussions that will end up having a role in moving this process
forward. If so, that’s a very unusual way (in comparison to private discussions between experts, the author, and referees) for mathematics to get done.
7. Ivan Fesenko has replied to Peter Scholze. It’s, uh, interesting.
8. Recently mentioned by @math_jin on twitter is a series of talks at the end of this month at the Southern University of Science and Technology in Shenzhen http://math.sustc.edu.cn/event/10808.html
to be given by Fucheng Tan (who apparently works at RIMS http://www.kurims.kyoto-u.ac.jp/en/list/tan.html )
9. I am no mathematician but active scientific PI and I follow this case since Castelvecchis first Nature article in utter fascination. Please forgive me to express my uneducated 5c.
I feel the situation is slowly approaching a state were expert statements get more and more explicit. In both directions and notably with some quite renowned critics. Lets suppose IUTT I-IV are
complete and correct and prove abc. Then already now there is the question: Will Mochizuki ever/in our life times get full credits for his discovery? I fear due to the growing critics(!) it will
not be possible anymore. I think the case just transgressed a point of no return for the maths community.
10. Merle Aucoin,
I think your perception of “growing criticism” misses what is going on here. What experts are now saying publicly is not different than what they have been saying privately for a long time now.
What changed things was the news (the accuracy of which is still unclear) that the journal of Mochizuki’s institution, of which he is editor in chief, was about to publish the IUT proof, claiming
it as properly refereed and a correct proof. This news made experts who felt that the proof had not been understood and checked to their satisfaction feel that they needed to say something in
public, even though this is, for good reason, not the kind of discussion usually held publicly.
Mochizuki is a well-respected mathematician, and skepticism about this proof is not something personal. It’s just a fact that the usual way in which understanding of a proof transmits from an
author to the rest of the math community has not happened here, and experts are just making clear that fact. There is a lot of attention now being paid to what has gone wrong, including more
focus on a specific part of the proof that experts haven’t been able to follow. If this gets clarified and experts start to understand better the proof and be able to check that it works,
Mochizuki will certainly get credit for proving abc.
11. Dear Peter, I see. Thank you for clarifying my misperceptions. I would also like to draw your attention to the Wikipedia article on Shinichi Mochizuki; it seems there have been substantial changes and
additions since the end of last year, in my eyes most likely from very close proximity to SM, if not from himself. Especially the section on IUTT seems notable, since it contains an attempt to
explain the gist of it even to laymen:
https://en.wikipedia.org/wiki/Shinichi_Mochizuki .
12. Pingback: abc News | Not Even Wrong
This entry was posted in abc Conjecture, Favorite Old Posts. Bookmark the permalink. | {"url":"https://www.math.columbia.edu/~woit/wordpress/?p=9871","timestamp":"2024-11-05T22:49:51Z","content_type":"text/html","content_length":"90951","record_id":"<urn:uuid:23b40dfc-e0f8-4ff4-a805-a6744f8a8ad2>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00291.warc.gz"} |
GAN 1: Basic GAN Networks
Some notes on the GAN (Generative Adversarial Network) Specialization Course, by Yunhao Cao (GitHub @ToiletCommander)
Note: this note covers content from weeks 1-4 (version as of 2022/6/14)
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Acknowledgements: Some of the resources(images, animations) are taken from:
1. Sharon Zhou & Andrew Ng and DeepLearning.AI's GAN Specialization Course Track
2. articles on towardsdatascience.com
3. images from google image search(from other online sources)
Since this article is not for commercial use, please let me know if you would like your resource taken off from here by creating an issue on my github repository.
Last updated: 2022/6/14 15:07 CST
Generative Models
Note: Generative models accept noise (and class) inputs and produce features (outputs that are huge in dimensions), while discriminative models only accept huge-dimensional inputs (features) and
determine the classes or labels of those features.
Different Generative Models
In a VAE model, two models, the encoder and the decoder, are trained. The encoder tries to find a good representation of the image in the vector space (and hopefully samples of the same class rest in a cluster).
Variant: produce a linear subspace by the encoder, pick a random point from the subspace and feed it to the decoder to kind of kick in the randomness of the image. The decoder's job is to produce an
image given a vector representation in the vector space.
During production, only the decoder is used. We take random point from the vector space and feed it to the decoder.
In a GAN model, two models, the generator and the discriminator, are trained. The generator generates fake images and tries to fool the discriminator, while the discriminator learns to recognize, of
two images, which one is the fake image produced by the generator.
The generator fights with the discriminator and they compete with each other; hence the word 'adversarial' is used to describe the model.
During production, the discriminator is removed and only the generator is used.
Therefore, the goal of GAN is to optimize the following problem:
$\min_d \max_g J(y,\hat{y})$
Training A Basic GAN Network
Training the discriminator of the GAN: update the parameters of the discriminator only.
Training the generator of the GAN: update the parameters of the generator only, and feed in fake images only.
Note: During training, the generator and the discriminator are each trained several times, alternating between the two.
Note: Here the class (label) of the generated image is not shown in the architecture; in practice it is good to also feed the class into the generator model.
BCE Loss (Binary Classification Entropy Loss)
$J(\hat{y},y)=-\frac{1}{m}\sum_{i=1}^m \left[ y^{(i)} \log\big(\hat{y}^{(i)}\big)+\big(1-y^{(i)}\big) \log\big(1-\hat{y}^{(i)}\big) \right]$
It is better than squared error because the loss is significantly higher when you misclassify a point, which helps optimize the network much faster.
Therefore, the goal is to optimize
$\min_d \max_g \; -\big[\mathbb{E}[\log d(x)] + \mathbb{E}[\log(1-d(g(z)))]\big]$
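To make the alternating scheme above concrete, here is a minimal PyTorch sketch of one training step with BCE loss. It is not from the course notebooks; gen, disc, the two optimizers and real_batch are assumed to exist and to match in shape (for instance, the DCGAN Generator and Discriminator defined below).

import torch
import torch.nn as nn

def gan_train_step(gen, disc, gen_opt, disc_opt, real_batch, z_dim, device="cpu"):
    # One alternating update with BCE loss; labels are 1 for real and 0 for fake
    criterion = nn.BCEWithLogitsLoss()
    batch_size = real_batch.size(0)

    # Discriminator step: gradients must not flow back into the generator (detach)
    disc_opt.zero_grad()
    fake = gen(torch.randn(batch_size, z_dim, device=device)).detach()
    real_pred = disc(real_batch)
    fake_pred = disc(fake)
    disc_loss = (criterion(real_pred, torch.ones_like(real_pred)) +
                 criterion(fake_pred, torch.zeros_like(fake_pred))) / 2
    disc_loss.backward()
    disc_opt.step()

    # Generator step: try to make the discriminator label fresh fakes as real
    gen_opt.zero_grad()
    fake_pred = disc(gen(torch.randn(batch_size, z_dim, device=device)))
    gen_loss = criterion(fake_pred, torch.ones_like(fake_pred))
    gen_loss.backward()
    gen_opt.step()
    return disc_loss.item(), gen_loss.item()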
DCGAN - Deep Convolution GAN
Generator of DCGAN
Basic Building Blocks
Generator Block:
ConvTranspose2D → BatchNorm → ReLU
Last layer for Generator Block:
ConvTranspose2D → Tanh
Unsqueeze noise vector → 4 generator blocks (upscaling layers then downscaling layers in channel count). The height and width are always increasing, since we're doing a transposed convolution at every block.
import torch
from torch import nn

class Generator(nn.Module):
    '''
    Generator Class
    Values:
        z_dim: the dimension of the noise vector, a scalar
        im_chan: the number of channels in the images, fitted for the dataset used, a scalar
              (MNIST is black-and-white, so 1 channel is your default)
        hidden_dim: the inner dimension, a scalar
    '''
    def __init__(self, z_dim=10, im_chan=1, hidden_dim=64):
        super(Generator, self).__init__()
        self.z_dim = z_dim
        # Build the neural network
        self.gen = nn.Sequential(
            self.make_gen_block(z_dim, hidden_dim * 4),
            self.make_gen_block(hidden_dim * 4, hidden_dim * 2, kernel_size=4, stride=1),
            self.make_gen_block(hidden_dim * 2, hidden_dim),
            self.make_gen_block(hidden_dim, im_chan, kernel_size=4, final_layer=True),
        )

    def make_gen_block(self, input_channels, output_channels, kernel_size=3, stride=2, final_layer=False):
        '''
        Function to return a sequence of operations corresponding to a generator block of DCGAN,
        corresponding to a transposed convolution, a batchnorm (except for in the last layer), and an activation.
        Parameters:
            input_channels: how many channels the input feature representation has
            output_channels: how many channels the output feature representation should have
            kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
            stride: the stride of the convolution
            final_layer: a boolean, true if it is the final layer and false otherwise
                      (affects activation and batchnorm)
        '''
        # Steps:
        #   1) Do a transposed convolution using the given parameters.
        #   2) Do a batchnorm, except for the last layer.
        #   3) Follow each batchnorm with a ReLU activation.
        #   4) If it is the final layer, use a Tanh activation after the deconvolution.
        if not final_layer:
            return nn.Sequential(
                nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
                nn.BatchNorm2d(output_channels),
                nn.ReLU(inplace=True),
            )
        else:  # Final Layer
            return nn.Sequential(
                nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
                nn.Tanh(),
            )

    def unsqueeze_noise(self, noise):
        '''
        Function for completing a forward pass of the generator: Given a noise tensor,
        returns a copy of that noise with width and height = 1 and channels = z_dim.
        Parameters:
            noise: a noise tensor with dimensions (n_samples, z_dim)
        '''
        return noise.view(len(noise), self.z_dim, 1, 1)

    def forward(self, noise):
        '''
        Function for completing a forward pass of the generator: Given a noise tensor,
        returns generated images.
        Parameters:
            noise: a noise tensor with dimensions (n_samples, z_dim)
        '''
        x = self.unsqueeze_noise(noise)
        return self.gen(x)

def get_noise(n_samples, z_dim, device='cpu'):
    '''
    Function for creating noise vectors: Given the dimensions (n_samples, z_dim)
    creates a tensor of that shape filled with random numbers from the normal distribution.
    Parameters:
        n_samples: the number of samples to generate, a scalar
        z_dim: the dimension of the noise vector, a scalar
        device: the device type
    '''
    return torch.randn(n_samples, z_dim, device=device)
Basic Building Block
Discriminator Block:
Conv2D → BatchNorm → LeakyReLU(0.2)
Last Layer of Discriminator Block:
Conv2D → 1x1x1 → Sigmoid
Input image → Discriminator Block × 3 (upscaling the number of channels in the first and second block, downscaling to 1 channel in the third block)
Note: The sigmoid layer is not added in code since we use BCEWithLogitsLoss in PyTorch.
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: Discriminator
class Discriminator(nn.Module):
    '''
    Discriminator Class
    Values:
        im_chan: the number of channels in the images, fitted for the dataset used, a scalar
              (MNIST is black-and-white, so 1 channel is your default)
        hidden_dim: the inner dimension, a scalar
    '''
    def __init__(self, im_chan=1, hidden_dim=16):
        super(Discriminator, self).__init__()
        self.disc = nn.Sequential(
            self.make_disc_block(im_chan, hidden_dim),
            self.make_disc_block(hidden_dim, hidden_dim * 2),
            self.make_disc_block(hidden_dim * 2, 1, final_layer=True),
        )

    def make_disc_block(self, input_channels, output_channels, kernel_size=4, stride=2, final_layer=False):
        '''
        Function to return a sequence of operations corresponding to a discriminator block of DCGAN,
        corresponding to a convolution, a batchnorm (except for in the last layer), and an activation.
        Parameters:
            input_channels: how many channels the input feature representation has
            output_channels: how many channels the output feature representation should have
            kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
            stride: the stride of the convolution
            final_layer: a boolean, true if it is the final layer and false otherwise
                      (affects activation and batchnorm)
        '''
        # Steps:
        #   1) Add a convolutional layer using the given parameters.
        #   2) Do a batchnorm, except for the last layer.
        #   3) Follow each batchnorm with a LeakyReLU activation with slope 0.2.
        # Note: Don't use an activation on the final layer
        if not final_layer:
            return nn.Sequential(
                nn.Conv2d(input_channels, output_channels, kernel_size, stride),
                nn.BatchNorm2d(output_channels),
                nn.LeakyReLU(0.2, inplace=True),
            )
        else:  # Final Layer
            return nn.Sequential(
                nn.Conv2d(input_channels, output_channels, kernel_size, stride),
            )

    def forward(self, image):
        '''
        Function for completing a forward pass of the discriminator: Given an image tensor,
        returns a 1-dimension tensor representing fake/real.
        Parameters:
            image: a flattened image tensor with dimension (im_dim)
        '''
        disc_pred = self.disc(image)
        return disc_pred.view(len(disc_pred), -1)
Mode Collapse
Mode Definition
Mode for different Probability Density Functions
The mode is where the PDF attains its maximum; it marks the value around which $P(X \in dx)$ is largest (the most frequent value).
Mode Collapse Problem
Think about a high-dimensional space in which each class has a normal distribution, with values concentrated at its mode, like below.
Think about a discriminator that has been trained pretty well but just does not discriminate well enough on some of the classes (in this case, 1 and 7).
This means the discriminator is probably stuck in a local minimum.
Now the generator knows that generating the classes of images that the discriminator is weak at will help fool the discriminator.
Therefore, the generator outputs those classes of images.
Since the discriminator is stuck in a local optimum, it keeps telling the generator that the generated images are 'real', though there might be little difference in its ability to tell one class
apart from another among those weak classes (in this case 7 > 1).
Therefore the generator learns to produce images of only 1s: the generator is stuck in one mode, which is why this is called mode collapse.
Note: The discriminator will eventually learn to differentiate the generator's fakes if there is mode collapse and outskill it, but it will take a long time, and mode collapse into other modes is still
possible in future iterations.
Vanishing Gradient Problem for BCE Loss
See research paper describing the vanishing gradient in GAN.
$Loss_{BCE}(y,\hat{y})=-\frac{1}{m}\sum_{i=1}^m \left[ y \log(\hat{y})+(1-y) \log(1-\hat{y}) \right]$
Goal of generator:
Approximate real distribution
However, Goal of Discriminator:
Given x, tell apart whether x belongs to the generated distribution or the real distribution.
Well, looks like they are pretty clear & straight forward optimization problems, what is the problem here?
The problem is that the discriminator's task is easier, so the discriminator gets better more quickly over time.
As the discriminator becomes more and more powerful, a discriminator trained with BCE loss provides less and less gradient for the generator to optimize with, and we end up with the vanishing gradient problem.
$\frac{\partial J}{\partial \hat{y}}=\frac{1}{m} \cdot \frac{\hat{y}-y}{\hat{y}-\hat{y} \odot \hat{y}}$
Where $a \odot b$ denotes the element-wise product of a and b
Earth Mover's Distance
Earth Mover's Distance describes the effort needed to move the generated distribution onto the real distribution.
Intuitively: how much work would it take to move the pile of dirt given by the generated distribution and mold it into the shape of the real distribution? It depends on the distance and the amount moved.
However, because of its intractable form, the exact Earth Mover's Distance is not used directly in ML; we will use alternative ways to approximate this distance.
Wasserstein Loss
Approximates Earth Mover's Distance
Instead of requiring a sigmoid layer at the end of the discriminator, the Wasserstein Loss (W-Loss) allows the last layer of the discriminator to be linear.
Before we introduce the W-Loss, we will see GAN as a min-max game
$\max_{g} \min_{d} \mathbb{E}(J(g,d))$
Where $g(\cdot)$ is the generator function, and $d(\cdot)$ is the discriminator function.
Instead of the logarithm in the cost function $J$, the W-Loss is formulated as:
$\min_{g} \max_{d} \mathbb{E}[d(x)]-\mathbb{E}[d(g(z))]$
Left: BCE Loss, Right: W-Loss
Conditioning on W-Loss
1. The critic (discriminator) function d has to be 1-Lipschitz continuous.
a. The norm of the gradient always has to be $\le 1$: $\forall x,\ \|\nabla f(x)\|_2 \le 1$
Technique: Draw two lines, one at gradient = 1 and one at gradient = -1, see if the function fits in between those two lines.
1-L Continuous Enforcement
1. Weight clipping
a. Clamp the critic's weights into a small range after each update; however, this may limit the ability to learn.
2. Regularization
a. Change the min-max objective to
b. $\min_{g} \max_{d} \mathbb{E}[d(x)]-\mathbb{E}[d(g(z))] + \lambda\, reg$, where $reg$ is the regularization term
c. This tends to work better.
The regularization term is computed by first interpolating $\hat{x} = \epsilon\, x_{real} + (1-\epsilon)\, x_{fake}$ for a random $\epsilon$, and then penalizing how far the critic's gradient norm at $\hat{x}$ is from 1:
$reg = \mathbb{E}\big[(\|\nabla d(\hat{x})\|_2 - 1)^2\big]$
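As a rough PyTorch sketch of the gradient penalty above (not from the course; the function and argument names are my own):

import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    # Interpolate between real and fake images with a random epsilon per sample
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    mixed = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    scores = critic(mixed)
    # Gradient of the critic's scores with respect to the interpolated images
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=mixed,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    grads = grads.view(grads.size(0), -1)
    # Penalize how far the gradient norm is from 1
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()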
Conditional GAN
All Above models are unconditional GANs
Now we want to be able to specify which type of output to generate, so we train the GAN with a labeled dataset.
In the previous networks we had a noise vector as the input to the generator; we will now concatenate a one-hot class vector to it. For the discriminator, give the image as well as the label (the
one-hot vector broadcast as additional channels), and the discriminator will still output a probability.
There are ways to compress the label information in the discriminator input, but this uncompressed way definitely works.
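A small sketch of how those conditional inputs can be assembled in PyTorch (helper names are my own, not from the course):

import torch
import torch.nn.functional as F

def combine_noise_and_label(noise, labels, n_classes):
    # noise: (batch, z_dim); labels: (batch,) integer class ids
    one_hot = F.one_hot(labels, n_classes).float()
    return torch.cat([noise, one_hot], dim=1)

def add_label_channels(images, labels, n_classes):
    # images: (batch, C, H, W); broadcast the one-hot vector into extra image channels
    one_hot = F.one_hot(labels, n_classes).float()
    maps = one_hot[:, :, None, None].expand(-1, -1, images.size(2), images.size(3))
    return torch.cat([images, maps], dim=1)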
Controllable Generation
It's different from conditional generation:
1. Now we control a feature, not the class
2. We don't have to modify the network structures
3. The training dataset doesn't need to be labeled
4. We tweak the noise vector $\vec{z}$ to change a feature, while conditional generation alters the model input.
After training, observe the output of two z-space vectors $v_1$ and $v_2$; we can interpolate between the vectors and get an intermediate result (we can do linear interpolation, but there are also
other interpolation methods).
Now that we can control some features of produced images, we want to find a direction along which we can alter exactly one feature. The technique of finding such directions is important.
Feature Correlation
Sometimes we will want to alter a feature that has high correlation with other features, for example adding a beard. But a beard might be tied to perceived masculinity, so it may not be possible to
change just the beard without also changing the degree of masculinity.
Z-Space Entanglement
Sometimes, moving along one direction in the noise vector changes multiple features at the same time. This usually happens when the z-space doesn't have enough dimensions to map one feature per direction.
Classifier Gradients as a method of finding a z-space direction to change a feature
We use a trained classifier to classify a generated image $g(\vec{z})$ produced by the generator: the classifier tells us whether the image contains the feature or not, giving $d(g(\vec{z}))$. We
repeat this process, continuously modifying the noise vector $\vec{z}$ in the direction of the gradient $\nabla_{\vec{z}}\, d(g(\vec{z}))$, until the feature is present in the image. Finally, once
the classifier says yes, we can use a weighted average of the directions we stepped through to change the feature later.
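A minimal sketch of that gradient step in PyTorch (the function name is my own; gen and classifier are assumed to be pretrained modules):

import torch

def nudge_noise_toward_feature(gen, classifier, z, steps=10, lr=0.1):
    # Repeatedly move z in the direction that raises the classifier's feature score
    z = z.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = classifier(gen(z)).mean()
        grad = torch.autograd.grad(score, z)[0]
        z = (z + lr * grad).detach().requires_grad_(True)
    return z.detach()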
Disentanglement Methods
A disentangled z-space (one containing the latent factors of variation) means every feature corresponds to exactly one direction (and usually those directions are axis-aligned unit vectors).
1. Just like what we did for the conditional GAN, we would add labeled data, but this time, instead of appending a one-hot vector to the generator input, the information would be embedded into the noise vector itself.
Loss Function
Add a regularization term (on top of the BCE or W-Loss) to encourage each index of the noise vector to be associated with some feature.
There are techniques to do this in an unsupervised way. | {"url":"http://notes.quantumcookie.xyz/GANSpecialization/Notes/1-BasicGAN/","timestamp":"2024-11-06T05:04:38Z","content_type":"text/html","content_length":"127725","record_id":"<urn:uuid:d6e42400-32b5-4cda-a255-77f54b03f6ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00860.warc.gz"} |
Black holes in string theory with higher-derivative corrections
The low-energy limit of superstring theories admits a description in terms of an effective field theory for its massless modes. The corresponding action is given by a double perturbative expansion in
$g_{s}$, the string coupling, and in $\alpha'$, the square of the string length. The leading term of this expansion is given by one of the different ten-dimensional supergravity theories, whereas
subleading terms involve terms of higher order in derivatives. The work presented in this thesis is the result of a research program that starts with the study of supersymmetric solutions of gauged
supergravity and reaches the summit with the understanding of the effects produced by the $\alpha'$ corrections to solutions of the heterotic superstring effective action.
This thesis is divided in two parts. The first one focuses on the supersymmetric solutions of a minimal extension of the STU model of N=1, d=5 supergravity whose main interest lies on the fact that
it can be obtained as a toroidal compactification of ten-dimensional N=1 supergravity coupled to a triplet of SU(2) gauge fields. Concretely, we construct and study solutions describing black holes
and smooth horizonless geometries with non-trivial Yang-Mills fields.
The understanding of this type of solutions from the framework of string theory serves as a motivation for the work of the second part of the thesis, which is devoted to the study of solutions of the
effective action of the heterotic string at first order in $\alpha'$. The latter does not simply coincide with the action of N=1, d=10 supergravity coupled to a Yang-Mills vector multiplet, as the
Green-Schwarz anomaly cancellation mechanism and supersymmetry force us to introduce additional terms in the action. These terms are constructed out of the spin connection with torsion given by the
field strength associated to the Kalb-Ramond 2-form and their contributions to the equations of motion are analogous to those of the Yang-Mills fields. This fact is exploited to construct analytic
supersymmetric black-hole solutions with $\alpha'$ corrections.
The most important lesson to extract from our results is that the mass and the conserved charges of the black holes do get modified by the $\alpha'$ corrections. This is what one would expect on
physical grounds as the corrections act in the string equations of motion as effective sources of energy, momentum and charge. This information is crucial to establish a correspondence between the
parameters that characterize the effective or coarse-grained description (the black hole) and those that characterize the microscopic system of string theory that it describes. The effects on the
charges introduced by the higher-derivative corrections has a major impact in the understanding of the so-called small black holes, which are an effective description of a fundamental string with
winding and momentum charges. Small black holes are singular solutions with vanishing horizon area in the supergravity approximation. It has long been believed that higher-derivative corrections
would be able to stretch the horizon, making the solution regular. Our results reveal that this is not the case at first order in $\alpha'$, and that previous regularizations of heterotic small black
holes actually describe a different microscopic system which is already regular in the supergravity approximation.
The last chapter of the thesis contains the computation of the most general correction to the four-dimensional Kerr solution when the Einstein-Hilbert term is supplemented with higher-curvature terms
up to cubic order, including the possibility of having dynamical couplings. This general set-up includes, as a particular case, the corrections predicted by the heterotic superstring effective | {"url":"https://www.ift.uam-csic.es/es/events/black-holes-string-theory-higher-derivative-corrections","timestamp":"2024-11-08T23:57:30Z","content_type":"application/xhtml+xml","content_length":"39362","record_id":"<urn:uuid:6c4ed17e-8f03-47e4-8e09-ed353d30804d>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00589.warc.gz"} |
Container With Most Water 0(n) LeetCode#11
“The container with the most water” is a LeetCode favorite. Your job is to write a function that takes in an array of numbers and returns the max area of water that could fill a container defined by
the “heights” which represent the height of walls; if you're not following, that is OK! We will get to it :)
This problem is moderately challenging even if you have a history with math; at least it was moderately challenging for me. I passed 80% of test cases before needing to look for what I was missing.
PRO TIP — Don’t spend more than 30min on choking points. This is an unnecessary waste of time and probably won't yield positive results. And, even if you eventually solve the problem, you just wasted
hrs of meaningful time “solving” when you could have spent time “understanding”.
The graph represents an array of walls with water filled to the maximum space allowed/ possible.
So, looking at this we can understand that we will want to figure out the maximum area between all heights in our graph.
We will need to keep track of the Max Area and the edges of our “Water Tank”.
So Let’s jump into the deep-end and solve this problem (outfield pun totally intended).
function maxAreaOfWater(heights){
  let maxArea = 0;
  let left = 0;
  let right = heights.length - 1;
  while(left < right){
    // Water level is capped by the shorter wall; width is the distance between the pointers
    maxArea = Math.max(maxArea, Math.min(heights[left], heights[right]) * (right - left));
    // Move the pointer at the shorter wall inward (ties move the left pointer, matching the note below)
    if(heights[left] <= heights[right]){
      left++;
    } else {
      right--;
    }
  }
  return maxArea;
}
Ok, so I wrote a bunch of code, but what the hell does it have to do with anything? what does it mean?
Well, I’m about to explain it now.
Ok, so we enter out function and immediately instantiate some variables, we have a maxArea that will keep track of our maxArea… we have a left and a right that will be pointers used to determine
different areas within our container.
We then create a while loop; we keep looping until the left pointer meets or passes the right pointer. Our left and right limits will shift inward until the greatest area is found. The best way to
envision this is to look at our graph above and imagine the pointers moving.
Our maxArea is set by comparing the current maxArea with the area of the current container: the LOWER of the two heights (because that is the max water level possible) multiplied by the distance
between the right and left pointers. Math.max returns the larger of the two values, and we assign it to maxArea.
We repeat this process until the left pointer is forced to meet the right pointer, with no other options left. Note: if the two sides are equal, the left pointer is incremented.
I hope this cleared up any issues you may have had with this problem... the logic behind the max value is a little “Mind-Bending”. Give it time, Repetition is the law of learning.
— Rumi | {"url":"https://adanieljohnson.medium.com/container-with-most-water-0-n-leetcode-11-661e52679aae","timestamp":"2024-11-05T18:29:04Z","content_type":"text/html","content_length":"104064","record_id":"<urn:uuid:074b2ceb-7c2b-4fab-b716-68a98711bff4>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00268.warc.gz"} |
How to Find the Height of a Cylinder? | TIRLA ACADEMY
To find the Height of the cylinder we can use the following formula:
If we have the Lateral Surface Area and radius of the cylinder then the Height of the Cylinder = LSA÷2𝜋r
If we have the Volume and radius of the cylinder then the Height of the Cylinder = Volume of Cylinder÷𝜋r² | {"url":"https://www.tirlaacademy.com/2024/07/how-to-find-height-of-cylinder.html","timestamp":"2024-11-02T23:32:05Z","content_type":"application/xhtml+xml","content_length":"310374","record_id":"<urn:uuid:b837d17d-4108-48ad-8ab7-c8d637dc1859>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00232.warc.gz"} |
pipe a sits 1 meter under the river - Pipelines, Piping and Fluid Mechanics engineering
Not open for further replies.
Pipe A sits 1 meter under the river surface; it is 20 cm in diameter and rises vertically 40 meters. Pipe A joins pipe B, which is also 20 cm in diameter. Pipe B descends 40 meters downwards. One
meter down, pipe B opens up to 2 meters in diameter, and the rest of pipe B is 2 meters in diameter. Pipe A is full of water; the water weighs 1250 kg and is held in position by check valves. At the
top of pipe B, where pipe B widens to 2 meters diameter, there is a piston that is 2 meters in diameter. The piston weighs 3000 kg. Considering everything is airtight, if the piston is lowered slowly,
would it be able to draw water up pipe A into pipe B? If the piston is lowered at a constant rate, would the water in pipe A be constantly drawn? The bottom of pipe B is open, so there is no pressure under the piston.
Replies continue below
Recommended for you
How is pipe-a both 1 meter under the river and 40 meters vertical at the same time?
A sketch Please.
I should wait for your sketch, but I think you have an inverted "U" and are trying to draw up water 40m into pipe A from a river by lowering a piston in pipe B.
No, it will not draw up water to 40m, but with perfect seals, you could draw up water in pipe A to a height of 10m +/-. Once the pressure anywhere in the water reaches its vapour pressure, no more
water can be drawn upwards. In a case where water is already filled to the top of the U and held there by check valves, the water in pipe A will form a vapor pocket and any vapor bubbles will collect
at the high point. Any downward motion of the piston will just create a larger vapor pocket at the top of the inverted "U".
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
Sounds like an exam question??
There is a student forum.
Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
I don't think it's an exam question, unless it's written in a a true/false, or essay context, as there is no definitive answer to it.
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
Simple answer is no, assuming that pipe a is open to the river with no other pump or thing providing pressure/head forcing the water up pipe a.
The max "lift" is between 8 to 9 metres at sea level before you create a vacuum.
You can fill pipe a with water from the top with it sitting on check valves at the bottom, but you will not be able to lift that water.
Forget about mass and "weight" and use pressure.
Basically you can't make water flow uphill.
So unless there is a pump or other thing creating at least 3 bar pressure at the base of pipe a, you won't lift anymore water.
A sketch would definitely help....
Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
I said 10m, if you had perfect seals. Guess I should add "and water at 0°C."
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
the pressure in a pipe next to a tank is the height of the water. what will the pressure be at this point when water is being pumped into the tank?
When the pump is running, at any given point along the pipe, add the discharge pressure of the pump to that point's static pressure, then subtract from it the friction loss between pump and that
At a point in a pipeline near a tank bottom, the pressure head will be equal to the height of water in the tank, whether the pump runs or not, however total head is not the same. To find total head,
add (v^2)/2/g.
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
thanks 1503-44
is the pressure at the discharge point zero?
The gage pressure at the fluid surface in an unpressurized tank is 0. Its absolute pressure is equal to atmospheric pressure.
The pressure at the discharge from nozzle into the tank is equal to the pressure equivalent of the height of fluid in the tank above the nozzle.
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
Thank you
How does one calculate the velocity as it exits the pipe, would it be the same velocity as inside the pipe?
BB4 - are you the OP now?
If not please don't hijack someone else's thread.
The velocity right at the tip of the exit will be the same as the pipe velocity, or maybe 0.5 to 1D inside it. After that the water jet starts to disperse and slow down at the edges, but if it's high
enough it can continue and hit the far side of the tank if it's small enough.
Remember - More details = better answers
Also: If you get a response it's polite to respond to it.
Well it's already jacked ... so I'll continue here.
BB4 If you start a new thread, I'll delete this and put it into your new thread.
Pipe exit velocity
The velocity inside the pipe is not necessarily the same as outside the pipe. Outside the pipe we cannot make the same assumptions that we do inside the pipe. Normally inside the pipe we assume that
the pipe is flowing full and that velocity is equal to the flow rate divided by cross-sectional area. But outside the pipe, we are not sure what the cross-sectional area is.
There is one case where the velocity inside the pipe is equal to the velocity outside the pipe. Let's look at that case in detail.
We will work out an example problem of a whole system.
Let's say there is a pump driving flow through a 50m long 102mm inside diameter pipe that fills a tank. Additionally we will buy a pump able to discharge at a flow rate of Q=0.0246m3/s
at a Total discharge head (pressure head+velocity head) of 6.459m
ID= 102mm
cross-sectional area is A=0.008219m2
I chose those dimensions because the water's velocity in the pipe is then Q/A = 3m/s
Pipe Friction Loss
Head loss to friction over the length of the pipe.
The pipe friction factor is 0.018 (from a friction factor iteration I did previously for this problem)
Friction loss in meters for the 50m length is (f * L * V^2)/(D * 2 *g)
= (0.018 * 50m * (3m/s)^2)/(0.102m * 2 * 9.81m/s2)
F = 4m
Velocity Head Inside the Pipe
Velocity head is (V^2)/2/g
Hv =3^2/2/9.81
Hv = 0.459m
I said in another response above that the pressure, or head at any point along the pipe is equal to pump discharge head- friction loss from pump to that point. At 50m, just before entering the tank,
the total head (pressure head +velocity head) is
Pump total discharge head of 6.459m - pipe friction of 4m
Hp -F = 2.459m
So we have 2.459m of total head just before entering the tank (still inside the pipe). Just for fun, let's see how much of that is velocity head and how much of that is Pressure Head?
Total head -velocity head = pressure head.
2.459 -0.459
= 2m of Pressure Head
OK, so now let's see what happens at the entry to the tank. As you would expect, it depends on how full the tank is. If the tank is empty, we would expect that we can fill it, at least with some
water coming from the pipe. Well that's if we bought a big enough pump with enough head capacity to push it in there.
We have a total head in the pipe connection at the tank of 2.459m. If total head in the tank is less, we know we can get water in the pipe into the tank. So we need to know what the head in the tank
is at the nozzle connection point. It might help to think of a double swing door at the tank entry point. We can open the door by just pushing on it, or by running at it and slaming into it, so we
will try to do both at the same time. The Pressure Head will push the door and the velocity head will do the slamming. Note that this is potential energy of pressure and kinetic energy from the
velocity. Added together we have 2.459 m of energy to open the door into the tank. But the tank may have some energy keeping it closed. What has the tank got?
The tank has potential energy from any fluid inside it. Does it have kinetic energy? No. The velocity of fluid in a tank is 0. No kinetic energy. So, the tank only has fluid level to hold that door
closed. We have 2.459m, so if the tank has less than 2.459 m of water in it (above the height of the nozzle), we're getting in, at least with some of our water. If the water level in the tank is
below our nozzle door, we get in with full flow. Whatever our pressure and velocity heads combined will push into it. That's (V^2)/(2g) and we have both Pressure and Velocity to play with. The total
of 2.459m
Let's solve for Velocity. 2.459m = V^2/2/g
V = sqrt(2.459m * 2 * 9.81m/s2)
V=6.98 m/s
We're going to blow the door off it's hinges, at least when we start.
Since we are outside the pipe, the cross-sectional area of the flow depends on the "jet stream" shape. Our inside-pipe flow rate of 0.0246 m3/s continues into the tank, but now at a velocity of 6.98 m/s.
So we recalculate the cross-sectional area of Flow to be (from V= Q/A), A = Q/V
A = 0.0246 m3/s / 7m/s = 0.00351 m2
Jet stream diameter is now (from A = 1/4 * pi D^2)
D = sqrt(4 * A / pi) = 0.0669 m or 66.9 mm, down from 102mm
But filling the tank will stop, if the tank ever gets to a level of 2.459m above our nozzle. The energy on both sides of the door will be equal and the door closes. Or, if there isn't a door, the
flow just stops. There is no longer an energy difference remaining to drive flow into the tank..
Now the process of filling while the door is closing subjects the pipe and pump to some dynamic effects. As flow comes to a halt, your pump may increase pressure, depending on its output head vs flow
curve. The pipe friction loss will reduce considerably all the way to zero. If it does, then you will have 4 more meters of head to reinstate flow into the pipe and tank. The door will open rapidly,
but at low flow rates. Depending in how all those effects work together, surging will likely occur. But that's a subject for another thread, for sure.
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
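As a quick numerical check of the worked example above (this sketch is not part of the original thread; small differences arise because the post rounds the friction loss to 4 m):

import math

f, L, V, D, g = 0.018, 50.0, 3.0, 0.102, 9.81    # friction factor, length (m), velocity (m/s), diameter (m), gravity
Q = 0.0246                                       # flow rate, m3/s
pump_head = 6.459                                # pump total discharge head, m

friction = f * L * V**2 / (D * 2 * g)            # ~4.05 m (rounded to 4 m in the post)
velocity_head = V**2 / (2 * g)                   # ~0.46 m
total_at_tank = pump_head - friction             # ~2.41 m (post: 2.459 m)
exit_velocity = math.sqrt(2 * g * total_at_tank) # ~6.9 m/s (post: ~7 m/s)
jet_area = Q / exit_velocity                     # m2
jet_diameter = math.sqrt(4 * jet_area / math.pi) # ~0.067 m, down from the 0.102 m pipe
print(friction, velocity_head, total_at_tank, exit_velocity, jet_diameter)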
So far no sketch!
Mr -44 , Please review your calculation about friction loss , diameter to be considered not pipe section.
Thanks Pierre. D it is. My finger got on the wrong line.
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
We have a total head in the pipe connection at the tank of 2.459m.
Head at the tank connection is not created by the pump but by the static liquid height in the tank. Total head of the flowing fluid at the tank connection point is the static height of liquid in the
tank plus the vlocity head of the fluid as it enters the tank.
If total head in the tank is less, we know we can get water in the pipe into the tank.
Water will flow into the tank as long as the static liquid head of the tank is less than the pump no-flow dead head pressure period.
So we need to know what the head in the tank is at the nozzle connection point. It might help to think of a double swing door at the tank entry point. We can open the door by just pushing on it, or
by running at it and slaming into it, so we will try to do both at the same time. The Pressure Head will push the door and the velocity head will do the slamming. Note that this is potential energy
of pressure and kinetic energy from the velocity. Added together we have 2.459 m of energy to open the door into the tank. But the tank may have some energy keeping it closed. What has the tank got?
There is no imbalance of forces at the tank connection between the tank side and the pumps side as the pressure on the pump/pipe side is developed by the pressure on the tank side (which is simply
the static pressure due to height of liquid in the tank). The pump output total pressure minus the friction drop will always equal the tank static pressure at the connection plus the kinetic energy
of the flowing fluid into the tank. (Static pressures in tank and pipe are equal but pipe flow has additional flowing kinetic energy).
The tank has potential energy from any fluid inside it. Does it have kinetic energy? No. The velocity of fluid in a tank is 0. No kinetic energy. So, the tank only has fluid level to hold that door
closed. We have 2.459m, so iif the tank has less than 2.459 m of water in it (above the height of the nozzle), we're getting in, at least with some of our water.
Again there is no discontinuity of pressure between pipe side and tank side as the pressure at the tank connection is due to the static height of the liquid in the tank. The flowrate will balance in
any case so that the static head output of the pump minus the friction loss equals the static height of liquid at the tank connection. The pump output head is not constant but is determined by the
intersection of the system curve with the pump curve.
If the water level in the tank is below our nozzle door, we get in with full flow. Whatever our pressure and velocity heads combined will push into it. That's (V^2)/(2g) and we have both Pressure and
Velocity to play with. The total of 2.459m
If the water level is below door then pressure at the tank connection will then be 0 psig so you are operating the pump on a different part of the pump curve at a different output head such that the
pump output static head minus friction loss now = 0 psig, but with a difference in head of the fluid equal to the velocity head of the fluid entering the tank.
Let's solve for Velocity. 2.459m = V^2/2/g
V = sqrt(2.459m * 2 * 9.81m/s2)
V=6.98 m/s
We're going to blow the door off it's hinges, at least when we start.
Pressure head plus velocity head (total head) in the pipe does not convert to all velocity head in the tank. Pressure head in pipe at pipe exist cannot convert into anything because it is in
equillibrium with the static head in the tank so there is no extra pressure head available to convert to any additional velocity.
Since we are outside the pipe, the cross-sectional area of the flow depends of the "jet stream" shape. Our inside pipe flow rate of 0.0246 m3/s continues into the tank, but now at a velocity of 6.98
So we recalculate the cross-sectional area of Flow to be (from V= Q/A), A = Q/V
A = 0.0246 m3/s / 7m/s = 0.00351 m2
A jet stream can never decrease in diameter when it exits the pipe. For water jet in water it will increase due to slowing down by the fluid in the tank and at the same time some of the fluid in the
tank being caught up in the jet causing it to expand. For water in air (like a fire hose nozzle) the jet will maintain its jet diameter longer since there is nothing causing it to slow down and any
air picked up into the stream is of little consequence.
[URL unfurl="true" said:
https://en.m.wikipedia.org/wiki/Bernoulli%27s_principle[/URL]]The simplified form of Bernoulli's equation can be summarized in the following memorable word equation:[1]:§ 3.5
static pressure + dynamic pressure = total pressure
Seems pretty clear.
--Einstein gave the same test to students every year. When asked why he would do something like that, "Because the answers had changed."
Pressure/gamma plus head due to static height of fluid plus velocity head equals total head.
Not open for further replies. | {"url":"https://www.eng-tips.com/threads/pipe-a-sits-1-meter-under-the-river.517912/","timestamp":"2024-11-10T05:04:20Z","content_type":"text/html","content_length":"221325","record_id":"<urn:uuid:335d92af-e792-4d0d-ad0b-c8f931d26f3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00136.warc.gz"} |
Blockchain Prerequisites - Cryptography Concepts
You must have heard about Bitcoin, Ethereum and other cryptocurrencies and the technology behind all of them: blockchain. If you have little or no idea about blockchain technology and want to start
your deep dive into the world of decentralisation, this is the introductory article for you!
A blockchain is basically a clever combination of a number of concepts taken mainly from the fields of cryptography and computer networking. We’ll cover the cryptography concepts in this article.
Cryptography is the study of the design of techniques which ensure the secrecy and authenticity of all information. In simpler terms, cryptography deals with keeping information safe, private and
unchanged over systems and network communications.
The three topics we’ll focus on are -
1. Hash Functions
A cryptographic hash function is a one-way mathematical function which maps an input of any given size to a unique fixed-length output. This fixed-length output, called the message digest, is totally
random and can’t be mapped back to the original input, i.e. irreversible.
The point for you to remember here is that the output, i.e. the digest, is totally random and changing just one bit in the input message changes its digest completely. So figuring out an input
message whose digest satisfies a given “condition” is very hard and requires a brute force search. One example of such “condition” as used in Bitcoin is a digest whose numerical value is less than a
prespecified small number, called target.
Bitcoin uses the SHA-256 cryptographic hash function. Hashing is also a step in creating digital signatures. Check out this article on hash functions for a deeper dive. Other use cases of hash
functions are checking message integrity, authentication, and HashMaps.
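For instance, with Python's standard hashlib (SHA-256, the hash Bitcoin uses), changing a single character of the input produces a completely different digest; the message text here is just an illustration:

import hashlib

print(hashlib.sha256(b"pay Arya 10 coins").hexdigest())
# One character changed: the digest is completely different
print(hashlib.sha256(b"pay Arya 11 coins").hexdigest())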
2. Asymmetric Encryption
In the process of asymmetric encryption, an arbitrary message called plaintext is converted into an encoded message called ciphertext through a pair of keys. One of the keys is made known to
everyone; this one is called the public key. The other one is kept secret, hence it is called the private key.
The point for you to remember here is that as there is a mathematical relationship between both keys, any message encrypted with the private key can be decrypted using the public key. The opposite is
also true - any message encrypted with the public key can be decrypted using the private key.
One of the most common and most used asymmetric encryption algorithms is RSA.
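As a rough illustration (a sketch assuming the third-party pyca/cryptography package; not a production recipe), encrypting with the public key and decrypting with the private key looks like this:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"a secret note", oaep)   # anyone with the public key can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)          # only the private-key holder can decrypt
assert plaintext == b"a secret note"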
3. Digital Signatures
Digital signatures are a technique used to provide message authentication and integrity over networks. Simply, digital signatures provide assurance that the message was in fact sent by the sender and
not even a single bit of the message has been changed.
Let’s look at an example to understand how it works - Bran wants to send Arya a message M. To provide Arya with a proof that Bran himself has sent the letter and no one else, he hashes M using a
hashing function H and then encrypts the message digest (output of the hash function) using his private key. This encrypted hash is known as the digital signature - DS of the message M and is sent to
Arya along with M. As Bran’s public key and hashing function H is known to Arya, she would hash the received message using H and compare it to the result of decryption of DS using Bran’s public key.
If the two values match then Arya can be sure that Bran sent the message.
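Sticking with the Bran/Arya example, here is a sketch of signing and verifying with ECDSA (again assuming the pyca/cryptography package; variable names are mine):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

bran_private = ec.generate_private_key(ec.SECP256K1())   # the curve Bitcoin uses
bran_public = bran_private.public_key()

message = b"a letter from Bran to Arya"
signature = bran_private.sign(message, ec.ECDSA(hashes.SHA256()))   # hash the message, sign with the private key

# Arya verifies with Bran's public key; this raises InvalidSignature if the message or signature was altered
bran_public.verify(signature, message, ec.ECDSA(hashes.SHA256()))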
The main point here is that digital signatures are used to institute digital identity in blockchains. So all messages (blocks, transactions, etc.) signed through a specific private key -> Priv_Key
are considered to be from an individual address on the blockchain. In the case of Bitcoin, an “address” is the Base58 encoded hash of the public key corresponding to Priv_Key.
Bitcoin uses ECDSA algorithm for digital signatures. Another very important use case of digital signatures is digital certificates which are quite important in maintaining the security and privacy of
the modern web. | {"url":"https://www.topcoder.com/thrive/articles/Blockchain%20Prerequisites%20-%20Cryptography%20Concepts","timestamp":"2024-11-05T03:17:20Z","content_type":"text/html","content_length":"117706","record_id":"<urn:uuid:2c4e2c4f-a470-435a-8895-e90b48628d8b>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00073.warc.gz"} |
1208 - Operator + and operator ^
Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example:
1 + 1 = 2 (verbally, "one plus one equals two")
2 + 2 = 4 (verbally, "two plus two equals four")
A bitwise exclusive or takes two bit patterns of equal length and performs the logical XOR operation on each pair of corresponding bits. The result in each position is 1 if the two bits are
different, and 0 if they are the same. For example:
XOR 0011
= 0110
In the C programming language family, the bitwise XOR operator is "^" (caret).
Now Mr. Y has two numbers P and Q, where P = A + B and Q = A ^ B. Mr. Y asks you how many combinations of nonnegative numbers (A, B) there are. Notice that combinations like (3, 2) and (2, 3) count
as the same combination.
In the first line, there is a single integer number T which means T test cases followed.
In the next T lines, every line contains a test case: two integers P and Q. You can assume that 0 <= p, q <= 1,000,000,000.
For each test case, please output a single line with the number of the combinations.
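One common way to attack this (a hedged sketch, not part of the original problem page) uses the identity A + B = (A ^ B) + 2*(A & B): the AND value must equal (P - Q) / 2, it must not share any bits with Q, and each remaining set bit of Q can be given to either A or B.

import sys

def count_pairs(p: int, q: int) -> int:
    # A + B = (A ^ B) + 2*(A & B), so A & B must equal (P - Q) / 2
    if p < q or (p - q) % 2:
        return 0
    and_bits = (p - q) // 2
    if and_bits & q:              # the AND bits and XOR bits may not overlap
        return 0
    if q == 0:                    # only A = B = P // 2
        return 1
    k = bin(q).count("1")
    return 1 << (k - 1)           # each XOR bit goes to A or B; halve for unordered pairs

data = sys.stdin.read().split()
t = int(data[0])
out = []
for i in range(t):
    p, q = int(data[1 + 2 * i]), int(data[2 + 2 * i])
    out.append(str(count_pairs(p, q)))
print("\n".join(out))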
sample input
sample output | {"url":"http://hustoj.org/problem/1208","timestamp":"2024-11-13T16:02:28Z","content_type":"text/html","content_length":"8402","record_id":"<urn:uuid:1f0072e7-2962-46d5-af24-5f1e29f12255>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00452.warc.gz"} |
Contraction of Multiple ITensors?
It seems the contraction * only works for two ITensors. If I have more than two ITensors that share one index, e.g. A(i,j)*B(i,k)*C(i,l) gives wrong results. Is there a function in this library doing
this? Thanks.
Hi, could you please expand your question a bit more? What is the result that you expected and what is the result you are getting? We'd like to know if there is a bug or whether it's just surprising
Hi Miles,
I expect to sum over "i" and get a new tensor @@D(j,k,l) = \sum\_i A(i,j)B(i,k)C(i,l).@@
But A*B*C gives a four rank tensor. It contracts A and B first and then C.
I see. This is for the C++ version, correct? (Though it should be the same for the Julia version.)
I'll answer below - thanks!
Hi Jin,
Thanks for the question. So this is the intended, and expected behavior of the ITensor * operator, namely that first A and B are contracted, then the result of that is contracted with C.
One reason for this is that this is just how operators work in C++. There is no notation in C++ that would allow simultaneous contraction of three or more ITensors in that language using operator
overloading (*,+, and similar), apart from defining a function call such as multicontract(A,B,C,D,...) (which we might do in the future).
Another reason is that this behavior is quite often the desired behavior that one does want, so it is not in any way a bad thing. It just may not have been what you were expecting. The ITensor
interface is based on tensor diagram notation which also uses a pairwise contraction convention. Contracting over an index shared between three or more tensors requires the notion of a "hyperedge" in
a tensor network graph, which is perfectly fine to introduce but I'm just mentioning it to say it's not the standard thing that diagrams express.
We do offer something like a hyperedge in ITensor though: it is the delta(m,n,p) ITensor, which is a special ITensor with all its (multi-)diagonal elements set to 1.0. Not all routines involving
delta tensors are as optimized as they could be, though some are and in general it's a useful tool.
Hope that helps - | {"url":"https://www.itensor.org/support/2538/contraction-of-multiple-itensors?show=2539","timestamp":"2024-11-05T01:08:30Z","content_type":"text/html","content_length":"30592","record_id":"<urn:uuid:85841348-c7c4-4c19-970f-c0d2e3a21204>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00142.warc.gz"} |
Solve each system by the elimination method or a combination of the elimination and substi- tution methods. $$ \begin{aligned} &x^{2}+6 y^{2}=9\\\ &4 x^{2}+3 y^{2}=36 \end{aligned} $$
Short Answer
Expert verified
The solutions are \( (3, 0) \) and \( (-3, 0) \).
Step by step solution
Multiply equations to align coefficients
Multiply the first equation by 4 to align the coefficients of \(x^2\). This turns the first equation into \(4x^2 + 24y^2 = 36\). Now the system is: \[\begin{aligned} 4x^2 + 24y^2 &= 36 \\ 4x^2 + 3y^2
&= 36 \end{aligned}\]
Subtract the second equation from the modified first equation
Subtract the second equation from the modified first equation to eliminate \(x^2\). \[\begin{aligned} (4x^2 + 24y^2) - (4x^2 + 3y^2) &= 36 - 36 \\ 21y^2 &= 0 \\ y^2 &= 0 \\ y &= 0 \end{aligned}\]
Substitute back to find x
Substitute \(y = 0\) into the original equations. Starting with the first equation: \[\begin{aligned} x^2 + 6(0)^2 &= 9 \\ x^2 &= 9 \\ x &= \pm 3 \end{aligned}\] Thus, the solutions are \(x = 3\) and \(x = -3\).
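As a quick sanity check (a sketch using the sympy library; not part of the original solution), the same system can be solved symbolically:

from sympy import symbols, Eq, solve

x, y = symbols("x y", real=True)
print(solve([Eq(x**2 + 6*y**2, 9), Eq(4*x**2 + 3*y**2, 36)], [x, y]))
# Expected real solutions: [(-3, 0), (3, 0)]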
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
system of equations
A system of equations is a collection of two or more equations with a common set of unknowns. In algebra, these unknowns are usually represented by variables like \(x\) and \(y\). Solving a system of
equations means finding all possible values of these variables that satisfy all equations in the system simultaneously. When you have two equations like:
\(x^2 + 6y^2 = 9\) and \(4x^2 + 3y^2 = 36\), the goal is to find the pairs \((x, y)\) that make both equations true.
Solving systems of equations is a fundamental skill in algebra because it appears in many real-world problems, such as physics, economics, and engineering.
solving systems of equations
There are several methods to solve systems of equations, including graphing, substitution, and elimination. The choice of method can depend on the specific system of equations you're dealing with.
Here are some common techniques for solving systems of equations:
• Graphing: Plot each equation on a graph and find the point(s) where they intersect. This method is useful for visualizing solutions.
• Substitution: Solve one equation for one variable, and then substitute that expression into the other equation. This method is helpful when one equation is easily solved for one of the variables.
• Elimination: Combine the equations in such a way that one of the variables is eliminated, enabling you to solve for the other variable. This method is often quicker and works well for linear
In our example, multiply, subtract, and substitute to find the solutions for both variables.
substitution method
The substitution method is another common technique for solving systems of equations. This method involves solving one of the equations for one of the variables, and then substituting that expression
into the other equation.
Here's how the substitution method works:
• Solve one of the equations for one variable: First, pick either equation and solve it for one of the variables. For example, if you have \(x + y = 2\) and \(x - y = 0\), you could solve the
second equation for \(x\) to get \(x = y\).
• Substitute this expression into the other equation: Substitute \(x = y\) into the first equation: \(y + y = 2\).
• Solve for the remaining variable: Simplify and solve: \(2y = 2\), giving \(y = 1\).
• Find the other variable: Substitute \(y = 1\) back into the expression for the first variable: \(x = 1\).
In the given exercise, after expressing one variable in terms of another, you substitute your findings back into the original equation to solve for the remaining variable.
algebraic methods
Algebraic methods for solving systems of equations include a variety of techniques that use algebraic manipulation to find solutions. These are crucial for dealing with equations that can't be easily
graphed or visualized. Some key algebraic methods include:
• Elimination: As shown in the original exercise, elimination involves aligning coefficients and eliminating one variable to solve for another. In this case, the method depends on multiplying
equations to set coefficients equal and then subtracting to eliminate one variable.
• Substitution: As explained before, substitution uses one equation to express a variable in terms of another and solves subsequently.
• Factorization: If an equation can be factored, the factored form can sometimes provide solutions more easily. This technique is often used for quadratic systems.
• Use of identities: Recognizing and using algebraic identities (like the difference of squares) can simplify complex equations.
Algebraic methods require practice, but they are powerful tools for solving equations that describe many practical problems. The more familiar you are with these methods, the easier it will be to
solve a wide range of algebraic problems. | {"url":"https://www.vaia.com/en-us/textbooks/math/intermediate-algebra-11-edition/chapter-11/problem-34-solve-each-system-by-the-elimination-method-or-a-/","timestamp":"2024-11-14T05:25:18Z","content_type":"text/html","content_length":"249672","record_id":"<urn:uuid:62f35322-48ee-4f28-a324-8408dee25923>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00249.warc.gz"} |
AggWAvg | Deephaven
Version: Java (Groovy)
AggWAvg returns an aggregator that computes the weighted average of values, within an aggregation group, for each input column.
AggWAvg(weightColumn, pairs...)
Parameter Type Description
weightColumn String The weight column for the calculation.
The input/output names for the column(s) for the calculations.
pairs String... • "X" will output the weighted average of values in the X column for each group.
• "Y = X" will output the weighted average of values in the X column for each group and rename it to Y.
• "X, A = B" will output the weighted average of values in the X column for each group and the weighted average of values in the B column and rename it to A.
If an aggregation does not rename the resulting column, the aggregation column will appear in the output table, not the input column. If multiple aggregations on the same column do not rename the
resulting columns, an error will result, because the aggregations are trying to create multiple columns with the same name. For example, in table.aggBy([agg.AggSum(“X”), agg.AggAvg(“X”)]), both the
sum and the average aggregators produce column X, which results in an error.
An aggregator that computes the weighted average of values, within an aggregation group, for each input column.
In this example, AggWAvg returns the weighted average of values of Number, as weighed by Weight and grouped by X.
import static io.deephaven.api.agg.Aggregation.AggWAvg
source = newTable(
stringCol("X", "A", "B", "A", "C", "B", "A", "B", "B", "C"),
stringCol("Y", "M", "N", "O", "N", "P", "M", "O", "P", "M"),
intCol("Number", 55, 76, 20, 130, 230, 50, 73, 137, 214),
intCol("Weight", 1, 2, 1, 3, 2, 1, 4, 1, 2),
result = source.aggBy([AggWAvg("Weight", "Number")], "X")
In this example, AggWAvg returns the weighted average of values of Number (renamed to WAvgNumber), as weighed by Weight and grouped by X.
import static io.deephaven.api.agg.Aggregation.AggWAvg
source = newTable(
stringCol("X", "A", "B", "A", "C", "B", "A", "B", "B", "C"),
stringCol("Y", "M", "N", "O", "N", "P", "M", "O", "P", "M"),
intCol("Number", 55, 76, 20, 130, 230, 50, 73, 137, 214),
intCol("Weight", 1, 2, 1, 3, 2, 1, 4, 1, 2),
result = source.aggBy([AggWAvg("Weight", "WAvgNumber = Number")], "X")
In this example, AggWAvg returns the weighted average of values of Number (renamed to WAvgNumber), as weighed by Weight and grouped by X and Y.
import static io.deephaven.api.agg.Aggregation.AggWAvg
source = newTable(
    stringCol("X", "A", "B", "A", "C", "B", "A", "B", "B", "C"),
    stringCol("Y", "M", "N", "O", "N", "P", "M", "O", "P", "M"),
    intCol("Number", 55, 76, 20, 130, 230, 50, 73, 137, 214),
    intCol("Weight", 1, 2, 1, 3, 2, 1, 4, 1, 2)
)

result = source.aggBy([AggWAvg("Weight", "WAvgNumber = Number")], "X", "Y")
In this example, AggWAvg returns the weighted average of values of Number1 (renamed to WAvgNumber1), and the weighted average of values of Number2 (renamed to WAvgNumber2), both as weighed by Weight
and grouped by X.
import static io.deephaven.api.agg.Aggregation.AggWAvg
source = newTable(
    stringCol("X", "A", "B", "A", "C", "B", "A", "B", "B", "C"),
    intCol("Weight", 1, 2, 1, 3, 2, 1, 4, 1, 2),
    intCol("Number1", 55, 76, 20, 130, 230, 50, 73, 137, 214),
    intCol("Number2", 25, 14, 51, 23, 12, 15, 17, 72, 21)
)

result = source.aggBy([AggWAvg("Weight", "WAvgNumber1 = Number1", "WAvgNumber2 = Number2")], "X")
In this example, AggWAvg returns the weighted average of values of Number (renamed to WAvgNumber), as weighed by Weight and grouped by X, and AggAvg returns the total average of values of Number, as
grouped by X.
import static io.deephaven.api.agg.Aggregation.AggWAvg
import static io.deephaven.api.agg.Aggregation.AggAvg
source = newTable(
    stringCol("X", "A", "B", "A", "C", "B", "A", "B", "B", "C"),
    stringCol("Y", "M", "N", "O", "N", "P", "M", "O", "P", "M"),
    intCol("Number", 55, 76, 20, 130, 230, 50, 73, 137, 214),
    intCol("Weight", 1, 2, 1, 3, 2, 1, 4, 1, 2)
)

result = source.aggBy([AggWAvg("Weight", "WAvgNumber = Number"), AggAvg("Avg = Number")], "X")
Related documentation | {"url":"https://deephaven.io/core/groovy/docs/reference/table-operations/group-and-aggregate/AggWAvg/","timestamp":"2024-11-12T18:50:39Z","content_type":"text/html","content_length":"124923","record_id":"<urn:uuid:72efe2b5-35d8-407a-9973-09caa8bc8104>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00645.warc.gz"} |
Hypothesis Testing
Hypothesis Testing is the process of verifying if a hypothesis is viable or not. More specifically, we test to determine whether should we reject a hypothesis in favor of the other hypothesis.
The main hypothesis we are testing is called the Null Hypothesis (commonly denoted H0); the competing claim is the Alternative Hypothesis (commonly denoted H1).
For example, suppose you want to check whether, across 50 cities in your country, the average temperature in October is significantly different from the world average. Your hypotheses should be: H0: the country's mean October temperature equals the world's mean; H1: the two means are different.
Since ‘different‘ may mean hotter or colder (higher or lower), this is called the 2-tailed test.
If what you want to verify is whether your country is significantly hotter than the world overall, your alternative hypothesis instead states that the country's mean is higher than the world's mean.
And this is called a 1-tailed test.
Note that
After stating the 2 hypotheses, we should define a significance level. The significance level (denoted by α) is the threshold on how unlikely our data must be, under the null hypothesis, before we decide to reject that hypothesis.
For the above example (with a significance level of, say, 5%), we reject the null hypothesis if the probability of getting data at least as extreme as ours, assuming the null hypothesis is true, falls below that level.
About how to compute this probability, we will elaborate in another post. For now, let me tell you the answer: from a normal distribution of mean 24 and std 3, the probability that we can get a set of 50 sample data points with an average value of 26 (our observed sample mean, also used in the confidence-interval example below) or one even further from 24 is about 0.0001%.
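For the impatient, here is a minimal sketch of how that figure could be computed in Python (assuming the population standard deviation of 3 is known, so a simple z-test applies; the sample mean of 26 is the value quoted above):

from math import sqrt
from scipy import stats

mu0, sigma, n = 24.0, 3.0, 50     # hypothesized mean, population std, sample size
xbar = 26.0                       # observed sample mean

z = (xbar - mu0) / (sigma / sqrt(n))        # about 4.71
p_one_tailed = stats.norm.sf(z)             # about 1.2e-6, i.e. roughly 0.0001%
p_two_tailed = 2 * stats.norm.sf(abs(z))    # about 2.4e-6
print(z, p_one_tailed, p_two_tailed)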
The above 0.0001% is called the p-value. This term is very important, so remember: the p-value is the conditional probability of obtaining a result at least as extreme as your actual data, given that the null hypothesis is true. When the p-value is less than the significance level, we reject the Null hypothesis. (Otherwise, when the p-value is higher than the significance level, we do not reject it.)
There is another term that is often used to determine the acceptance or rejection of the Null hypothesis: the Critical value. The relationship between the critical value and the score (say, a T-score or Z-score) is pretty similar to that between the significance level and the p-value. The critical value shows how extreme our score should at least be in order to reject the Null hypothesis. So if our (T- or Z-) score is more extreme than the critical value, we reject the Null hypothesis.
Note that the above 2 relationships are equivalent, meaning there never exists a case when one indicates a rejection when another indicates an acceptance of the Null hypothesis.
Well, there are a lot of terminologies, right? Please bear with me, there are only a few lefts. The next terms are confidence level and confidence interval.
The Confidence Level is expressed in a percentage form (e.g. 90% confidence level, 95% confidence level), while a value of the Confidence Interval is a range (or say an interval) like [20, 30] or [-3, 3].
Returning to the above example, suppose your 50 cities' mean October temperature turns out to be 26, and the corresponding 95% confidence interval works out to be [25.45, 26.55].
A 95% confidence level means that if you repeatedly pick 50 random cities in the world and compute confidence intervals like this, then 95% of the time the confidence intervals will cover the true population mean.
The value 0.55, which is half the width of the confidence interval, is called the Margin of Error. The smaller the margin of error, the more confidence we have that our actual data is close to the
true population.
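As an illustration of how such an interval is typically computed (the post does not give the sample standard deviation, so the exact 0.55 margin above cannot be reproduced here; the data below are synthetic, for illustration only):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
temps = rng.normal(26, 2, size=50)   # synthetic stand-in for the 50 cities' October temperatures

xbar = temps.mean()
s = temps.std(ddof=1)                # sample standard deviation
n = len(temps)
z_star = stats.norm.ppf(0.975)       # about 1.96 for a 95% confidence level
margin = z_star * s / np.sqrt(n)
print(f"mean = {xbar:.2f}, margin of error = {margin:.2f}, CI = [{xbar - margin:.2f}, {xbar + margin:.2f}]")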
On a side note, there is something weird here. We state, with a 95% confidence level, that the average October temperature of the world is in the range [25.45, 26.55]. However, to use a sample set to extrapolate to the population, the sample set must be randomly picked. In this example, all the data points we chose are cities in your country, so the process is not random, which introduces a big error in our conclusion.
To sum up
This post gives an introduction to hypothesis testing. The key takeaways we should remember are:
Hypothesis Testing is the process of verifying if a hypothesis is viable or not.
There are 2 hypotheses in the process: the Null Hypothesis (H0), which represents the default claim of no effect or no difference, and the Alternative Hypothesis (H1), which represents the competing claim we are testing for.
Depending on whether the Alternative Hypothesis is general (just stating there is a difference) or specific (stating higher or lower), the test is called a 2-tailed test or a 1-tailed test.
The p-value is the conditional probability of obtaining an at least as an extreme result as your actual data given the null hypothesis is true.
The critical value shows how large the (T- or Z-)score must be to reject the Null hypothesis.
The Significance level (denoted by α) is the threshold: when the p-value falls below it, we reject the Null hypothesis.
The Confidence level is expressed in a percentage form, while the value of the confidence interval is a range (an interval). These 2 values are paired with each other. We say something like: with an x% confidence level, the confidence interval is [some value, some value]. The 95% confidence level in the given example means that if you repeatedly pick 50 random cities in the world and compute confidence intervals, then 95% of the time the confidence intervals will cover the true population mean.
The Margin of Error equals half the width of the confidence interval.
To use a sample set to extrapolate the population, the sample set must be a good representative of the whole population.
Test your understanding
Hypothesis Testing - Quiz 2
1. Is there any case when both H0 and H1 are true?
2. Which statement best describes the relationship between the Confidence Level and the Confidence Interval?
3. The relationship between the p-value and the Significance level is equivalent to the relationship between ...
4. What best describes the Margin of Error?
5. In which situation should we reject the Null hypothesis? Choose all that apply.
• Wikipedia’s page about Hypothesis Testing: link
• Wikipedia’s page about Confidence Interval: link
• Boston University’s page about Confidence Interval: link
2 thoughts on “Hypothesis Testing”
1. Thanks for a clear explanation. I'd like to add that one important application of hypothesis testing is to infer information about a population from a sample. In reality, it's quite difficult to measure statistical parameters for a whole population; instead, we can practically conduct a survey on a sample, and from the stats of that sample ask what we can say about the stats of the population.
It would be great if the author could give some examples like that. They would likely be more convincing than the one mentioned in this post.
Truong Dang.
1. Thanks for your suggestion.
That’s indeed a higher-level use of hypothesis testing (HT).
The original goal of HT is to test for the difference between 2 distributions (each of the 2 could be real – e.g. a dataset, or hypothetical – e.g. a normal distribution N(0, 1)).
If, for example, we choose to compare 1 dataset versus 1 hypothetical distribution. If the result shows that every aspect of the dataset and the distribution is “pretty similar”, we may treat
the dataset as a good representative of the distribution.
Yet, I feel the explanation of “how similar is considered similar enough?”, “in what degree of similarity can we infer a specific property from the sample?” is quite complicated for an
introductory post. Also, the main goal of the post is to introduce the concepts and terminologies. I guess I will have another post elaborating on the application you mentioned when suitable.
Please let me know your thoughts,
Best regards, | {"url":"https://tungmphung.com/hypothesis-testing/","timestamp":"2024-11-07T22:41:14Z","content_type":"text/html","content_length":"184908","record_id":"<urn:uuid:2045b72c-9e51-4bd6-930e-408e5266f088>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00899.warc.gz"} |
Factors Which Affect the Speed of Sound in Air
Isaac Newton showed that the speed of sound in a gas is proportional to the square root of the pressure divided by the density, v = √(p/ρ).
This relationship implies that the speed of sound in air is independent of the pressure, since if the pressure is doubled, Boyle's Law - pressure times volume = constant - tells us that the volume is halved, so that the same mass of gas is concentrated into half the volume and the density is doubled. The ratio p/ρ, and hence the expression √(p/ρ), is therefore unchanged.
On the other hand the speed of sound does depend on temperature. The sound is carried by the air molecules. If the temperature increases, the air molecules have more kinetic energy and so travel faster. The speed of sound should therefore increase with increasing temperature. Thought of another way, if the temperature increases at constant volume, the pressure will increase and hence so will the speed of sound. Conversely, if the pressure remains constant, then Charles's Law (volume proportional to absolute temperature) tells us that the volume increases, so the density decreases and the speed of sound again increases.
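As a quick numerical illustration (using the commonly quoted value of roughly 331 m/s at 0 °C): at 30 °C the speed of sound is about 331 × √(303/273) ≈ 349 m/s, an increase of roughly 5% for a 30 K rise in temperature.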
Pitch and loudness have no effect on the speed of sound. Reflect that when at a concert, notes from the various instruments and voices are heard at the same time no matter what the distance from the stage. | {"url":"https://astarmathsandphysics.com/o-level-physics-notes/159-factors-which-affect-the-speed-of-sound-in-air.html","timestamp":"2024-11-10T19:11:38Z","content_type":"text/html","content_length":"29787","record_id":"<urn:uuid:b030c47d-269e-4cf2-b7ff-667a688abe8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00509.warc.gz"}
10.5 Angular Momentum and Its Conservation
Learning Objectives
By the end of this section, you will be able to do the following:
• Understand the analogy between angular momentum and linear momentum
• Observe the relationship between torque and angular momentum
• Apply the law of conservation of angular momentum
The information presented in this section supports the following AP® learning objectives and science practices:
• 4.D.2.1 The student is able to describe a model of a rotational system and use that model to analyze a situation in which angular momentum changes due to interaction with other objects or
systems. (S.P. 1.2, 1.4)
• 4.D.2.2 The student is able to plan a data collection and analysis strategy to determine the change in angular momentum of a system and relate it to interactions with other objects and systems.
(S.P. 2.2)
• 4.D.3.1 The student is able to use appropriate mathematical routines to calculate values for initial or final angular momentum, or change in angular momentum of a system, or average torque or
time during which the torque is exerted in analyzing a situation involving torque and angular momentum. (S.P. 2.2)
• 4.D.3.2 The student is able to plan a data collection strategy designed to test the relationship between the change in angular momentum of a system and the product of the average torque applied
to the system and the time interval during which the torque is exerted. (S.P. 4.1, 4.2)
• 5.E.1.1 The student is able to make qualitative predictions about the angular momentum of a system for a situation in which there is no net external torque. (S.P. 6.4, 7.2)
• 5.E.1.2 The student is able to make calculations of quantities related to the angular momentum of a system when the net external torque on the system is zero. (S.P. 2.1, 2.2)
• 5.E.2.1 The student is able to describe or calculate the angular momentum and rotational inertia of a system in terms of the locations and velocities of objects that make up the system. Students
are expected to do qualitative reasoning with compound objects. Students are expected to do calculations with a fixed set of extended objects and point masses. (S.P. 2.2)
Why does Earth keep on spinning? What started it spinning to begin with? And how does an ice skater manage to spin faster and faster simply by pulling her arms in? Why does she not have to exert a
torque to spin faster? Questions like these have answers based in angular momentum, the rotational analog to linear momentum.
By now the pattern is clear: every rotational phenomenon has a direct translational analog. It seems quite reasonable, then, to define angular momentum $L$ as
10.90 $L = I\omega.$
This equation is an analog to the definition of linear momentum as $p = mv$. Units for linear momentum are kg⋅m/s while units for angular momentum are kg⋅m²/s. As we would expect, an object that has a large moment of inertia $I$, such as Earth, has a very large angular momentum. An object that has a large angular velocity $\omega$, such as a centrifuge, also has a rather large angular momentum.
Making Connections
Angular momentum is completely analogous to linear momentum, first presented in Uniform Circular Motion and Gravitation. It has the same implications in terms of carrying rotation forward, and it is
conserved when the net external torque is zero. Angular momentum, like linear momentum, is also a property of the atoms and subatomic particles.
Example 10.11 Calculating Angular Momentum of the Earth
No information is given in the statement of the problem, so we must look up pertinent data before we can calculate $L = I\omega$. First, according to Figure 10.12, the formula for the moment of inertia of a sphere is
10.91 $I = \frac{2MR^2}{5}$
so that
10.92 $L = I\omega = \frac{2MR^2\omega}{5}.$
Earth's mass $M$ is $5.979 \times 10^{24}\ \text{kg}$ and its radius $R$ is $6.376 \times 10^{6}\ \text{m}$. Earth's angular velocity $\omega$ is, of course, exactly one revolution per day, but we must convert $\omega$ to radians per second to do the calculation in SI units.
Substituting known information into the expression for $L$ and converting $\omega$ to radians per second gives
10.93 $L = 0.4\,(5.979 \times 10^{24}\ \text{kg})(6.376 \times 10^{6}\ \text{m})^2 \left(\frac{1\ \text{rev}}{\text{d}}\right) = 9.72 \times 10^{37}\ \text{kg} \cdot \text{m}^2 \cdot \text{rev/d}.$
Substituting $2\pi$ rad for 1 rev and $8.64 \times 10^{4}\ \text{s}$ for 1 day gives
10.94 $L = \left(9.72 \times 10^{37}\ \text{kg} \cdot \text{m}^2\right)\left(\frac{2\pi\ \text{rad/rev}}{8.64 \times 10^{4}\ \text{s/d}}\right)\left(1\ \text{rev/d}\right) = 7.07 \times 10^{33}\ \text{kg} \cdot \text{m}^2/\text{s}.$
This number is large, demonstrating that Earth, as expected, has a tremendous angular momentum. The answer is approximate, because we have assumed a constant density for Earth in order to estimate its moment of inertia.
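As a quick numerical cross-check of this example (not part of the original text), the same calculation in a few lines of Python:

from math import pi

M = 5.979e24               # Earth's mass [kg]
R = 6.376e6                # Earth's radius [m]
omega = 2 * pi / 8.64e4    # one revolution per day, in rad/s

I = 0.4 * M * R**2         # moment of inertia of a uniform sphere, (2/5) M R^2
L = I * omega
print(f"I = {I:.2e} kg m^2, L = {L:.2e} kg m^2/s")   # L comes out near 7.07e33 kg m^2/s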
When you push a merry-go-round, spin a bike wheel, or open a door, you exert a torque. If the torque you exert is greater than opposing torques, then the rotation accelerates, and angular momentum
increases. The greater the net torque, the more rapid the increase in $L$. The relationship between torque and angular momentum is
10.95 $\text{net } \tau = \frac{\Delta L}{\Delta t}.$
This expression is exactly analogous to the relationship between force and linear momentum, $F = \Delta p/\Delta t$. The equation $\text{net } \tau = \frac{\Delta L}{\Delta t}$ is very fundamental and broadly applicable. It is, in fact, the rotational form of Newton's second law.
Example 10.12 Calculating the Torque Putting Angular Momentum Into a Lazy Susan
Figure 10.21 shows a lazy susan food tray being rotated by a person in quest of sustenance. Suppose the person exerts a 2.50 N force perpendicular to the lazy susan's 0.260-m radius for 0.150 s. (a)
What is the final angular momentum of the lazy susan if it starts from rest, assuming friction is negligible? (b) What is the final angular velocity of the lazy susan, given that its mass is 4 kg and
assuming its moment of inertia is that of a disk?
We can find the angular momentum by solving $\text{net } \tau = \frac{\Delta L}{\Delta t}$ for $\Delta L$, and using the given information to calculate the torque. The final angular momentum equals the change in angular momentum, because the lazy susan starts from rest. That is, $\Delta L = L$. To find the final velocity, we must calculate $\omega$ from the definition of $L$ in $L = I\omega$.
Solution for (a)
Solving $\text{net } \tau = \frac{\Delta L}{\Delta t}$ for $\Delta L$ gives
10.96 $\Delta L = (\text{net } \tau)\,\Delta t.$
Because the force is perpendicular to $r$, we see that $\text{net } \tau = rF$, so that
10.97 $L = rF\Delta t = (0.260\ \text{m})(2.50\ \text{N})(0.150\ \text{s}) = 9.75 \times 10^{-2}\ \text{kg} \cdot \text{m}^2/\text{s}.$
Solution for (b)
The final angular velocity can be calculated from the definition of angular momentum,
10.98 $L = I\omega.$
Solving for $\omega$ and substituting the formula for the moment of inertia of a disk into the resulting equation gives
10.99 $\omega = \frac{L}{I} = \frac{L}{\tfrac{1}{2}MR^2}.$
And substituting known values into the preceding equation yields
10.100 $\omega = \frac{9.75 \times 10^{-2}\ \text{kg} \cdot \text{m}^2/\text{s}}{(0.500)(4.00\ \text{kg})(0.260\ \text{m})^2} = 0.721\ \text{rad/s}.$
Note that the imparted angular momentum does not depend on any property of the object but only on torque and time. The final angular velocity is equivalent to one revolution in 8.71 s (determination
of the time period is left as an exercise for the reader), which is about right for a lazy susan.
Take-home Experiment
Plan an experiment to analyze changes to a system's angular momentum. Choose a system capable of rotational motion such as a lazy susan or a merry-go-round. Predict how the angular momentum of this
system will change when you add an object to the lazy susan or jump onto the merry-go-round. What variables can you control? What are you measuring? In other words, what are your independent and
dependent variables? Are there any independent variables that it would be useful to keep constant. Angular velocity, perhaps? Collect data in order to calculate or estimate the angular momentum of
your system when in motion. What do you observe? Collect data in order to calculate the change in angular momentum as a result of the interaction you performed.
Using your data, how does the angular momentum vary with the size and location of an object added to the rotating system?
Example 10.13 Calculating the Torque in a Kick
The person whose leg is shown in Figure 10.22 kicks his leg by exerting a 2,000-N force with his upper leg muscle. The effective perpendicular lever arm is 2.20 cm. Given that the moment of inertia of the lower leg is $1.25\ \text{kg} \cdot \text{m}^2$, (a) find the angular acceleration of the leg. (b) Neglecting the gravitational force, what is the rotational kinetic energy of the leg after it has rotated through $57.3^\circ$ (1 rad)?
The angular acceleration can be found using the rotational analog to Newton's second law, or $\alpha = \text{net } \tau / I$. The moment of inertia $I$ is given and the torque can be found easily from the given force and perpendicular lever arm. Once the angular acceleration $\alpha$ is known, the final angular velocity and rotational kinetic energy can be calculated.
From the rotational analog to Newton's second law, the angular acceleration $αα size 12{α} {}$ is
10.101 $α=net τI.α=net τI. size 12{α= { {"net "τ} over {I} } } {}$
Because the force and the perpendicular lever arm are given and the leg is vertical so that its weight does not create a torque, the net torque is thus
10.102 $net t = r?F = 0.0220 m2,000 N = 44m. net t = r?F = 0.0220 m2,000 N = 44m.$
Substituting this value for the torque and the given value for the moment of inertia into the expression for $αα size 12{α} {}$ gives
10.103 $α=44 N⋅m1.25 kg⋅m2=35.2 rad/s2.α=44 N⋅m1.25 kg⋅m2=35.2 rad/s2. size 12{α= { {"44" "." 0" N" cdot m} over {1 "." "25"" kg" cdot m rSup { size 8{2} } } } ="35" "." 2" rad/s" rSup { size 8{2} }
} {}$
Solution to (b)
The final angular velocity can be calculated from the kinematic expression
10.104 $\omega^2 = \omega_0^2 + 2\alpha\theta$
or
10.105 $\omega^2 = 2\alpha\theta$
because the initial angular velocity is zero. The kinetic energy of rotation is
10.106 $\text{KE}_{\text{rot}} = \tfrac{1}{2} I \omega^2$
so it is most convenient to use the value of $\omega^2$ just found and the given value for the moment of inertia. The kinetic energy is then
10.107 $\text{KE}_{\text{rot}} = (0.5)(1.25\ \text{kg} \cdot \text{m}^2)(70.4\ \text{rad}^2/\text{s}^2) = 44.0\ \text{J}.$
These values are reasonable for a person kicking his leg starting from the position shown. The weight of the leg can be neglected in part (a) because it exerts no torque when the center of gravity of
the lower leg is directly beneath the pivot in the knee. In part (b), the force exerted by the upper leg is so large that its torque is much greater than that created by the weight of the lower leg
as it rotates. The rotational kinetic energy given to the lower leg is enough that it could give a ball a significant velocity by transferring some of this energy in a kick.
Making Connections: Conservation Laws
Angular momentum, like energy and linear momentum, is conserved. This universally applicable law is another sign of underlying unity in physical laws. Angular momentum is conserved when net external
torque is zero, just as linear momentum is conserved when the net external force is zero.
Conservation of Angular Momentum
We can now understand why Earth keeps on spinning. As we saw in the previous example, $\Delta L = (\text{net } \tau)\,\Delta t$. This equation means that, to change angular
momentum, a torque must act over some period of time. Because Earth has a large angular momentum, a large torque acting over a long time is needed to change its rate of spin. So what external torques
are there? Tidal friction exerts torque that is slowing Earth's rotation, but tens of millions of years must pass before the change is very significant. Recent research indicates the length of the
day was 18 h some 900 million years ago. Only the tides exert significant retarding torques on Earth, and so it will continue to spin, although ever more slowly, for many billions of years.
What we have here is, in fact, another conservation law. If the net torque is zero, then angular momentum is constant or conserved. We can see this rigorously by considering $\text{net } \tau = \frac{\Delta L}{\Delta t}$ for the situation in which the net torque is zero. In that case,
10.108 $\text{net } \tau = 0,$
implying that
10.109 $\frac{\Delta L}{\Delta t} = 0.$
If the change in angular momentum $\Delta L$ is zero, then the angular momentum is constant; thus,
10.110 $L = \text{constant} \quad (\text{net } \tau = 0)$
or
10.111 $L = L' \quad (\text{net } \tau = 0).$
These expressions are the law of conservation of angular momentum. Conservation laws are as scarce as they are important.
An example of conservation of angular momentum is seen in Figure 10.23, in which an ice skater is executing a spin. The net torque on her is very close to zero, because there is relatively little
friction between her skates and the ice and because the friction is exerted very close to the pivot point. Both $F$ and $r$ are small, and so $\tau$ is
negligibly small. Consequently, she can spin for quite some time. She can do something else, too. She can increase her rate of spin by pulling her arms and legs in. Why does pulling her arms and legs
in increase her rate of spin? The answer is that her angular momentum is constant, so that
10.112 $L = L'.$
Expressing this equation in terms of the moment of inertia,
10.113 $I\omega = I'\omega',$
where the primed quantities refer to conditions after she has pulled in her arms and reduced her moment of inertia. Because $I'$ is smaller, the angular velocity $\omega'$ must increase to keep the angular momentum constant. The change can be dramatic, as the following example shows.
Example 10.14 Calculating the Angular Momentum of a Spinning Skater
Suppose an ice skater, such as the one in Figure 10.23, is spinning at 0.800 rev/s with her arms extended. She has a moment of inertia of $2.34\ \text{kg} \cdot \text{m}^2$ with her arms extended and of $0.363\ \text{kg} \cdot \text{m}^2$ with her arms close to her body. (These moments of inertia are based on reasonable assumptions about a 60.0-kg skater.) (a) What is her angular velocity in revolutions per second after she pulls in her arms? (b) What is her rotational kinetic energy before and after she does this?
In the first part of the problem, we are looking for the skater's angular velocity $\omega'$ after she has pulled in her arms. To find this quantity, we use the conservation of angular momentum and note that the moments of inertia and initial angular velocity are given. To find the initial and final kinetic energies, we use the definition of rotational kinetic energy given by
10.114 $\text{KE}_{\text{rot}} = \tfrac{1}{2} I \omega^2.$
Solution for (a)
Because torque is negligible (as discussed above), the conservation of angular momentum given in $I\omega = I'\omega'$ is applicable. Thus,
10.115 $L = L'$
or
10.116 $I\omega = I'\omega'.$
Solving for $\omega'$ and substituting known values into the resulting equation gives
10.117 $\omega' = \frac{I}{I'}\,\omega = \left(\frac{2.34\ \text{kg} \cdot \text{m}^2}{0.363\ \text{kg} \cdot \text{m}^2}\right)(0.800\ \text{rev/s}) = 5.16\ \text{rev/s}.$
Solution for (b)
Rotational kinetic energy is given by
10.118 $\text{KE}_{\text{rot}} = \tfrac{1}{2} I \omega^2.$
The initial value is found by substituting known values into the equation and converting the angular velocity to rad/s:
10.119 $\text{KE}_{\text{rot}} = (0.5)(2.34\ \text{kg} \cdot \text{m}^2)\left[(0.800\ \text{rev/s})(2\pi\ \text{rad/rev})\right]^2 = 29.6\ \text{J}.$
The final rotational kinetic energy is
10.120 $\text{KE}'_{\text{rot}} = \tfrac{1}{2} I' \omega'^2.$
Substituting known values into this equation gives
10.121 $\text{KE}'_{\text{rot}} = (0.5)(0.363\ \text{kg} \cdot \text{m}^2)\left[(5.16\ \text{rev/s})(2\pi\ \text{rad/rev})\right]^2 = 191\ \text{J}.$
In both parts, there is an impressive increase. First, the final angular velocity is large, although most world-class skaters can achieve spin rates about this great. Second, the final kinetic energy
is much greater than the initial kinetic energy. The increase in rotational kinetic energy comes from work done by the skater in pulling in her arms. This work is internal work that depletes some of
the skater's food energy.
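The numbers in this example can be checked in a few lines of Python (a sketch, not part of the original text):

from math import pi

I_out, I_in = 2.34, 0.363          # kg·m^2: arms extended vs. pulled in
omega_out = 0.800                  # rev/s

omega_in = (I_out / I_in) * omega_out                    # about 5.16 rev/s, from I*omega = constant
KE_out = 0.5 * I_out * (omega_out * 2 * pi) ** 2         # about 29.6 J
KE_in = 0.5 * I_in * (omega_in * 2 * pi) ** 2            # about 191 J
print(omega_in, KE_out, KE_in)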
There are several other examples of objects that increase their rate of spin because something reduced their moment of inertia. Tornadoes are one example. Storm systems that create tornadoes are
slowly rotating. When the radius of rotation narrows, even in a local region, angular velocity increases, sometimes to the furious level of a tornado. Earth is another example. Our planet was born
from a huge cloud of gas and dust, the rotation of which came from turbulence in an even larger cloud. Gravitational forces caused the cloud to contract, and the rotation rate increased as a result.
(See Figure 10.24.)
In case of human motion, one would not expect angular momentum to be conserved when a body interacts with the environment as its foot pushes off the ground. Astronauts floating in space aboard the
International Space Station have no angular momentum relative to the inside of the ship if they are motionless. Their bodies will continue to have this zero value no matter how they twist about as
long as they do not give themselves a push off the side of the vessel.
Check Your Understanding
Is angular momentum completely analogous to linear momentum? What, if any, are their differences?
Yes, angular and linear momentums are completely analogous. While they are exact analogs they have different units and are not directly inter-convertible like forms of energy are. | {"url":"https://texasgateway.org/resource/105-angular-momentum-and-its-conservation?book=79096&binder_id=78556","timestamp":"2024-11-02T17:55:04Z","content_type":"text/html","content_length":"142224","record_id":"<urn:uuid:cf72d8c7-2c65-4048-9d5a-4c13e93aafbe>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00532.warc.gz"} |
CTAN update: pst-math
Date: January 20, 2009 11:17:16 PM CET
Herbert Voss wrote: > I uploaded pst-math.tgz to the uk ctan node. Please replace the > files with the ones in the directory > /graphics/pstricks/contrib/pst-math/ > > pst-math is a collection of
PostScript subroutines for > special mathematical functions like sin hyperbolic > Gamma function , a.s.o. > > This version 0.21 of pst-math.pro has a new subroutine > SIMPSON for numerical
integration of functions written > in PostScript or algebraic notation. i have installed the new version, and updated the catalogue repository. thanks for the upload. users may view the catalogue
entry at
or browse the package directory at
the catalogue entry will be updated (slightly) overnight. Robin Fairbairns For the CTAN team
pst-math – Enhancement of PostScript math operators to use with PSTricks
PostScript lacks a lot of basic operators such as tan, acos, asin, cosh, sinh, tanh, acosh, asinh, atanh, exp (with e base). Also (oddly) cos and sin use arguments in degrees.
Pst-math provides all those operators in a header file pst-math.pro with wrappers pst-math.sty and pst-math.tex.
In addition, sinc, gauss, gammaln and bessel are implemented (only partially for the latter). The package is designed essentially to work with pst-plot but can be used in whatever PS code (such as
PSTricks SpecialCoor "!", which is useful for placing labels).
The package also provides a routine SIMPSON for numerical integration and a solver of linear equation systems.
Package pst-math
Version 0.67 2023-07-03
Copyright 2023 Herbert Voss
Maintainer Herbert Voß
Christophe Jorssen (inactive) | {"url":"https://ctan.org/ctan-ann/id/11882.1232493436@mole.cl.cam.ac.uk","timestamp":"2024-11-02T05:12:34Z","content_type":"text/html","content_length":"15724","record_id":"<urn:uuid:c2e8b68d-7cf5-4f52-be7a-b47cf1a331f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00448.warc.gz"} |
VPEX P4 - Etopika
Bob the monkey lives on a banana tree. The tree can be modelled as a tree (a connected graph with nodes and edges). Bob is currently on the ground, at a marked starting node. Every day, two nodes (not always distinct) each grow a banana, and Bob climbs from his current spot to the 2 bananas (in any order) and eats them. He then takes a nap where he is and sleeps until the next day. What is the least distance he must travel?
Input Specification
The first line contains two integers: the number of nodes and the number of days.
Each of the next lines (one per branch of the tree) contains three integers: the two endpoints of a branch and its length.
Each of the following lines (one per day) contains two integers: the locations of the 2 bananas that day.
Output Specification
Output the minimum total distance the monkey must travel.
For all subtasks:
Subtask 1 [10%]
Subtask 2 [20%]
Sample Input
Sample Output
On the first day, Bob starts at node and travels to node and then node to eat the bananas.
On the second day, Bob is already at node and eats the banana before travelling to node .
There are no comments at the moment. | {"url":"https://dmoj.ca/problem/vpex1p4","timestamp":"2024-11-14T20:24:03Z","content_type":"text/html","content_length":"25321","record_id":"<urn:uuid:e244463d-99f2-4d42-8f89-37409f5e5966>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00569.warc.gz"} |
Excel formula evaluating some rows correctly, but others incorrectly
Excel is a powerful tool for data analysis, but sometimes, you may encounter a scenario where your formulas evaluate some rows correctly while others yield incorrect results. This can be frustrating
and may lead to misinterpretations of your data. In this article, we'll explore the potential reasons for these inconsistencies and provide insights to help you troubleshoot and resolve the issue
Understanding the Problem
Imagine you have a dataset with a formula intended to compute values based on specific criteria. However, when applying this formula across several rows, you notice that certain rows return results
that don't align with your expectations. For instance, you might be summing values or performing calculations based on conditions that appear to be properly set up, yet discrepancies arise.
Scenario and Original Code
Let's illustrate this with a sample dataset and a formula:
     A         B      C         D
1    Item      Price  Quantity  Total
2    Apples    2      5         =B2*C2
3    Bananas   1.5    4         =B3*C3
4    Cherries  3                =B4*C4
5    Dates     2      7         =B5*C5
In this example, the formula in column D aims to calculate the total price for each item by multiplying the price (column B) with the quantity (column C). However, you'll notice that the row for
Cherries (B4*C4) might not return the expected total because the Quantity cell is empty.
Reasons for Inconsistencies
There are several reasons why a formula might evaluate some rows correctly while others do not:
1. Empty or Blank Cells: If any of the referenced cells are empty, the resulting calculation will often produce an error or zero. In the case of the Cherries, the lack of a quantity means the total
will default to zero.
2. Data Types: Ensure that the values in your cells are formatted correctly. For instance, text values instead of numbers can lead to unexpected outputs.
3. Formula Errors: Errors in the formula itself, such as incorrect references or missing operators, can lead to inconsistent evaluations.
4. Copy-Paste Issues: Copying a formula down a column might inadvertently change references or include unintended cells.
5. Conditional Logic: If the formula includes conditional statements (like IF, COUNTIF, etc.), verify that the logic applies uniformly to all intended rows.
Tips for Troubleshooting
To fix inconsistencies in your Excel formulas, consider the following tips:
• Check for Blank Cells: Use IFERROR or IF functions to manage empty cells. For example, modify the Total calculation to:
=IF(C2="", 0, B2*C2)
This formula checks if the Quantity is empty and returns zero instead of attempting the multiplication.
• Data Validation: Ensure the data types are correct by using Excel’s Data Validation tools. You can restrict inputs to certain types, such as numbers only.
• Error Checking Tools: Use Excel's built-in error-checking features to identify issues quickly. Go to the Formulas tab and click on "Error Checking."
• Use Absolute References: When copying formulas, ensure you're not unintentionally changing cell references by using absolute references (e.g., $B$2 instead of B2).
By understanding the root causes of why Excel formulas may yield inconsistent results across different rows, you can enhance your data analysis skills and avoid potential pitfalls in your
calculations. Implementing best practices for managing formulas will not only streamline your workflow but also improve the accuracy of your results.
Additional Resources
Final Thoughts
Take the time to carefully inspect your formulas and datasets for common issues. With a systematic approach to troubleshooting, you can resolve inconsistencies and ensure that your Excel computations
are both reliable and accurate. Happy analyzing! | {"url":"https://go-mk-websites.co.uk/post/excel-formula-evaluating-some-rows-correctly-but-others","timestamp":"2024-11-08T18:58:45Z","content_type":"text/html","content_length":"83166","record_id":"<urn:uuid:3d125f61-a1ce-4d95-a2c2-2558bdbbb89d>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00482.warc.gz"} |
Loan Matrix Calculator
Save time from doing "what-ifs?"
The loan matrix is a tool used to calculate:
• the payments for a loan based on different interest rates and terms
• the payments for a loan based on different interest rates and loan amounts
• possible loan amounts based on different interest rates and payment terms for a given payment
The point of a matrix calculator is that it presents multiple results at one time. This can save you from doing a series of what-if calculations.
Some additional notes:
• Payment method: When the payments start. If the first payment is due on the day the loan originates, then select "start-of-period," otherwise select "end-of-period."
• Compound frequency: If the compounding frequency is not specified by the lender or you don't know it, then select the same value as "payment frequency."
• Step values: By how much to increase the x-axis and y-axis values from their initial values. For example, if you've set the "initial interest rate" to 5% and if you set the "rate step value" to
0.5% then the calculator will set the second value for the interest rate (x-axis) to 5.5%.
On the "Payment Amount - term varies" tab you can calculate a matrix of potential periodic payments for a given loan amount while varying the term and interest rate for the loan.
On the "Payment Amount - loan amount varies" tab you can calculate a matrix of potential periodic payments for a given term while varying the loan amount and interest rate.
The "Loan Amount" tab allows you to calculate a matrix of various loan amounts that can be borrowed for a given payment amount for different terms and interest rates.
The "step values" on either tab control by what amount the interest rate and term are going to increase.
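To make the idea concrete, here is a rough sketch (not the calculator's actual code) of how a "payment amount - term varies" matrix could be computed, assuming end-of-period payments and a quoted annual rate compounded at the monthly payment frequency:

def payment(principal, annual_rate, years, periods_per_year=12):
    """Periodic payment for a fully amortizing loan (end-of-period payments)."""
    r = annual_rate / periods_per_year          # periodic interest rate
    n = years * periods_per_year                # total number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

principal = 200_000
rates = [0.05 + 0.005 * i for i in range(4)]    # 5.0% to 6.5% in 0.5% steps (the "rate step value")
terms = [15, 20, 25, 30]                        # terms in years (the "term step value")

# Rows: interest rates; columns: terms
for rate in rates:
    row = [f"{payment(principal, rate, t):10.2f}" for t in terms]
    print(f"{rate:5.1%} " + " ".join(row))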
Note: It is not necessary to clear one calculation before doing the next. You can change one value and recalculate.
Comments, suggestions & questions welcomed... | {"url":"https://accuratecalculators.com/loan-matrix","timestamp":"2024-11-06T19:49:54Z","content_type":"text/html","content_length":"107207","record_id":"<urn:uuid:817fb3d7-48bc-4004-a497-14f5e6de63de>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00423.warc.gz"} |
OFTI Introduction
OFTI Introduction
by Isabel Angelo and Sarah Blunt (2018)
OFTI (Orbits For The Impatient) is an orbit-generating algorithm designed specifically to handle data covering short fractions of long-period exoplanets (Blunt et al. 2017). Here we go through steps
of using OFTI within orbitize!
Basic Orbit Generating
Orbits are generated in OFTI through a Driver class within orbitize. Once we have imported this class:
we can initialize a Driver object specific to our data:
myDriver = orbitize.driver.Driver('{}/GJ504.csv'.format(orbitize.DATADIR), # path to data file
'OFTI', # name of algorithm for orbit-fitting
1, # number of secondary bodies in system
1.22, # total mass [M_sun]
56.95, # total parallax of system [mas]
mass_err=0.08, # mass error [M_sun]
plx_err=0.26) # parallax error [mas]
Because OFTI is an object class within orbitize, we can assign all of the OFTI attributes onto a variable (s). We can then generate orbits for s using a function called run_sampler, a method of the
OFTI class. The run_sampler method takes in the desired number of accepted orbits as an input.
Here we use run OFTI to randomly generate orbits until 1000 are accepted:
s = myDriver.sampler
orbits = s.run_sampler(1000)
We have now generated 1000 possible orbits for our system. Here, orbits is a (1000 x 8) array, where each of the 1000 elements corresponds to a single orbit. An orbit is represented by 8 orbital
Here is an example of what an accepted orbit looks like from orbitize:
array([4.93916907e+01, 8.90197501e-03, 2.63925411e+00, 2.44962990e+00,
9.31508665e-01, 1.20302112e-01, 5.74242058e+01, 1.22728974e+00])
To further inspect what each of the 8 elements in your orbit represents, you can view the system.param_idx variable. This is a dictionary that tells you the indices of your orbit that correspond to
semi-major axis (a), eccentricity (e), inclination (i), argument of periastron (aop), position angle of nodes (pan), and epoch of periastron passage (epp). The last two indices are the parallax and
system mass, and the number following the parameter name indicates the number of the body in the system.
{'sma1': 0,
'ecc1': 1,
'inc1': 2,
'aop1': 3,
'pan1': 4,
'tau1': 5,
'plx': 6,
'mtot': 7}
Now that we can generate possible orbits for our system, we want to plot the data to interpret our results. Here we will go through a brief overview on ways to visualize your data within orbitize.
For a more detailed guide on data visualization capabilities within orbitize, see the Orbitize plotting tutorial.
One way to visualize our results is through histograms of our computed orbital parameters. Our orbits are outputted from run_sampler as an array of orbits, where each orbit is represented by a set of
orbital elements:
array([[4.93916907e+01, 8.90197501e-03, 2.63925411e+00, 2.44962990e+00,
9.31508665e-01, 1.20302112e-01, 5.74242058e+01, 1.22728974e+00],
[4.69543031e+01, 1.31571508e-01, 2.52917998e+00, 1.34963602e+00,
4.18692436e+00, 4.17659289e-01, 5.73207900e+01, 1.23162413e+00],
[5.15848551e+01, 1.18074455e-01, 2.26110475e+00, 2.98346893e+00,
2.31713931e+00, 3.94202277e-03, 5.69191065e+01, 1.15389146e+00],
[3.89558225e+01, 3.71357464e-01, 2.94801131e+00, 3.08542398e+00,
4.77645562e+00, 7.15043369e-01, 5.72827098e+01, 1.05721164e+00],
[8.19463988e+01, 1.45955646e-02, 2.11512811e+00, 4.52064036e+00,
4.44802306e+00, 9.32004660e-01, 5.72429430e+01, 1.35323242e+00]])
We can effectively view outputs from run_sampler by creating a histogram of a given orbit element to see its distribution of possible values. Our system.param_idx dictionary is useful here. We can
use it to determine the index of a given orbit that corresponds to the orbital element we are interested in:
{'sma1': 0,
'ecc1': 1,
'inc1': 2,
'aop1': 3,
'pan1': 4,
'tau1': 5,
'plx': 6,
'mtot': 7}
If we want to plot the distribution of orbital semi-major axes (a) in our generated orbits, we would use the index dictionary s.system.param_idx to index the semi-major axis element from each orbit:
sma = [x[s.system.param_idx['sma1']] for x in orbits]
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(sma, bins=30)
plt.xlabel('orbital semi-major axis [AU]')
You can use this method to create histograms of any orbital element you are interested in:
ecc = [x[s.system.param_idx['ecc1']] for x in orbits]
i = [x[s.system.param_idx['inc1']] for x in orbits]
plt.hist(sma, bins=30)
plt.xlabel('orbital semi-major axis [AU]')
plt.hist(ecc, bins=30)
plt.xlabel('eccentricity [0,1]')
plt.hist(i, bins=30)
plt.xlabel('inclination angle [rad]')
In addition to our orbits array, Orbitize also creates a Results class that contains built-in plotting capabilities for two types of plots: corner plots and orbit plots.
Corner Plot
After generating the samples, the run_sampler method also creates a Results object that can be accessed with s.results:
myResults = s.results
We can now create a corner plot using the function plot_corner within the Results class. This function requires an input list of the parameters, in string format, that you wish to include in your
corner plot. We can even plot all of the orbital parameters at once! As shown below:
corner_figure = myResults.plot_corner(param_list=['sma1', 'ecc1', 'inc1', 'aop1', 'pan1','tau1'])
A Note about Convergence
Those of you with experience looking at corner plots will note that the result here does not look converged (i.e. we need more samples for our results to be statistically significant). Because this
is a tutorial, we didn’t want you to have to wait around for a while for the OFTI results to converge.
It’s safe to say that OFTI should accept a minimum of 10,000 orbit for convergence. For pretty plots to go in publications, we recommend at least 1,000,000 accepted orbits.
Orbit Plot
What about if we want to see how the orbits look in the sky? Don’t worry, the Results class has a command for that too! It’s called plot_orbits. We can create a simple orbit plot by running the
command as follows:
epochs = myDriver.system.data_table['epoch']
orbit_figure = myResults.plot_orbits(
    start_mjd=epochs[0]  # Minimum MJD for colorbar (here we choose first data epoch)
)
WARNING: ErfaWarning: ERFA function "d2dtf" yielded 1 of "dubious year (Note 5)" [astropy._erfa.core]
WARNING: ErfaWarning: ERFA function "dtf2d" yielded 1 of "dubious year (Note 6)" [astropy._erfa.core]
WARNING: ErfaWarning: ERFA function "utctai" yielded 1 of "dubious year (Note 3)" [astropy._erfa.core]
WARNING: ErfaWarning: ERFA function "taiutc" yielded 1 of "dubious year (Note 4)" [astropy._erfa.core]
WARNING: ErfaWarning: ERFA function "dtf2d" yielded 8 of "dubious year (Note 6)" [astropy._erfa.core]
<Figure size 1008x432 with 0 Axes>
Advanced OFTI and API Interaction
We’ve seen how the run_sampler command is the fastest way to generate orbits within OFTI. For users interested in what’s going on under-the-hood, this part of the tutorial takes us each step of
run_sampler. Understanding the intermediate stages of orbit-fitting can allow for more customization that goes beyond Orbitize’s default parameters.
We begin again by intializing a sampler object on which we can run OFTI:
myDriver = orbitize.driver.Driver('{}/GJ504.csv'.format(orbitize.DATADIR), # path to data file
'OFTI', # name of algorithm for orbit-fitting
1, # number of secondary bodies in system
1.22, # total mass [M_sun]
56.95, # total parallax of system [mas]
mass_err=0.08, # mass error [M_sun]
plx_err=0.26) # parallax error [mas]
In orbitize, the first thing that OFTI does is prepare an initial set of possible orbits for our object through a function called prepare_samples, which takes in the number of orbits to generate as
an input. For example, we can generate 100,000 orbits as follows:
samples = s.prepare_samples(100000)
Here, samples is an array of randomly generated orbits that have been scaled-and-rotated to fit our astrometric observations. The first and second dimension of this array are the number of orbital
elements and total orbits generated, respectively. In other words, each element in samples represents the value of a particular orbital element for each generated orbit:
print('samples: ', samples.shape)
print('first element of samples: ', samples[0].shape)
samples: (8, 100000)
first element of samples: (100000,)
Once our initial set of orbits is generated, the orbits are vetted for likelihood in a function called reject. This function computes the probability of an orbit based on its associated chi squared.
It then rejects orbits with lower likelihoods and accepts the orbits that are more probable. The output of this function is an array of possible orbits for our input system.
orbits, lnlikes = s.reject(samples)
Our orbits array represents the final orbits that are output by OFTI. Each element in this array contains the 8 orbital elements that are computed by orbitize:
We can synthesize this sequence with the run_sampler() command, which runs through the steps above until the input number of orbits has been accepted. Additionally, we can specify the number of
orbits generated by prepare_samples each time the sequence is initiated with an argument called num_samples. Higher values for num_samples will output more accepted orbits, but may take longer to run
since all initially prepared orbits will be run through the rejection step.
orbits = s.run_sampler(100, num_samples=1000)
Saving and Loading Results
Finally, we can save our generated orbits in a file that can be easily read for future use and analysis. Here we will walk through the steps of saving a set of orbits to a file in hdf5 format. The
easiest way to do this is using orbitize.Results.save_results():
myResults.save_results('orbits.hdf5')
Now when you are ready to use your orbits data, it is easily accessible through the file we’ve created. One way to do this is to load the data into a new results object; in this way you can make use
of the functions that we learn before, like plot_corner and plot_orbits. To do this, use the results module:
import orbitize.results
loaded_results = orbitize.results.Results() # create a blank results object to load the data
loaded_results.load_results('orbits.hdf5') # load the saved posterior into the blank object
Alternatively, you can directly access the saved data using the h5py module:
import h5py
f = h5py.File('orbits.hdf5', 'r')
orbits = f['post']
print('orbits array dimensions: ', orbits.shape)
print('orbital elements for first orbit: ', orbits[0])
orbits array dimensions: (100, 8)
orbital elements for first orbit: [42.56737306 0.18520168 2.50497349 1.46225095 3.29419715 0.60649612
57.26589357 1.1478072 ]
And now we can easily work with the saved orbits that were generated by orbitize! Find out more about generating orbits in orbitize! with tutorials here. | {"url":"https://orbitize.readthedocs.io/en/latest/tutorials/OFTI_tutorial.html","timestamp":"2024-11-02T08:23:55Z","content_type":"text/html","content_length":"49544","record_id":"<urn:uuid:cbe11cd1-a077-47d3-a72b-28782dc82999>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00285.warc.gz"} |
Vertex Cover Problem of Computer Science Topics | Question AI
Unlock the complexities of the Vertex Cover Problem with this comprehensive guide. You will gain an in-depth understanding of this pivotal concept in Computer Science, exploring its history, significance in algorithm design and challenging aspects in terms of time complexity. Discover the exciting realm of the Minimum Vertex Cover Problem and the NP Complete Vertex Cover Problem, as well as their practical applications. Furthermore, delve into the subtleties of the Vertex Cover Decision Problem and its algorithmic approaches. Through step-by-step examples, practical implementations, and an examination of time complexity, this ensures a thorough exploration of the Vertex Cover Problem.

Understanding the Vertex Cover Problem in Computer Science

Let's bring you closer to an interesting and stimulating topic of study within the field of Computer Science known as the Vertex Cover Problem. This subject area is rich with computational theories and algorithms that are used to solve relevant and often complex real-world problems.

What is the Vertex Cover Problem?

The Vertex Cover Problem is a classic question from the world of mathematical graph theory and computer science. In simple terms, it's a problem of identifying a set of vertices (the smallest possible) in a graph where each edge of the graph is incident to at least one vertex from the set.

The mathematical representation is: for a given graph \( G = (V, E) \), find a minimum subset \( v' \) of \( V \), such that each edge \( e \) in \( E \) is connected to at least one vertex in \( v' \).

// An example of Vertex Cover Problem in code
Graph *G;
VerticeCoverProblem(G) {
    for each edge e in G:
        if neither endpoint of e is covered:
            include one endpoint of e in cover;
}
Historical View of Vertex Cover Problem

The Vertex Cover problem has a rich history and is one of the foundational issues in computational theory and algorithmic research. Its roots trace all the way back to the 1970s when it was first defined and used in computational analysis. It is an NP-complete problem, making it part of the famous Karp's 21 NP-Complete problems which were stated in 1972 as a challenge to the field of algorithm and complexity theory.

The Evolution of the Vertex Cover Problem

Over the years, the Vertex Cover Problem has evolved and found its place in various applications. Here's a quick look at its development chronology:
• 1970s: Initial definition and recognition as an NP-complete problem.
• 1980s: Introduction of approximation algorithms to provide near-optimal solutions for NP-Complete problems, including the Vertex Cover problem.
• 1990s - 2000s: Integration into genetic algorithms, artificial intelligence and machine learning approaches for problem-solving.
• 2010s - Present: Current research focusing on parallel and distributed computing solutions for the Vertex Cover problem, among other large-scale problem-solving techniques.

Importance of the Vertex Cover Problem in Computer Algorithms

The Vertex Cover Problem is an important concept in the world of computer algorithms. Here are a few reasons why:
• It aids in understanding the fundamental characteristics of NP-complete problems.
• Playing a pivotal role in complexity theory, it allows for the study of difficult-to-solve problems.
• It serves as a basis for creating algorithms that deal with real-life problem solving.

In addition to these, it also plays a role in network design, bioinformatics, operations research, among other fields. Hence, understanding the Vertex Cover Problem leads to a greater understanding of computer science and the development of superior computational solutions.
Into the Minimum Vertex Cover Problem</h2> The adventure through the fascinating realm of Computer Science continues as we dive deep into the concept of the Minimum Vertex Cover Problem. It is a
notable topic that carries a lot of importance in the field of graph theory and computer science. <h3 class="title-medium" id="qai_title_7">Defining the Minimum Vertex Cover Problem</h3> The term
<b>Minimum Vertex Cover Problem</b> refers to a variant of the Vertex Cover Problem. The objective here, as you might have guessed, is to find the smallest possible vertex cover in a given graph.
<div class="definition-class"> <p> A <b>Vertex Cover</b> is a set of vertices that includes at least one endpoint of each edge in the graph. In the "<b>Minimum</b>" version of this problem,
we're trying to find the vertex cover that involves the fewest vertices. </p> </div> From a mathematical perspective, for a given Graph \( G=(V,E) \), the goal is to find the smallest subset \( v
' \) of \( V \), such that every edge \( e \) in \( E \) is connected to at least one vertex in \( v' \). <pre>// Pseudo code for the minimum vertex cover problem (max-degree greedy heuristic)
Graph *G;
MinimumVertexCoverProblem(G) {
    while there are uncovered edges in G:
        select an endpoint v of an uncovered edge having maximum degree;
        include this vertex v in cover;
}
</pre> This problem, like most other
NP-complete problems, was standardised and established in the 1970s as a part of Karp's 21 NP-complete problems. It has held a dominant position in the fields of algorithmic research and
computational procedures ever since. Let's explore its practical applications in the real world in the next section. <h3 class="title-medium" id="qai_title_8">Practical Applications of Minimum
Vertex Cover Problem</h3> Make no mistake about it, the Minimum Vertex Cover Problem is not just limited to theoretical discussions and academic debates. It holds extensive real-life applications and
implications in a multitude of fields. <ul> <li>In <b>network security</b>, it represents an efficient way to place the minimum number of security forces on nodes to prevent breaches.</li> <li>In the
<b>telecommunication industry</b>, it guides where to place the least number of antennas for maximum coverage.</li> <li>It finds use in <b>operational research</b> for optimal resource utilisation.</
li> <li>In <b>computational biology</b>, it helps map complex interactions within biological networks.</li> </ul> <table> <tbody><tr><td>Field</td><td>Applications of Minimum Vertex Cover Problem</
td></tr> <tr><td>Network Security</td><td>Optimal Placement of Security Forces</td></tr> <tr><td>Telecommunication</td><td>Efficient Antenna Placement</td></tr> <tr><td>Operational Research</td><td>
Optimal Resource Utilisation</td></tr> <tr><td>Computational Biology</td><td>Mapping Complex Biological Networks</td></tr> </tbody></table> <h3 class="title-medium" id="qai_title_9">Illustrating the
Minimum Vertex Cover Problem through Real-World Examples</h3> Let's make it real! It's always easier to grasp complex concepts when we can link them to real-life scenarios. For instance,
consider a local internet service provider trying to establish a Wi-Fi network in a new housing estate. The houses are scattered and the provider wants to use the least number of towers to cover all
houses. In this scenario, houses are the edges and towers are vertices. The provider's dilemma essentially translates to the Minimum Vertex Cover Problem. <div class="example-class"> <p> In some
cities, CCTV cameras are installed at junctions to monitor street traffic efficiently. Municipalities want to use the smallest number of cameras while ensuring that every street segment is watched.
This task resembles the Minimum Vertex Cover problem, where street segments are the edges and junctions (the candidate camera locations) are the vertices. </p> </div> <b>If you grasp the Minimum Vertex Cover Problem, you open the door to a deeper
understanding of graph theory and its applications in solving complex, real-world problems.</b> <h2 class="title-big" id="qai_title_3"> Exploring the NP Complete Vertex Cover Problem </h2> Broadening
your understanding of computer science, let's explore another intriguing domain known as the NP Complete Vertex Cover Problem. As a part of the infamous trio of P, NP, and NP-Complete problems,
delving into the nuances of this problem paves the path to deeper comprehension of computational complexity. <h3 class="title-medium" id="qai_title_11"> Deciphering the NP Complete Vertex Cover
Problem </h3> The <b>NP Complete Vertex Cover Problem</b> isn't just a mouthful to say, it's a thrilling brain tease in the realm of theoretical computer science and optimisation. It's an
excellent playing field to understand the challenges of real-world computational problems and the limitations of current algorithms. <div class="definition-class"> <p> In basic terms, <b>NP-Complete
</b> problems are those whose solutions can be verified quickly (i.e., in polynomial time), but no efficient solution algorithm is known. The Vertex Cover Problem is classified as NP-Complete because
it ticks these criteria. </p> </div> Mathematically, for a given Graph \( G=(V,E) \) and a target size \( k \), one can verify in polynomial time that a proposed subset \( v' \) of \( V \) contains at most \( k \) vertices and that each edge \( e \) in \( E \) is connected to at least one
vertex in \( v' \); this efficient verification is what places the problem in NP. Actually finding such a subset, however, appears to require an exhaustive search over subsets of vertices, which is characteristic of NP-complete problems. <pre>// Pseudo code illustrating the NP Complete Vertex Cover Problem (brute force)
Graph *G;
NpCompleteVertexCoverProblem(G) {
    for each subset of vertices V' in G:
        if V' is a vertex cover:
            return V';
}
// Note: Running time is exponential in the number of vertices of G
</pre> <h3 class="title-medium" id="qai_title_12"> Link Between Graph Theory and NP Complete
Vertex Cover Problem </h3> There's a deep-rooted connection between graph theory and the NP Complete Vertex Cover Problem. This link underpins many practical applications in domains ranging from
operations research to bioinformatics. In graph theory, a vertex cover of a graph is a set of vertices that includes at least one endpoint of each edge in the graph. This concept is directly
applicable to understanding the Vertex Cover Problem. As the problem is considered NP-Complete, devising an algorithm that can solve the vertex cover problem efficiently for all types of graphs is
one of the holy grails of theoretical computer science. <h4 class="title-small" id="qai_title_13"> How Graph Theory Influences the NP Complete Vertex Cover Problem </h4> Graph theory plays a
significant role in shaping and understanding the NP Complete Vertex Cover Problem. For starters, the definition of the Vertex Cover Problem itself is a notion extracted straight from graph theory.
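To make the covering condition concrete, here is a minimal Python check (an illustrative sketch, not code from the original article) that a candidate set of vertices really does cover every edge of a graph given as an edge list:
<pre>
def is_vertex_cover(edges, cover):
    """Return True if every edge has at least one endpoint in `cover`."""
    cover = set(cover)
    return all(u in cover or v in cover for (u, v) in edges)

edges = [(1, 2), (1, 3), (2, 4), (2, 3), (3, 4)]
print(is_vertex_cover(edges, {2, 3}))      # True  - a minimum cover
print(is_vertex_cover(edges, {1, 4}))      # False - edge (2, 3) is uncovered
</pre>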
Indeed, the vertex cover problem exists on any graph \( G=(V,E) \), where you're trying to find a subset \( v' \) of vertices \( V \) that touches each edge in \( E \) at least once. The
vertices in \( v' \) essentially "cover" all the edges. <pre>// Pseudo code illustrating the influence of graph theory
Graph *G;
GraphTheoryInfluence(G) {
    for each edge e in G:
        if neither endpoint of e is covered in v':
            include one endpoint of e in the vertex cover v';
}
</pre> At the heart of the problem lies the concept of 'cover', a fundamental notion in graph theory. Therefore,
understanding graph theory is akin to holding a key to solving the NP Complete Vertex Cover Problem. Additionally, various algorithms from graph theory, such as the well-known Depth-First Search
(DFS) or greedy algorithms, can provide approximation solutions to the NP Complete Vertex Cover Problem. This union of concepts helps shed light on some of the most complex and computationally
challenging problems known in computer science. <h2 class="title-big" id="qai_title_4"> Analysing the Time Complexity of the Vertex Cover Problem </h2> Grasping the fundamentals of the Vertex Cover
Problem is just one step of the journey. It's as crucial to understand its inherent complexity, especially concerning time. When you're running computation-intensive tasks, the time an
algorithm takes can make a vast difference. That's where the concept of time complexity in the Vertex Cover Problem comes into the frame. <h3 class="title-medium" id="qai_title_15"> The Role of
Time Complexity in the Vertex Cover Problem </h3> The time complexity aspect of the Vertex Cover Problem is fundamental to comprehend the efficiency of algorithms that solve this problem. In computer
science, <b>time complexity</b> signifies the computational complexity that describes the amount of computational time taken by an algorithm to run, as a function of the size of the input to the
program. <div class="definition-class"> <p> The <b>Vertex Cover Problem</b> is a quintessential example of an NP-hard problem. In layman terms, it signifies that the problem belongs to a class of
problems which, for large inputs, cannot be solved within an acceptable timeframe using any known algorithm. </p> </div> <p>Regarding the Vertex Cover Problem, time complexity is outlined by a
function corresponding to the number of vertices and edges in the Graph \( G=(V,E) \). Formally, if \( |V| \) is the number of vertices and \( |E| \) is the number of edges, the time complexity for
finding a vertex cover is \( O(2^{|V|}) \), making it an exponential function.</p> This effectively implies that a small increase in the problem size could drastically inflate the time taken to solve
the problem. <h4 class="title-small" id="qai_title_16"> Time Complexity vs Space Complexity in the Vertex Cover Problem </h4> While time complexity is a crucial aspect in understanding the efficiency
of an algorithm, space complexity is equally vital. <b>Space Complexity</b> pertains to the total amount of memory space that an algorithm needs to run to completion. For the Vertex Cover Problem,
the space complexity is a function of the number of vertices. Formally this can be expressed as \( O(|V|) \), implying it is linear with respect to the graph's vertices. <pre>// Pseudo code representing space requirements in the Vertex Cover Problem
Graph *G;
VertexCoverProblemSpaceRequirements(G) {
    VertexCover[];
    for each vertex v in G:
        include v in VertexCover;
}
// At its peak, the space need could equal the total vertices in the graph.
</pre> The complexity conundrum in the domain of computer science is always a trade-off between time and space. Minute improvements in time
complexity can sometimes lead to significant increases in space complexity and vice versa. This is a critical principle to master not just for the Vertex Cover Problem, but for all computer science
problem-solving. <h3 class="title-medium" id="qai_title_17"> Ways to Improve Time Complexity of Vertex Cover Problem Algorithm </h3> Yet all is not lost when it comes to the astronomical time
complexity in the Vertex Cover Problem. While it's true that the problem is intrinsically complex, there are stratagems and tactics to manage and even improve the time complexity of related
algorithms. While the intricacies of these strategies are beyond the immediate scope of this discussion, some of the widely employed methods include heuristics, approximation algorithms, and using
memoisation in dynamic programming. <div class="example-class"> <p> One of the most widely used heuristics is choosing a vertex with the maximum degree (number of edges) at each step. This approach
alone, however, doesn't guarantee the minimum vertex cover and can sometimes even lead to inferior results. </p> </div> Even though these methods don't guarantee an exact solution, they
provide practical application value, especially in large graphs where exact algorithms would be too slow. In a world where data is abundant and computational resources are expensive, these "good
enough" solutions are often more pragmatic and likely to be used in production environments. Let's keep diving deeper into this fascinating vortex of computation and complexity that enriches
the captivating field of computer science. Remember, every step adds to your journey of becoming a computational whiz. <h2 class="title-big" id="qai_title_5"> Deep Dive Into the Vertex Cover Decision
Problem and Its Algorithm </h2> If you're interested in the Vertex Cover Problem, you should also familiarise yourself with the Vertex Cover Decision Problem. While it's a variant of the
Vertex Cover Problem, it rests on a fully different aspect – it focuses on decision rather than optimisation. <h3 class="title-medium" id="qai_title_19"> Understanding Vertex Cover Decision Problem
Within Algorithms </h3> The Vertex Cover Decision Problem is an important concept to grasp when it comes to understanding the foundations of the Vertex Cover Problem. More specifically, it's a
formal question within computer science asking whether a graph has a vertex cover of a certain size. <div class="definition-class"> <p> Unlike its counterpart, the <b>Vertex Cover Decision Problem</
b> simply asks whether a vertex cover of a specified size exists. It doesn't necessarily require the smallest vertex cover; it just needs to confirm if a vertex cover of a particular size is
feasible or not. </p> </div> Since this problem is a decision problem, the output is always binary – it can either be 'Yes' or 'No'. The algorithm designed to solve the Vertex Cover
Decision Problem, therefore, should be smart enough to make this assessment accurately and quickly. The time complexity of the algorithm for the Vertex Cover Decision Problem still leans towards the
exponential side, plotted at \( O(2^{|V|}) \). This complexity arises from the fact that in the worst case, the algorithm needs to visit all combinations of vertices. <h4 class="title-small" id=
"qai_title_20"> The Mechanics of Vertex Cover Decision Problem Algorithm </h4> The algorithm developed to handle the Vertex Cover Decision Problem is actually quite intuitive. It efficiently uses
recursion coupled with conditional statements to precisely discern whether a particular size of vertex cover is achievable within a given graph. The key is to understand that if a graph has a vertex
cover of size \( k \), then it must also have vertex covers of sizes \( k+1, k+2, \ldots, |V| \). Thus, the algorithm essentially needs to create all possible subsets of vertices, and for each
subset, check whether the size is less than or equal to \( k \) and whether it covers all the edges. <pre>// Pseudo code for the Vertex Cover Decision Problem
VertexCoverDecisionProblem(Graph G, int k) {
    for each subset V' of vertices in G:
        if |V'| <= k and V' covers all edges in G:
            return 'Yes';
    return 'No';
}
</pre> This process is repeated until all subsets have been
examined or a subset matching the criteria is found. While this approach is conceptually straightforward, its execution can be highly time-consuming due to the exponential rise in combinations,
especially in larger and denser graphs. <h3 class="title-medium" id="qai_title_21"> Exploring Vertex Cover Problem Approximation Algorithm </h3> While exact algorithms to solve the Vertex Cover
Problem are burdensomely slow for large graphs, approximation algorithms offer a gleam of hope. These cleverly designed algorithms aim to construct an almost optimal solution in significantly less
time. Let's begin with a basic <b>2-Approximation algorithm</b> for the Vertex Cover Problem. This type of algorithm finds a solution in polynomial time whose size is, at most, twice the size of
an optimal solution. The idea behind the 2-Approximation algorithm is relatively simple: <ul> <li>Start with an empty cover.</li> <li>Pick an uncovered edge.</li> <li>Add the vertices of the selected edge to the cover.</li> <li>Repeat this process until all edges are covered.</li> </ul>
The time complexity here is clearly polynomial in the size of the graph. Moreover, the size of the vertex cover selected by this algorithm is at most
twice the size of the optimum vertex cover, thereby justifying the tag '2-Approximation'. <h4 class="title-small" id="qai_title_22"> Practical Implementations of Vertex Cover Problem
Approximation Algorithm </h4> You might wonder where these Vertex Cover Problem Approximation Algorithms are used in practice. They pop up in several fields, including network design, bioinformatics,
and operations research, amid other areas. In the field of networks, these algorithms help formulate efficient plans for network coverage, focusing predominantly on reducing costs. Bang in the middle
of the bioinformatics world, the Vertex Cover Problem ties in with protein-protein interaction networks, aiding the analysis and prediction of protein functions. <pre>// Pseudo code for the 2-Approximate Vertex Cover Problem
2ApproxVertexCover(Graph G) {
    VertexCover = empty set;
    while (G has edges) {
        Pick any edge (u,v);
        Add u and v to VertexCover;
        Remove all edges connected to either u or v;
    }
    return VertexCover;
}
// This approach ensures that the final vertex cover size is at most twice the optimal size
</pre> Implementing a Vertex Cover Problem Approximation Algorithm can save precious
computation time. But always bear in mind: these solutions aren't perfect and may result in larger vertex covers than needed. <h3 class="title-medium" id="qai_title_23"> Vertex Cover Problem
Solution Example: A Step-by-Step Guide </h3> Let's walk through an example to illuminate how you'd go about solving the Vertex Cover Problem. Imagine you've been given a simple undirected
graph \( G = (V, E) \), with four vertices \( V = \{1, 2, 3, 4\} \) and five edges \( E = \{(1,2), (1,3), (2,4), (2,3), (3,4)\} \). <div class="example-class"> <p> You start by selecting an edge
arbitrarily, say \( (1,2) \). Add vertices 1 and 2 to the vertex cover and you remove all edges that connect to either vertex 1 or vertex 2. This leaves you with just one edge - \( (3,4) \). Now, you
select this remaining edge, and add vertices 3 and 4 to your vertex cover. With this, all edges are covered, so your vertex cover consists of vertices \( \{1, 2, 3, 4\} \). Although this isn't
the smallest vertex cover (which would be vertices \( \{2, 3\} \), since \( \{1, 4\} \) would leave edge \( (2,3) \) uncovered), the approximation algorithm led us to a valid, albeit slightly inflated, solution. </p> </div> This simple
example illustrates how algorithms for the Vertex Cover Problem operate in practice. It's key to remember that these algorithms are not necessarily focused on finding the smallest vertex cover,
but rather on finding a valid vertex cover in an efficient manner.<div class="key-takeaways-class"> <h2 id="qai_title_6">Vertex Cover Problem - Key takeaways</h2> <ul> <li>The <b>Minimum Vertex Cover
Problem</b> refers to a variant of the Vertex Cover Problem. The objective is to find the smallest possible vertex cover in a given graph. A Vertex Cover is a set of vertices that includes at least
one endpoint of each edge in the graph.</li> <li>In practical scenarios, the Minimum Vertex Cover Problem has extensive real-life applications in fields such as network security, the
telecommunications industry, operational research, and computational biology.</li> <li>The <b>NP Complete Vertex Cover Problem</b> is one of the classic NP-Complete problems studied in theoretical computer
science and optimisation, within the wider landscape of P and NP. Solutions to such problems can be verified quickly, but no efficient solution algorithm is known.</li> <li>Regarding the Vertex Cover Problem, the <b>time complexity</b>
is outlined by a function corresponding to the number of vertices and edges in the graph. If \( |V| \) is the number of vertices and \( |E| \) is the number of edges, the time complexity for finding
a vertex cover is \( O(2^{|V|}) \), making it an exponential function.</li> <li>The <b>Vertex Cover Decision Problem</b> is a formal question within computer science asking whether a graph has a
vertex cover of a certain size. The problem is a decision problem, so the output is always binary – it can either be 'Yes' or 'No'.</li> </ul> </div></div> | {"url":"https://www.questionai.com/knowledge/kzF7scOyK5-vertex-cover-problem","timestamp":"2024-11-13T09:58:50Z","content_type":"text/html","content_length":"105744","record_id":"<urn:uuid:eaa2ec02-b5c5-4508-a3bb-564b96056ba4>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00641.warc.gz"} |
ANOVA for Linear Model Fits
anova.lm {stats} R Documentation
ANOVA for Linear Model Fits
Compute an analysis of variance table for one or more linear model fits.
## S3 method for class 'lm'
anova(object, ...)
## S3 method for class 'lmlist'
anova(object, ..., scale = 0, test = "F")
object, ... objects of class lm, usually, a result of a call to lm.
test a character string specifying the test statistic to be used. Can be one of "F", "Chisq" or "Cp", with partial matching allowed, or NULL for no test.
scale numeric. An estimate of the noise variance \sigma^2. If zero this will be estimated from the largest model considered.
Specifying a single object gives a sequential analysis of variance table for that fit. That is, the reductions in the residual sum of squares as each term of the formula is added in turn are given
as the rows of a table, plus the residual sum of squares.
The table will contain F statistics (and P values) comparing the mean square for the row to the residual mean square.
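Concretely, the F statistic for a row with sum of squares SS on df degrees of freedom is F = (SS/df) / (RSS/df.residual), that is, the term's mean square divided by the residual mean square of the fitted model, and the P value is computed from the F distribution with df and df.residual degrees of freedom.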
If more than one object is specified, the table has a row for the residual degrees of freedom and sum of squares for each model. For all but the first model, the change in degrees of freedom and sum
of squares is also given. (This only makes statistical sense if the models are nested.) It is conventional to list the models from smallest to largest, but this is up to the user.
Optionally the table can include test statistics. Normally the F statistic is most appropriate, which compares the mean square for a row to the residual sum of squares for the largest model
considered. If scale is specified chi-squared tests can be used. Mallows' C_p statistic is the residual sum of squares plus twice the estimate of \sigma^2 times the residual degrees of freedom.
An object of class "anova" inheriting from class "data.frame".
The comparison between two or more models will only be valid if they are fitted to the same dataset. This may be a problem if there are missing values and R's default of na.action = na.omit is used,
and anova.lmlist will detect this with an error.
Chambers, J. M. (1992) Linear models. Chapter 4 of Statistical Models in S eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
See Also
The model fitting function lm, anova.
drop1 for so-called ‘type II’ ANOVA where each term is dropped one at a time respecting their hierarchy.
## sequential table
fit <- lm(sr ~ ., data = LifeCycleSavings)
## same effect via separate models
fit0 <- lm(sr ~ 1, data = LifeCycleSavings)
fit1 <- update(fit0, . ~ . + pop15)
fit2 <- update(fit1, . ~ . + pop75)
fit3 <- update(fit2, . ~ . + dpi)
fit4 <- update(fit3, . ~ . + ddpi)
anova(fit0, fit1, fit2, fit3, fit4, test = "F")
anova(fit4, fit2, fit0, test = "F") # unconventional order
version 4.4.1 | {"url":"https://stat.ethz.ch/R-manual/R-patched/library/stats/html/anova.lm.html","timestamp":"2024-11-13T14:34:30Z","content_type":"text/html","content_length":"5245","record_id":"<urn:uuid:2a642b93-73f4-4f8d-ba12-634a78682a45>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00297.warc.gz"} |
Generalization of Fibonacci ratios: Tribonacci etc.
Generalization of Fibonacci ratios
Each Fibonacci number is the sum of its two predecessors. My previous post looked at generalizing this to the so-called Tribonacci numbers, each being the sum of its three predecessors. One could
keep going, defining the Tetrabonacci numbers and in general the n-Fibonacci numbers for any n at least 2.
For the definition to be complete, you have to specify the first n of the n-Fibonacci numbers. However, these starting values hardly matter for our purposes. We want to look at the limiting ratio of
consecutive n-Fibonacci numbers, and this doesn’t depend on the initial conditions. (If you were determined, you could find starting values where this isn’t true. It’s enough to pick integer initial
values, at least one of which is not zero.)
As shown in the previous post, the ratio is the largest eigenvalue of an n by n matrix with 1’s on the first row and 1’s immediately below the main diagonal. The characteristic polynomial of such a
matrix is
λ^n − λ^(n−1) − λ^(n−2) − … − λ − 1
and so we look for the largest zero of this polynomial. We can sum the terms with negative coefficients as a geometric series and show that the eigenvalues satisfy
λ^n − 1/(2 − λ) = 0.
So the limiting ratio of consecutive n-Fibonacci numbers is the largest root of the above equation. You could verify that when n = 2, we get the golden ratio φ as we should, and when n = 3 we get
around 1.8393 as in the previous post.
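As a quick numerical check (not in the original post), the limiting ratio for each n can be computed directly from the characteristic polynomial with a few lines of Python:

import numpy as np

def n_fibonacci_ratio(n):
    # Largest real root of x^n - x^(n-1) - ... - x - 1.
    coefficients = [1] + [-1] * n
    roots = np.roots(coefficients)
    return max(r.real for r in roots if abs(r.imag) < 1e-9)

for n in (2, 3, 4, 5, 10, 20):
    print(n, round(n_fibonacci_ratio(n), 6))

This prints 1.618034 for n = 2 (the golden ratio), 1.839287 for n = 3, and values creeping ever closer to 2 as n grows.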
As n gets large, the limiting ratio approaches 2. You can see this by taking the log of the previous equation.
n = −log(2 − λ)/log(λ).
As n goes to infinity, λ must approach 2 so that the right side of the equation also goes to infinity.
2 thoughts on “Generalization of Fibonacci ratios”
1. One thing I like about the matrix formulation of the problem is it also gives you a prescription for an exact closed-form answer for Fibonacci (and generalizations),
not just an asymptotic form. All you need to do is expand the initial data vector in the eigenbasis of the matrix and weight the coefficients by powers of the eigenvalues and sum. Poof!
2. How the set of equations λ^n – λ^n-1 – λ^n-2 – … -1=0 are related to Pisot number? | {"url":"https://www.johndcook.com/blog/2015/09/07/generalization-of-fibonacci-ratios/","timestamp":"2024-11-13T21:49:43Z","content_type":"text/html","content_length":"52995","record_id":"<urn:uuid:804f70b5-24d5-4143-8ab2-4e6834e391e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00515.warc.gz"} |
SubArrays · The Julia Language
Julia's SubArray type is a container encoding a "view" of a parent AbstractArray. This page documents some of the design principles and implementation of SubArrays.
One of the major design goals is to ensure high performance for views of both IndexLinear and IndexCartesian arrays. Furthermore, views of IndexLinear arrays should themselves be IndexLinear to the
extent that it is possible.
Consider making 2d slices of a 3d array:
julia> A = rand(2,3,4);
julia> S1 = view(A, :, 1, 2:3)
2×2 view(::Array{Float64, 3}, :, 1, 2:3) with eltype Float64:
0.839622 0.711389
0.967143 0.103929
julia> S2 = view(A, 1, :, 2:3)
3×2 view(::Array{Float64, 3}, 1, :, 2:3) with eltype Float64:
0.839622 0.711389
0.789764 0.806704
0.566704 0.962715
view drops "singleton" dimensions (ones that are specified by an Int), so both S1 and S2 are two-dimensional SubArrays. Consequently, the natural way to index these is with S1[i,j]. To extract the
value from the parent array A, the natural approach is to replace S1[i,j] with A[i,1,(2:3)[j]] and S2[i,j] with A[1,i,(2:3)[j]].
The key feature of the design of SubArrays is that this index replacement can be performed without any runtime overhead.
The strategy adopted is first and foremost expressed in the definition of the type:
struct SubArray{T,N,P,I,L} <: AbstractArray{T,N}
    parent::P
    indices::I
    offset1::Int       # for linear indexing and pointer, only valid when L==true
    stride1::Int       # used only for linear indexing
end
SubArray has 5 type parameters. The first two are the standard element type and dimensionality. The next is the type of the parent AbstractArray. The most heavily-used is the fourth parameter, a
Tuple of the types of the indices for each dimension. The final one, L, is only provided as a convenience for dispatch; it's a boolean that represents whether the index types support fast linear
indexing. More on that later.
If in our example above A is a Array{Float64, 3}, our S1 case above would be a SubArray{Float64,2,Array{Float64,3},Tuple{Base.Slice{Base.OneTo{Int64}},Int64,UnitRange{Int64}},false}. Note in
particular the tuple parameter, which stores the types of the indices used to create S1. Likewise,
julia> S1.indices
(Base.Slice(Base.OneTo(2)), 1, 2:3)
Storing these values allows index replacement, and having the types encoded as parameters allows one to dispatch to efficient algorithms.
Performing index translation requires that you do different things for different concrete SubArray types. For example, for S1, one needs to apply the i,j indices to the first and third dimensions of
the parent array, whereas for S2 one needs to apply them to the second and third. The simplest approach to indexing would be to do the type-analysis at runtime:
parentindices = Vector{Any}()
for thisindex in S.indices
    if isa(thisindex, Int)
        # Don't consume one of the input indices
        push!(parentindices, thisindex)
    elseif isa(thisindex, AbstractVector)
        # Consume an input index
        push!(parentindices, thisindex[inputindex[j]])
        j += 1
    elseif isa(thisindex, AbstractMatrix)
        # Consume two input indices
        push!(parentindices, thisindex[inputindex[j], inputindex[j+1]])
        j += 2
    elseif ...
Unfortunately, this would be disastrous in terms of performance: each element access would allocate memory, and involves the running of a lot of poorly-typed code.
The better approach is to dispatch to specific methods to handle each type of stored index. That's what reindex does: it dispatches on the type of the first stored index and consumes the appropriate
number of input indices, and then it recurses on the remaining indices. In the case of S1, this expands to
Base.reindex(S1, S1.indices, (i, j)) == (i, S1.indices[2], S1.indices[3][j])
for any pair of indices (i,j) (except CartesianIndexs and arrays thereof, see below).
This is the core of a SubArray; indexing methods depend upon reindex to do this index translation. Sometimes, though, we can avoid the indirection and make it even faster.
Linear indexing can be implemented efficiently when the entire array has a single stride that separates successive elements, starting from some offset. This means that we can pre-compute these values
and represent linear indexing simply as an addition and multiplication, avoiding the indirection of reindex and (more importantly) the slow computation of the cartesian coordinates entirely.
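In other words, the offset1 and stride1 fields stored in the struct exist so that, for a view V with fast linear indexing, element access can reduce to something of the form V.parent[V.offset1 + V.stride1*i] (an illustrative sketch of the idea rather than the exact source).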
For SubArray types, the availability of efficient linear indexing is based purely on the types of the indices, and does not depend on values like the size of the parent array. You can ask whether a
given set of indices supports fast linear indexing with the internal Base.viewindexing function:
julia> Base.viewindexing(S1.indices)
julia> Base.viewindexing(S2.indices)
This is computed during construction of the SubArray and stored in the L type parameter as a boolean that encodes fast linear indexing support. While not strictly necessary, it means that we can
define dispatch directly on SubArray{T,N,A,I,true} without any intermediaries.
Since this computation doesn't depend on runtime values, it can miss some cases in which the stride happens to be uniform:
julia> A = reshape(1:4*2, 4, 2)
4×2 reshape(::UnitRange{Int64}, 4, 2) with eltype Int64:
 1  5
 2  6
 3  7
 4  8
julia> diff(A[2:2:4,:][:])
3-element Vector{Int64}:
 2
 2
 2
A view constructed as view(A, 2:2:4, :) happens to have uniform stride, and therefore linear indexing indeed could be performed efficiently. However, success in this case depends on the size of the
array: if the first dimension instead were odd,
julia> A = reshape(1:5*2, 5, 2)
5×2 reshape(::UnitRange{Int64}, 5, 2) with eltype Int64:
  1   6
  2   7
  3   8
  4   9
  5  10
julia> diff(A[2:2:4,:][:])
3-element Vector{Int64}:
 2
 3
 2
then A[2:2:4,:] does not have uniform stride, so we cannot guarantee efficient linear indexing. Since we have to base this decision based purely on types encoded in the parameters of the SubArray, S
= view(A, 2:2:4, :) cannot implement efficient linear indexing.
• Note that the Base.reindex function is agnostic to the types of the input indices; it simply determines how and where the stored indices should be reindexed. It not only supports integer indices,
but it supports non-scalar indexing, too. This means that views of views don't need two levels of indirection; they can simply re-compute the indices into the original parent array!
• Hopefully by now it's fairly clear that supporting slices means that the dimensionality, given by the parameter N, is not necessarily equal to the dimensionality of the parent array or the length
of the indices tuple. Neither do user-supplied indices necessarily line up with entries in the indices tuple (e.g., the second user-supplied index might correspond to the third dimension of the
parent array, and the third element in the indices tuple).
What might be less obvious is that the dimensionality of the stored parent array must be equal to the number of effective indices in the indices tuple. Some examples:
A = reshape(1:35, 5, 7) # A 2d parent Array
S = view(A, 2:7) # A 1d view created by linear indexing
S = view(A, :, :, 1:1) # Appending extra indices is supported
Naively, you'd think you could just set S.parent = A and S.indices = (:,:,1:1), but supporting this dramatically complicates the reindexing process, especially for views of views. Not only do you
need to dispatch on the types of the stored indices, but you need to examine whether a given index is the final one and "merge" any remaining stored indices together. This is not an easy task,
and even worse: it's slow since it implicitly depends upon linear indexing.
Fortunately, this is precisely the computation that ReshapedArray performs, and it does so linearly if possible. Consequently, view ensures that the parent array is the appropriate dimensionality
for the given indices by reshaping it if needed. The inner SubArray constructor ensures that this invariant is satisfied.
• CartesianIndex and arrays thereof throw a nasty wrench into the reindex scheme. Recall that reindex simply dispatches on the type of the stored indices in order to determine how many passed
indices should be used and where they should go. But with CartesianIndex, there's no longer a one-to-one correspondence between the number of passed arguments and the number of dimensions that
they index into. If we return to the above example of Base.reindex(S1, S1.indices, (i, j)), you can see that the expansion is incorrect for i, j = CartesianIndex(), CartesianIndex(2,1). It should
skip the CartesianIndex() entirely and return:
(CartesianIndex(2,1)[1], S1.indices[2], S1.indices[3][CartesianIndex(2,1)[2]])
Instead, though, we get:
(CartesianIndex(), S1.indices[2], S1.indices[3][CartesianIndex(2,1)])
Doing this correctly would require combined dispatch on both the stored and passed indices across all combinations of dimensionalities in an intractable manner. As such, reindex must never be
called with CartesianIndex indices. Fortunately, the scalar case is easily handled by first flattening the CartesianIndex arguments to plain integers. Arrays of CartesianIndex, however, cannot be
split apart into orthogonal pieces so easily. Before attempting to use reindex, view must ensure that there are no arrays of CartesianIndex in the argument list. If there are, it can simply
"punt" by avoiding the reindex calculation entirely, constructing a nested SubArray with two levels of indirection instead. | {"url":"https://docs.julialang.org/en/v1/devdocs/subarrays/","timestamp":"2024-11-12T10:18:52Z","content_type":"text/html","content_length":"33303","record_id":"<urn:uuid:caa0ccdd-dd9a-4850-b09b-ba8d6499a605>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00375.warc.gz"} |
Filon's Integration Formula
A formula for the numerical integration of rapidly oscillatory integrands of the form ∫ f(x) cos(tx) dx (with a companion form for the sine case) over equally spaced nodes. The explicit quadrature weights and the associated remainder term are given in Abramowitz and Stegun (1972, pp. 890-891), cited below.
Abramowitz, M. and Stegun, C. A. (Eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing. New York: Dover, pp. 890-891, 1972.
Tukey, J. W. In On Numerical Approximation: Proceedings of a Symposium Conducted by the Mathematics Research Center, United States Army, at the University of Wisconsin, Madison, April 21-23, 1958
(Ed. R. E. Langer). Madison, WI: University of Wisconsin Press, p. 400, 1959.
© 1996-9 Eric W. Weisstein | {"url":"http://drhuang.com/science/mathematics/math%20word/math/f/f138.htm","timestamp":"2024-11-13T08:17:26Z","content_type":"text/html","content_length":"8421","record_id":"<urn:uuid:accc68b7-f1eb-4afb-94dc-5e00f4d3460a>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00047.warc.gz"} |
NeuralNetwork
class boulderopal.closed_loop.NeuralNetwork(bounds, seed=None)
The neural network optimizer.
• bounds (Bounds) – The bounds on the test points.
• seed (int or None, optional) – Seed for the random number generator used in the optimizer. If set, must be a non-negative integer. Use this option to generate deterministic results from the optimizer.
The neural network optimizer builds and trains a neural network to fit the cost landscape with the data it receives. Then a set of test points are returned, which minimize the neural network’s fitted
cost landscape. A gradient based optimizer is used to minimize this landscape, with the points starting from different random initial values.
This method is recommended when you can provide a large amount of data about your system.
The network architecture used by this optimizer is chosen for its good performance on a variety of quantum control tasks.
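As a rough illustration of how the documented constructor might be used (the Bounds construction below is an assumption based on the rest of the closed-loop reference, not something specified on this page, so check the Bounds documentation before relying on it):

import numpy as np
import boulderopal as bo

# Hypothetical per-parameter (lower, upper) bounds for two control parameters.
bounds = bo.closed_loop.Bounds(np.array([[0.0, 1.0], [-np.pi, np.pi]]))

# Documented signature on this page: NeuralNetwork(bounds, seed=None).
optimizer = bo.closed_loop.NeuralNetwork(bounds=bounds, seed=42)

The optimizer object is then handed to the closed-loop optimization workflow together with evenly sampled initial parameters and the measured costs.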
For best results, you should pass an array of initial_parameters evenly sampled over the whole parameter space. | {"url":"https://docs.q-ctrl.com/references/boulder-opal/boulderopal/closed_loop/NeuralNetwork","timestamp":"2024-11-13T09:57:22Z","content_type":"text/html","content_length":"56365","record_id":"<urn:uuid:10bd08a8-6bad-44e4-b584-9752dd70178f>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00863.warc.gz"} |
Exploring Correlation Functions in Quantum Checkerboard Arrays
Written on
Chapter 1: Introduction to Checkerboard Arrays
In my previous article, I discussed how researchers developed a checkerboard pattern of atoms using laser excitations. This research focused on creating a lattice structure where the energy states of
adjacent atoms alternate between their ground state and an excited state, resembling a checkerboard. With this knowledge, we can delve into fascinating physical properties to verify that we have
indeed established a true checkerboard arrangement. Given that these are quantum systems, we cannot simply observe individual atoms; instead, we must utilize probabilistic methods.
Section 1.1: Understanding Correlation Functions
One approach we can take is to calculate correlation functions between neighboring lattice points, which helps us determine if atomic pairs exhibit genuine anti-correlation. Correlation functions
represent statistical averages of physical states at various lattice points. A specific correlation function, known as the 'connected density-density correlator', evaluates the average correlation
between two atoms separated by k columns and l rows.
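In its standard connected form (as used by Ebadi et al., and assumed here rather than quoted verbatim), it reads G^(2)(k, l) = (1/N_{k,l}) Σ_{i,j} ( ⟨n_i n_j⟩ − ⟨n_i⟩⟨n_j⟩ ), where the sum runs over all N_{k,l} pairs of sites (i, j) separated by exactly k columns and l rows.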
The angled brackets denote expected values, while the n_i's signify Rydberg operators, with the index i indicating the corresponding atom in the lattice. In a lattice of 256 atoms, the indices i
and j range from 1 to 256. Rydberg operators are components of the Hamiltonian that identify when an atom is in the Rydberg state: they evaluate to 1 when aligned with such states. Mathematically,
this is expressed as:
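n_i = |r_i⟩⟨r_i|, the projector onto the Rydberg state of atom i (the standard definition, assumed here rather than quoted from the article).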
When atoms are prepared in a specific state, the expectation value of this operator at a given lattice point i reflects the state contracted on both sides of this operator. In a similar vein, the
operator n_i n_j captures the joint probability of these states both being in the Rydberg state.
Intuitively, this metric indicates the correlation between two distinct atoms being excited simultaneously. This important quantity can be experimentally measured and compared with computational
simulations to assess our progress. It is noteworthy that the structure of this expression hints at a correlation-type measurement, with the numerator resembling the covariance of two random variables.
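To make the estimation procedure concrete, here is a short numpy sketch that is not part of the original article; it assumes the single-shot measurements are stored as a shots × rows × columns array of 0/1 Rydberg occupations:

import numpy as np

def connected_correlator(shots, k, l):
    """Average of <n_i n_j> - <n_i><n_j> over all site pairs offset by (k, l)."""
    n_mean = shots.mean(axis=0)                   # <n_i> for every site
    rows, cols = n_mean.shape
    values = []
    for x in range(rows - k):
        for y in range(cols - l):
            joint = (shots[:, x, y] * shots[:, x + k, y + l]).mean()
            values.append(joint - n_mean[x, y] * n_mean[x + k, y + l])
    return float(np.mean(values))

rng = np.random.default_rng(0)
demo = rng.integers(0, 2, size=(500, 16, 16))     # fake, uncorrelated shots
print(connected_correlator(demo, 1, 0))           # close to 0 for uncorrelated data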
Section 1.2: Investigating Correlation Lengths
We can also analyze how correlation varies with the distances between atoms. For instance, in a study by Sepehr Ebadi et al., a horizontal correlation length (the characteristic length scale over which correlations decay along a
row) was calculated as 11.1, while a vertical correlation length was found to be 11.3. This paper asserts that these correlation lengths are significantly larger than those reported in previous
research, indicating a successful creation of a robust anti-correlated system.
A graph illustrating the correlation function fits from the study shows a line fit of the correlation function against the vertical and horizontal distances. Additionally, the paper investigates the
individual states through a single state readout.
Chapter 2: Phase Transitions and Their Implications
What does the system's state resemble as it transitions into the checkerboard phase? Are there notable behaviors in the critical region as it collapses into this arrangement? These inquiries are
explored through the study of phase transitions. The parameters that dictate the power-law relationships between observable physical quantities are referred to as critical exponents.
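For example, near such a transition the correlation length typically diverges as ξ ∝ |δ − δ_c|^(−ν), where δ is the parameter driving the transition, δ_c its critical value and ν the corresponding critical exponent; this scaling form is quoted here as the standard textbook example rather than as a result from the article.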
[3] S. R. White, Phys. Rev. Lett. 69, 2863 (1992) – Numerical simulations of the two-dimensional array
[4] Sepehr Ebadi et al. Quantum Phases of Matter on a 256-Atom Programmable Quantum Simulator, arXiv:2012.12281 [quant-ph] | {"url":"https://dhuleshwarfabcoats.com/exploring-correlation-functions-in-quantum-checkerboard-arrays.html","timestamp":"2024-11-06T18:08:29Z","content_type":"text/html","content_length":"10622","record_id":"<urn:uuid:af2cfb31-ed60-4734-8053-f4685c3fc028>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00201.warc.gz"} |
On Nonlinear Angular Momentum Theories, Their Representations and Associated Hopf Structures
Nonlinear sl(2) algebras subtending generalized angular momentum theories are studied in terms of undeformed generators and bases. We construct their unitary irreducible representations in such a
general context. The linear sl(2)-case as well as its q-deformation are easily recovered as specific examples. Two other physically interesting applications corresponding to the...
A Simulated Annealing optimization method
x_best = optim_sa(x0,f,ItExt,ItInt,T0,Log,temp_law,param_temp_law,neigh_func,param_neigh_func)
[x_best,f_best] = optim_sa(..)
[x_best,f_best,mean_list] = optim_sa(..)
[x_best,f_best,mean_list,var_list] = optim_sa(..)
[x_best,f_best,mean_list,var_list,f_history] = optim_sa(..)
[x_best,f_best,mean_list,var_list,f_history,temp_list] = optim_sa(..)
[x_best,f_best,mean_list,var_list,f_history,temp_list,x_history] = optim_sa(..)
[x_best,f_best,mean_list,var_list,f_history,temp_list,x_history,iter] = optim_sa(..)
the initial solution
the objective function to be optimized (the prototype if f(x))
the number of temperature decrease
the number of iterations during one temperature stage
the initial temperature (see compute_initial_temp to compute easily this temperature)
if %T, some information will be displayed during the run of the simulated annealing
the temperature decrease law (see temp_law_default for an example of such a function)
a structure (of any kind - it depends on the temperature law used) which is transmitted as a parameter to temp_law
a function which computes a neighbor of a given point (see neigh_func_default for an example of such a function)
a structure (of any kind like vector, list, it depends on the neighborhood function used) which is transmitted as a parameter to neigh_func
the best solution found so far
the objective function value corresponding to x_best
the mean of the objective function value for each temperature stage. A vector of float (optional)
the variance of the objective function values for each temperature stage. A vector of float (optional)
the computed objective function values for each iteration. Each input of the list corresponds to a temperature stage. Each input of the list is a vector of float which gathers all the objective
function values computed during the corresponding temperature stage - (optional)
the list of temperature computed for each temperature stage. A vector of float (optional)
the parameter values computed for each iteration. Each input of the list corresponds to a temperature stage. Each input of the list is a vector of input variables which corresponds to all the
variables computed during the corresponding temperature stage - (optional - can slow down a lot the execution of optim_sa)
a double, the actual number of external iterations in the algorithm (optional).
A Simulated Annealing optimization method.
Simulated annealing (SA) is a generic probabilistic meta-algorithm for the global optimization problem, namely locating a good approximation to the global optimum of a given function in a large
search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities).
The current solver can find the solution of an optimization problem without constraints or with bound constraints. The bound constraints can be customized with the neighbour function. This algorithm
does not use the derivatives of the objective function.
The solver is made of Scilab macros, which enables a high-level programming model for this optimization solver. The SA macros are based on the parameters Scilab module for the management of the
(many) optional parameters.
To use the SA algorithm, one should perform the following steps :
• configure the parameters with calls to init_param and add_param especially the neighbor function, the acceptance function, the temperature law,
• compute an initial temperature with a call to compute_initial_temp,
• find an optimum by using the optim_sa solver.
The algorithm is based on an iterative update of two points :
• the current point is updated by taking into account the neighbour and the acceptance functions,
• the best point is the point which achieved the minimum of the objective function over the iterations.
While the current point is used internally to explore the domain, only the best point is returned by the function. The algorithm is based on an external loop and an internal loop. In the external
loop, the temperature is updated according to the temperature function. In the internal loop, the point is updated according to the neighbour function. A new point is accepted depending on its
associated function value or the value of the acceptance function, which value depends on the current temperature and a uniform random number.
The acceptance of the new point depends on the output values produced by the rand function. This implies that two consecutive calls to the optim_sa will not produce the same result. In order to
always get exactly the same results, please initialize the random number generator with a valid seed.
See the Demonstrations, in the "Optimization" section and "Simulated Annealing" subsection for more examples.
The objective function
The objective function is expected to have the following header.
In the case where the objective function needs additional parameters, the objective function can be defined as a list, where the first argument is the cost function, and the second argument is the
additional parameter. See below for an example.
In the following example, we search for the minimum of the Rastrigin function. This function has many local minima, but only one single global minimum located at x = (0,0), where the function value is
f(x) = -2. We use the simulated annealing algorithm with default settings and the default neighbour function neigh_func_default.
function y=rastrigin(x)
y = x(1)^2+x(2)^2-cos(12*x(1))-cos(18*x(2));
x0 = [2 2];
Proba_start = 0.7;
It_Pre = 100;
It_extern = 100;
It_intern = 1000;
x_test = neigh_func_default(x0);
T0 = compute_initial_temp(x0, rastrigin, Proba_start, It_Pre);
Log = %T;
[x_opt, f_opt, sa_mean_list, sa_var_list] = optim_sa(x0, rastrigin, It_extern, It_intern, T0, Log);
mprintf("optimal solution:\n"); disp(x_opt);
mprintf("value of the objective function = %f\n", f_opt);
t = 1:length(sa_mean_list);
Configuring a neighbour function
In the following example, we customize the neighbourhood function. In order to pass this function to the optim_sa function, we setup a parameter where the "neigh_func" key is associated with our
particular neighbour function. The neighbour function can be customized at will, provided that the header of the function is the same. The particular implementation shown below is the same, in
spirit, as the neigh_func_default function.
function f=quad(x)
p = [4 3];
f = (x(1) - p(1))^2 + (x(2) - p(2))^2
// We produce a neighbor by adding some noise to each component of a given vector
function x_neigh=myneigh_func(x_current, T, param)
nxrow = size(x_current,"r")
nxcol = size(x_current,"c")
sa_min_delta = -0.1*ones(nxrow,nxcol);
sa_max_delta = 0.1*ones(nxrow,nxcol);
x_neigh = x_current + (sa_max_delta - sa_min_delta).*rand(nxrow,nxcol) + sa_min_delta;
x0 = [2 2];
Proba_start = 0.7;
It_Pre = 100;
It_extern = 50;
It_intern = 100;
saparams = init_param();
saparams = add_param(saparams,"neigh_func", myneigh_func);
// or: saparams = add_param(saparams,"neigh_func", neigh_func_default);
// or: saparams = add_param(saparams,"neigh_func", neigh_func_csa);
// or: saparams = add_param(saparams,"neigh_func", neigh_func_fsa);
// or: saparams = add_param(saparams,"neigh_func", neigh_func_vfsa);
T0 = compute_initial_temp(x0, quad, Proba_start, It_Pre, saparams);
Log = %f;
// This should produce x_opt = [4 3]
[x_opt, f_opt] = optim_sa(x0, quad, It_extern, It_intern, T0, Log, saparams)
Passing extra parameters
In the following example, we use an objective function which requires an extra parameter p. This parameter is the second input argument of the quadp function. In order to pass this parameter to the
objective function, we define the objective function as list(quadp,p). In this case, the solver makes so that the syntax includes a second argument.
function f=quadp(x, p)
f = (x(1) - p(1))^2 + (x(2) - p(2))^2
x0 = [-1 -1];
p = [4 3];
Proba_start = 0.7;
It_Pre = 100;
T0 = compute_initial_temp(x0, list(quadp,p) , Proba_start, It_Pre);
[x_opt, f_opt] = optim_sa(x0, list(quadp,p) , 10, 1000, T0, %f)
Configuring an output function
In the following example, we define an output function, which also provides a stopping rule. We define the function outfunc which takes as input arguments the data of the algorithm at the current
iteration and returns the boolean stop. This function prints a message into the console to inform the user about the current state of the algorithm. It also computes the boolean stop, depending on
the value of the function. The stop variable becomes true when the function value is near zero. In order to let optim_sa know about our output function, we configure the "output_func" key to our
outfunc function and call the solver. Notice that the number of external iterations is %inf, so that the external loop never stops. This allows us to check that the output function really can stop
the algorithm.
function f=quad(x)
p = [4 3];
f = (x(1) - p(1))^2 + (x(2) - p(2))^2
function stop=outfunc(itExt, x_best, f_best, T, saparams)
[mythreshold,err] = get_param(saparams,"mythreshold",0);
mprintf ( "Iter = #%d, \t x_best=[%f %f], \t f_best = %e, \t T = %e\n", itExt , x_best(1),x_best(2) , f_best , T )
stop = ( abs(f_best) < mythreshold )
x0 = [-1 -1];
saparams = init_param();
saparams = add_param(saparams,"output_func", outfunc );
saparams = add_param(saparams,"mythreshold", 1.e-2 );
T0 = compute_initial_temp(x0, quad , 0.7, 100, saparams);
[x_best, f_best, mean_list, var_list, f_history, temp_list, x_history, It] = optim_sa(x0, quad, %inf, 100, T0, %f, saparams);
The previous script produces the following output. Notice that the actual output of the algorithm depends on the state of the random number generator rand: if we had not initialize the seed of the
uniform random number generator, we would have produced a different result.
Iter = #1, x_best=[-1.000000 -1.000000], f_best = 4.100000e+001, T = 1.453537e+000
Iter = #2, x_best=[-0.408041 -0.318262], f_best = 3.044169e+001, T = 1.308183e+000
Iter = #3, x_best=[-0.231406 -0.481078], f_best = 3.002270e+001, T = 1.177365e+000
Iter = #4, x_best=[0.661827 0.083743], f_best = 1.964796e+001, T = 1.059628e+000
Iter = #5, x_best=[0.931415 0.820681], f_best = 1.416565e+001, T = 9.536654e-001
Iter = #6, x_best=[1.849796 1.222178], f_best = 7.784028e+000, T = 8.582988e-001
Iter = #7, x_best=[2.539775 1.414591], f_best = 4.645780e+000, T = 7.724690e-001
Iter = #8, x_best=[3.206047 2.394497], f_best = 9.969957e-001, T = 6.952221e-001
Iter = #9, x_best=[3.164998 2.633170], f_best = 8.317924e-001, T = 6.256999e-001
Iter = #10, x_best=[3.164998 2.633170], f_best = 8.317924e-001, T = 5.631299e-001
Iter = #11, x_best=[3.164998 2.633170], f_best = 8.317924e-001, T = 5.068169e-001
Iter = #12, x_best=[3.961464 2.903763], f_best = 1.074654e-002, T = 4.561352e-001
Iter = #13, x_best=[3.961464 2.903763], f_best = 1.074654e-002, T = 4.105217e-001
Iter = #14, x_best=[3.961464 2.903763], f_best = 1.074654e-002, T = 3.694695e-001
Iter = #15, x_best=[3.931929 3.003181], f_best = 4.643767e-003, T = 3.325226e-001
See also
• compute_initial_temp — A SA function which allows to compute the initial temperature of the simulated annealing
• neigh_func_default — A SA function which computes a neighbor of a given point
• temp_law_default — A SA function which computed the temperature of the next temperature stage
"Simulated annealing : theory and applications", P.J.M. Laarhoven and E.H.L. Aarts, Mathematics and its applications, Dordrecht : D. Reidel, 1988
"Theoretical and computational aspects of simulated annealing", P.J.M. van Laarhoven, Amsterdam, Netherlands : Centrum voor Wiskunde en Informatica, 1988
"Genetic algorithms and simulated annealing", Lawrence Davis, London : Pitman Los Altos, Calif. Morgan Kaufmann Publishers, 1987 | {"url":"https://help.scilab.org/docs/2023.0.0/ru_RU/optim_sa.html","timestamp":"2024-11-05T23:48:43Z","content_type":"text/html","content_length":"63359","record_id":"<urn:uuid:a55db7c6-e6b9-4568-95ec-c1457ffaf021>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00879.warc.gz"} |
Know What is Laplace Transformation & Laplace Transform Table
Skills you’ll Learn
Introduction to Laplace Transformation
Laplace transform using First Shifting Theorem
About this Free Certificate Course
Laplace transform is an integral transform that converts a real variable function, usually time, to a function of a complex variable or complex frequency. It is named after a great French
mathematician, Pierre Simon De Laplace. It changes one signal into another based on some fixed rules and equations. It is considered the best way to convert differential equations into algebraic
equations and convolution into multiplication. It holds a significant role in control system engineering. This transform has various applications in different types of science and engineering since
it is a tool used to solve differential equations. This course will give its subscribers good clarity on Laplace transformation, the first shifting theorem, and its various applications by solving
multiple examples.
Great Learning offers Post Graduate Programs in various fields of sciences, management, and other additional skills. You can join our courses to develop the advanced skills required to build your
career and earn a certificate from renowned universities.
Course Outline
Introduction to Laplace Transform
Laplace transform is an integral transform that converts a function of a real variable, usually time, to a function of a complex variable or complex frequency. It changes one signal into another
based on some fixed set of rules and equations.
Laplace Transform using First Shifting Theorem
The first shifting theorem states that multiplying a function by an exponential in the time domain shifts its transform in the s-domain: if L{f(t)} = F(s), then L{e^(at) f(t)} = F(s − a). For example, L{e^(3t) sin(2t)} = 2/((s − 3)^2 + 4).
Laplace Transform Examples
Laplace transform is specially used to solve differential equations and is extensively applied in mechanical engineering and electrical engineering branches. Evaluating improper integrals, complex
impedance of a capacitor, partial fraction expansion, phase delay are a few examples of the same.
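As a quick way to reproduce such examples by machine, the sketch below uses the SymPy library to compute a few standard transforms symbolically; the particular functions chosen here (t^2, e^(3t) and sin(2t)) are purely illustrative.

    from sympy import symbols, laplace_transform, exp, sin

    t, s = symbols('t s', positive=True)

    # L{t^2} = 2/s^3
    print(laplace_transform(t**2, t, s, noconds=True))
    # L{e^(3t)} = 1/(s - 3)
    print(laplace_transform(exp(3*t), t, s, noconds=True))
    # L{sin(2t)} = 2/(s^2 + 4)
    print(laplace_transform(sin(2*t), t, s, noconds=True))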
Frequently Asked Questions
What is the Laplace Transform used for?
Laplace transform is an integral transform that converts a function of a real variable, usually time, to a function of a complex variable or complex frequency. This transform is also used to analyze
dynamical systems and simplify a differential equation into a simple algebraic expression. It changes one signal into another based on some fixed set of rules and equations. It is considered the best
way to convert differential equations into algebraic equations and convolution into multiplication. It holds a major role in control system engineering. This transform has its applications in various
applications in different types of science and engineering since it is a tool for solving differential equations.
How do you explain Laplace Transform?
Laplace transform is an integral transform that converts a function of a real variable, usually time, to a function of a complex variable or complex frequency. It is named after a great French
mathematician, Pierre Simon De Laplace. It is considered a generalized Fourier series. It changes one signal into another based on some fixed set of rules and equations. It is considered the best way
to convert an ordinary differential equation into algebraic equations and convolution into multiplication which aids in solving them faster and in a very easier approach.
Is Laplace Transform Easy?
Laplace transform is easy to apply to the problems and solve them if the constants are known. It is easy even to solve the complex nonhomogeneous differential equations.
What are the Types of Laplace Transform?
Different Types of Laplace Transforms are:
• Bilateral Laplace Transform or Two-sided Laplace Transform
• Inverse Laplace transform
• Laplace transform in probability theory
When can you use Laplace Transform?
Laplace transform is used to simplify differential equations to an algebraic equation that is much easier to solve. It is best suited to linear differential equations with constant coefficients. If the coefficients of the differential equation are not constant, the method generally does not apply directly and some other technique has to be used to solve such problems.
Should I Memorize Laplace Transforms?
It is always good to have a copy of Laplace transform tables. In that case, you do not need to memorize it, otherwise, you should learn them and register them if you want to ace Laplace transform.
How do you Calculate Laplace?
Follow these simple steps to solve a Laplace transform; a short worked example is given after the steps:
Method: Basics
Step 1: Substitute the function into the definition of Laplace transform.
Step 2: Evaluate the integrals through any means of your comfort.
Step 3: Evaluate the Laplace of the power function.
Method: Laplace Properties
Step 1: Find the Laplace of a function multiplied by eᵃᵗ.
Step 2: Find the Laplace of a function multiplied by tᵑ.
Step 3: Determine the Laplace of a stretched function f(at).
Step 4: Find the Laplace of a derivative f′(t).
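As a short worked example of the first method, here is the transform of f(t) = e^(at) computed directly from the definition (in LaTeX notation; the integral converges for s > a):

    \mathcal{L}\{e^{at}\}(s) = \int_0^{\infty} e^{at} e^{-st}\,dt
                             = \int_0^{\infty} e^{-(s-a)t}\,dt
                             = \left[ \frac{e^{-(s-a)t}}{-(s-a)} \right]_0^{\infty}
                             = \frac{1}{s-a}, \qquad s > a.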
What is the Laplace of 0?
The Laplace transform of the zero function is L{0} = ∫₀^∞ 0 · e^(−st) dt = 0, so the Laplace transform of 0 is 0.
Will I get a certificate after completing this Laplace Transformation free course?
Yes, you will get a certificate of completion for Laplace Transformation after completing all the modules and cracking the assessment. The assessment tests your knowledge of the subject and badges
your skills.
How much does this Laplace Transformation course cost?
It is an entirely free course from Great Learning Academy. Anyone interested in learning the basics of Laplace Transformation can get started with this course.
Is there any limit on how many times I can take this free course?
Once you enroll in the Laplace Transformation course, you have lifetime access to it. So, you can log in anytime and learn it for free online.
Can I sign up for multiple courses from Great Learning Academy at the same time?
Yes, you can enroll in as many courses as you want from Great Learning Academy. There is no limit to the number of courses you can enroll in at once, but since the courses offered by Great Learning
Academy are free, we suggest you learn one by one to get the best out of the subject.
Why choose Great Learning Academy for this free Laplace Transformation course?
Great Learning Academy provides this Laplace Transformation course for free online. The course is self-paced and helps you understand various topics that fall under the subject with solved problems
and demonstrated examples. The course is carefully designed, keeping in mind to cater to both beginners and professionals, and is delivered by subject experts. Great Learning is a global ed-tech
platform dedicated to developing competent professionals. Great Learning Academy is an initiative by Great Learning that offers in-demand free online courses to help people advance in their jobs.
More than 5 million learners from 140 countries have benefited from Great Learning Academy's free online courses with certificates. It is a one-stop place for all of a learner's goals.
What are the steps to enroll in this Laplace Transformation course?
Enrolling in any of the Great Learning Academy’s courses is just one step process. Sign-up for the course, you are interested in learning through your E-mail ID and start learning them for free
Will I have lifetime access to this free Laplace Transformation course?
Yes, once you enroll in the course, you will have lifetime access, where you can log in and learn whenever you want to.
Laplace Transform
What is Laplace Transformation?
Laplace transform is an integral transform that converts a function of a real variable, usually time, to a function of a complex variable or complex frequency. It is named after a great French
mathematician, Pierre Simon De Laplace. It changes one signal into another based on some fixed set of rules and equations. It is considered the best way to convert an ordinary differential equation
into algebraic equations and convolution into multiplication which aids in solving them faster and in a significantly easier approach.
Laplace transform is considered a generalized Fourier series. This is done so as to allow us to obtain multiple transforms of any function that does not have Fourier transforms. This concept is used
in a lot of domains. Also, it is essential to have a complete understanding of the concept if you are working on mathematical concepts. It holds a significant role in control system engineering. To
analyze control systems, different functions will be carried out. Both the Laplace transform and the inverse Laplace transform are used to analyze the dynamic control system. This transform has its
applications in various applications in different types of science and engineering since it is a tool used to solve differential equations.
A function is considered piecewise continuous if it has a finite number of breaks and if it does not go to infinity at any point. If f(t) is a piecewise continuous function, its Laplace transform is defined by the integral F(s) = ∫₀^∞ e^(−st) f(t) dt. It is represented by L{f(t)} or F(s).
Laplace Transform Table:
Laplace transform simplifies a differential equation into a simple algebraic expression. A Laplace transform table collects the transforms of commonly used functions so that they can be looked up directly instead of being re-derived each time.
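A few standard entries from such a table, for reference:

    f(t)          F(s) = L{f(t)}
    1             1/s
    t             1/s^2
    t^n           n!/s^(n+1)
    e^(at)        1/(s - a)
    sin(at)       a/(s^2 + a^2)
    cos(at)       s/(s^2 + a^2)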
Below listed are the important properties of Laplace transform:
• Linearity: L {C1 f(t) + C2 g(t)} = C1 L{f(t)} + C2 L{g(t)}
Where C1, C2 are constants; f(t) is the function of time t.
• First shifting theorem
• Change of scale property
• Differentiation
• Integration
• Time shifting
• Final value theorem, which is applicable in the analysis and design of feedback control systems since it gives the steady-state value of a signal directly from its transform
• Initial value theorem
Application of Laplace Transform:
The Laplace transform method relies on Lerch's cancellation law, i.e., the uniqueness of the inverse transform. In this transformation method, the function in the time domain is transformed into a Laplace function in the frequency domain. This function will be in the form of an algebraic equation, which can be solved quickly. The obtained solution can be transformed back again into the time domain by using the inverse Laplace transform.
Laplace transform is most commonly used for control systems, and it is used to study and analyze systems such as ventilation, heating, air conditioning, etc. All of these are used in all modern-day
construction and building.
Laplace transforms make a significant contribution to process control. They help in analysing how process variables respond when inputs are altered; heat-transfer experiments are one example of this.
Laplace transform is also applied in many other engineering domains like mechanical engineering and electronic engineering, and this proves to be a powerful method.
A differential equation can represent the control action for a dynamic control system, be it mechanical, electrical, hydraulic, thermal, etc. The differential equation of a system is derived
according to physical laws that govern a system. The differential equation is transformed into an algebraic equation to help facilitate the solution of a differential equation describing a control
system. This kind of transformation is achieved with the help of the Laplace transformation technique. This means the time domain differential equation is converted into a frequency domain algebraic
equation form.
A fascinating analogy that will help understand Laplace is an example from English literature. Let’s say you come across an English poem that you cannot grasp. Fortunately, you have a French friend
who is good at making sense of this poem. All you have to do is, translate this poem you don’t understand into French and send it to your friend, considering you are good at French. Your friend will
then explain the literature behind this poem and send it back to you. You understand the French explanation and then transform it into English and hence understand the English poetry.
Inverse Laplace Transform:
The inverse Laplace transform is a piecewise continuous and exponentially restricted real function f(t) with the property:
L{f}(s) = L{f(t)}(s) = F(s)
Here, L denotes Laplace transform.
If any function F(s) has an inverse Laplace transform f(t), then f(t) is uniquely determined considering functions that differ from each other only on a point set having Lebesgue measure zero as the
same. This is called Lerch’s theorem, and this means there is an inverse transform on the range of transform.
The inverse Laplace transform takes values in a space of analytic functions on the region of convergence. It is given by a complex integral known as the Bromwich integral, the Fourier–Mellin integral, or Mellin's inverse formula.
Laplace transform is restricted to solving complex differential equations with known constants. If there is a differential equation without known constants, this method is useless, and some other
method has to be applied to solve such problems.
This program is designed for individuals who aim to get a thorough understanding of Laplace transform to seek excellence in the field of mathematics and other sciences. This will help college
students crack real-time problems significantly easier and faster. This course will cover all the parts of Laplace transform and also guide its subscribers to crack through a few example problems.
This will also confer the information on inverse Laplace transformation, Laplace transform table, and more. While you learn about Laplace transform, you will increase your comprehension of its
essentials. You will be acquainted with most of its applications and apply them to solve other such problems.
This particular course is 1 hour long, where you will be gaining knowledge on different methods used to crack through the approach and various other related objectives.
Join with Great Learning today and avail Laplace transformation course for free. | {"url":"https://www.mygreatlearning.com/academy/learn-for-free/courses/laplace-transformation","timestamp":"2024-11-12T16:33:24Z","content_type":"text/html","content_length":"370607","record_id":"<urn:uuid:e66340d6-ad4d-4cab-bef8-87b1975a52d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00094.warc.gz"} |
Model Answers for January 2016 IAL Maths Papers - PMT
Model Answers for January 2016 IAL Maths Papers
Here are model solutions to some of the January 2016 International A-level Maths papers. I’ll update these as others become available.
I’d like to thank Taabish Khan and Ayman Zayed Mannan for providing solutions. | {"url":"https://www.physicsandmathstutor.com/a-level-maths/model-answers-january-2016-ial-maths-papers/","timestamp":"2024-11-11T14:52:54Z","content_type":"text/html","content_length":"91162","record_id":"<urn:uuid:883d55ab-5f29-43a2-8b90-94a1df2e7c6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00597.warc.gz"} |
UMD Scientists Find First Clear Evidence of an Upper… | UMD Right Now
UMD Scientists Find First Clear Evidence of an Upper Level Atmosphere on an Exoplanet
Distant “hot Jupiter” has a glowing water stratosphere hot enough to boil iron
An international team of researchers from UMD, University of Exeter, NASA and other institutions has found compelling evidence for a stratosphere on an enormous planet outside of the solar system.
Previous research spanning the past decade has indicated possible evidence for stratospheres on planets in other solar systems, but this finding, on the planet WASP-121b, is the first time that glowing
water molecules have been detected—the clearest signal yet to indicate an exoplanet stratosphere.
“When it comes to distant exoplanets, which we can’t see in the same detail as other planets here in our own solar system, we have to rely on proxy techniques to reveal their structure,” said Drake
Deming, a professor of astronomy at UMD and a co-author of the study. “The stratosphere of WASP-121b is so hot it can make water vapor glow, which is the basis for our analysis.”
The planet, located approximately 900 light years from Earth, is a gas giant exoplanet commonly referred to as a “hot Jupiter.” The scientists made the new discovery using NASA’s Hubble Space
Telescope. The research is published in the August 3, 2017 issue of the journal Nature.
In our solar system all the planets have some kind of atmosphere, but the composition, density and number of layers vary greatly. Earth’s atmosphere has five layers, of which the troposphere is
closest to the planet surface and in which temperature falls as the altitude increases. The stratosphere is the next layer and here temperatures increase with higher altitudes, a phenomenon also found in the stratosphere of WASP-121b. The last three layers of Earth's atmosphere are the mesosphere, the thermosphere, and the last layer, the exosphere, which merges with the emptiness of outer space.
To study the stratosphere of WASP-121b, scientists used spectroscopy to analyze how the planet’s brightness changed at different wavelengths of light. Water vapor in the planet's atmosphere, for
example, behaves in predictable ways in response to certain wavelengths of light, depending on the temperature of the water. At cooler temperatures, water vapor blocks light from beneath it. But at
higher temperatures, the water molecules glow.
The phenomenon is similar to what happens with fireworks, which get their colors when metallic substances are heated and vaporized, moving their electrons into higher energy states. Depending on the
material, these electrons will emit light at specific wavelengths as they lose energy. For example, sodium produces orange-yellow light and strontium produces red light.
The water molecules in the atmosphere of WASP-121b similarly give off radiation as they lose energy, but it is in the form of infrared light, which the human eye is unable to detect.
“Theoretical models have suggested that stratospheres may define a special class of ultra-hot exoplanets, with important implications for the atmospheric physics and chemistry,” said Tom Evans,
research fellow at the University of Exeter and lead author of the study. “When we pointed Hubble at WASP-121b, we saw glowing water molecules, implying that the planet has a strong stratosphere.”
WASP-121b has a greater mass and radius than Jupiter, making it much puffier. The exoplanet orbits its host star every 1.3 days, and the two bodies are about as close as they can be to each other
without the star's gravity ripping the planet apart. This close proximity also means that the top of the atmosphere is heated to a blazing hot 2,500 degrees Celsius – the temperature at which iron
exists in gas rather than solid form.
In Earth's stratosphere, ozone traps ultraviolet radiation from the sun, which raises the temperature of this layer of atmosphere. Other solar system bodies have stratospheres, too—methane is
responsible for heating in the stratospheres of Jupiter and Saturn's moon Titan, for example. In solar system planets, the change in temperature within a stratosphere is typically less than 100
degrees Celsius. However, on WASP-121b, the temperature in the stratosphere rises by 1,000 degrees Celsius.
Though researchers have not been able to positively identify the cause of the heating they hope upcoming observations at other wavelengths to address this mystery.
Vanadium oxide and titanium oxide gases are candidate heat sources, as they strongly absorb starlight at visible wavelengths, much like ozone absorbs UV radiation. These compounds are expected to be
present in only the hottest of hot Jupiters, such as WASP-121b, as high temperatures are required to keep them in the gaseous state. Indeed, vanadium oxide and titanium oxide are commonly seen in
brown dwarfs, ‘failed stars’ that have some commonalities with exoplanets.
NASA's forthcoming James Webb Space Telescope will be able to follow up on the atmospheres of planets like WASP-121b with higher sensitivity than any telescope currently in space.
"This super-hot exoplanet is going to be a benchmark for our atmospheric models, and will be a great observational target moving into the Webb era," said Hannah Wakeford, a research fellow at the
University of Exeter and a co-author of the research paper.
Follow @UMDRightNow on Twitter for news, UMD experts and campus updates | {"url":"https://umdrightnow.umd.edu/umd-scientists-find-first-clear-evidence-upper-level-atmosphere-exoplanet","timestamp":"2024-11-07T15:37:45Z","content_type":"text/html","content_length":"23077","record_id":"<urn:uuid:6ced644f-3400-4c92-8e9f-ea1a6a5159e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00488.warc.gz"} |
This article compares some of the less obvious details of fourier transform spectrometers and classical grating spectrometers. It contains some apparently not-widely-known observations I made
recently, but is probably not interesting to you if you are not into spectroscopy. This is also why I will skip explaining how the two spectrometers work.
The article will first compare the two types of spectrometers in terms of signal-to-noise ratio, and then in terms of spectral resolution (which also has some implications on SNR, see below).
Signal-to-Noise ratio (SNR) considerations
The scanning issue
The most obvious reason for losing SNR is always throwing away energy. If you have a scanning grating spectrometer which has a slit blocking part of the spectrum, you throw away energy, and thus SNR.
There is not much to discuss here; it is just worth mentioning that the fourier transform (FT) spectrometer is inherently not a scanning spectrometer in this sense; even though it scans over the
interference pattern, at each point in time, all available power interferes to produce the output signal and is thus used.
Scanning spectrometers always lose a factor sqrt(N) in SNR over non-scanning spectrometers, where N is the number of scan points acquired, if the noise is shot noise (see below); or a factor N in SNR if the noise is detector noise.
The detector noise versus shot noise issue
From now on, we assume we don’t have scanning spectrometers, because they are out of this game anyways. Assuming all available power is detected at each point in time, there are two main cases where
noise can come from: the signal itself; and the detector.
If the wavelength is long (longer than visible light, i.e. IR and above), there is typically a lot of detector noise. In this case, it is funnily desirable to have as few detectors as possible,
because each detector produces its own share of noise, and the more of them you have, the more noise power you have overall compared to the power in your signal. This means, a CCD or CMOS chip with a
million pixels will have a thousand times as much noise as a single detector pixel.
This is an advantage for the fourier transform spectrometer, since it inherently only has one single detector. It implies a gain in SNR of a factor sqrt(N), where N is the number of detectors. This
advantage is historically probably the main reason fourier transform spectroscopy is used in the infrared domain so much.
The advantage is lost as soon as shot noise (photon counting noise), i.e. noise inherent to the signal, dominates over detector noise; in this case it doesn’t matter how many detectors you have. This
is typically roughly the case for visible light and wavelengths below.
The noise redistribution and temporal stability issues
Since for a FT spectrometer, the interferogram at each point contains an interference pattern from each spectral component in the analyzed light, noise is distributed across the spectrum, and does
not stay localized at the wavelength producing it (e.g. a strong spectral peak with amplitude noise). This is especially a concern when scan times are long if the source intensity is not stable. It
also causes noise (at least the shot noise) from strong peaks to leak into relatively silent areas of the spectrum, reducing dynamic range. The problem can be mitigated by doing quick scans
(averaging them later on if required), or measuring a phase-shifted reference channel in addition.
Limits on resolution: two of them simple, one complicated
A limit on resolution is set in both systems by comparing the wavelength to the "sweep dimension": for the grating spectrometer, that is the width of the grating; for the FT spectrometer, it is the maximum distance the mirror is moved. The achievable resolution is simply the wavelength divided by this sweep dimension. The reason for this limitation is the position-momentum uncertainty relation from quantum mechanics.
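As a purely illustrative number (the wavelength and sweep dimension below are made up for the example), this rule of thumb gives:

    def resolution_ratio(wavelength_m, sweep_dimension_m):
        # Rule of thumb from above: delta_lambda / lambda ~ wavelength / sweep dimension.
        return wavelength_m / sweep_dimension_m

    r = resolution_ratio(500e-9, 0.05)  # 500 nm light, 5 cm grating width or mirror travel
    print(r, 1.0 / r)                   # ~1e-5, i.e. a resolving power of roughly 100000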
Another limit is imposed by the energy-time uncertainty relation, but in this case that is typically not interesting; it can be completely negated by very short integration times already.
There is however yet another limit on achievable resolution, which is much more delicate in nature. It is caused by beam quality. When feeding the spectrometer with perfectly parallel light from a
point source, such as approximated by a good-quality laser beam, the limitation disappears. However, when the source is not point-like, there is a certain randomness in the beam; it is not possible
to make it perfectly parallel, and for the grating spectrometer it is easy to see how this hurts resolution. In fact, this is a fundamental problem: this kind of spectrometer works by detecting some
property derived from one component of the photon momentum, subsequently computing the photon energy from that component, assuming that the propagation direction is known. Any disorder in the beam
violates this assumption and worsens the result.
Slits and energy losses
Beam quality can be arbitrarily improved by adding a narrow slit in front of the spectrometer, as is typically done. This comes with the disadvantage of removing large amounts of energy from the
beam, so you have to find a trade-off between resolution and signal-to-noise ratio. It is not possible to losslessly improve beam quality; this would imply a decrease of phase space volume occupied
by the beam, caused by an arrangement of passive components, which is forbidden by thermodynamics. This observation is for example also touched by the theory of gaussian beams.
The reason for the error introduced by beam disorder
In case of the grating spectrometer, the grating distributes photons on the detector screen depending on the z component of their momentum (z being the propagation direction of the beam). However,
the x direction of the momentum (x being the direction perpendicular to the beam which is cut by the slit) also causes photons to spread out on the screen in addition, even if they all have the same
energy; imagine photons hitting a mirror under different angles. This spread is proportional to the x component, and with that, proportional to the beam spread angle.
In case of the fourier transform spectrometer, the error is introduced by differences in path length before the two components interfere. Unless the two arm lengths are exactly equal, the path length
difference will depend on the angle. However — and this is the insight which motivated me to write this post — because it depends on the cosine, the extra path length difference is proportional to
the square of the ray misalignment angle. The fourier transform spectrometer has (more or less by chance I’d say) geometrically aligned the error such that it causes minimum effect on the result.
The fourier transform spectrometer’s [DEL:undocumented:DEL] advantage
Update: This observation has actually been known since 1960 and is called the "Jacquinot advantage".
What does this mean? It’s simple: for both spectrometers, you gain spectral resolution proportional to the inverse diameter of the slit, or pinhole respectively, at least up to the point where your
resolution limit is dominated by the size of your “sweep dimension”. Half the slit width — twice the spectral resolution. There is a significant difference though: in the grating spectrometer case, a
one-dimensional slit is used; in the fourier-transform case it is a two-dimensional round pinhole. Making the slit twice as wide trades a factor 2 in resolution for a factor 2 in signal quality;
making the pinhole twice as wide trades a factor 2 in resolution for a factor 4 in signal quality. To phrase it cleverly, the extra dimension in the pinhole geometry (circle vs. slit, causing the
area to be proportional to the square of the width vs. proportional to just the width) is exactly nature’s way of handing you over the “square” gain in error described in the previous paragraph.
Thus, effectively, for non-point sources which cannot be used to produce a perfectly parallel beam, at the same resolution you have much more light available in a fourier transform spectrometer than
in a grating spectrometer — which means better signal-to-noise ratio.
Note: If you know of any resources already documenting this observation and the reason behind it, I’d be thankful for a comment.
Categories: Everything | {"url":"http://blog.svenbrauch.de/2017/04/25/fourier-transform-spectrometers-vs-grating-spectrometers/","timestamp":"2024-11-11T16:46:00Z","content_type":"text/html","content_length":"43161","record_id":"<urn:uuid:176681bc-02a7-4c9f-85f0-db03b27225c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00701.warc.gz"} |
MOCK TEST - 1 (Special Theory of Relativity) - Mathematical Explorations
Instructions for the Test
Welcome to the Test! Please read the instructions carefully before starting:
1. Number of Questions: The Test consists of multiple questions. Each question will be displayed one at a time.
2. Answering Questions: For each question, select the correct option from the list of multiple choices. Only one option can be selected per question.
3. Navigation:
□ Click the “Save & Next” button to save your answer and proceed to the next question.
□ You can go back to previous questions by clicking the “Previous” button if needed.
4. Progress Bar: A progress bar will be displayed at the top of the test, showing how many questions you’ve attempted.
5. Review: If you are unsure of an answer, you can navigate back to that question and change your response before submitting the test.
6. Results: After completing all the questions, your score will be displayed at the end of the test. The total number of correct answers will be shown.
7. Timing: There is no time limit for this quiz, so take your time to read and answer each question carefully.
Good luck! Click “Start” to begin the test.
Inertial Frame of Reference
An inertial frame of reference is a frame of reference in which an object either remains at rest or moves at a constant velocity unless acted upon by an external force. This concept stems from
Newton’s First Law of Motion (the law of inertia), which states that an object will continue in its state of motion unless influenced by a net external force. Essentially, in an inertial frame, the
laws of physics, especially Newton’s laws of motion, hold true without requiring corrections for acceleration or rotation.
Characteristics of an Inertial Frame of Reference:
1. No Acceleration: An inertial frame is either at rest or moving with a constant velocity (i.e., it is not accelerating).
2. Newton’s Laws Hold True: Newton’s laws of motion can be directly applied without modification in an inertial frame.
3. Relative Nature: No absolute inertial frame exists; one frame may be inertial relative to another if they move at constant velocity with respect to each other.
An example of an inertial frame is a train moving at a constant speed on a straight track. If you were inside the train and threw a ball, the motion of the ball would behave as it would if you were
standing still on the ground.
Galilean Transformation
The Galilean transformation is a mathematical framework used to relate the coordinates of an event as seen from one inertial frame to those from another inertial frame. It was introduced by Galileo
Galilei to explain how the motion of objects is observed in different reference frames moving relative to each other at a constant velocity. The Galilean transformation assumes absolute time and
absolute space, meaning time passes at the same rate for all observers, and space remains unaffected by the relative motion of reference frames.
Galilean Transformation Equations
Consider two inertial frames of reference:
• Frame S, which is stationary.
• Frame S′, which moves with a constant velocity V relative to S along the x-axis.
Let t and t′ represent time in frames S and S′, respectively. Similarly, let x, y, z represent the coordinates in frame S, and x′, y′, z′ represent the coordinates in frame S′.
The transformation between the two frames is described by the following equations:
x' = x - Vt
y' = y
z' = z
t' = t
These equations imply:
• x-coordinate transformation: The position in the x-direction depends on the velocity V of the second frame. The moving frame’s position x′ is found by subtracting the product of time t and
velocity V from the position x in the stationary frame.
• y and z transformation: The y and z coordinates remain unchanged because the relative motion between frames occurs only in the x-direction.
• Time invariance: The time t is the same in both frames (absolute time), which is a key feature of Galilean transformation.
Velocity Transformation
The Galilean velocity transformation is obtained by differentiating the position transformation equations with respect to time:
v_x' = v_x - V
v_y' = v_y
v_z' = v_z
where v_x, v_y, and v_z are the components of velocity in the stationary frame S, and v_x', v_y', and v_z' are the components in the moving frame S′. This shows that the velocity in the
x-direction differs by the relative velocity V between the frames, while the y and z components remain unchanged.
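As a concrete illustration of these formulas (the numbers are invented for the example), the snippet below transforms a position and a velocity from a ground frame S to a frame S′ moving at V = 30 m/s along the x-axis:

    def galilean_position(x, t, V):
        # x' = x - V t ; y, z and t are unchanged
        return x - V * t

    def galilean_velocity(vx, V):
        # v_x' = v_x - V ; v_y and v_z are unchanged
        return vx - V

    V = 30.0              # speed of frame S' relative to S, in m/s (e.g. a moving train)
    x, t = 500.0, 10.0    # event at x = 500 m, t = 10 s, measured in frame S
    vx = 35.0             # object moving at 35 m/s along x in frame S

    print(galilean_position(x, t, V))   # 200.0 -> the event is at x' = 200 m in S'
    print(galilean_velocity(vx, V))     # 5.0   -> the object moves at 5 m/s in S'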
Assumptions of Galilean Transformation
1. Time is Absolute: Time is considered to be the same for all observers, regardless of their state of motion.
2. Space is Absolute: The spatial coordinates transform linearly, and distances between objects are preserved.
3. Velocities Add Linearly: The velocity of an object as seen from different frames is related by simple addition or subtraction of the relative velocity of the frames.
Limitations of Galilean Transformation
While the Galilean transformation works well at low velocities (i.e., when objects move much slower than the speed of light), it breaks down at higher velocities approaching the speed of light. This
is because the Galilean transformation does not take into account the finite speed of light and the effects of time dilation and length contraction that occur at relativistic speeds.
For high-velocity scenarios, special relativity and the Lorentz transformation must be used instead. Unlike Galilean transformation, Lorentz transformation accounts for the fact that the speed of
light is constant in all inertial frames, leading to the notion of time relativity and length relativity.
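A quick way to see why the Galilean picture works so well at everyday speeds is to evaluate the Lorentz factor γ = 1/√(1 − v²/c²), which measures the size of the relativistic corrections; the example speeds below are arbitrary.

    from math import sqrt

    c = 299_792_458.0  # speed of light in m/s

    def gamma(v):
        return 1.0 / sqrt(1.0 - (v / c) ** 2)

    print(gamma(300.0))      # airliner speed: about 1.0000000000005, so Galilean is fine
    print(gamma(0.5 * c))    # half the speed of light: about 1.1547, corrections matter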
Add a Comment | {"url":"https://mathematicalexplorations.co.in/mock-test-1/","timestamp":"2024-11-08T15:39:57Z","content_type":"text/html","content_length":"230419","record_id":"<urn:uuid:39717330-7a07-4320-ad87-6636fba4b1de>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00580.warc.gz"} |
Miles per Hour to Speed of Sound Conversion (mph to sound)
Miles per Hour to Speed of Sound Converter
Do you want to convert speed of sound to miles per hour?
How to Convert Miles per Hour to Speed of Sound
To convert a measurement in miles per hour to a measurement in speed of sound, multiply the speed by the following conversion ratio: 0.001303 speed of sound/mile per hour.
Since one mile per hour is equal to 0.001303 speed of sound, you can use this simple formula to convert:
speed of sound = miles per hour × 0.001303
The speed in speed of sound is equal to the speed in miles per hour multiplied by 0.001303.
For example, here's how to convert 500 miles per hour to speed of sound using the formula above.
speed of sound = (500 mph × 0.001303) = 0.651662 sound
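The same conversion is easy to script; a minimal sketch using the rounded ratio quoted above:

    SOUND_PER_MPH = 0.001303  # speed of sound per mile per hour (ratio used above)

    def mph_to_speed_of_sound(mph):
        return mph * SOUND_PER_MPH

    print(mph_to_speed_of_sound(500))  # about 0.65 (the text gives 0.651662 with the unrounded ratio)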
Miles per hour and speed of sound are both units used to measure speed. Keep reading to learn more about each unit of measure.
What Are Miles per Hour?
Miles per hour are a measurement of speed expressing the distance traveled in miles in one hour.^[1]
The mile per hour is a US customary and imperial unit of speed. Miles per hour can be abbreviated as mph, and are also sometimes abbreviated as mi/h or MPH. For example, 1 mile per hour can be
written as 1 mph, 1 mi/h, or 1 MPH.
Miles per hour can be expressed using the formula:
v[mph] = d[mi] / t[hr]
The velocity in miles per hour is equal to the distance in miles divided by time in hours.
Learn more about miles per hour.
What Is the Speed of Sound?
The speed of sound is the distance a sound wave travels through an elastic medium. The speed of sound through air at 20 °C is equal to 343 meters per second,^[2] or roughly 767 miles per hour.
Speed of sound can be abbreviated as sound; for example, 1 speed of sound can be written as 1 sound.
Learn more about the speed of sound.
Mile per Hour to Speed of Sound Conversion Table
Table showing various mile per hour measurements converted to speed of sound.
Miles Per Hour Speed Of Sound
1 mph 0.001303 sound
2 mph 0.002607 sound
3 mph 0.00391 sound
4 mph 0.005213 sound
5 mph 0.006517 sound
6 mph 0.00782 sound
7 mph 0.009123 sound
8 mph 0.010427 sound
9 mph 0.01173 sound
10 mph 0.013033 sound
20 mph 0.026066 sound
30 mph 0.0391 sound
40 mph 0.052133 sound
50 mph 0.065166 sound
60 mph 0.078199 sound
70 mph 0.091233 sound
80 mph 0.104266 sound
90 mph 0.117299 sound
100 mph 0.130332 sound
200 mph 0.260665 sound
300 mph 0.390997 sound
400 mph 0.521329 sound
500 mph 0.651662 sound
600 mph 0.781994 sound
700 mph 0.912327 sound
800 mph 1.0427 sound
900 mph 1.173 sound
1,000 mph 1.3033 sound
1. Wikipedia, Miles per hour, https://en.wikipedia.org/wiki/Miles_per_hour
2. CK-12, Speed of Sound, https://www.ck12.org/physics/speed-of-sound/lesson/Speed-of-Sound-MS-PS/
More Mile per Hour & Speed of Sound Conversions | {"url":"https://www.inchcalculator.com/convert/mile-per-hour-to-speed-of-sound/","timestamp":"2024-11-09T03:19:09Z","content_type":"text/html","content_length":"68935","record_id":"<urn:uuid:b07dafa6-f4d9-4fca-b5cc-5580be478fb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00012.warc.gz"} |
How To Calculate the Percentage Difference or Increase – Problems, Solutions and Applications in Real Life
Nov 12
• By admin in Calculator, Mathematics, Percentage Difference Calculator
The percentage difference, also known as the percentage increase or decrease, is a common calculation in maths, statistics, physics, business, commerce, economics and many other fields. It is used to indicate the change in the values of a particular measurement over a certain period of time. The delta symbol Δx is used to represent a change in quantity or values.
This percentage difference calculator is straightforward and easy to use. Find the percentage increase or decrease between two numbers. There is another calculator which is used to find the
percentage of a number.
Applications of Percentage Differences in Real Life
The following are some of the measurements whereby the percentage increase or decrease is applied or calculated:
Change in speed (acceleration)
Change in mass
Change in pressure
Change in energy
Change in temperature
Change in volume
Change in density
Change in force
Change in momentum
Change in monthly sales volumes
Change in monthly revenue
Change in monthly profit
Change in monthly costs
Economics and Commerce:
Change in annual GDP
Change in monthly, quarterly and annual export values
Change in monthly, quarterly and annual import values
Change in median income
Change in median rent
Change in the cost of living index
Change in the consumer price index
Population Studies:
Change in national population
Change in city population
Change in male population
Change in female population
Change in demographics
Change in the construction cost index
Change in the material price index
Change in median house prices
Change in number of monthly, quarterly and annual building permits
Change in construction output volumes
Change in construction tender price index
Change in construction input price index
Change in construction output price index
As you can see, the percentage difference is calculated and used in many fields whereby statistics and measurements are recorded.
How To Calculate the Percentage Increase
In order to calculate a percentage increase, you must have two values measured over a time range, for example, let’s say you are measuring daily temperature. Recordings were taken in the morning,
afternoon and night showing 25C°, 28C° and 23C° respectively. Calculate the percentage change in temperature between the morning and afternoon readings?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 25C°, x2 = 28C°
>> x3 = ((28 – 25)/25)*100
>> x3 = (3/25)*100
>> x3 = 12%
Answer: The percentage change is Δx = 12%
**Note that in this example, the percentage change is positive, which is a percentage increase.
Problem 2:
Calculate the percentage change in temperature between the afternoon and night readings?
Using the percentage formula Δx = ((x2 – x1)/x1) * 100
Making substitutions:
>> x1 = 28C°, and x2 = 23C°
>> Δx = ((23 – 28)/28) * 100
>> Δx = (-5/28) * 100
>> Δx = -17.86%
Answer: The percentage change is Δx = -17.86%
**Note that in this example, the percentage change is negative, which is a percentage decrease.
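The same computation is easy to script; a minimal sketch reproducing the two temperature examples above:

    def percentage_change(x1, x2):
        # Percentage change from x1 to x2: ((x2 - x1) / x1) * 100
        return (x2 - x1) / x1 * 100

    print(percentage_change(25, 28))  # 12.0        (morning to afternoon, an increase)
    print(percentage_change(28, 23))  # -17.857...  (afternoon to night, a decrease)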
Percentage Difference – Calculations | Problems and Solutions
Problem 1
A car is travelling at 40km/hr, it then accelerates to 70km/hr. What is the percentage increase in speed?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 40km/hr, x2 = 70km/hr
>> x3 = ((70 – 40)/40)*100
>> x3 = (30/40)*100
>> x3 = 75
Answer: The percentage change in speed is Δx = 75%
**Note that in this example, the percentage change is positive, which is a percentage increase.
Problem 2
A car is travelling at 120km/hr, it then decelerates to 85km/hr. By what percent did the speed decrease?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 120km/hr, x2 = 85km/hr
>> x3 = ((85 – 120)/120)*100
>> x3 = (-35/120)*100
>> x3 = -29.17%
Answer: The percentage change in speed is Δx = -29.17%
**Note that in this example, the percentage change is negative, which is a percentage decrease.
Problem 3
Wood is a material which absorbs and loses moisture depending on the temperature and relative humidity. A piece of wood weighs 750g, the following morning its swollen weight was 800g. What is the
percentage increase in its weight and how much moisture did it gain?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 750g, x2 = 800g
>> x3 = ((800 – 750)/750)*100
>> x3 = (50/750)*100
>> x3 = 6.67%
Answer: The percentage change in weight is Δx = 6.67%
**Note that in this example, the percentage change is positive, which is a percentage increase.
The wood gained (800 – 750) = 50g of moisture.
Problem 4
Wood is a material which absorbs and loses moisture depending on the temperature and relative humidity. A piece of wood weighs 1000g, the following morning its shrunken weight was 940g. By what
percent did its weight decrease and how much moisture did it lose?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 1000g, x2 = 940g
>> x3 = ((940 – 1000)/1000)*100
>> x3 = (-60/1000)*100
>> x3 = -6%
Answer: The percentage change in weight is Δx = -6%
**Note that in this example, the percentage change is negative, which is a percentage decrease.
The wood lost (1000 – 940) = 60g of moisture.
Problem 5
A jewellery store sold 60 necklaces in November and 105 necklaces in December. Calculate the percentage increase in sales volume?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 60 necklaces, x2 = 105 necklaces
>> x3 = ((105 – 60)/60)*100
>> x3 = (45/60)*100
>> x3 = 75%
Answer: The percentage change in sales volume is Δx = 75%
**Note that in this example, the percentage change is positive, which is a percentage increase.
Problem 6
A jewellery store sold 105 necklaces in December and 33 necklaces in January of the New Year. Calculate the percentage decline in sales volume?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 105 necklaces, x2 = 33 necklaces
>> x3 = ((33 – 105)/105)*100
>> x3 = (-72/105)*100
>> x3 = -68.57%
Answer: The percentage change in sales volume is Δx = -68.57%
**Note that in this example, the percentage change is negative, which is a percentage decrease.
Problem 7
A computer shop sold 112 USB bracelets in May and 140 USB bracelets in June. If each bracelet was priced at $6.13 in May and a 10% promotional discount was offered in June, calculate the percentage
change in the sales revenue?
Step 1
Calculate the monthly sales revenue:
May >> 112 x $6.13 = $686.56
June >> 140 x $6.13 = $858.20
Calculate June Discount >> 10% of $858.20 = (10/100)*$858.20 = $85.82
Therefore, the monthly sales in June were >> $858.20 – $85.82 = $772.38
Step 2
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = $686.56, x2 = $772.38
>> x3 = ((772.38 – 686.56)/686.56)*100
>> x3 = (85.82/686.56)*100
>> x3 = 12.5%
Answer: The percentage change in sales revenue is Δx = 12.5%
**Note that in this example, the percentage change is positive, which is a percentage increase.
Problem 8
An ISP Telecom company increased the cost of its prepaid unlimited monthly data plan from $65 to $75 a month. By what percentage was the price increased?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = $65, x2 = $75
>> x3 = ((75 – 65)/65)*100
>> x3 = (10/65)*100
>> x3 = 15.38%
Answer: The percentage change in the price of monthly data is Δx = 15.38%
**Note that in this example, the percentage change is positive, which is a percentage increase.
Problem 9
The population of New York City declined from 8,469,153 in 2016 to 8,336,817 people in 2018. Calculate the percentage decline in population over this period?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 8,469,153 people, x2 = 8,336,817 people
>> x3 = ((8,336,817 – 8,469,153)/8,469,153)*100
>> x3 = (-132,336/8,469,153)*100
>> x3 = (-0.0156)*100
>> x3 = -1.56%
Answer: The percentage change in the population was Δx = -1.56%
**Note that in this example, the percentage change is negative, which is a percentage decrease.
Problem 10
The Construction Tender Price Index (TPI) in the city of Dublin was 160.5 in the second quarter of 2020 and 152.7 in the second quarter of 2019. Calculate the percentage increase in the tender price
index over this period?
Set up a logical equation. Let x1 be the first reading and x2 be the second reading. Let x3 be the difference between the readings. The delta symbol Δ is used to represent a change in quantity.
Therefore Δx = x3. The change in values in our example will be represented by Δx.
The percentage change formula is Δx = ((x2 – x1)/x1) * 100
>> x2 – x1 = x3
>> Δx = x3
>> x3 = ((x2 – x1)/x1) * 100
Making substitutions: x1 = 152.7, x2 = 160.5
>> x3 = ((160.5 – 152.7)/152.7)*100
>> x3 = (7.8/152.7)*100
>> x3 = 5.1%
Answer: The percentage change in the tender price index was Δx = 5.1%
**Note that in this example, the percentage change is positive, which is a percentage increase. | {"url":"https://estimationqs.com/how-to-calculate-the-percentage-difference-or-increase/","timestamp":"2024-11-05T15:46:26Z","content_type":"text/html","content_length":"158501","record_id":"<urn:uuid:09aa667d-c891-42f0-9b18-69923881aa29>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00337.warc.gz"} |
Procedure GMP::Solution::ConstraintListing(GMP, solution, Filename, AppendMode)
The procedure GMP::Solution::ConstraintListing outputs a detailed description of a generated mathematical program to file. It uses the solution to provide feasibility, left hand side and derivative information.
GMP, ! (input) a generated mathematical program
solution, ! (input) a solution
Filename, ! (input) a string
[AppendMode] ! (input/optional) integer, default 0
GMP: An element in the set AllGeneratedMathematicalPrograms.
solution: An integer that is a reference to a solution.
Filename: The name of the file to which the output is written.
AppendMode: If non-zero, the output will be appended to the file, instead of overwritten.
This function allows one to inspect a generated mathematical program after it is generated, modified, or solved.
Usage example
Given the following declarations:
MathematicalProgram sched;
ElementParameter cp_gmp {
    Range : AllGeneratedMathematicalPrograms;
}
Parameter vars_in_cl {
    Range : binary;
    InitialData : 0;
    Comment : {
        "When 1 the variables and bounds are printed
        in the constraint listing"
    }
}
The use of the function GMP::Solution::ConstraintListing is illustrated in the following code fragment.
cp_gmp := gmp::Instance::Generate( sched );
if cp_gmp then
    GMP::Solution::RetrieveFromModel( cp_gmp, 1 );
    block where constraint_listing_variable_values := vars_in_cl ;
        GMP::Solution::ConstraintListing( cp_gmp, 1, "sched.constraintlisting" );
    endblock ;
endif ;
The following remarks apply to this code fragment:
• Directly after generation, the generated mathematical program referenced by cp_gmp does not contain a solution. The current values in the model can be used to obtain such a solution using GMP::Solution::RetrieveFromModel.
• The actual call to GMP::Solution::ConstraintListing is placed in a block statement, to permit the programmatic control of output steering options. The available output steering options are in the
option category Solvers General - Standard Reports - Constraints.
The header of a constraint listing
The brief header contains the solve number (the suffix .number) of the mathematical program and the name of the generated mathematical program. Whenever this suffix is less than or equal to twenty,
it is written as a word. When the generated mathematical program is a scheduling problem, containing activities as documented in Activity, the problem schedule domain is also printed, as illustrated
in the following example:
This is the first constraint listing of mySched.
The schedule domain of mySched is the calendar "TimeLine" containing 61 elements
in the range { '2011-03-31' .. '2011-05-30' }.
This is a constraint listing whereby the scheduling problem mySched is solved once. In addition, the problem schedule domain is detailed.
The body of a constraint listing
The body of the constraint listing contains all details in the rows of the generated mathematical program. The information detailed depends both on option settings and the type of row. Lets begin
with a linear row.
An LP row
From AIMMS example Transportation model:
---- MeetDemand The amount transported to customer c should meet its demand
MeetDemand(Alkmaar) .. [ 1 | 2 | Optimal ]
+ 1 * Transport(Eindhoven ,Alkmaar) + 1 * Transport(Haarlem ,Alkmaar)
+ 1 * Transport(Heerenveen,Alkmaar) + 1 * Transport(Middelburg,Alkmaar)
+ 1 * Transport(Zutphen ,Alkmaar) >= 793 ; (lhs=793, scale=0.001)
name lower level upper scale
Transport(Eindhoven,Alkmaar) 0 0 inf 0.001
Transport(Haarlem,Alkmaar) 0 793 inf 0.001
Transport(Heerenveen,Alkmaar) 0 0 inf 0.001
Transport(Middelburg,Alkmaar) 0 0 inf 0.001
Transport(Zutphen,Alkmaar) 0 0 inf 0.001
For each group of constraints, the name of that constraint and its text are printed. Next comes each row of that group, whereby the number of rows per symbolic constraint can be limited by the option
Number_of_Rows_per_Constraint_in_Listing. A row starts with its name and then, within square brackets, the solve number, the row number, and the solution status of the solution. For that row, it is
followed by its contents, whereby all terms containing variables are moved to the left and all terms without variables to the right and summed to mimic the LP form \(Ax \leq b\). Between parentheses
the lhs is computed by filling in the values of the variables. In this version of the model the base unit for weight is ton, but the constraint uses the unit kg which is 0.001 * ton. AIMMS computes
the LP matrix with respect to the base units and subsequently scales to the units of the variables and constraints. Thus we have a scaling factor of 0.001 for both the constraint and the variables.
The coefficients presented are the coefficients after this scaling and as such passed to the solver. The last part of this example shows the variable values, their bounds, and, when relevant, the
scaling factor. This last part is obtained by setting the option constraint listing variable values to on.
An NLP row
Consider the arbitrary objective definition
Variable o {
    Range : free;
    Definition : x^3 - y^4 + x / y;
}
Filling in the definition attribute of variable o will let AIMMS construct the constraint o_definition with the same index domain, empty here, and unit, empty here. This constraint is presented as
follows in the constraint listing.
---- o_definition
o_definition .. [ 0 | 2 | not solved yet ]
+ [-4] * x + [5] * y + 1 * o = 0 ; (lhs=-1) ****
name lower level upper
x 1 1 4
y 1 1 5
o -inf 0 inf
x y
-------------------- --------------------
x -6 1
y 1 10
This example is similar to the example of the linear row, but with some extras. First, the coefficients -4 and 5 are denoted between brackets to indicate that they are not fixed coefficients, but
first order derivative values taken at the level values of the variables. We say that the variables x and y appear non-linear in the constraint o_definition. The coefficient 1 before the variable o
is also a first order derivative, but the value of this coefficient does not depend on the values of the variables and is therefore not denoted between brackets. We say that the variable o appears
linearly in the constraint o_definition. Next, to indicate that the constraint is infeasible, it is postfixed by ****. Finally, the Hessian containing the second order derivative values is presented,
by switching the option constraint_listing_Hessian to on. The Hessian is only presented for those variables that appear non-linear in the constraint presented. A typical question concerns the
accuracy of these first and second order derivative values. These derivative values are exact when the non-linear expressions in the constraint only reference differentiable AIMMS intrinsic
functions. The first order derivative values are approximated using differencing when a non-linear expression in the constraint references an external function. The second order derivative
values are not available when a non-linear expression references an external function.
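For readers who want to verify the bracketed values, the following derivation (my own working, using the listed levels x = 1 and y = 1) reproduces them. The defining row corresponds to \(o - (x^3 - y^4 + x/y) = 0\), so the first order derivatives with respect to the nonlinear variables are

\[
\frac{\partial}{\partial x}\bigl(-x^3 + y^4 - x/y\bigr) = -3x^2 - \frac{1}{y} = -4,
\qquad
\frac{\partial}{\partial y}\bigl(-x^3 + y^4 - x/y\bigr) = 4y^3 + \frac{x}{y^2} = 5,
\]

matching the coefficients [-4] and [5], while the coefficient of o is the constant 1. Filling in (x, y, o) = (1, 1, 0) gives \(0 - (1 - 1 + 1) = -1\), matching the reported lhs. The second order derivatives give the Hessian entries shown:

\[
\frac{\partial^2}{\partial x^2} = -6x = -6,
\qquad
\frac{\partial^2}{\partial x \, \partial y} = \frac{1}{y^2} = 1,
\qquad
\frac{\partial^2}{\partial y^2} = 12y^2 - \frac{2x}{y^3} = 10.
\]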
A COP row
Consider the artificial constraint:
Constraint element_constraint {
    Definition : P(eV) = 7;
}
This constraint will lead to the following in the constraint listing.
---- element_constraint
element_constraint .. [ 0 | 2 | not solved yet ]
[1,4,7,10,13,..., 28 (size=10)][eV]
= 7 ****
name lower level upper
eV 'a01' 'a01' 'a10'
The main difference between this example and the previous ones is that the constraint is presented in an instantiated symbolic form, because the presentation of first and second order derivatives is meaningless in the context of constraint programming.
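As a hedged reading of this listing (my inference, not stated explicitly above): the bracketed list shows the values of the parameter P over its domain, indexed by the element variable eV. At the current level eV = 'a01' the selected value is the first one, so \(P(\text{'a01'}) = 1 \neq 7\), which is why the row is flagged with ****; a feasible solution must move eV to the element at which P equals 7 (the third value in the list).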
Return Value
The procedure returns 1 on success, or 0 otherwise.
A SOLVE statement may produce this constraint listing, depending on the option constraint_listing, in the listing file.
See also
• The Mathematical Program Inspector is an interactive alternative to constraint listings and has additional facilities such as searching for an irreducible infeasibility set for a linear program.
• The routine GMP::Instance::Generate. | {"url":"https://documentation.aimms.com/functionreference/algorithmic-capabilities/the-gmp-library/gmp_solution-procedures-and-functions/gmp_solution_constraintlisting.html","timestamp":"2024-11-09T06:24:40Z","content_type":"text/html","content_length":"48274","record_id":"<urn:uuid:fa758103-0948-4f65-8d4a-11e3353863d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00372.warc.gz"} |
Gradient Tensor Invariants in Space Plasmas: Evolution Equations and Statistics
This Ph.D thesis work discusses some topological features of streamlines and magnetic field lines in the framework of Magneto-Hydrodynamic (MHD) theory, i.e., the theoretical framework to describe
the dynamics of fully ionized matter, the plasma, which constitutes some 98% of the baryonic matter of the universe. In particular, I derive a mathematical/theoretical model to describe the
evolution of magnetic field and plasma structures in terms of geometrical SO(3) invariants of gradient tensors of the magnetic and velocity field. This theoretical model is then applied to
characterize the different topological and dynamical properties of the structures of magnetic and velocity fields in turbulent plasma environments, such as the solar wind and the circum-terrestrial
space. This work builds on the previous works by Vieillefosse (1984) and Cantwell (1991, 1993), who introduced quantities (named P, Q, R) that are invariant under rotations and related to the characteristic polynomial of the velocity gradient tensor in the framework of incompressible fluids, capable of describing the topological features of the fluid streamlines, and extends the theoretical description of the dynamics of the gradient tensor invariants to the case of MHD by deriving the dynamical equations for the Lagrangian evolution of these quantities. Unlike the fluid case, the dynamical equations of the gradient tensor invariants contain specific quantities that describe the effects of the magnetic field on the topology and structures of the plasma motion, so that an evaluation of the relative weights of these terms with respect to the terms appearing in the purely fluid situation would make it possible to characterize the dynamic regime of the
considered plasma environment. Here, I present an analysis of the relative weight of the magnetic field effect on velocity field structure by considering data from both numerical simulations and in
situ space plasma measurements. In particular, by considering observations and measurements from the NASA-Magnetospheric Multiscale (MMS) mission in the solar wind and in the Earth’s magnetosheath,
the potential of the approach based on the gradient tensor invariants for characterizing the dynamical regime of these turbulent plasma environments is discussed, showing the different structural
character of the observed plasma turbulence in spite of the similar spectral features of the magnetic and velocity field fluctuations. The same kind of analysis is also performed in the case of
direct numerical simulations (DNSs) of decaying MHD turbulence. As a last point, I attempt a modelling of the evolution equations of the gradient tensor invariants via a dynamical system approach,
concentrating my attention on the velocity gradient tensor invariants and on the role that the magnetic field plays in the dynamics of the topology of the streamlines. The results of this PhD thesis
work are particularly interesting in the field of interplanetary and magnetospheric plasmas providing a different way to unveil the dynamical properties of turbulent plasmas in the different
heliospheric regions and new insights and elements to model the topology of magnetic field and plasma structures and the interaction between different plasma regions. In other words, the approach,
here presented, could lead to a different point of view to study the dynamics of turbulent space plasma and its role in the description of several Space Weather related phenomena.
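For readers unfamiliar with these invariants, a brief reminder using the standard definitions (my addition, not taken from the thesis text): for a velocity gradient tensor \(A_{ij} = \partial u_i / \partial x_j\), the quantities P, Q, R are the coefficients of its characteristic polynomial,

\[
\lambda^3 + P\lambda^2 + Q\lambda + R = 0,
\qquad
P = -\operatorname{tr} A, \quad
Q = \tfrac{1}{2}\bigl(P^2 - \operatorname{tr} A^2\bigr), \quad
R = -\det A,
\]

and they are invariant under rotations of the coordinate frame; for incompressible flow \(P = 0\), so the local streamline topology can be classified in the (Q, R) plane.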
Gradient Tensor Invariants in Space Plasmas: Evolution Equations and Statistics / Quattrociocchi, Virgilio. - (2022 Nov 24).
I documenti in IRIS sono protetti da copyright e tutti i diritti sono riservati, salvo diversa indicazione. | {"url":"https://ricerca.univaq.it/handle/11697/198066","timestamp":"2024-11-11T17:27:11Z","content_type":"text/html","content_length":"64804","record_id":"<urn:uuid:9e21594e-8187-4be4-8b95-3b617bdb8f83>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00347.warc.gz"} |
Mathematize It! [Grades 6-8]
Mathematize It! [Grades 6-8]
Going Beyond Key Words to Make Sense of Word Problems, Grades 6-8
First Edition
Mathematize It! shares a reasoning approach that takes the initial focus off specific numbers and computations and puts it on the actions and relationships expressed in the word problem.
Product Details
• Grade Level: 6-8
• ISBN: 9781506354484
• Published By: Corwin
• Series: Corwin Mathematics Series
• Year: 2020
• Page Count: 264
• Publication date: September 03, 2020
Help students reveal the math behind the words
“I don’t get what I’m supposed to do!” This is a common refrain from students when asked to solve word problems.
Solving problems is about more than computation. Students must understand the mathematics of a situation to know what computation will lead to an appropriate solution. Many students often pluck
numbers from the problem and plug them into an equation using the first operation they can think of (or the last one they practiced). Students also tend to choose an operation by solely relying on
key words that they believe will help them arrive at an answer, without careful consideration of what the problem is actually asking of them.
Mathematize It! Going Beyond Key Words to Make Sense of Word Problems, Grades 6–8 shares a reasoning approach that helps students dig into the problem to uncover the underlying mathematics, deeply
consider the problem’s context, and employ strong operation sense to solve it. Through the process of mathematizing, the authors provide an explanation of a consistent method—and specific
instructional strategies—to take the initial focus off specific numbers and computations and put it on the actions and relationships expressed in the problem.
Sure to enhance teachers’ own operation sense, this user-friendly resource for Grades 6–8:
· Offers a systematic mathematizing process for students to use when solving word problems
· Gives practice opportunities and dozens of problems to leverage in the classroom
· Provides specific examples of questions and explorations for multiplication and division, fractions and decimals, as well as operations with rational numbers
· Demonstrates the use of visual representations to model problems with dozens of short videos
· Includes end-of-chapter activities and reflection questions
How can you help your students understand what is happening mathematically when solving word problems? Mathematize it! | {"url":"https://us.corwin.com/books/grades-6-8-mathematize-it-253341","timestamp":"2024-11-11T06:53:52Z","content_type":"text/html","content_length":"111164","record_id":"<urn:uuid:cb40228b-ac13-4eee-adfb-5f8d71cc268b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00892.warc.gz"} |
Exploratory and Descriptive Statistics and
For very skewed continuous variables, non-parametric statistics and tests may be more appropriate. These can be generated using the parametric argument. For chi-square tests with small cell sizes,
simulated p-values also can be generated.
egltable(c("mpg", "hp", "qsec", "wt", "vs"),
g = "am", data = mtcars, strict = FALSE,
parametric = FALSE)
Paired Data
We have already seen how to compare descriptives across groups when the groups were independent. egltable() also supports using groups to test paired samples. To use this, the variable passed to the
grouping argument, g must have exactly two levels and you must also pass a variable that is a unique ID per unit and specify paired = TRUE.
By default for continuous, paired data, mean and standard deviations are presented and a paired samples t-test is used. A pseudo Cohen’s d effect size is calculated as the mean of the change score
divided by the standard deviation of the change score. If there are missing data, it's possible that the mean difference will differ from the difference in means, as the means are calculated on
all available data, but the effect size can only be calculated on complete cases.
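To make the pseudo Cohen's d concrete, here is a small hand computation on the same sleep data (an illustrative sketch of my own, not part of the original vignette); it reproduces the d = 1.28 reported below.
## reshape sleep to wide format so each ID has both measurements
wide <- reshape(sleep, direction = "wide", idvar = "ID", timevar = "group")
## change score per ID; pseudo d = mean of change / SD of change
delta <- wide$extra.2 - wide$extra.1
mean(delta) / sd(delta)  ## approximately 1.28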
## example with paired data
egltable(
  vars = "extra",
  g = "group",
  data = sleep,
  idvar = "ID",
  paired = TRUE)
Example parametric descriptive statistics for paired data.
extra 0.75 (1.79) 2.33 (2.00) t(df=9) = 4.06, p = .003, d = 1.28
If we do not want to make parametric assumptions with continuous variables, we can set parametric = FALSE. In this case the descriptives are medians and a paired Wilcoxon test is used. In this
dataset there are ties and a warning is generated about ties and zeroes. This warning is generally ignorable, but if these were central hypothesis tests, it may warrant further testing using, for
example, simulations which are more precise in the case of ties.
vars = "extra",
g = "group",
data = sleep,
idvar = "ID",
paired = TRUE,
parametric = FALSE)
#> Warning in wilcox.test.default(widedat$dv2, widedat$dv1, paired = TRUE): cannot
#> compute exact p-value with ties
#> Warning in wilcox.test.default(widedat$dv2, widedat$dv1, paired = TRUE): cannot
#> compute exact p-value with zeroes
Example non parametric descriptive statistics for paired data.
extra 0.35 (1.88) 1.75 (3.28) Wilcoxon Paired V = 45.00, p = .009
We can also work with categorical paired data. The following code creates a categorical variable, the tertiles of chick weights measured over time. The chick weight dataset has many time points, but
we will just use two.
## paired categorical data example
## using data on chick weights to create categorical data
tmp <- subset(ChickWeight, Time %in% c(0, 20))
tmp$WeightTertile <- cut(tmp$weight,
  breaks = quantile(tmp$weight, c(0, 1/3, 2/3, 1), na.rm = TRUE),
  include.lowest = TRUE)
No special code is needed to work with categorical variables. egltable() recognises categorical variables and uses McNemar's test, a chi-square test on the off-diagonal cells, which tests whether people (or chicks in this case) change groups equally over time or preferentially move in one direction. In this case, a significant result suggests that over time chicks' weights change preferentially
one way and the descriptive statistics show us that there is an increase in weight tertile from time 0 to time 20.
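A sketch of the call that could produce the table below (my reconstruction; the idvar and grouping choices are assumptions based on the ChickWeight data and are not shown in the original):
egltable(
  vars = c("weight", "WeightTertile"),
  g = "Time",
  data = tmp,
  idvar = "Chick",
  paired = TRUE)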
Continuous and categorical paired data.
weight 41.06 (1.13) 209.72 (66.51) t(df=45) = 17.10, p < .001, d = 2.52
WeightTertile McNemar’s Chi-square = 39.00, df = 3, p < .001
[39,41.7] 32 (64.0%) 0 (0.0%)
(41.7,169] 18 (36.0%) 14 (30.4%)
(169,361] 0 (0.0%) 32 (69.6%) | {"url":"https://cran-r.c3sl.ufpr.br/web/packages/JWileymisc/vignettes/exploratory-vignette.html","timestamp":"2024-11-08T15:38:43Z","content_type":"text/html","content_length":"130068","record_id":"<urn:uuid:1a5a390c-be10-4f4e-8e3f-20c132b0ecb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00677.warc.gz"} |
Make faster Hits and Improve Marksmanship with Your Point Blank Zero
Most field shooting is not precision shooting. A lot of people believe that marksmanship is all about that one precise hit on a small target at any range. To be fair, that is a great demonstration of
marksmanship and precision, but you need to be aware of the limitations.
Most practical shooters, from big game hunters to military members, do not have the luxury of time to check distance, adjust sights, and take a precisely aimed shot.
In the real world, targets actively try to avoid being shot and they won’t expose themselves long enough for you to go through all those steps. In the 1950s, the Army’s ORO researched how people get
shot, and the number one factor was exposure time. So the challenge for the shooter is balancing accuracy requirements against available time.
The Point Blank Zero is one of the most important methods for doing this. You’ve probably seen it called many things, including the Battlesight Zero (BZO) or Maximum Point Blank Range (MPBR). These
are all correct, and reference a sighting method that takes advantage of projectile ballistics.
The Ballistic Arc
As Col Cooper said in his excellent book, The Art of the Rifle, bullets do not travel in straight lines. When you fire a projectile, it moves in a curve known as a ballistic arc.
A lot of new shooters believe that a bullet rises for some distance after leaving the muzzle before it begins to fall again. To be fair, for an external viewer, that does appear to be what happens.
But the catch is that it’s not the bullet design or the way the barrel is mounted that causes it.
The rifle's sights are not level with one another. When you aim at a target in front of you, the slightly shorter front sight means you naturally point the tip of the barrel a little higher. In short, you create an initial launch angle for the bullet.
Zeroing a rifle means that you’re picking a point along the bullet’s ballistic arc that intersects with your line of sight.
This graphic illustrates the ballistic arc. In the top half, you can see that the sights are aligned in a way that makes you tilt the barrel upwards, launching the bullet higher.
Ballistic Trajectories
The nice thing about ballistic trajectories is that they are predictable.
If you know the velocity, launch angle, and aerodynamic efficiency of the bullet, you can reasonably predict the flight path of the bullet.
This is another chart created by the internet ballistics guru, Molon. It shows the ballistic arc of the same projectile launched at different angles. Each angle corresponds to a different zero.
Look at each of the three arcs, and pay attention to how high the bullet travels over the line of sight. The highest point over the line of sight is the Maximum Ordinate, also called an Apogee. I’ve
seen both get used interchangeably.
Also notice how the launch angle increases as the far intersection gets further away. By far intersection, I mean the second time the bullet crosses the line of sight. That’s important.
Something else you might notice is that the apogee of the bullet for a 100-meter zero occurs right at 100 meters. This has some benefits for practical rifle usage, but I'll come back to that.
When you establish a point blank zero, you choose a curve shape where the bullet stays within a certain distance above or below the point of aim. Velocity is a huge part of this as well since more
velocity flattens the arc and shrinks the vertical spread of impacts.
The size of the vital zone you choose will affect the usable point blank range. Sure, you can start aiming higher or lower on the target to compensate, but then you're no longer using the point blank
zero as designed.
Remember, the goal of the point blank zero is to remove the guesswork. Aim at center mass of the target and fire. If you've chosen your vital zone size and zero well, the rest will take care of itself.
Point Blank Zero in Practice
Consider a vital zone of eight inches on some large game animals. The point blank zero would be the setting in which the bullet impacts no more than four inches above or below the center of the vital zone. The distance from the muzzle, point blank, out to the farthest distance before the bullet falls below the vital-zone threshold is the Maximum Point Blank Range.
A larger vital zone means a more generous point blank zero and a longer usable distance for the zero. A smaller vital zone means a more limited MPBR.
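Stated a little more formally (my notation, not the author's): if \(h(x)\) is the bullet's height relative to the line of sight at range \(x\) and \(r\) is the chosen vital-zone radius, a zero qualifies as a point blank zero for that radius as long as \(|h(x)| \le r\) from the muzzle out to the maximum point blank range,

\[
\text{MPBR} = \max\{\, D : |h(x)| \le r \ \text{for all } 0 \le x \le D \,\}.
\]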
Running the Numbers: Establishing MPBR
Your best tool for figuring out a point blank zero is a ballistic calculator. My favorite is JBM Ballistics, but there are many out there that you can install on your phone as well. All you need to
do is feed it your data and the desired “vital zone” radius.
For this chart, I am using a Sierra 69gr SMK fired at 2750 fps. I established that velocity with my Magnetospeed on a 20″ rifle. I'll work with a desired vital zone radius of six inches. That
means above or below the line of sight, so there is a 12-inch total vital zone.
With these values, JBM tells me exactly what the calculated point blank zero should be for my desired 12″ zone: 291 yards.
It also tells me I can maintain my center mass hold, the MPBR, all the way out to 341 yards.
As a bonus, JBM also predicts the maximum ordinate at 161 yards.
If I go back into the calculator again using the same ballistic data and set my zero to 291 yards, you can follow the arc of the bullet using this drop chart. I’ve shaded the important area in red.
Point Blank Zero on Paper
Ok, I’ve shown you the numbers, so it’s not just an opinion. But the numbers are one thing, how does this actually work for real life?
Zeroing at exactly 291 yards isn’t very practical, but it works as a guideline. Increasing my point blank zero to 300 yards gives me about the same result, with a drop of 6.2 inches at 350 yards.
To see how velocity affects MPBR, I made this little graphic. It shows three barrel lengths all zeroed for 300 meters, the red dot. The black dots indicate impact points at other distances.
You can definitely see how the extreme spread between the two farthest impacts decreases as velocity increases. For reference, the highest impact on the 20″ barrel is 4″ over the zero.
Using this data, it's easy to see why the Army and Marines were drawn to the 300-meter zero as their "do-all" for so long. However, one challenging aspect of this battlesight zero is the maximum ordinate.
With this zero, the bullet will be about 6 inches above the line of sight at 161 yards. If you want to hit a small target, such as a small gong or a headshot, then aiming center mass of that target
means your shot might go over the target entirely.
If your target is smaller, then you need to adjust your radius and zero to compensate.
This is the reason for the common 50/200 zero. A lot of military trainers noticed that soldiers tended to miss high during engagements with the 300- meter zero. The 50/200 keeps the bullet trajectory
within +/- 2 inches of the line of sight until about 250 yards.
I made a similar chart for that zero as well.
Note that the cluster of shots is significantly tighter here, but that comes at the expense of losing the 400 meter impact altogether. The lesson here is that you need to know your realistic
envelope for engagement and plan for it.
Just for fun, I ran the numbers through JBM for my 20″ rifle and a 2″ radius vital zone (for a total of 4″ vital zone circle). The result is a point blank zero of 194 yards and a maximum point blank
range of 226 yards. I highlighted that in yellow.
The red highlight shows the numbers for a 200-yard zero.
Near and Far Intersection
Remember that your zero is the point where the ballistic arc of the bullet intersects your point of aim. For just about every zero distance outside of 100 meters, there are actually two points
where the bullet crosses your line of sight.
In the above chart, for about a 200-yard point blank zero, you’ll notice that the bullet starts 1.5″ low at the muzzle. That’s because of the sight offset, where the rifle’s sights sit 1.5″ above the
bore of the rifle. Because of the launch angle on our way to a 200-yard impact, you’ll notice that the first intersection of the bullet and line of sight happens around 30 yards.
The trajectory’s apogee happens around 115 yards, and then the bullet arcs downward again to reach the second intersection at 200 yards.
This is where people get the idea of 50/200 or 36/300-zeroes. They’re trying to define the near and far intersections as a way of saying that if you zero at 50 yards, then you’ll also be on at 200.
Well, as all of these charts should illustrate, that’s not true. The curve depends a lot on the ballistic characteristics of the bullet as well as the velocity it’s traveling. You can get close, even
just a few tenths of an inch away, but close isn’t a zero.
So never assume that you're "good to go" with multiple zeroes. Your zero is the distance you've actually verified at. If you want to zero at 25 yards, then fine, but don't think you're also zeroed
at 300 until you actually test your rifle at 300.
Applying Point Blank Zero to Optics
With enough practice, you will learn your MPBR well. You’ll begin to recognize situations where you can just “hold center” as my PRS-shooting podcast guest put it, or have to dial your scope.
Point blank zero works particularly well for red dot sights or magnified scopes with standard duplex crosshairs. But what about other reticle designs? You can combine point blank zero with different
reticle shapes to determine more aiming points as well.
For instance, my Trijicon TR24G has a glowing triangle sitting atop a post. However, the heavy post prevents me from using holdovers beyond the bottom edge of the triangle.
According to Trijicon, this triangle is 4.2 MOA tall at 4x magnification. Since it is second focal plane, it is 12.8 MOA at 1x, and 8.4 MOA at 2x magnification.
JBM does not produce a point blank zero based on minutes of angle. But it does still produce MOA drop values.
You’ll need to play with the distances until you arrive at an acceptable solution.
If I wanted to keep my impacts to 4.2 MOA above or below the tip of the triangle, then a 275-yard zero works best. Rounding up to a 300-yard zero with the tip of the triangle means that the bottom of
the triangle is at about 410 yards.
If I zoom down to 2x, the bottom of the triangle is now 8.4 MOA from the tip, which correlates to a 500-yard aim point. At 1x, that same point now gives about a 600-yard aim point.
That is, of course, assuming you can even see the target.
This method works reasonably well for any optic. If you happen to have a BDC or any reticle that provides solid holdover points, then you might use that method instead.
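If you want to sanity-check those angular numbers in linear terms, the conversion is simple arithmetic (the 44-inch figure below is just a unit conversion of the numbers above, not an extra ballistic claim): one MOA subtends about 1.047 inches per 100 yards, so

\[
8.4\ \text{MOA at 500 yards} \approx 8.4 \times 1.047 \times 5 \approx 44\ \text{inches of drop}.
\]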
Multiple Aiming Points
There is another technique related to the point blank zero, but it isn't quite the same thing. The Swiss developed a method of aiming at the belt line or the neck depending on the estimated range to the target.
They use a 300-meter zero. If the target is "close," then the shooter aims for the belt. With that aiming point, any missed shots will hit higher, and hit center mass.
If the target is “far,” then the shooter aims at the neck. Shots will fall low and also into center mass.
This technique is Sniping 4th Generation, and it’s used to teach recruits in a single day how to reach an 80% effective hit rate out to 600 meters. If you’re interested in learning more, you can read
my write up on the technique.
The Zen of the 100-Meter Zero
A 25-yard zero keeps me +/- 3 inches out to 250 yards. That’s a good and usable point blank range.
But let’s get away from the numbers and talk about the simple 100-meter zero. If you recall from the ballistic chart earlier, a 100-meter zero correlates to the maximum ordinate. Put another way, if
you zero for 100 meters, then the bullet will never cross above your line of sight.
With a point blank zero set at 200 or 300 meters, there are situations where you have to remember to aim lower, or your bullet will sail over the target. Under stress, you might forget to do that.
Missing high gives you no reference for where you should have aimed.
If you zero for 100 meters, then the bullet will only fall below the point of aim and hit the dirt in front. You can use the impact of missed shots to walk your next shots in.
Wrapping Up
In the end, you have to imagine two opposing ends of a line. On one end is speed, and on the other is precision.
Which is more important to you for the most likely shot you are going to take?
Point blank zeroes work great for hunting or self-defense inside 200 yards on fairly large target zones. If you need to hit small targets at longer ranges, then you might need to shift more towards
precision-oriented zeroing methods and optics.
I hope you found this article useful, please let me know if you have any questions or comments down below.
16 Comments
"… and it's used to teach recruits in a single day how to reach an 80% effective hit rate out to 600 meters." This sentence fragment, by itself, wraps up the entire point-blank-zero philosophy in a nutshell.
Well done sir!
As an armorer, you should be able to help a customer zero his or her iron sights or scope
One thing I haven’t seen in many discussions about MPBR is including the effects of your system’s precision. I believe this is a contributing factor toward the 50/200 zero. Even though an animal’s
vitals might be 8 inches, your system might only shoot to 2 MOA. If you want to keep everything within the vitals, you need to reduce that size to compensate (or do the detailed math).
It would be interesting to see a 4 MOA dot at each of the ranges on your target plots. This would illustrate what is actually happening, especially at longer ranges like 600 yards.
Forgive me if I missed that in the above discussion.
I’ve been inspired to make a plot based on the S4G concepts, give me a week or so to get that going.
Fix MBPR to MPBR (Maximum Point Blank Range). Nice article though.
Great article Matt! The illustrations really helped visualize the math. Glad Ivan caught the ‘MBPR’ – was a tad confusing. It was the only thing I didn’t understand (thought it might be a missile
target term) but knew your intent! Really well done – thanks!
Outstanding articles. I’m a retired Marine (aviation type) and was never a good shooter. At age 60 I just bought a LWRC M6A2 and for the life of me, I can’t figure out how to achieve a mechanical
zero on it as its sights are different from the M16 A2 I used in the Corps. In any event, my goal is to properly align the irons in order to install / co-witness an EOTECH to help my aging eyes. I
was initially trained on the 36 yard BZO in the Corps but after reading your articles I’m no longer sure that is what I need. No longer will I be engaging targets beyond 300 yards, rather I see urban
conflicts and CQG distances as a more realistic scenario. Having said that, would a Laser Collimator or similar device help me get my weapon zeroed? Also, is the point blank range zero a viable
option at this point for me? Thanks in advance, and Semper Fi. Ken Verdoliva, Gunnery Sergeant, USMC (retired)
I’m trying to find ballistic data for zeroing a 12.5 inch barrel in 5.56 with an Eotech HOB 2.26. with 77 grain OTM.
None of the ballistic calculators seem to give an option for barrel length. They also don’t give an ammo option for 77 grain OTM. Should I select the 77 gr HPBT Match King option?
I’m using the JMB calculator. Again, I don’t see an option for barrel length. I’m assuming the data given is for a 20 inch barrel?
I want to make sure I’m getting the correct trajectory data.
I love this article on MPBR! I want to use the data for the MPBR in my zero.
Any help or suggestions would be greatly appreciated!
Thank you! | {"url":"https://www.everydaymarksman.co/marksmanship/point-blank-zero/","timestamp":"2024-11-14T23:40:17Z","content_type":"text/html","content_length":"247122","record_id":"<urn:uuid:117219bc-46ac-4577-b1ef-d9b82eed3801>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00211.warc.gz"} |
Solving second order differential equations
Author Message
Sjka BA Posted: Wednesday 09th of Nov 08:45
To every student expert in solving second order differential equations: I desperately require your very worthwhile aid. I have numerous class assignments for my Remedial Algebra. I find solving second order differential equations could be beyond my capability. I'm at a total loss as to how I might proceed. I have considered chartering an algebra professor or contracting with a teaching center, but each is definitely not inexpensive. Each and every last alternate proposition will be mightily appreciated!
IlbendF Posted: Wednesday 09th of Nov 11:04
The best way to get this done is using Algebrator . This software offers a very fast and easy to learn technique of doing math problems. You will definitely start liking algebra
once you use and see how easy it is. I remember how I used to have a difficult time with my Pre Algebra class and now with the help of Algebrator, learning is so much fun. I am
sure you will get help with solving second order differential equations problems here.
Hiinidam Posted: Friday 11th of Nov 08:06
Yes ! I agree with you! The money back guarantee that comes with the purchase of Algebrator is one of the attractive options. In case you are dissatisfied with the help offered
to you on any math topic, you can get a refund of the payment you made towards the purchase of Algebrator within the number of days specified on the label. Do have a look at
https://softmath.com/algebra-policy.html before you place the order because that gives a lot of information about the topics on which you can expect to get assisted.
adoaoon Posted: Saturday 12th of Nov 12:58
I want it NOW! Somebody please tell me, how do I order it? Can I do so online ? Or is there any phone number through which we can place an order?
ZaleviL Posted: Sunday 13th of Nov 08:33
A truly great piece of math software is Algebrator. Even I faced similar problems while solving percentages, adding functions and y-intercept. Just by typing in the problem from the workbook and clicking on Solve, a step-by-step solution to my math homework would be ready. I have used it through several math classes - Remedial Algebra, Algebra 1 and Basic Math. I
highly recommend the program.
Sdefom Koopmansshab Posted: Tuesday 15th of Nov 08:02
Click on https://softmath.com/ to get it. You will get a great tool at a reasonable price. And if you are not happy, they give you your money back, so it's absolutely great.
Back to top | {"url":"https://www.softmath.com/algebra-software/point-slope/solving-second-order.html","timestamp":"2024-11-11T23:34:31Z","content_type":"text/html","content_length":"43042","record_id":"<urn:uuid:a03c5fe8-5b8a-4c5a-a1d2-a602d120f59e>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00542.warc.gz"} |
title = {Flexible Imputation of Missing Data},
author = {van Buuren, S.},
isbn = {9781138588318},
lccn = {2018017122},
series = {Chapman \& Hall/CRC Interdisciplinary Statistics},
url = {https://www.routledge.com/Flexible-Imputation-of-Missing-Data-Second-Edition/Buuren/p/book/9781138588318},
year = {2018},
publisher = {CRC Press LLC},
address = {New York},
title = {Oceanographic {Analysis} with {R}},
isbn = {978-1-4939-8842-6},
url = {https://www.springer.com/us/book/9781493988426},
abstract = {This book presents the R software environment as a key tool for oceanographic computations and provides a rationale for using R over the more widely-used tools of the field such as MATLAB. Kelley provides a general introduction to R before introducing the ‘oce’ package. This package greatly simplifies oceanographic analysis by handling the details of discipline-specific file formats, calculations, and plots. Designed for real-world application and developed with open-source protocols, oce supports a broad range of practical work. Generic functions take care of general operations such as subsetting and plotting data, while specialized functions address more specific tasks such as tidal decomposition, hydrographic analysis, and ADCP coordinate transformation. In addition, the package makes it easy to document work, because its functions automatically update processing logs stored within its data objects. Kelley teaches key R functions using classic examples from the history of oceanography, specifically the work of Alfred Redfield, Gordon Riley, J. Tuzo Wilson, and Walter Munk. Acknowledging the pervasive popularity of MATLAB, the book provides advice to users who would like to switch to R. Including a suite of real-life applications and over 100 exercises and solutions, the treatment is ideal for oceanographers, technicians, and students who want to add R to their list of tools for oceanographic analysis.},
language = {en},
urldate = {2018-07-22},
publisher = {Springer-Verlag},
author = {Kelley, Dan E.},
month = oct,
year = {2018}
author = {Jean-Francois Mas},
title = {Análisis espacial con R: Usa R como un Sistema de Información Geográfica},
year = {2018},
language = {Spanish},
publisher = {European Scientific Institute},
location = {Republic of Macedonia},
isbn = {978-608-4642-66-4},
pages = {114},
url = {http://eujournal.org/files/journals/1/books/JeanFrancoisMas.pdf}
title = {Data Visualisation with R},
author = {Thomas Rahlf},
year = 2017,
publisher = {Springer International Publishing},
address = {New York},
publisherurl = {https://www.springer.com/us/book/9783319497501},
isbn = {978-3-319-49750-1},
url = {http://www.datavisualisation-r.com},
abstract = {This book introduces readers to the fundamentals of
creating presentation graphics using R, based on 100
detailed and complete scripts. It shows how bar and
column charts, population pyramids, Lorenz curves, box
plots, scatter plots, time series, radial polygons,
Gantt charts, heat maps, bump charts, mosaic and
balloon charts, and a series of different thematic map
types can be created using R’s Base Graphics
System. Every example uses real data and includes
step-by-step explanations of the figures and their
language = {en}
author = {Steven Murray},
title = {Apprendre R en un Jour},
publisher = {SJ Murray},
url = {https://www.amazon.com/dp/B071W6ZJCV/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1496261881&sr=1-1},
year = 2017,
asin = {B071W6ZJCV},
note = {Ebook},
abstract = {'Apprendre R en un Jour' donne au lecteur les
compétences clés au travers d'une approche axée sur
des exemples et est idéal pour les universitaires,
scientifiques, mathématiciens et ingénieurs. Le
livre ne suppose aucune connaissance préalable en
programmation et couvre progressivement toutes les
étapes essentielles pour prendre de l'assurance et
devenir compétent en R en une journée. Les sujets
couverts incluent: comment importer, manipuler,
formater, itérer (en boucle), questionner, effectuer
des statistiques élémentaires sur, et tracer des
graphiques à partir de données, à l'aide d'une
explication étape par étape de la technique et de
démonstrations que le lecteur est encouragé de
reproduire sur son ordinateur, en utilisant des
ensembles de données déjà en mémoire dans R. Chaque
fin de chapitre inclut aussi des exercices (avec
solutions à la fin du livre) pour s'entraîner,
mettre en pratique les compétences clés et habiliter
le lecteur à construire sur les bases acquises au
cours de ce livre d'introduction.}
author = {Lawrence Leemis},
title = {Learning Base R},
publisher = {Lightning Source},
url = {https://www.amazon.com/Learning-Base-Lawrence-Mark-Leemis/dp/0982917481},
year = 2016,
isbn = {978-0-9829174-8-0},
abstract = {Learning Base R provides an introduction to the R
language for those with and without prior programming
experience. It introduces the key topics to begin
analyzing data and programming in R. The focus is on
the R language rather than a particular application.
The book can be used for self-study or an introductory
class on R. Nearly 200 exercises make this book
appropriate for a classroom setting. The chapter
titles are Introducing R; R as a Calculator; Simple
Objects; Vectors; Matrices; Arrays; Built-In
Functions; User-Written Functions; Utilities; Complex
Numbers; Character Strings; Logical Elements;
Relational Operators; Coercion; Lists; Data Frames;
Built-In Data Sets; Input/Output; Probability;
High-Level Graphics; Custom Graphics; Conditional
Execution; Iteration; Recursion; Simulation;
Statistics; Linear Algebra; Packages.}
author = {Vikram Dayal},
title = {An Introduction to R for Quantitative Economics:
Graphing, Simulating and Computing},
publisher = {Springer},
url = {https://www.springer.com/978-81-322-2340-5},
year = 2015,
isbn = {978-81-322-2340-5},
abstract = {This book gives an introduction to R to build up
graphing, simulating and computing skills to enable
one to see theoretical and statistical models in
economics in a unified way. The great advantage of R
is that it is free, extremely flexible and
extensible. The book addresses the specific needs of
economists, and helps them move up the R learning
curve. It covers some mathematical topics such as,
graphing the Cobb-Douglas function, using R to study
the Solow growth model, in addition to statistical
topics, from drawing statistical graphs to doing
linear and logistic regression. It uses data that can
be downloaded from the internet, and which is also
available in different R packages. With some treatment
of basic econometrics, the book discusses quantitative
economics broadly and simply, looking at models in the
light of data. Students of economics or economists
keen to learn how to use R would find this book very
author = {Sun, C.},
title = {Empirical Research in Economics: Growing up with {R}},
publisher = {Pine Square},
address = {Starkville, Mississippi, USA},
edition = {1st},
abstract = {Empirical Research in Economics: Growing up with R
presents a systematic approach to conducting empirical
research in economics with the flexible and free
software of R. At present, there is a lack of
integration among course work, research methodology,
and software usage in statistical analysis of economic
data. The objective of this book is to help young
professionals conduct an empirical study in economics
over a reasonable period, with the expectation of four
months in general.},
pages = 579,
isbn = {978-0-9965854-0-8},
lccn = 2015911715,
url = {https://www.amazon.com/Empirical-Research-Economics-Changyou-Sun/dp/0996585400/ref=aag_m_pw_dp?ie=UTF8&m=A1TZL30UWYSSR8},
year = 2015
title = {{Einführung in die statistische Datenanalyse mit R}},
author = {Matthias Kohl},
year = 2015,
publisher = {bookboon.com},
address = {London},
publisherurl = {https://bookboon.com/de/einfuhrung-in-die-statistische-datenanalyse-mit-r-ebook},
isbn = {978-87-403-1156-3},
note = {In German},
abstract = {Das Buch bietet eine Einführung in die statistische
Datenanalyse unter Verwendung der freien
Statistiksoftware R, der derzeit wohl mächtigsten
Statistiksoftware. Die Analysen werden anhand realer
Daten durchgeführt und besprochen. Nach einer kurzen
Beschreibung der Statistiksoftware R werden wichtige
Kenngrößen und Diagramme der deskriptiven Statistik
vorgestellt. Anschließend werden Empfehlungen für die
Erstellung von Diagrammen gegeben, wobei ein
spezielles Augenmerk auf die Auswahl geeigneter Farben
gelegt wird. Die zweite Hälfte des Buches behandelt
die Grundlagen der schließenden Statistik. Zunächst
wird eine Reihe von Wahrscheinlichkeitsverteilungen
eingeführt und deren Anwendungen anhand von Beispielen
illustriert. Es folgt eine Beschreibung, wie die in
der Praxis unbekannten Parameter der Verteilungen auf
Basis vorliegender Daten geschätzt werden können. Im
abschließenden Kapitel werden statistische
Hypothesentests eingeführt und die für die Praxis
wichtigsten Tests besprochen.},
language = {de}
title = {Introduction to statistical data analysis with {R}},
author = {Matthias Kohl},
year = 2015,
publisher = {bookboon.com},
address = {London},
publisherurl = {https://bookboon.com/en/introduction-to-statistical-data-analysis-with-r-ebook},
isbn = {978-87-403-1123-5},
abstract = {The book offers an introduction to statistical data
analysis applying the free statistical software R,
probably the most powerful statistical software
today. The analyses are performed and discussed using
real data. After a brief description of the
statistical software R, important parameters and
diagrams of descriptive statistics are
introduced. Subsequently, recommendations for
generating diagrams are provided, where special
attention is given to the selection of appropriate
colors. The second half of the book addresses the
basics of inferential statistics. First, a number of
probability distributions are introduced and their
applicability is illustrated by examples. Next, the
book describes how the parameters of these
distributions, which are unknown in practice, may be
estimated from given data. The final chapter
introduces statistical tests and reviews the most
important tests for practical applications.},
language = {en}
author = {Marta Blangiardo and Michela Cameletti},
title = {Spatial and Spatio-temporal Bayesian Models with {R-INLA}},
year = 2015,
publisher = {Wiley},
address = {Chichester, West Sussex, United Kingdom},
isbn = {978-1-118-32655-8},
edition = {1st},
pages = 320,
url = {https://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118326555.html}
title = {Mastering Data Analysis with R},
author = {Gergely Dar{\'o}czi},
publisher = {Packt Publishing},
year = 2015,
month = 9,
isbn = 9781783982028,
url = {https://www.packtpub.com/product/mastering-data-analysis-with-r/9781783982028},
totalpages = 396,
abstract = {An intermediate and practical book on various fields
of data analysis with R: from loading data from text
files, databases or APIs; munging; transformations;
modeling with traditional statistical methods and
machine learning to visualization of tabular, network,
time-series and spatial data with hands-on examples.}
author = {Victor A. Bloomfield},
title = {Using {R} for Numerical Analysis in Science and Engineering},
publisher = {Chapman & Hall/CRC},
url = {http://www.crcpress.com/product/isbn/9781439884485},
year = 2014,
isbn = {978-1439884485},
abstract = {Instead of presenting the standard theoretical treatments
that underlie the various numerical methods used by
scientists and engineers, Using R for Numerical
Analysis in Science and Engineering shows how to use R
and its add-on packages to obtain numerical solutions
to the complex mathematical problems commonly faced by
scientists and engineers. This practical guide to the
capabilities of R demonstrates Monte Carlo,
stochastic, deterministic, and other numerical methods
through an abundance of worked examples and code,
covering the solution of systems of linear algebraic
equations and nonlinear equations as well as ordinary
differential equations and partial differential
equations. It not only shows how to use R's powerful
graphic tools to construct the types of plots most
useful in scientific and engineering work, but also:
* Explains how to statistically analyze and fit data
to linear and nonlinear models
* Explores numerical differentiation, integration, and
* Describes how to find eigenvalues and eigenfunctions
* Discusses interpolation and curve fitting
* Considers the analysis of time serie
Using R for Numerical Analysis in Science and
Engineering provides a solid introduction to the most
useful numerical methods for scientific and
engineering data analysis using R.}
author = {Torsten Hothorn and Brian S. Everitt},
title = {A Handbook of Statistical Analyses Using {R}},
year = 2014,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, Florida, USA},
isbn = {978-1-4822-0458-2},
edition = {3rd},
url = {http://www.crcpress.com/product/isbn/9781482204582}
title = {Datendesign mit {R}. 100 Visualisierungsbeispiele},
author = {Thomas Rahlf},
year = 2014,
publisher = {Open Source Press},
address = {M{\"u}nchen},
publisherurl = {https://www.springer.com/us/book/9783319497501},
isbn = {978-3-95539-094-5},
url = {http:///www.datenvisualisierung-r.de},
note = {In German},
abstract = {Die Visualisierung von Daten hat in den vergangenen
Jahren stark an Beachtung gewonnen. Zu den
traditionellen Anwendungsbereichen in der Wissenschaft
oder dem Marketing treten neue Gebiete wie
Big-Data-Analysen oder der Datenjournalismus. Mit der
Open Source Software R, die sich zunehmend als
Standard im Bereich der Statistiksoftware etabliert,
steht ein m{\"a}chtiges Werkzeug zur Verf{\"u}gung,
das hinsichtlich der Visualisierungsm{\"o}glichkeiten
praktisch keine W{\"u}nsche offen l{\"a}sst. Dieses
Buch f{\"u}hrt in die Grundlagen der Gestaltung von
Pr{\"a}sentationsgrafiken mit R ein und zeigt anhand
von 100 vollst{\"a}ndigen Skript-Beispielen, wie Sie
Balken- und S{\"a}ulendiagramme,
Bev{\"o}lkerungspyramiden, Lorenzkurven,
Streudiagramme, Zeitreihendarstellungen,
Radialpolygone, Gantt-Diagramme, Profildiagramme,
Heatmaps, Bumpcharts, Mosaik- und Ballonplots sowie
eine Reihe verschiedener thematischer Kartentypen mit
dem Base Graphics System von R erstellen. F{\"u}r
jedes Beispiel werden reale Daten verwendet sowie die
Abbildung und deren Programmierung Schritt f{\"u}r
Schritt erl{\"a}utert. Die gedruckte Ausgabe
enth{\"a}lt einen pers{\"o}nlichen Zugangs-Code, der
Ihnen kostenlos Zugriff auf die Online-Ausgabe dieses
Buches gew{\"a}hrt.},
language = {de}
author = {Sarah Stowell},
title = {Using R for Statistics},
publisher = {Apress},
url = {https://www.apress.com/9781484201404},
year = 2014,
isbn = {978-1484201404},
abstract = {R is a popular and growing open source statistical
analysis and graphics environment as well as a
programming language and platform. If you need to use
a variety of statistics, then Using R for Statistics
will get you the answers to most of the problems you
are likely to encounter.
Using R for Statistics is a problem-solution primer
for using R to set up your data, pose your problems
and get answers using a wide array of statistical
tests. The book walks you through R basics and how to
use R to accomplish a wide variety statistical
operations. You'll be able to navigate the R system,
enter and import data, manipulate datasets, calculate
summary statistics, create statistical plots and
customize their appearance, perform hypothesis tests
such as the t-tests and analyses of variance, and
build regression models. Examples are built around
actual datasets to simulate real-world solutions, and
programming basics are explained to assist those who
do not have a development background.
After reading and using this guide, you'll be
comfortable using and applying R to your specific
statistical analyses or hypothesis tests. No prior
knowledge of R or of programming is assumed, though
you should have some experience with statistics.
What you'll learn:
* How to apply statistical concepts using R and some R
* How to work with data files, prepare and manipulate
data, and combine and restructure datasets
* How to summarize continuous and categorical variables
* What is a probability distribution
* How to create and customize plots
* How to do hypothesis testing
* How to build and use regression and linear models
Who this book is for: No prior knowledge of R or of
programming is assumed, making this book ideal if you
are more accustomed to using point-and-click style
statistical packages. You should have some prior
experience with statistics, however.}
title = {Multivariate Time Series Analysis With R and Financial Applications},
author = {Ruey S. Tsay},
year = 2014,
publisher = {John Wiley},
address = {New Jersey},
publisherurl = {https://www.wiley.com/WileyCDA/WileyTitle/productCd-1118617908.html},
isbn = {978-1-118-61790-8},
url = {https://faculty.chicagobooth.edu/ruey-s-tsay/research/multivariate-time-series-analysis-with-r-and-financial-applications},
abstract = {This book is based on my experience in teaching and research
on multivariate time series analysis over the past 30
years. It summarizes the basic concepts and ideas
of analyzing multivariate dependent data, provides
econometric and statistical models useful for describing
the dynamic dependence between variables, discusses the
identifiability problem when the models become too
flexible, introduces ways to search for simplifying
structure hidden in high-dimensional time series,
addresses the applicabilities and limitations of
multivariate time series methods, and, equally important,
develops the R MTS package for readers to apply the
methods and models discussed in the book. The vector
autoregressive models and multivariate volatility models
are discussed and demonstrated.},
language = {en}
title = {Nonlinear Parameter Optimization Using R Tools},
author = {Nash, J.C.},
isbn = {9781118883969},
lccn = {2014000044},
year = {2014},
publisher = {Wiley},
abstract = {A systematic and comprehensive treatment of
optimization software using R. In recent decades,
optimization techniques have been streamlined by
computational and artificial intelligence methods to
analyze more variables, especially under
non–linear, multivariable conditions, more quickly
than ever before. Optimization is an important tool
for decision science and for the analysis of
physical systems used in engineering. Nonlinear
Parameter Optimization with R explores the principal
tools available in R for function minimization,
optimization, and nonlinear parameter determination
and features numerous examples throughout.}
author = {Michael J. Crawley},
title = {Statistics: An Introduction using {R}},
edition = {2nd},
publisher = {Wiley},
year = 2014,
isbn = {978-1-118-94109-6},
url = {http://www.bio.ic.ac.uk/research/crawley/statistics/},
publisherurl = {https://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118941098.html},
abstract = {The book is primarily aimed at undergraduate
students in medicine, engineering, economics and
biology --- but will also appeal to postgraduates who
have not previously covered this area, or wish to
switch to using R.}
title = {Exploration de donn{\'e}es et m{\'e}thodes statistiques avec le logiciel {R}},
author = {Lise Bellanger and Richard Tomassone},
year = 2014,
pages = 480,
edition = {1st},
publisher = {Ellipses},
series = {R{\'e}f{\'e}rences sciences},
isbn = {978-2-7298-8486-4},
url = {http://www.math.sciences.univ-nantes.fr/~bellanger/ouvrage.html},
publisherurl = {https://www.editions-ellipses.fr/accueil/1263-exploration-de-donnees-et-methodes-statistiques-data-analysis-data-mining-avec-le-logiciel-r-9782729884864.html},
abstract = {La Statistique envahit pratiquement tous les
domaines d'application, aucun n'en est exclus; elle
permet d'explorer et d'analyser des corpus de
donn{\'e}es de plus en plus volumineux : l'{\`e}re
des big data et du data mining s'ouvre {\`a} nous !
Cette omnipr{\'e}sence s'accompagne bien souvent de
l'absence de regard critique tant sur l'origine des
donn{\'e}es que sur la mani{\`e}re de les traiter.
La facilit{\'e} d'utilisation des logiciels de
traitement statistique permet de fournir quasi
instantan{\'e}ment des graphiques et des
r{\'e}sultats num{\'e}riques. Le risque est donc
grand d'une acceptation aveugle des conclusions qui
d{\'e}coulent de son emploi, comme simple citoyen ou
comme homme politique. Les auteurs insistent sur les
concepts sans n{\'e}gliger la rigueur, ils
d{\'e}crivent les outils de d{\'e}cryptage des
donn{\'e}es. L'ouvrage couvre un large spectre de
m{\'e}thodes allant du pr{\'e}-traitement des
donn{\'e}es aux m{\'e}thodes de pr{\'e}vision, en
passant par celles permettant leur visualisation et
leur synth{\`e}se. De nombreux exemples issus de
champs d'application vari{\'e}s sont trait{\'e}s
{\`a} l'aide du logiciel libre R, dont les commandes
sont comment{\'e}es. L'ouvrage est destin{\'e} aux
{\'e}tudiants de masters scientifiques ou
d'{\'e}coles d'ing{\'e}nieurs ainsi qu'aux
professionnels voulant utiliser la Statistique de
mani{\`e}re r{\'e}fl{\'e}chie : des sciences de la
vie {\`a} l'arch{\'e}ologie, de la sociologie {\`a}
l'analyse financi{\`e}re.},
language = {fr}
title = {Psychologie statistique avec {R}},
author = {Yvonnick Noel},
series = {Pratique R},
year = 2013,
publisher = {Springer},
address = {Paris},
isbn = {978-2-8178-0424-8},
abstract = {This book provides a detailed presentation of all
basics of statistical inference for psychologists,
both in a fisherian and a bayesian approach. Although
many authors have recently advocated for the use of
bayesian statistics in psychology (Wagenmaker et al.,
2010, 2011; Kruschke, 2010; Rouder et al., 2009)
statistical manuals for psychologists barely mention
them. This manual provides a full bayesian toolbox
for commonly encountered problems in psychology and
social sciences, for comparing proportions, variances
and means, and discusses the advantages. But all
foundations of the frequentist approach are also
provided, from data description to probability and
density, through combinatorics and set algebra. A
special emphasis has been put on the analysis of
categorical data and contingency tables. Binomial and
multinomial models with beta and Dirichlet priors are
presented, and their use for making (between rows or
between cells) contrasts in contingency tables is
detailed on real data. An automatic search of the
best model for all problem types is implemented in the
AtelieR package, available on CRAN. ANOVA is also
presented in a Bayesian flavor (using BIC), and
illustrated on real data with the help of the AtelieR
and R2STATS packages (a GUI for GLM and GLMM in R).
In addition to classical and Bayesian inference on
means, direct and Bayesian inference on effect size
and standardized effects are presented, in agreement
with recent APA recommendations.}
title = {Dynamic Documents with {R} and knitr},
author = {Yihui Xie},
publisher = {Chapman \& Hall/CRC},
year = 2013,
isbn = {978-1482203530},
url = {https://github.com/yihui/knitr-book/},
publisherurl = {https://www.taylorfrancis.com/books/dynamic-documents-knitr-yihui-xie/10.1201/b15166},
abstract = {Suitable for both beginners and advanced users, this
book shows you how to write reports in simple
languages such as Markdown. The reports range from
homework, projects, exams, books, blogs, and web pages
to any documents related to statistical graphics,
computing, and data analysis. While familiarity with
LaTeX and HTML is helpful, the book requires no prior
experience with advanced programs or languages. For
beginners, the text provides enough features to get
started on basic applications. For power users, the
last several chapters enable an understanding of the
extensibility of the knitr package.}
author = {Steven Murray},
title = {Learn {R} in a Day},
publisher = {SJ Murray},
url = {https://www.amazon.com/Learn-R-Day-Steven-Murray-ebook/dp/B00GC2LKOK/ref=cm_cr_pr_pb_t},
year = 2013,
asin = {B00GC2LKOK},
note = {Ebook},
abstract = {`Learn R in a Day' provides the reader with key
programming skills through an examples-oriented
approach and is ideally suited for academics,
scientists, mathematicians and engineers. The book
assumes no prior knowledge of computer programming and
progressively covers all the essential steps needed to
become confident and proficient in using R within a
day. Topics include how to input, manipulate, format,
iterate (loop), query, perform basic statistics on,
and plot data, via a step-by-step technique and
demonstrations using in-built datasets which the
reader is encouraged to replicate on their
computer. Each chapter also includes exercises (with
solutions) to practice key skills and empower the
reader to build on the essentials gained during this
introductory course.}
title = {An Introduction to Analysis of Financial Data with R},
author = {Ruey S. Tsay},
year = 2013,
publisher = {John Wiley},
address = {New Jersey},
publisherurl = {https://www.wiley.com/WileyCDA/WileyTitle/productCd-0470890819.html},
isbn = {978-0-470-89081-3},
url = {https://faculty.chicagobooth.edu/ruey-s-tsay/research/an-introduction-to-analysis-of-financial-data-with-r},
abstract = {This book provides a concise introduction to econometric and
statistical analysis of financial data. It focuses on
scalar financial time series with applications.
High-frequency data and volatility models are
discussed. The book also uses case studies to illustrate
the application of modeling financial data.},
language = {en}
title = {Analyse von Genexpressionsdaten --- mit {R} und Bioconductor},
author = {Matthias Kohl},
year = 2013,
publisher = {Ventus Publishing ApS},
address = {London},
publisherurl = {https://bookboon.com/de/analyse-von-genexpressionsdaten-ebook},
isbn = {978-87-403-0349-0},
note = {In German},
abstract = {Das Buch bietet eine Einf{\"u}hrung in die Verwendung
von R und Bioconductor f{\"u}r die Analyse von
Mikroarray-Daten. Es werden die Arraytechnologien von
Affymetrix und Illumina ausf{\"u}hrlich behandelt.
Dar{\"u}ber hinaus wird auch auf andere
Arraytechnologien eingegangen. Alle notwendigen
Schritte beginnend mit dem Einlesen der Daten und der
Qualit{\"a}tskontrolle {\"u}ber die Vorverarbeitung
der Daten bis hin zur statistischen Analyse sowie der
Enrichment Analyse werden besprochen. Jeder der
Schritte wird anhand einfacher Beispiele praktisch
vorgef{\"u}hrt, wobei der im Buch verwendete R-Code
separat zum Download bereitsteht.},
language = {de}
title = {Introductory {R}: A Beginner's Guide to Data
Visualisation and Analysis using {R}},
author = {Knell, Robert J},
year = 2013,
isbn = {978-0-9575971-0-5},
publisher = {(See web site)},
month = {March},
url = {http://www.introductoryr.co.uk},
abstract = {R is now the most widely used statistical software in
academic science and it is rapidly expanding into
other fields such as finance. R is almost limitlessly
flexible and powerful, hence its appeal, but can be
very difficult for the novice user. There are no easy
pull-down menus, error messages are often cryptic and
simple tasks like importing your data or exporting a
graph can be difficult and frustrating. Introductory
R is written for the novice user who knows a bit about
statistics but who hasn't yet got to grips with the
ways of R. This book: walks you through the basics of
R's command line interface; gives a set of simple
rules to follow to make sure you import your data
properly; introduces the script editor and gives
advice on workflow; contains a detailed introduction
to drawing graphs in R and gives advice on how to deal
with some of the most common errors that you might
encounter. The techniques of statistical analysis in
R are illustrated by a series of chapters where
experimental and survey data are analysed. There is a
strong emphasis on using real data from real
scientific research, with all the problems and
uncertainty that implies, rather than well-behaved
made-up data that give ideal and easy to analyse results.}
title = {Methods of Statistical Model Estimation},
author = {Hilbe, Joseph},
isbn = {978-1-4398-5802-8},
orderinfo = {crcpress.txt},
url = {http://www.crcpress.com/product/isbn/9781439858028},
year = 2013,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Methods of Statistical Model Estimation examines the
most important and popular methods used to estimate
parameters for statistical models and provide
informative model summary statistics. Designed for R
users, the book is also ideal for anyone wanting to
better understand the algorithms used for statistical
model fitting. The text presents algorithms for the
estimation of a variety of regression procedures using
maximum likelihood estimation, iteratively reweighted
least squares regression, the EM algorithm, and MCMC
sampling. Fully developed, working R code is
constructed for each method. The book starts with OLS
regression and generalized linear models, building to
two-parameter maximum likelihood models for both
pooled and panel models. It then covers a random
effects model estimated using the EM algorithm and
concludes with a Bayesian Poisson model using
Metropolis-Hastings sampling. The book's coverage is
innovative in several ways. First, the authors use
executable computer code to present and connect the
theoretical content. Therefore, code is written for
clarity of exposition rather than stability or speed
of execution. Second, the book focuses on the
performance of statistical estimation and downplays
algebraic niceties. In both senses, this book is
written for people who wish to fit statistical models
and understand them.}
title = {Introduction to {R} for Quantitative Finance},
author = {Gergely Dar{\'o}czi and Michael Puhle and Edina
Berlinger and P{\'e}ter Cs{\'o}ka and Daniel Havran
and M{\'a}rton Michaletzky and Zsolt Tulassay and Kata
V{\'a}radi and Agnes Vidovics-Dancs},
publisher = {Packt Publishing},
year = 2013,
month = {November},
isbn = 9781783280933,
url = {https://www.packtpub.com/product/introduction-to-r-for-quantitative-finance/9781783280933},
totalpages = 164,
abstract = {The book focuses on how to solve real-world
quantitative finance problems using the statistical
computing language R. ``Introduction to R for
Quantitative Finance'' covers diverse topics ranging
from time series analysis to financial networks. Each
chapter briefly presents the theory behind specific
concepts and deals with solving a diverse range of
problems using R with the help of practical examples.}
title = {Reproducible Research with {R} and {RStudio}},
author = {Gandrud, Christopher},
isbn = {978-1-4665-7284-3},
series = {Chapman \& Hall/CRC The R series},
url = {https://www.taylorfrancis.com/books/reproducible-research-studio-christopher-gandrud/10.1201/b15100},
year = 2013,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Bringing together computational research tools in one
accessible source, Reproducible Research with R and
RStudio guides you in creating dynamic and highly
reproducible research. Suitable for researchers in
any quantitative empirical discipline, it presents
practical tools for data collection, data analysis,
and the presentation of results. The book takes you
through a reproducible research workflow, showing you
how to use: R for dynamic data gathering and automated
results presentation knitr for combining statistical
analysis and results into one document LaTeX for
creating PDF articles and slide shows, and Markdown
and HTML for presenting results on the web Cloud
storage and versioning services that can store data,
code, and presentation files; save previous versions
of the files; and make the information widely
available Unix-like shell programs for compiling large
projects and converting documents from one markup
language to another RStudio to tightly integrate
reproducible research tools in one place.}
author = {Dirk Eddelbuettel},
title = {Seamless R and C++ Integration with Rcpp},
publisher = {Springer},
series = {Use R!},
year = 2013,
address = {New York},
isbn = {978-1-4614-6867-7},
publisherurl = {https://www.springer.com/978-1-4614-6867-7},
abstract = {Seamless R and C++ Integration with Rcpp provides the
first comprehensive introduction to Rcpp, which has
become the most widely-used language extension for R, and
is deployed by over one-hundred different CRAN and
BioConductor packages. Rcpp permits users to pass
scalars, vectors, matrices, list or entire R objects back
and forth between R and C++ with ease. This brings the
depth of the R analysis framework together with the
power, speed, and efficiency of C++. },
orderinfo = {springer.txt}
title = {Applied Meta-Analysis with {R}},
author = {Chen, Din},
isbn = {978-1-4665-0599-5},
orderinfo = {crcpress.txt},
series = {Chapman \& Hall/CRC Biostatistics series},
url = {http://www.crcpress.com/product/isbn/9781466505995},
year = 2013,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = { In biostatistical research and courses, practitioners
and students often lack a thorough understanding of
how to apply statistical methods to synthesize
biomedical and clinical trial data. Filling this
knowledge gap, Applied Meta-Analysis with R shows how
to implement statistical meta-analysis methods to real
data using R. Drawing on their extensive research and
teaching experiences, the authors provide detailed,
step-by-step explanations of the implementation of
meta-analysis methods using R. Each chapter gives
examples of real studies compiled from the
literature. After presenting the data and necessary
background for understanding the applications, various
methods for analyzing meta-data are introduced. The
authors then develop analysis code using the
appropriate R packages and functions. This systematic
approach helps readers thoroughly understand the
analysis methods and R implementation, enabling them
to use R and the methods to analyze their own
meta-data. Suitable as a graduate-level text for a
meta-data analysis course, the book is also a valuable
reference for practitioners and biostatisticians (even
those with little or no experience in using R) in
public health, medical research, governmental
agencies, and the pharmaceutical industry.}
author = {Stano Pekar and Marek Brabec},
title = {Moderni analyza biologickych dat. 2. Linearni modely
s korelacemi v prostredi {R} [Modern Analysis of
Biological Data. 2. Linear Models with Correlations in the {R} Environment]},
year = 2012,
publisher = {Masaryk University Press},
address = {Brno},
publisherurl = {https://www.press.muni.cz/en/editorial-series-of-munipress/moderni-analyza-biologickych-dat},
isbn = {978-80-21058-12-5},
note = {In Czech},
abstract = {Publikace navazuje na prvni dil Moderni analyzy
biologickych dat a predstavuje vybrane modely a metody
statisticke analyzy korelovanych dat. Tedy linearni
metody, ktere jsou vhodnym nastrojem analyzy dat s
casovymi, prostorovymi a fylogenetickymi zavislostmi v
datech. Text knihy je praktickou priruckou analyzy dat
v prostredi jednoho z nejrozsahlejsich statistickych
nastroju na svete, volne dostupneho softwaru R. Je
sestaven z 19 vzorove vyresenych a okomentovanych
prikladu, ktere byly vybrany tak, aby ukazaly spravnou
konstrukci modelu a upozornily na problemy a chyby,
ktere se mohou v prubehu analyzy dat vyskytnout. Text
je psan jednoduchym jazykem srozumitelnym pro ctenare
bez specialniho matematickeho vzdelani. Kniha je
predevsim urcena studentum i vedeckym pracovnikum
biologickych, zemedelskych, veterinarnich, lekarskych
a farmaceutickych oboru, kteri potrebuji korektne
analyzovat vysledky svych pozorovani ci experimentu s
komplikovanejsi strukturou danou zavislostmi mezi
opakovanymi merenimi stejneho subjektu.},
language = {cz}
title = {Solving Differential Equations in R},
author = {Soetaert, K. and Cash, J. and Mazzia, F.},
isbn = {978-3-642-28070-2},
series = {Use R!},
year = 2012,
publisher = {Springer},
publisherurl = {https://www.springer.com/978-3-642-28070-2},
abstract = {Mathematics plays an important role in many
scientific and engineering disciplines. This book
deals with the numerical solution of differential
equations, a very important branch of mathematics.
Our aim is to give a practical and theoretical
account of how to solve a large variety of
differential equations, comprising ordinary
differential equations, initial value problems and
boundary value problems, differential algebraic
equations, partial differential equations and delay
differential equations. The solution of differential
equations using R is the main focus of this book. It
is therefore intended for the practitioner, the
student and the scientist, who wants to know how to
use R for solving differential equations. However,
it has been our goal that non-mathematicians should
at least understand the basics of the methods, while
obtaining entrance into the relevant literature that
provides more mathematical background. Therefore,
each chapter that deals with R examples is preceded
by a chapter where the theory behind the numerical
methods being used is introduced. In the sections
that deal with the use of R for solving differential
equations, we have taken examples from a variety of
disciplines, including biology, chemistry, physics,
pharmacokinetics. Many examples are well-known test
examples, used frequently in the field of numerical analysis.}
author = {Sarah Stowell},
title = {Instant {R}: An Introduction to {R} for Statistical Analysis},
publisher = {Jotunheim Publishing},
year = 2012,
isbn = {978-0-957-46490-2},
url = {http://www.instantr.com/wp-content/uploads/2012/11/},
abstract = {This book gives an introduction to using R, with a
focus on performing popular statistical methods. It is
suitable for anyone that is familiar with basic
statistics and wants to begin using R to analyse data
and create statistical plots. No prior knowledge of R
or of programming is assumed, making this book ideal
if you are more accustomed to using point-and-click
style statistical packages.}
author = {Pfaff, Bernhard},
title = {Financial Risk Modelling and Portfolio Optimisation
with {R}},
publisher = {Wiley},
address = {Chichester, UK},
year = 2012,
isbn = {978-0-470-97870-2},
url = {https://www.pfaffikus.de/books/wiley/},
publisherurl = {https://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470978708.html},
abstract = {Introduces the latest techniques advocated for
measuring financial market risk and portfolio
optimisation, and provides a plethora of R code
examples that enable the reader to replicate the
results featured throughout the book. Graduate and
postgraduate students in finance, economics, risk
management as well as practitioners in finance and
portfolio optimisation will find this book beneficial.
It also serves well as an accompanying text in
computer-lab classes and is therefore suitable for self-study.}
title = {The {BUGS} Book: A Practical Introduction to
{B}ayesian Analysis},
author = {Lunn, David},
isbn = {978-1-5848-8849-9},
orderinfo = {crcpress.txt},
series = {Chapman \& Hall/CRC Texts in Statistical Science},
url = {http://www.crcpress.com/product/isbn/9781584888499},
year = 2012,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Bayesian statistical methods have become widely used
for data analysis and modelling in recent years, and
the BUGS software has become the most popular software
for Bayesian analysis worldwide. Authored by the team
that originally developed this software, The BUGS Book
provides a practical introduction to this program and
its use. The text presents complete coverage of all
the functionalities of BUGS, including prediction,
missing data, model criticism, and prior
sensitivity. It also features a large number of worked
examples and a wide range of applications from various
disciplines. The book introduces regression models,
techniques for criticism and comparison, and a wide
range of modelling issues before going into the vital
area of hierarchical models, one of the most common
applications of Bayesian methods. It deals with
essentials of modelling without getting bogged down in
complexity. The book emphasises model criticism, model
comparison, sensitivity analysis to alternative
priors, and thoughtful choice of prior
distributions---all those aspects of the ``art'' of
modelling that are easily overlooked in more
theoretical expositions. More pragmatic than
ideological, the authors systematically work through
the large range of ``tricks'' that reveal the real
power of the BUGS software, for example, dealing with
missing data, censoring, grouped data, prediction,
ranking, parameter constraints, and so on. Many of the
examples are biostatistical, but they do not require
domain knowledge and are generalisable to a wide range
of other application areas. Full code and data for
examples, exercises, and some solutions can be found
on the book's website.}
title = {Programming Graphical User Interfaces in {R}},
author = {Lawrence, Michael},
isbn = {978-1-4398-5682-6},
orderinfo = {crcpress.txt},
series = {Chapman \& Hall/CRC the R series},
url = {http://www.crcpress.com/product/isbn/9781439856826},
year = 2012,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Programming Graphical User Interfaces with R
introduces each of the major R packages for GUI
programming: RGtk2, qtbase, Tcl/Tk, and gWidgets. With
examples woven through the text as well as stand-alone
demonstrations of simple yet reasonably complete
applications, the book features topics especially
relevant to statisticians who aim to provide a
practical interface to functionality implemented in
R. The accompanying package, ProgGUIinR, includes the
complete code for all examples as well as functions
for browsing the examples from the respective
chapters. Accessible to seasoned, novice, and
occasional R users, this book shows that for many
purposes, adding a graphical interface to one's work
is not terribly sophisticated or time consuming.}
title = {Event History Analysis with R},
author = {G{\"o}ran Brostr{\"o}m},
isbn = {978-1-4398-3164-9},
orderinfo = {crcpress.txt},
series = {Chapman \& Hall/CRC the R series},
url = {http://www.crcpress.com/product/isbn/9781439831649},
year = 2012,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {With an emphasis on social science applications, Event
History Analysis with R presents an introduction to
survival and event history analysis using real-life
examples. Keeping mathematical details to a minimum,
the book covers key topics, including both discrete
and continuous time data, parametric proportional
hazards, and accelerated failure times. A much-needed
primer, Event History Analysis with R is a
didactically excellent resource for students and
practitioners of applied event history and survival analysis.}
author = {Dimitris Rizopoulos},
title = {Joint Models for Longitudinal and Time-to-Event Data,
with Applications in {R}},
publisher = {Chapman \& Hall/CRC},
address = {Boca Raton},
year = 2012,
isbn = {978-1-4398-7286-4},
url = {http://jmr.R-Forge.R-project.org/},
publisherurl = {http://www.crcpress.com/product/isbn/9781439872864},
abstract = {The last 20 years have seen an increasing interest in
the class of joint models for longitudinal and
time-to-event data. These models constitute an
attractive paradigm for the analysis of follow-up data
that is mainly applicable in two settings: First, when
focus is on a survival outcome and we wish to account
for the effect of an endogenous time-dependent
covariate measured with error, and second, when focus
is on the longitudinal outcome and we wish to correct
for nonrandom dropout. Aimed at applied researchers
and graduate students, this text provides a
comprehensive overview of the framework of random
effects joint models. Emphasis is given on
applications such that readers will obtain a clear
view on the type of research questions that are best
answered using a joint modeling approach, the basic
features of these models, and how they can be extended
in practice. Special mention is given in checking the
assumptions using residual plots, and on dynamic
predictions for the survival and longitudinal outcomes.}
title = {The {R} Student Companion},
author = {Dennis, Brian},
isbn = {978-1-4398-7540-7},
orderinfo = {crcpress.txt},
url = {http://www.crcpress.com/product/isbn/9781439875407},
year = 2012,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {R is the amazing, free, open-access software package
for scientific graphs and calculations used by
scientists worldwide. The R Student Companion is a
student-oriented manual describing how to use R in
high school and college science and mathematics
courses. Written for beginners in scientific
computation, the book assumes the reader has just some
high school algebra and has no computer programming
background. The author presents applications drawn
from all sciences and social sciences and includes the
most often used features of R in an appendix. In
addition, each chapter provides a set of computational
challenges: exercises in R calculations that are
designed to be performed alone or in groups. Several
of the chapters explore algebra concepts that are
highly useful in scientific applications, such as
quadratic equations, systems of linear equations,
trigonometric functions, and exponential
functions. Each chapter provides an instructional
review of the algebra concept, followed by a hands-on
guide to performing calculations and graphing in R. R
is intuitive, even fun. Fantastic, publication-quality
graphs of data, equations, or both can be produced
with little effort. By integrating mathematical
computation and scientific illustration early in a
student's development, R use can enhance one's
understanding of even the most difficult scientific
concepts. While R has gained a strong reputation as a
package for statistical analysis, The R Student
Companion approaches R more completely as a
comprehensive tool for scientific computing and graphing.}
title = {R for Statistics},
author = {Cornillon, Pierre-Andre},
isbn = {978-1-4398-8145-3},
orderinfo = {crcpress.txt},
url = {http://www.crcpress.com/product/isbn/9781439881453},
year = 2012,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Although there are currently a wide variety of
software packages suitable for the modern
statistician, R has the triple advantage of being
comprehensive, widespread, and free. Published in
2008, the second edition of Statistiques avec R
enjoyed great success as an R guidebook in the
French-speaking world. Translated and updated, R for
Statistics includes a number of expanded and
additional worked examples. Organized into two
sections, the book focuses first on the R software,
then on the implementation of traditional statistical
methods with R. After a short presentation of the
method, the book explicitly details the R command
lines and gives commented results. Accessible to
novices and experts alike, R for Statistics is a clear
and enjoyable resource for any scientist.}
author = {A. B. Shipunov and E. M. Baldin and P. A. Volkova and
A. I. Korobejnikov and S. A. Nazarova and S. V. Petrov
and V. G. Sufijanov},
title = {{Nagljadnaja statistika. Ispoljzuem R! / Visual
statistics. Use R!}},
pages = 298,
address = {Moscow},
year = 2012,
isbn = {978-5-94074-828-1},
abstract = {This is the first ``big'' book about R in Russian. It
is intended to help people who begin to learn
statistical methods. All explanations are based on R.
The book may also serve as an introductory reference
to R.},
publisher = {DMK Press}
author = {Yves Aragon},
title = {S{\'e}ries temporelles avec {R}. M{\'e}thodes et cas},
publisher = {Springer, Collection Pratique R},
year = 2011,
pages = 265,
edition = {1st},
isbn = {978-2-8178-0207-7},
abstract = {Ce livre {\'e}tudie sous un angle original le concept
de s{\'e}rie temporelle, dont la complexit{\'e}
th{\'e}orique et l'utilisation sont souvent sources de
difficult{\'e}s. La th{\'e}orie distingue par exemple
les notions de s{\'e}ries stationnaire et non
stationnaire, mais il n'est pas rare de pouvoir
mod{\'e}liser une s{\'e}rie par deux mod{\`e}les
incompatibles. De plus, un peu d'intimit{\'e} avec les
s{\'e}ries montre qu'on peut s'appuyer sur des
graphiques vari{\'e}s pour en comprendre assez
rapidement la structure, avant toute
mod{\'e}lisation. Ainsi, au lieu d'{\'e}tudier des
m{\'e}thodes de mod{\'e}lisation, puis de les
illustrer, l'auteur prend ici le parti de
s'int{\'e}resser {\`a} un nombre limit{\'e} de
s{\'e}ries afin de trouver ce qu'on peut dire de
chacune. Avant d'aborder ces {\'e}tudes de cas, il
proc{\'e}de {\`a} quelques rappels et commence par
pr{\'e}senter les graphiques pour s{\'e}ries
temporelles offerts par R. Il revient ensuite sur des
notions fondamentales de statistique math{\'e}matique,
puis r{\'e}vise les concepts et les mod{\`e}les
classiques de s{\'e}ries. Il pr{\'e}sente les
structures de s{\'e}ries temporelles dans R et leur
importation. Il revisite le lissage exponentiel {\`a}
la lumi{\`e}re des travaux les plus r{\'e}cents. Un
chapitre est consacr{\'e} {\`a} la simulation. Six
s{\'e}ries sont ensuite {\'e}tudi{\'e}es par le menu
en confrontant plusieurs approches.}
author = {Pierre Andr{\'e} Cornillon and Eric Matzner-Lober},
title = {R{\'e}gression avec {R}},
publisher = {Springer, Collection Pratique R},
year = 2011,
pages = 242,
edition = {1st},
isbn = {978-2-8178-0183-4},
abstract = {Cet ouvrage expose en d{\'e}tail l'une des
m{\'e}thodes statistiques les plus courantes : la
r{\'e}gression. Il concilie th{\'e}orie et
applications, en insistant notamment sur l'analyse de
donn{\'e}es r{\'e}elles avec le logiciel R. Les
premiers chapitres sont consacr{\'e}s {\`a} la
r{\'e}gression lin{\'e}aire simple et multiple, et
expliquent les fondements de la m{\'e}thode, tant au
niveau des choix op{\'e}r{\'e}s que des hypoth{\`e}ses
et de leur utilit{\'e}. Puis ils d{\'e}veloppent les
outils permettant de v{\'e}rifier les hypoth{\`e}ses
de base mises en {\oe}uvre par la r{\'e}gression, et
pr{\'e}sentent les mod{\`e}les d'analyse de la
variance et covariance. Suit l'analyse du choix de
mod{\`e}le en r{\'e}gression multiple. Les derniers
chapitres pr{\'e}sentent certaines extensions de la
r{\'e}gression, comme la r{\'e}gression sous
contraintes (ridge, lasso et lars), la r{\'e}gression
sur composantes (PCR et PLS), et, enfin, introduisent
{\`a} la r{\'e}gression non param{\'e}trique (spline
et noyau). La pr{\'e}sentation t{\'e}moigne d'un
r{\'e}el souci p{\'e}dagogique des auteurs qui
b{\'e}n{\'e}ficient d'une exp{\'e}rience
d'enseignement aupr{\`e}s de publics tr{\`e}s
vari{\'e}s. Les r{\'e}sultats expos{\'e}s sont
replac{\'e}s dans la perspective de leur utilit{\'e}
pratique gr{\^a}ce {\`a} l'analyse d'exemples
concrets. Les commandes permettant le traitement des
exemples sous le logiciel R figurent dans le corps du
texte. Chaque chapitre est compl{\'e}t{\'e} par une
suite d'exercices corrig{\'e}s. Le niveau
math{\'e}matique requis rend ce livre accessible aux
{\'e}l{\`e}ves ing{\'e}nieurs, aux {\'e}tudiants de
niveau Master et aux chercheurs actifs dans divers
domaines des sciences appliqu{\'e}es.}
address = {Vi\c{c}osa, MG, Brazil},
author = {Peternelli, Luiz Alexandre and Mello, Marcio Pupin},
edition = 1,
isbn = {978-85-7269-400-1},
month = {March},
pages = 185,
publisher = {Editora UFV},
series = {S\'{e}rie Did\'{a}tica},
title = {{Conhecendo o R: uma vis\~{a}o estat\'{\i}stica}},
url = {https://www.editoraufv.com.br/produto/conhecendo-o-r-uma-visao-mais-que-estatistica/1109294},
year = 2011,
abstract = {Este material \'e de grande valia para estudantes ou
pesquisadores que usam ferramentas estat\'isticas em
trabalhos de pesquisa ou em uma simples an\'alise de
dados, constitui ponto de partida para aqueles que
desejam come\c{c}ar a utilizar o R e suas ferramentas
estat\'isticas ou, mesmo, para os que querem ter
sempre \`a m\~ao material de refer\^encia f\'acil,
objetivo e abrangente para uso desse software.}
author = {Paul Teetor},
title = {R Cookbook},
publisher = {O'Reilly},
year = 2011,
isbn = {978-0-596-80915-7},
edition = {first},
abstract = {Perform data analysis with R quickly and efficiently
with the task-oriented recipes in this cookbook.
Although the R language and environment include
everything you need to perform statistical work right
out of the box, its structure can often be difficult
to master. R Cookbook will help both beginners and
experienced statistical programmers unlock and use the
power of R.}
author = {Paul Teetor},
title = {25 Recipes for Getting Started with {R}},
publisher = {O'Reilly},
url = {http://oreilly.com/catalog/9781449303228},
year = 2011,
isbn = {978-1-4493-0322-8},
abstract = {This short, concise book provides beginners with a
selection of how-to recipes to solve simple problems
with R. Each solution gives you just what you need to
know to get started with R for basic statistics,
graphics, and regression. These solutions were
selected from O'Reilly's R Cookbook, which contains
more than 200 recipes for R.}
title = {R Graphics, Second Edition},
author = {Murrell, Paul},
isbn = {978-1-4398-3176-2},
orderinfo = {crcpress.txt},
series = {Chapman \& Hall/CRC the R series},
url = {https://www.stat.auckland.ac.nz/~paul/RG2e/},
year = 2011,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = { Extensively updated to reflect the evolution of
statistics and computing, the second edition of the
bestselling R Graphics comes complete with new
packages and new examples. Paul Murrell, widely known
as the leading expert on R graphics, has developed an
in-depth resource that helps both neophyte and
seasoned users master the intricacies of R
graphics. Organized into five parts, R Graphics covers
both ``traditional'' and newer, R-specific graphics
systems. The book reviews the graphics facilities of
the R language and describes R's powerful grid
graphics system. It then covers the graphics engine,
which represents a common set of fundamental graphics
facilities, and provides a series of brief overviews
of the major areas of application for R graphics and
the major extensions of R graphics.}
author = {Laura Chihara and Tim Hesterberg},
title = {Mathematical Statistics with Resampling and R},
publisher = {Wiley},
url = {https://sites.google.com/site/chiharahesterberg/home},
publisherurl = {https://www.wiley.com/WileyCDA/WileyTitle/productCd-1118029852.html},
year = 2011,
isbn = {978-1-1180-2985-5},
abstract = {Resampling helps students understand the meaning of
sampling distributions, sampling variability,
P-values, hypothesis tests, and confidence intervals.
This book shows how to apply modern resampling
techniques to mathematical statistics. Extensively
class-tested to ensure an accessible presentation,
Mathematical Statistics with Resampling and R utilizes
the powerful and flexible computer language R to
underscore the significance and benefits of modern
resampling techniques. The book begins by introducing
permutation tests and bootstrap methods, motivating
classical inference methods. Striking a balance
between theory, computing, and applications, the
authors explore additional topics such as: Exploratory
data analysis, Calculation of sampling distributions,
The Central Limit Theorem, Monte Carlo sampling,
Maximum likelihood estimation and properties of
estimators, Confidence intervals and hypothesis tests,
Regression, Bayesian methods. Case studies on diverse
subjects such as flight delays, birth weights of
babies, and telephone company repair times illustrate
the relevance of the material. Mathematical
Statistics with Resampling and R is an excellent book
for courses on mathematical statistics at the
upper-undergraduate and graduate levels. It also
serves as a valuable reference for applied
statisticians working in the areas of business,
economics, biostatistics, and public health who
utilize resampling methods in their everyday work.},
edition = {1st},
pages = 440
author = {John Fox and Sanford Weisberg},
title = {An {R} Companion to Applied Regression},
edition = {second},
publisher = {Sage Publications},
year = 2011,
address = {Thousand Oaks, CA, USA},
isbn = {978-1-4129-7514-8},
url = {https://socialsciences.mcmaster.ca/jfox/Books/Companion/index.html},
abstract = {A companion book to a text or course on applied
regression (such as ``Applied Regression Analysis and
Generalized Linear Models, Second Edition'' by John
Fox or ``Applied Linear Regression, Third edition'' by
Sanford Weisberg). It introduces R, and concentrates
on how to use linear and generalized-linear models in
R while assuming familiarity with the statistical methodology.}
author = {Hrishi Mittal},
title = {R Graphs Cookbook},
publisher = {Packt Publishing},
year = 2011,
isbn = {1849513066},
abstract = {The R Graph Cookbook takes a practical approach to
teaching how to create effective and useful graphs
using R. This practical guide begins by teaching you
how to make basic graphs in R and progresses through
subsequent dedicated chapters about each graph type in
depth. It will demystify a lot of difficult and
confusing R functions and parameters and enable you to
construct and modify data graphics to suit your
analysis, presentation, and publication needs.}
author = {Graham Williams},
title = {Data Mining with {Rattle} and {R}: The art of
excavating data for knowledge discovery},
publisher = {Springer},
year = 2011,
series = {Use R!},
isbn = {978-1-4419-9889-7},
publisherurl = {https://www.springer.com/978-1-4419-9889-7},
url = {https://rattle.togaware.com/},
abstract = {Data mining is the art and science of intelligent data
analysis. By building knowledge from information,
data mining adds considerable value to the ever
increasing stores of electronic data that abound
today. In performing data mining many decisions need
to be made regarding the choice of methodology, the
choice of data, the choice of tools, and the choice of
algorithms. Throughout this book the reader is
introduced to the basic concepts and some of the more
popular algorithms of data mining. With a focus on
the hands-on end-to-end process for data mining,
Williams guides the reader through various
capabilities of the easy to use, free, and open source
Rattle Data Mining Software built on the sophisticated
R Statistical Software. The focus on doing data
mining rather than just reading about data mining is
refreshing. The book covers data understanding, data
preparation, data refinement, model building, model
evaluation, and practical deployment. The reader will
learn to rapidly deliver a data mining project using
software easily installed for free from the Internet.
Coupling Rattle with R delivers a very sophisticated
data mining environment with all the power, and more,
of the many commercial offerings.},
orderinfo = {springer.txt}
title = {Numerical Methods and Optimization in Finance},
publisher = {Academic Press},
year = 2011,
author = {Gilli, Manfred and Maringer, Dietmar and Schumann, Enrico},
isbn = {978-0-12-375662-6},
abstract = {The book explains tools for computational finance. It
covers fundamental numerical analysis and
computational techniques, for example for option
pricing, but two topics are given special attention:
simulation and optimization. Many chapters are
organized as case studies, dealing with problems like
portfolio insurance or risk estimation; in particular,
several chapters explain optimization heuristics and
how to use them for portfolio selection or the
calibration of option pricing models. Such practical
examples allow readers to learn the required steps for
solving specific problems, and to apply these steps to
other problems, too. At the same time, the chosen
applications are relevant enough to make the book a
useful reference on how to handle given
problems. Matlab and R sample code is provided in the
text and can be downloaded from the book's website; an
R package `NMOF' is also available.},
publisherurl = {https://www.elsevier.com/books/numerical-methods-and-optimization-in-finance/gilli/978-0-12-815065-8},
url = {http://nmof.net}
title = {Analysis of Questionnaire Data with {R}},
author = {Falissard, Bruno},
isbn = {978-1-4398-1766-7},
orderinfo = {crcpress.txt},
url = {http://www.crcpress.com/product/isbn/9781439817667},
year = 2011,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {While theoretical statistics relies primarily on
mathematics and hypothetical situations, statistical
practice is a translation of a question formulated by
a researcher into a series of variables linked by a
statistical tool. As with written material, there are
almost always differences between the meaning of the
original text and translated text. Additionally, many
versions can be suggested, each with their advantages
and disadvantages. Analysis of Questionnaire Data with
R translates certain classic research questions into
statistical formulations. As indicated in the title,
the syntax of these statistical formulations is based
on the well-known R language, chosen for its
popularity, simplicity, and power of its
structure. Although syntax is vital, understanding the
semantics is the real challenge of any good
translation. In this book, the semantics of
theoretical-to-practical translation emerges
progressively from examples and experience, and
occasionally from mathematical
considerations. Sometimes the interpretation of a
result is not clear, and there is no statistical tool
really suited to the question at hand. Sometimes data
sets contain errors, inconsistencies between answers,
or missing data. More often, available statistical
tools are not formally appropriate for the given
situation, making it difficult to assess to what
extent this slight inadequacy affects the
interpretation of results. Analysis of Questionnaire
Data with R tackles these and other common challenges
in the practice of statistics.}
title = {Statistical Computing with {C++} and {R}},
author = {Eubank, Randall L.},
isbn = {978-1-4200-6650-0},
orderinfo = {crcpress.txt},
series = {Chapman \& Hall/CRC the R series},
url = {http://www.crcpress.com/product/isbn/9781420066500},
year = 2011,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {With the advancement of statistical methodology
inextricably linked to the use of computers, new
methodological ideas must be translated into usable
code and then numerically evaluated relative to
competing procedures. In response to this, Statistical
Computing in C++ and R concentrates on the writing of
code rather than the development and study of
numerical algorithms per se. The book discusses code
development in C++ and R and the use of these
symbiotic languages in unison. It emphasizes that each
offers distinct features that, when used in tandem,
can take code writing beyond what can be obtained from
either language alone. The text begins with some
basics of object-oriented languages, followed by a
``boot-camp'' on the use of C++ and R. The authors
then discuss code development for the solution of
specific computational problems that are relevant to
statistics including optimization, numerical linear
algebra, and random number generation. Later chapters
introduce abstract data structures (ADTs) and parallel
computing concepts. The appendices cover R and UNIX
Shell programming. The translation of a mathematical
problem into its computational analog (or analogs) is
a skill that must be learned, like any other, by
actively solving relevant problems. The text reveals
the basic principles of algorithmic thinking essential
to the modern statistician as well as the fundamental
skill of communicating with a computer through the use
of the computer languages C++ and R. The book lays the
foundation for original code development in a research setting.}
title = {The {R} Primer},
author = {Ekstrom, Claus Thorn},
isbn = {978-1-4398-6206-3},
orderinfo = {crcpress.txt},
url = {http://www.crcpress.com/product/isbn/9781439862063},
year = 2011,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Newcomers to R are often intimidated by the
command-line interface, the vast number of functions
and packages, or the processes of importing data and
performing a simple statistical analysis. The R Primer
provides a collection of concise examples and
solutions to R problems frequently encountered by new
users of this statistical software. Rather than
explore the many options available for every command
as well as the ever-increasing number of packages, the
book focuses on the basics of data preparation and
analysis and gives examples that can be used as a
starting point. The numerous examples illustrate a
specific situation, topic, or problem, including data
importing, data management, classical statistical
analyses, and high-quality graphics production. Each
example is self-contained and includes R code that can
be run exactly as shown, enabling results from the
book to be replicated. While base R is used
throughout, other functions or packages are listed if
they cover or extend the functionality. After working
through the examples found in this text, new users of
R will be able to better handle data analysis and
graphics applications in R. Additional topics and R
code are available from the book's supporting website
at www.statistics.life.ku.dk/primer.}
author = {Curran, James Michael},
title = {Introduction to Data Analysis with R for Forensic Scientists},
year = 2011,
publisher = {{CRC} Press},
address = {Boca Raton, {FL}},
isbn = 9781420088267,
lccn = {{HV8073} {.C79} 2011},
keywords = {Criminal investigation, Data processing, Forensic
sciences, Forensic statistics, R {(Computer} program
language), Statistical methods},
publisherurl = {http://www.crcpress.com/product/isbn/9781420088267}
author = {Christian P. Robert and George Casella},
title = {M{\'e}thodes de {Monte-Carlo} avec {R}},
pages = 256,
edition = {1st},
publisher = {Springer},
year = 2011,
series = {Pratique R},
isbn = {978-2-8178-0180-3},
note = {French translation of Introducing Monte Carlo Methods
with R},
abstract = {Les techniques informatiques de simulation sont
essentielles au statisticien. Afin que celui-ci puisse
les utiliser en vue de r{\'e}soudre des probl{\`e}mes
statistiques, il lui faut au pr{\'e}alable
d{\'e}velopper son intuition et sa capacit{\'e} {\`a}
produire lui-m{\^e}me des mod{\`e}les de
simulation. Ce livre adopte donc le point de vue du
programmeur pour exposer ces outils fondamentaux de
simulation stochastique. Il montre comment les
impl{\'e}menter sous R et donne les cl{\'e}s d'une
meilleure compr{\'e}hension des m{\'e}thodes
expos{\'e}es en vue de leur comparaison, sans
s'attarder trop longuement sur leur justification
th{\'e}orique. Les auteurs pr{\'e}sentent les
algorithmes de base pour la g{\'e}n{\'e}ration de
donn{\'e}es al{\'e}atoires, les techniques de
Monte-Carlo pour l'int{\'e}gration et l'optimisation,
les diagnostics de convergence, les cha{\^i}nes de
Markov, les algorithmes adaptatifs, les algorithmes de
Metropolis- Hastings et de Gibbs. Tous les chapitres
incluent des exercices. Les programmes R sont
disponibles dans un package sp{\'e}cifique. Le livre
s'adresse {\`a} toute personne que la simulation
statistique int{\'e}resse et n'exige aucune
connaissance pr{\'e}alable du langage R, ni aucune
expertise en statistique bay{\'e}sienne, bien que
nombre d'exercices rel{\`e}vent de ce champ
pr{\'e}cis. Cet ouvrage sera utile aux {\'e}tudiants
et aux professionnels actifs dans les domaines de la
statistique, des t{\'e}l{\'e}communications, de
l'{\'e}conom{\'e}trie, de la finance et bien d'autres domaines.}
title = {An {R} Companion to Linear Statistical Models},
author = {Christopher Hay-Jahans},
isbn = {978-1-4398-7365-6},
orderinfo = {crcpress.txt},
url = {http://www.crcpress.com/product/isbn/9781439873656},
year = 2011,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Focusing on user-developed programming, An R Companion
to Linear Statistical Models serves two audiences:
those who are familiar with the theory and
applications of linear statistical models and wish to
learn or enhance their skills in R; and those who are
enrolled in an R-based course on regression and
analysis of variance. For those who have never used R,
the book begins with a self-contained introduction to
R that lays the foundation for later chapters. This
book includes extensive and carefully explained
examples of how to write programs using the R
programming language. These examples cover methods
used for linear regression and designed experiments
with up to two fixed-effects factors, including
blocking variables and covariates. It also
demonstrates applications of several pre-packaged
functions for complex computational procedures.}
title = {Multivariate Generalized Linear Mixed Models Using R},
author = {Berridge, Damon M.},
isbn = {978-1-4398-1326-3},
orderinfo = {crcpress.txt},
url = {http://www.crcpress.com/product/isbn/9781439813263},
year = 2011,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Multivariate Generalized Linear Mixed Models Using R
presents robust and methodologically sound models for
analyzing large and complex data sets, enabling
readers to answer increasingly complex research
questions. The book applies the principles of modeling
to longitudinal data from panel and related studies
via the Sabre software package in R. The authors first
discuss members of the family of generalized linear
models, gradually adding complexity to the modeling
framework by incorporating random effects. After
reviewing the generalized linear model notation, they
illustrate a range of random effects models, including
three-level, multivariate, endpoint, event history,
and state dependence models. They estimate the
multivariate generalized linear mixed models (MGLMMs)
using either standard or adaptive Gaussian
quadrature. The authors also compare two-level fixed
and random effects linear models. The appendices
contain additional information on quadrature, model
estimation, and endogenous variables, along with
SabreR commands and examples. In medical and social
science research, MGLMMs help disentangle state
dependence from incidental parameters. Focusing on
these sophisticated data analysis techniques, this
book explains the statistical theory and modeling
involved in longitudinal studies. Many examples
throughout the text illustrate the analysis of
real-world data sets. Exercises, solutions, and other
material are available on a supporting website.}
author = {Shravan Vasishth and Michael Broe},
title = {{The Foundations of Statistics: A Simulation-based Approach}},
publisher = {Springer},
year = 2010,
isbn = {978-3-642-16312-8},
publisherurl = {https://www.springer.com/gp/book/9783642163128},
abstract = {Statistics and hypothesis testing are routinely used
in areas (such as linguistics) that are
traditionally not mathematically intensive. In such
fields, when faced with experimental data, many
students and researchers tend to rely on commercial
packages to carry out statistical data analysis,
often without understanding the logic of the
statistical tests they rely on. As a consequence,
results are often misinterpreted, and users have
difficulty in flexibly applying techniques relevant
to their own research --- they use whatever they
happen to have learned. A simple solution is to
teach the fundamental ideas of statistical
hypothesis testing without using too much
mathematics. This book provides a non-mathematical,
simulation-based introduction to basic statistical
concepts and encourages readers to try out the
simulations themselves using the source code and
data provided (the freely available programming
language R is used throughout). Since the code
presented in the text almost always requires the use
of previously introduced programming constructs,
diligent students also acquire basic programming
abilities in R. The book is intended for advanced
undergraduate and graduate students in any
discipline, although the focus is on linguistics,
psychology, and cognitive science. It is designed
for self-instruction, but it can also be used as a
textbook for a first course on statistics. Earlier
versions of the book have been used in undergraduate
and graduate courses in Europe and the US.},
orderinfo = {springer.txt}
author = {Robert A. Muenchen and Joseph M. Hilbe},
title = {{R} for {Stata} Users},
publisher = {Springer},
year = 2010,
series = {Statistics and Computing},
isbn = {978-1-4419-1317-3},
publisherurl = {https://www.springer.com/gp/book/9781441913173},
abstract = {This book shows you how to extend the power of Stata
through the use of R. It introduces R using Stata
terminology with which you are already familiar. It
steps through more than 30 programs written in both
languages, comparing and contrasting the two packages'
different approaches. When finished, you will be able
to use R in conjunction with Stata, or separately, to
import data, manage and transform it, create
publication quality graphics, and perform basic
statistical analyses.},
orderinfo = {springer.txt}
author = {Rob Kabacoff},
title = {{R} in Action},
publisher = {Manning},
url = {https://www.manning.com/books/r-in-action},
year = 2010,
abstract = {R in Action is the first book to present both the R
system and the use cases that make it such a
compelling package for business developers. The book
begins by introducing the R language, including the
development environment. As you work through various
examples illustrating R's features, you'll also get a
crash course in practical statistics, including basic
and advanced models for normal and non- normal data,
longitudinal and survival data, and a wide variety of
multivariate methods. Both data mining methodologies
and approaches to messy and incomplete data are included.}
author = {Pierre-Andr\'e Cornillon and Arnaud Guyader and Fran\c
cois Husson and Nicolas J\'egou and Julie Josse and
Maela Kloareg and Eric Matzner-Lober and Laurent Rouvi\`ere},
title = {Statistiques avec {R}},
publisher = {Presses Universitaires de Rennes},
year = 2010,
series = {Didact Statistiques},
isbn = {978-2-7535-1087-6},
edition = {2nd},
url = {http://www.pur-editions.fr/detail.php?idOuv=1836},
abstract = {Apr\`es seulement dix ans d'existence, le logiciel R
est devenu un outil incontournable de statistique et
de visualisation de donn\'ees tant dans le monde
universitaire que dans celui de l'entreprise. Ce
d\'eveloppement exceptionnel s'explique par ses trois
principales qualit\'es: il est gratuit, tr\`es complet
et en essor permanent. Ce livre s'articule en deux
grandes parties : la premi\`ere est centr\'ee sur le
fonctionnement du logiciel R tandis que la seconde met
en oeuvre une vingtaine de m\'ethodes statistiques au
travers de fiches. Ces fiches sont chacune bas\'ees
sur un exemple concret et balayent un large spectre de
techniques classiques en traitement de donn\'ees. Ce
livre s'adresse aux d\'ebutants comme aux utilisateurs
r\'eguliers de R. Il leur permettra de r\'ealiser
rapidement des graphiques et des traitements
statistiques simples ou \'elabor\'es. Pour cette
deuxi\`eme \'edition, le texte a \'et\'e r\'evis\'e et
augment\'e. Certaines fiches ont \'et\'e
compl\'et\'ees, d'autres utilisent de nouveaux
exemples. Enfin des fiches ont \'et\'e ajout\'ees
ainsi que quelques nouveaux exercices.}
author = {Pierre Lafaye de Micheaux and R{\'e}my Drouilhet and
Beno{\^i}t Liquet},
title = {Le Logiciel R. Ma{\^i}triser le langage, effectuer des
analyses statistiques},
publisher = {Springer, Collection Statistiques et Probabilit{\'e}s},
year = 2010,
pages = 490,
edition = {1st},
isbn = {9782817801148},
url = {http://www.biostatisticien.eu/springeR/},
abstract = {Ce livre est consacr{\'e} {\`a} un outil d{\'e}sormais
incontournable pour l'analyse de donn{\'e}es,
l'{\'e}laboration de graphiques et le calcul
statistique : le logiciel R. Apr{\`e}s avoir introduit
les principaux concepts permettant une utilisation
sereine de cet environnement informatique
(organisation des donn{\'e}es, importation et
exportation, acc{\`e}s {\`a} la documentation,
repr{\'e}sentations graphiques, programmation,
maintenance, etc.), les auteurs de cet ouvrage
d{\'e}taillent l'ensemble des manipulations permettant
la manipulation avec R d'un tr{\`e}s grand nombre de
m{\'e}thodes et de notions statistiques : simulation
de variables al{\'e}atoires, intervalles de confiance,
tests d'hypoth{\`e}ses, valeur-p, bootstrap,
r{\'e}gression lin{\'e}aire, ANOVA (y compris
r{\'e}p{\'e}t{\'e}es), et d'autres encore. {\'E}crit
avec un grand souci de p{\'e}dagogie et clart{\'e}, et
agr{\'e}ment{\'e} de nombreux exercices et travaux
pratiques, ce livre accompagnera id{\'e}alement tous
les utilisateurs de R -- et cela sur les
environnements Windows, Macintosh ou Linux -- qu'ils
soient d{\'e}butants ou d'un niveau avanc{\'e} :
{\'e}tudiants, enseignants ou chercheurs en
statistique, math{\'e}matiques, m{\'e}decine,
informatique, biologie, psychologie, sciences
infirmi{\`e}res, etc. Il leur permettra de
ma{\^i}triser en profondeur le fonctionnement de ce
logiciel. L'ouvrage sera aussi utile aux utilisateurs
plus confirm{\'e}s qui retrouveront expos{\'e} ici
l'ensemble des fonctions R les plus couramment
author = {Joseph Adler},
title = {{R} in a Nutshell [deutsche Ausgabe]},
edition = {1.},
year = 2010,
pages = 768,
publisher = {O'Reilly Verlag},
address = {K\"oln},
isbn = {978-3-89721-649-5},
publisherurl = {https://oreilly.de/produkt/r-in-a-nutshell/},
language = {de},
abstract = {Das Buch ist ein umfangreiches Handbuch und
Nachschlagewerk zu R. Es beschreibt die Installation
und Erweiterung der Software und gibt einen breiten
\"Uberblick \"uber die Programmiersprache. Anhand
unz\"ahliger Beispiele aus Medizin, Wirtschaft, Sport
und Bioinformatik behandelt es, wie Daten eingelesen,
transformiert und grafisch dargestellt werden. Anhand
realer Datens\"atze werden zahlreiche Methoden und
Verfahren der statistischen Datenanalyse mit R
demonstriert. Die Funktionsreferenz wurde f\"ur die
deutsche Ausgabe vollst\"andig neu verfasst.},
note = {Mit Funktions- und Datensatzreferenz; Begleitpaket
nutshellDE mit Beispieldaten und -code (auf der
Verlagsseite des Buchs).}
author = {John M. Quick},
title = {The Statistical Analysis with {R} Beginners Guide},
publisher = {Packt Publishing},
year = 2010,
isbn = {1849512086},
abstract = {The Statistical Analysis with R Beginners Guide will
take you on a journey as the strategist for an ancient
Chinese kingdom. Along the way, you will learn how to
use R to arrive at practical solutions and how to
effectively communicate your results. Ultimately, the
fate of the kingdom depends on your ability to make
informed, data-driven decisions with R.}
author = {Francois Husson and S\'ebastien L\^e and J\'er\^ome
title = {Exploratory Multivariate Analysis by Example Using {R}},
publisher = {Chapman \& Hall/CRC},
year = 2010,
url = {http://factominer.free.fr/book/},
series = {Computer Sciences and Data Analysis},
isbn = {978-1-4398-3580-7},
abstract = {Full of real-world case studies and practical advice,
Exploratory Multivariate Analysis by Example Using R
focuses on four fundamental methods of multivariate
exploratory data analysis that are most suitable for
applications. It covers principal component analysis
(PCA) when variables are quantitative, correspondence
analysis (CA) and multiple correspondence analysis
(MCA) when variables are categorical, and hierarchical
cluster analysis. The authors take a geometric point
of view that provides a unified vision for exploring
multivariate data tables. Within this framework, they
present the principles, indicators, and ways of
representing and visualizing objects that are common
to the exploratory methods. The authors show how to
use categorical variables in a PCA context in which
variables are quantitative, how to handle more than
two categorical variables in a CA context in which
there are originally two variables, and how to add
quantitative variables in an MCA context in which
variables are categorical. They also illustrate the
methods and the ways they can be exploited using
examples from various fields. Throughout the text,
each result correlates with an R command accessible in
the FactoMineR package developed by the authors. All
of the data sets and code are available at
\url{http://factominer.free.fr/book/}. By using the
theory, examples, and software presented in this book,
readers will be fully equipped to tackle real-life
multivariate data.},
orderinfo = {crcpress.txt}
author = {David Ruppert},
title = {Statistics and Data Analysis for Financial Engineering},
publisher = {Springer},
year = 2010,
series = {Use R!},
isbn = {978-1-4419-7786-1},
publisherurl = {https://www.springer.com/978-1-4419-7786-1},
abstract = {Financial engineers have access to enormous quantities
of data but need powerful methods for extracting
quantitative information, particularly about
volatility and risks. Key features of this textbook
are: illustration of concepts with financial markets
and economic data, R Labs with real-data exercises,
and integration of graphical and analytic methods for
modeling and diagnosing modeling errors. Despite some
overlap with the author's undergraduate textbook
Statistics and Finance: An Introduction, this book
differs from that earlier volume in several important
aspects: it is graduate-level; computations and
graphics are done in R; and many advanced topics are
covered, for example, multivariate distributions,
copulas, Bayesian computations, VaR and expected
shortfall, and cointegration. The prerequisites are
basic statistics and probability, matrices and linear
algebra, and calculus. Some exposure to finance is
orderinfo = {springer.txt}
author = {Christian Robert and George Casella},
title = {Introducing {Monte Carlo} Methods with {R}},
publisher = {Springer},
year = 2010,
series = {Use R},
isbn = {978-1-4419-1575-7},
publisherurl = {https://www.springer.com/978-1-4419-1575-7},
abstract = { Computational techniques based on simulation have now
become an essential part of the statistician's
toolbox. It is thus crucial to provide statisticians
with a practical understanding of those methods, and
there is no better way to develop intuition and skills
for simulation than to use simulation to solve
statistical problems. Introducing Monte Carlo Methods
with R covers the main tools used in statistical
simulation from a programmer's point of view,
explaining the R implementation of each simulation
technique and providing the output for better
understanding and comparison. While this book
constitutes a comprehensive treatment of simulation
methods, the theoretical justification of those
methods has been considerably reduced, compared with
Robert and Casella (2004). Similarly, the more
exploratory and less stable solutions are not covered
here. This book does not require a preliminary
exposure to the R programming language or to Monte
Carlo methods, nor an advanced mathematical
background. While many examples are set within a
Bayesian framework, advanced expertise in Bayesian
statistics is not required. The book covers basic
random generation algorithms, Monte Carlo techniques
for integration and optimization, convergence
diagnoses, Markov chain Monte Carlo methods, including
Metropolis-Hastings and Gibbs algorithms, and adaptive
algorithms. All chapters include exercises and all R
programs are available as an R package called
mcsm. The book appeals to anyone with a practical
interest in simulation methods but no previous
exposure. It is meant to be useful for students and
practitioners in areas such as statistics, signal
processing, communications engineering, control
theory, econometrics, finance and more. The
programming parts are introduced progressively to be
accessible to any reader.},
orderinfo = {springer.txt}
title = {Clinical Trial Data Analysis with {R}},
author = {Chen, Din},
isbn = {978-1-4398-4020-7},
series = {Chapman \& Hall/CRC Biostatistics series},
url = {https://www.taylorfrancis.com/books/clinical-trial-data-analysis-using-ding-geng-din-chen-karl-peace/10.1201/b10478},
year = 2010,
publisher = {Chapman \& Hall/CRC Press},
address = {Boca Raton, FL},
abstract = {Too often in biostatistical research and clinical
trials, a knowledge gap exists between developed
statistical methods and the applications of these
methods. Filling this gap, Clinical Trial Data
Analysis Using R provides a thorough presentation of
biostatistical analyses of clinical trial data and
shows step by step how to implement the statistical
methods using R. The book's practical, detailed
approach draws on the authors' 30 years of real-world
experience in biostatistical research and clinical
development. Each chapter presents examples of
clinical trials based on the authors' actual
experiences in clinical drug development. Various
biostatistical methods for analyzing the data are then
identified. The authors develop analysis code step by
step using appropriate R packages and functions. This
approach enables readers to gain an understanding of
the analysis methods and R implementation so that they
can use R to analyze their own clinical trial
data. With step-by-step illustrations of R
implementations, this book shows how to easily use R
to simulate and analyze data from a clinical trial. It
describes numerous up-to-date statistical methods and
offers sound guidance on the processes involved in
clinical trials.}
author = {Carlo Gaetan and Xavier Guyon},
title = {Spatial Statistics and Modeling},
publisher = {Springer},
year = 2010,
series = {Springer Series in Statistics},
isbn = {978-0-387-92256-0},
publisherurl = {https://www.springer.com/978-0-387-92256-0},
abstract = { Spatial statistics are useful in subjects as diverse
as climatology, ecology, economics, environmental and
earth sciences, epidemiology, image analysis and
more. This book covers the best-known spatial models
for three types of spatial data: geostatistical data
(stationarity, intrinsic models, variograms, spatial
regression and space-time models), areal data
(Gibbs-Markov fields and spatial auto-regression) and
point pattern data (Poisson, Cox, Gibbs and Markov
point processes). The level is relatively advanced,
and the presentation concise but complete. The most
important statistical methods and their asymptotic
properties are described, including estimation in
geostatistics, autocorrelation and second-order
statistics, maximum likelihood methods, approximate
inference using the pseudo-likelihood or Monte-Carlo
simulations, statistics for point processes and
Bayesian hierarchical models. A chapter is devoted to
Markov Chain Monte Carlo simulation (Gibbs sampler,
Metropolis-Hastings algorithms and exact simulation).
A large number of real examples are studied with R,
and each chapter ends with a set of theoretical and
applied exercises. While a foundation in probability
and mathematical statistics is assumed, three
appendices introduce some necessary background. The
book is accessible to senior undergraduate students
with a solid math background and Ph.D. students in
statistics. Furthermore, experienced statisticians and
researchers in the above-mentioned fields will find
the book valuable as a mathematically sound
reference. This book is the English translation of
Mod{\'e}lisation et Statistique Spatiales published by
Springer in the series Math{\'e}matiques \& Applications, a
series established by Soci{\'e}t{\'e} de Math{\'e}matiques
Appliqu{\'e}es et Industrielles (SMAI).},
orderinfo = {springer.txt}
author = {Andrew P. Robinson and Jeff D. Hamann},
title = {Forest Analytics with {R}},
publisher = {Springer},
year = 2010,
series = {Use R!},
isbn = {978-1-4419-7761-8},
publisherurl = {https://www.springer.com/978-1-4419-7761-8},
abstract = {Forest Analytics with R combines practical,
down-to-earth forestry data analysis and solutions to
real forest management challenges with
state-of-the-art statistical and data-handling
functionality. The authors adopt a problem-driven
approach, in which statistical and mathematical tools
are introduced in the context of the forestry problem
that they can help to resolve. All the tools are
introduced in the context of real forestry datasets,
which provide compelling examples of practical
applications. The modeling challenges covered within
the book include imputation and interpolation for
spatial data, fitting probability density functions to
tree measurement data using maximum likelihood,
fitting allometric functions using both linear and
non-linear least-squares regression, and fitting
growth models using both linear and non-linear
mixed-effects modeling. The coverage also includes
deploying and using forest growth models written in
compiled languages, analysis of natural resources and
forestry inventory data, and forest estate planning
and optimization using linear programming. The book
would be ideal for a one-semester class in forest
biometrics or applied statistics for natural resources
management. The text assumes no programming
background, some introductory statistics, and very
basic applied mathematics.},
orderinfo = {springer.txt}
editor = {Hrishikesh D. Vinod},
title = {Advances in Social Science Research Using {R}},
publisher = {Springer},
year = 2010,
series = {Lecture Notes in Statistics},
isbn = {978-1-4419-1763-8},
publisherurl = {https://www.springer.com/978-1-4419-1763-8},
abstract = {This book covers recent advances for quantitative
researchers with practical examples from social
sciences. The following twelve chapters written by
distinguished authors cover a wide range of
issues--all providing practical tools using the free R
software. McCullough: R can be used for reliable
statistical computing, whereas most statistical and
econometric software cannot. This is illustrated by
the effect of abortion on crime. Koenker: Additive
models provide a clever compromise between parametric
and non-parametric components illustrated by risk
factors for Indian malnutrition. Gelman: R graphics
in the context of voter participation in US elections.
Vinod: New solutions to the old problem of efficient
estimation despite autocorrelation and
heteroscedasticity among regression errors are
proposed and illustrated by the Phillips curve
tradeoff between inflation and unemployment. Markus
and Gu: New R tools for exploratory data analysis
including bubble plots. Vinod, Hsu and Tian: New R
tools for portfolio selection borrowed from computer
scientists and data-mining experts, relevant to anyone
with an investment portfolio. Foster and Kecojevic:
Extends the usual analysis of covariance (ANCOVA)
illustrated by growth charts for Saudi children.
Imai, Keele, Tingley, and Yamamoto: New R tools for
solving the age-old scientific problem of assessing
the direction and strength of causation. Their job
search illustration is of interest during current
times of high unemployment. Haupt, Schnurbus, and
Tschernig: consider the choice of functional form for
an unknown, potentially nonlinear relationship,
explaining a set of new R tools for model
visualization and validation. Rindskopf: R methods to
fit a multinomial based multivariate analysis of
variance (ANOVA) with examples from psychology,
sociology, political science, and medicine. Neath: R
tools for Bayesian posterior distributions to study
increased disease risk in proximity to a hazardous
waste site. Numatsi and Rengifo: explain persistent
discrete jumps in financial series subject to
orderinfo = {springer.txt}
author = {Victor Bloomfield},
title = {Computer Simulation and Data Analysis in Molecular
Biology and Biophysics: An Introduction Using {R}},
publisher = {Springer},
publisherurl = {https://www.springer.com/978-1-4419-0083-8},
year = 2009,
isbn = {978-1-4419-0083-8},
abstract = {This book provides an introduction, suitable for
advanced undergraduates and beginning graduate
students, to two important aspects of molecular
biology and biophysics: computer simulation and data
analysis. It introduces tools to enable readers to
learn and use fundamental methods for constructing
quantitative models of biological mechanisms, both
deterministic and with some elements of randomness,
including complex reaction equilibria and kinetics,
population models, and regulation of metabolism and
development; to understand how concepts of probability
can help in explaining important features of DNA
sequences; and to apply a useful set of statistical
methods to analysis of experimental data from
spectroscopic, genomic, and proteomic sources. These
quantitative tools are implemented using the free,
open source software program R. R provides an
excellent environment for general numerical and
statistical computing and graphics, with capabilities
similar to Matlab. Since R is increasingly used in
bioinformatics applications such as the BioConductor
project, it can serve students as their basic
quantitative, statistical, and graphics tool as they
develop their careers.}
author = {Uwe Ligges},
title = {Programmieren mit {R}},
year = 2009,
publisher = {Springer-Verlag},
address = {Heidelberg},
note = {In German},
isbn = {978-3-540-79997-9},
edition = {3rd},
url = {http://www.statistik.tu-dortmund.de/~ligges/PmitR/},
publisherurl = {https://www.springer.com/978-3-540-79997-9},
abstract = {R ist eine objekt-orientierte und interpretierte
Sprache und Programmierumgebung f\"ur Datenanalyse und
Grafik --- frei erh\"altlich unter der GPL. Das Buch
f\"uhrt in die Grundlagen der Sprache R ein und
vermittelt ein umfassendes Verst\"andnis der
Sprachstruktur. Die enormen Grafikf\"ahigkeiten von R
werden detailliert beschrieben. Der Leser kann leicht
eigene Methoden umsetzen, Objektklassen definieren und
ganze Pakete aus Funktionen und zugeh\"origer
Dokumentation zusammenstellen. Ob Diplomarbeit,
Forschungsprojekte oder Wirtschaftsdaten, das Buch
unterst\"utzt alle, die R als flexibles Werkzeug zur
Datenanalyse und -visualisierung einsetzen m\"ochten.},
language = {de}
author = {Stano Pekar and Marek Brabec},
title = {Moderni analyza biologickych dat. 1. Zobecnene
linearni modely v prostredi {R} [Modern Analysis of
Biological Data. 1. Generalised Linear Models in {R}]},
year = 2009,
publisher = {Scientia},
address = {Praha},
series = {Biologie dnes},
publisherurl = {http://www.scientia.cz/biologie/4373-moderni-analyza-biologickych-dat-zobecnene-linearni-modely-v-prostredi-r.html},
isbn = {978-80-86960-44-9},
note = {In Czech},
abstract = {Kniha je zamerena na regresni modely, konkretne
jednorozmerne zobecnene linearni modely (GLM). Je
urcena predevsim studentum a kolegum z biologickych
oboru a vyzaduje pouze zakladni statisticke vzdelani,
jakym je napr. jednosemestrovy kurz biostatistiky.
Text knihy obsahuje nezbytne minimum statisticke
teorie, predevsim vsak reseni 18 realnych prikladu z
oblasti biologie. Kazdy priklad je rozpracovan od
popisu a stanoveni cile pres vyvoj statistickeho
modelu az po zaver. K analyze dat je pouzit popularni
a volne dostupny statisticky software R. Priklady byly
zamerne vybrany tak, aby upozornily na lecktere
problemy a chyby, ktere se mohou v prubehu analyzy dat
vyskytnout. Zaroven maji ctenare motivovat k tomu, jak
o statistickych modelech premyslet a jak je
pouzivat. Reseni prikladu si muse ctenar vyzkouset sam
na datech, jez jsou dodavana spolu s knihou.},
language = {cz}
author = {Robert A. Muenchen},
title = {{R} for {SAS} and {SPSS} Users},
publisher = {Springer},
year = 2009,
series = {Springer Series in Statistics and Computing},
isbn = {978-1-4614-0685-3},
publisherurl = {https://www.springer.com/978-1-4614-0685-3},
abstract = {This book demonstrates which of the add-on packages
are most like SAS and SPSS and compares them to R's
built-in functions. It steps through over 30 programs
written in all three packages, comparing and
contrasting the packages' differing approaches. The
programs and practice datasets are available for download.},
orderinfo = {springer.txt}
author = {Richard M. Heiberger and Erich Neuwirth},
title = {R Through {Excel}},
publisher = {Springer},
year = 2009,
series = {Use R},
isbn = {978-1-4419-0051-7},
publisherurl = {https://www.springer.com/978-1-4419-0051-7},
abstract = {The primary focus of the book is on the use of menu
systems from the Excel menu bar into the capabilities
provided by R. The presentation is designed as a
computational supplement to introductory statistics
texts. The authors provide RExcel examples for most
topics in the introductory course. Data can be
transferred from Excel to R and back. The clickable
RExcel menu supplements the powerful R command
language. Results from the analyses in R can be
returned to the spreadsheet. Ordinary formulas in
spreadsheet cells can use functions written in R.},
orderinfo = {springer.txt}
author = {Peter D. Hoff},
title = {A First Course in Bayesian Statistical Methods},
publisher = {Springer},
year = 2009,
series = {Springer Series in Statistics for Social and
Behavioral Sciences},
isbn = {978-0-387-92299-7},
publisherurl = {https://www.springer.com/978-0-387-92299-7},
abstract = {This book provides a compact self-contained
introduction to the theory and application of Bayesian
statistical methods. The book is accessible to
readers with only a basic familiarity with
probability, yet allows more advanced readers to
quickly grasp the principles underlying Bayesian
theory and methods. R code is provided throughout the
text. Much of the example code can be run ``as is'' in
R, and essentially all of it can be run after
downloading the relevant datasets from the companion
website for this book. },
orderinfo = {springer.txt}
author = {Paul S. P. Cowpertwait and Andrew Metcalfe},
title = {Introductory Time Series with {R}},
publisher = {Springer},
year = 2009,
series = {Springer Series in Statistics},
isbn = {978-0-387-88697-8},
publisherurl = {https://www.springer.com/978-0-387-88697-8},
abstract = {This book gives you a step-by-step introduction to
analysing time series using the open source software
R. Once the model has been introduced it is used to
generate synthetic data, using R code, and these
generated data are then used to estimate its
parameters. This sequence confirms understanding of
both the model and the R routine for fitting it to the
data. Finally, the model is applied to an analysis of
a historical data set. By using R, the whole
procedure can be reproduced by the reader. All the
data sets used in the book are available on the
website \url{http://www.maths.adelaide.edu.au/emac2009/}.
The book is written for undergraduate students of
mathematics, economics, business and finance,
geography, engineering and related disciplines, and
postgraduate students who may need to analyze time
series as part of their taught program or their research.},
orderinfo = {springer.txt}
author = {Owen Jones and Robert Maillardet and Andrew Robinson},
title = {Introduction to Scientific Programming and Simulation
Using {R}},
publisher = {Chapman \& Hall/CRC},
year = 2009,
address = {Boca Raton, FL},
isbn = {978-1-4200-6872-6},
publisherurl = {https://www.taylorfrancis.com/books/introduction-scientific-programming-simulation-using-owen-jones-robert-maillardet-andrew-robinson/10.1201/9781420068740},
abstract = {This book teaches the skills needed to perform
scientific programming while also introducing
stochastic modelling. Stochastic modelling in
particular, and mathematical modelling in general, are
intimately linked to scientific programming because
the numerical techniques of scientific programming
enable the practical application of mathematical
models to real-world problems.}
author = {M. Henry H. Stevens},
title = {A Primer of Ecology with {R}},
publisher = {Springer},
year = 2009,
series = {Use R},
isbn = {978-0-387-89881-0},
publisherurl = {https://www.springer.com/978-0-387-89881-0},
abstract = {This book combines an introduction to the major
theoretical concepts in general ecology with the
programming language R, a cutting edge Open Source
tool. Starting with geometric growth and proceeding
through stability of multispecies interactions and
species-abundance distributions, this book demystifies
and explains fundamental ideas in population and
community ecology. Graduate students in ecology,
along with upper division undergraduates and faculty,
will all find this to be a useful overview of
important topics.},
orderinfo = {springer.txt}
author = {Kurt Varmuza and Peter Filzmoser},
title = {Introduction to Multivariate Statistical Analysis in Chemometrics},
publisher = {CRC Press},
address = {Boca Raton, FL},
year = 2009,
isbn = {9781420059472},
url = {http://cstat.tuwien.ac.at/filz/},
publisherurl = {https://www.routledge.com/Introduction-to-Multivariate-Statistical-Analysis-in-Chemometrics/Varmuza-Filzmoser/p/book/9781420059472},
abstract = {Using formal descriptions, graphical illustrations,
practical examples, and R software tools, Introduction
to Multivariate Statistical Analysis in Chemometrics
presents simple yet thorough explanations of the most
important multivariate statistical methods for
analyzing chemical data. It includes discussions of
various statistical methods, such as principal
component analysis, regression analysis,
classification methods, and clustering. Written by a
chemometrician and a statistician, the book reflects
both the practical approach of chemometrics and the
more formally oriented one of statistics. To enable a
better understanding of the statistical methods, the
authors apply them to real data examples from
chemistry. They also examine results of the different
methods, comparing traditional approaches with their
robust counterparts. In addition, the authors use the
freely available R package to implement methods,
encouraging readers to go through the examples and
adapt the procedures to their own problems. Focusing
on the practicality of the methods and the validity of
the results, this book offers concise mathematical
descriptions of many multivariate methods and employs
graphical schemes to visualize key concepts. It
effectively imparts a basic understanding of how to
apply statistical methods to multivariate scientific data.}
author = {Karl W. Broman and Saunak Sen},
title = {A Guide to QTL Mapping with R/qtl},
publisher = {Springer},
year = 2009,
series = {SBH/Statistics for Biology and Health},
isbn = {978-0-387-92124-2},
publisherurl = {https://www.springer.com/978-0-387-92124-2},
abstract = {This book is a comprehensive guide to the practice of
QTL mapping and the use of R/qtl, including study
design, data import and simulation, data diagnostics,
interval mapping and generalizations, two-dimensional
genome scans, and the consideration of complex
multiple-QTL models. Two moderately challenging case
studies illustrate QTL analysis in its entirety. The
book alternates between QTL mapping theory and
examples illustrating the use of R/qtl. Novice
readers will find detailed explanations of the
important statistical concepts and, through the
extensive software illustrations, will be able to
apply these concepts in their own research.
Experienced readers will find details on the
underlying algorithms and the implementation of
extensions to R/qtl. },
orderinfo = {springer.txt}
author = {Kai Velten},
title = {Mathematical Modeling and Simulation: Introduction for
Scientists and Engineers},
publisher = {Wiley-VCH},
year = 2009,
isbn = {978-3-527-40758-3},
publisherurl = {https://www.wiley.com/WileyCDA/WileyTitle/productCd-3527407588.html},
abstract = {This introduction into mathematical modeling and
simulation is exclusively based on open source
software, and it includes many examples from such
diverse fields as biology, ecology, economics,
medicine, agricultural, chemical, electrical,
mechanical, and process engineering. Requiring only
little mathematical prerequisite in calculus and
linear algebra, it is accessible to scientists,
engineers, and students at the undergraduate level.
The reader is introduced to CAELinux, Calc,
Code-Saturne, Maxima, R, and Salome-Meca, and the
entire book software --- including 3D CFD and
structural mechanics simulation software --- can be
used based on a free CAELinux-Live-DVD that is
available on the Internet (works on most machines and
operating systems).}
author = {Jim Albert},
title = {Bayesian Computation with {R}},
edition = {2nd},
publisher = {Springer},
year = 2009,
series = {Springer Series in Statistics},
isbn = {978-0-387-92298-0},
publisherurl = {https://www.springer.com/978-0-387-92298-0},
abstract = {Bayesian Computing Using R introduces Bayesian
modeling by the use of computation using the R
language. The early chapters present the basic tenets
of Bayesian thinking by use of familiar one and
two-parameter inferential problems. Bayesian
computational methods such as Laplace's method,
rejection sampling, and the SIR algorithm are
illustrated in the context of a random effects model.
The construction and implementation of Markov Chain
Monte Carlo (MCMC) methods is introduced. These
simulation-based algorithms are implemented for a
variety of Bayesian applications such as normal and
binary response regression, hierarchical modeling,
order-restricted inference, and robust modeling.
Algorithms written in R are used to develop Bayesian
tests and assess Bayesian models by use of the
posterior predictive distribution. The use of R to
interface with WinBUGS, a popular MCMC computing
language, is described with several illustrative
examples. The second edition contains several new
topics such as the use of mixtures of conjugate priors
and the use of Zellner's g priors to choose between
models in linear regression. There are more
illustrations of the construction of informative prior
distributions, such as the use of conditional means
priors and multivariate normal priors in binary
regressions. The new edition contains changes in the
R code illustrations according to the latest edition
of the LearnBayes package.},
orderinfo = {springer.txt}
author = {J. O. Ramsay and Giles Hooker and Spencer Graves},
title = {Functional Data Analysis with {R} and {Matlab}},
publisher = {Springer},
year = 2009,
series = {Use R},
isbn = {978-0-387-98184-0},
publisherurl = {https://www.springer.com/978-0-387-98184-0},
abstract = {This volume in the UseR! Series is aimed at a wide
range of readers, and especially those who would like
to apply these techniques to their research problems. It
complements Functional Data Analysis, Second Edition
and Applied Functional Data Analysis: Methods and Case
Studies by providing computer code in both the R and
Matlab languages for a set of data analyses that
showcase the functional data analysis. The authors
make it easy to get up and running in new applications
by adapting the code for the examples, and by being
able to access the details of key functions within
these pages. This book is accompanied by additional
web-based support at
\url{http://www.functionaldata.org} for applying
existing functions and developing new ones in either
language. },
orderinfo = {springer.txt}
author = {Hadley Wickham},
title = {ggplot2: Elegant Graphics for Data Analysis},
publisher = {Springer},
year = 2009,
series = {Use R},
isbn = {978-0-387-98140-6},
abstract = {This book will be useful to everyone who has struggled
with displaying their data in an informative and
attractive way. You will need some basic knowledge of
R (i.e., you should be able to get your data into R),
but ggplot2 is a mini-language specifically tailored
for producing graphics, and you'll learn everything
you need in the book. After reading this book you'll
be able to produce graphics customized precisely for
your problems, and you'll find it easy to get
graphics out of your head and on to the screen or page.},
orderinfo = {springer.txt}
author = {G{\"u}nther Sawitzki},
title = {Computational Statistics},
subtitle = {An Introduction to {R}},
address = {Boca Raton, FL},
publisher = {Chapman \& Hall/CRC Press},
year = 2009,
pages = {XIV + 251},
isbn = {978-1-4200-8678-2},
note = {Includes bibliographical references and index},
language = {eng},
publisherurl = {http://www.crcpress.com/product/isbn/9781420086782},
abstract = {Suitable for a compact course or self-study,
Computational Statistics: An Introduction to R
illustrates how to use the freely available R software
package for data analysis, statistical programming,
and graphics. Integrating R code and examples
throughout, the text only requires basic knowledge of
statistics and computing. This introduction covers
one-sample analysis and distribution diagnostics,
regression, two-sample problems and comparison of
distributions, and multivariate analysis. It uses a
range of examples to demonstrate how R can be employed
to tackle statistical problems. In addition, the
handy appendix includes a collection of R language
elements and functions, serving as a quick reference
and starting point to access the rich information that
comes bundled with R. Accessible to a broad audience,
this book explores key topics in data analysis,
regression, statistical distributions, and
multivariate statistics. Full of examples and with a
color insert, it helps readers become familiar with R.}
author = {Giovanni Petris and Sonia Petrone and Patriza
title = {Dynamic Linear Models with {R}},
publisher = {Springer},
year = 2009,
series = {Use R},
isbn = {978-0-387-77238-7},
publisherurl = {https://www.springer.com/978-0-387-77238-7},
abstract = {After a detailed introduction to general state space
models, this book focuses on dynamic linear models,
emphasizing their Bayesian analysis. Whenever
possible it is shown how to compute estimates and
forecasts in closed form; for more complex models,
simulation techniques are used. A final chapter
covers modern sequential Monte Carlo algorithms. The
book illustrates all the fundamental steps needed to
use dynamic linear models in practice, using R. Many
detailed examples based on real data sets are provided
to show how to set up a specific model, estimate its
parameters, and use it for forecasting. All the code
used in the book is available online. No prior
knowledge of Bayesian statistics or time series
analysis is required, although familiarity with basic
statistics and R is assumed.},
orderinfo = {springer.txt}
author = {Gael Millot},
title = {Comprendre et r\'{e}aliser les tests statistiques
\`{a} l'aide de {R}},
year = 2009,
publisher = {de boeck universit\'{e}},
address = {Louvain-la-Neuve, Belgique},
isbn = {2804101797},
pages = 704,
edition = {1st},
url = {http://perso.curie.fr/Gael.Millot/Publications_livre.htm},
language = {fr},
abstract = {Ce livre s'adresse aux \'{e}tudiants, m\'{e}decins et
chercheurs d\'{e}sirant r\'{e}aliser des tests alors
qu'ils d\'{e}butent en statistique. Son
originalit\'{e} est de proposer non seulement une
explication tr\`{e}s d\'{e}taill\'{e}e sur
l'utilisation des tests les plus classiques, mais
aussi la possibilit\'{e} de r\'{e}aliser ces tests
\`{a} l'aide de R. Illustr\'{e} par de nombreuses
figures et accompagn\'{e} d'exercices avec correction,
l'ouvrage traite en profondeur de notions essentielles
comme la check-list \`{a} effectuer avant de
r\'{e}aliser un test, la gestion des individus
extr\^{e}mes, l'origine de la p value, la puissance ou
la conclusion d'un test. Il explique comment choisir
un test \`{a} partir de ses propres donn\'{e}es. Il
d\'{e}crit 35 tests statistiques sous forme de fiches,
dont 24 non param\'{e}triques, ce qui couvre la
plupart des tests \`{a} une ou deux variables
observ\'{e}es. Il traite de toutes les subtilit\'{e}s
des tests, comme les corrections de continuit\'{e},
les corrections de Welch pour le test t et l'anova, ou
les corrections de p value lors des comparaisons
multiples. Il propose un exemple d'application de
chaque test \`{a} l'aide de R, en incluant toutes les
\'{e}tapes du test, et notamment l'analyse graphique
des donn\'{e}es. En r\'{e}sum\'{e}, cet ouvrage
devrait contenter \`{a} la fois ceux qui recherchent
un manuel de statistique expliquant le fonctionnement
des tests et ceux qui recherchent un manuel
d'utilisation de R.}
author = {Francois Husson and S\'ebastien L\^e and J\'er\^ome
title = {Analyse de donn{\'e}es avec {R}},
publisher = {Presses Universitaires de Rennes},
year = 2009,
url = {http://factominer.free.fr/book/},
series = {Didact Statistiques},
isbn = {978-2-7535-0938-2},
publisherurl = {http://www.pur-editions.fr/detail.php?idOuv=2166},
abstract = {Ce livre est focalis\'e sur les quatre m\'ethodes
fondamentales de l'analyse des donn\'ees, celles qui
ont le plus vaste potentiel d'application : analyse en
composantes principales, analyse factorielle des
correspondances, analyse des correspondances multiples
et classification ascendante hi\'erarchique. La plus
grande place accord\'ee aux m\'ethodes factorielles
tient d'une part aux concepts plus nombreux et plus
complexes n\'ecessaires \`a leur bonne utilisation et
d'autre part au fait que c'est \`a travers elles que
sont abord\'ees les sp\'ecificit\'es des diff\'erents
types de donn\'ees. Pour chaque m\'ethode, la
d\'emarche adopt\'ee est la m\^eme. Un exemple permet
d'introduire la probl\'ematique et concr\'etise
presque pas \`a pas les \'el\'ements th\'eoriques. Cet
expos\'e est suivi de plusieurs exemples trait\'es de
fa\c con d\'etaill\'ee pour illustrer l'apport de la
m\'ethode dans les applications. Tout le long du
texte, chaque r\'esultat est accompagn\'e de la
commande R qui permet de l'obtenir. Toutes ces
commandes sont accessibles \`a partir de FactoMineR,
package R d\'evelopp\'e par les auteurs. Ainsi, avec
cet ouvrage, le lecteur dispose d'un \'equipement
complet (bases th\'eoriques, exemples, logiciels) pour
analyser des donn\'ees multidimensionnelles.}
author = {Ewout W. Steyerberg},
title = {Clinical Prediction Models: A Practical Approach to
Development, Validation, and Updating},
publisher = {Springer},
year = 2009,
series = {SBH/Statistics for Biology and Health},
isbn = {978-0-387-77243-1},
publisherurl = {https://www.springer.com/978-0-387-77243-1},
abstract = {This book provides insight and practical illustrations
on how modern statistical concepts and regression
methods can be applied in medical prediction problems,
including diagnostic and prognostic outcomes. Many
advances have been made in statistical approaches
towards outcome prediction, but these innovations are
insufficiently applied in medical research.
Old-fashioned, data hungry methods are often used in
data sets of limited size, validation of predictions
is not done or done simplistically, and updating of
previously developed models is not considered. A
sensible strategy is needed for model development,
validation, and updating, such that prediction models
can better support medical practice. Clinical
prediction models presents a practical checklist with
seven steps that need to be considered for development
of a valid prediction model. These include preliminary
considerations such as dealing with missing values;
coding of predictors; selection of main effects and
interactions for a multivariable model; estimation of
model parameters with shrinkage methods and
incorporation of external data; evaluation of
performance and usefulness; internal validation; and
presentation formats. The steps are illustrated with
many small case-studies and R code, with data sets
made available in the public domain. The book further
focuses on generalizability of prediction models,
including patterns of invalidity that may be
encountered in new settings, approaches to updating of
a model, and comparisons of centers after case-mix
adjustment by a prediction model. The text is
primarily intended for clinical epidemiologists and
biostatisticians. It can be used as a textbook for a
graduate course on predictive modeling in diagnosis
and prognosis. It is beneficial if readers are
familiar with common statistical models in medicine:
linear regression, logistic regression, and Cox
regression. The book is practical in nature. But it
provides a philosophical perspective on data analysis
in medicine that goes beyond predictive modeling. In
this era of evidence-based medicine, randomized
clinical trials are the basis for assessment of
treatment efficacy. Prediction models are key to
individualizing diagnostic and treatment decision making.},
orderinfo = {springer.txt}
author = {Detlev Reymann},
title = {{Wettbewerbsanalysen f\"ur kleine und mittlere
Unternehmen (KMUs) --- Theoretische Grundlagen und
praktische Anwendung am Beispiel gartenbaulicher
publisher = {Verlag Detlev Reymann},
address = {Geisenheim},
isbn = {978-3-00-027013-0},
abstract = {In diesem Buch werden die Grundlagen wesentlicher
Komponenten von unternehmens- und
konkurrentenbezogenen Wettbewerbsanalysen
dargestellt. Dabei stehen folgende Teilanalysen im
Mittelpunkt: Die Analyse des Einzugsgebietes; die
Ermittlung des Marktpotentials und des Marktanteiles;
die Ermittlung der St\"arken und Schw\"achen im
Verh\"altnis zur Konkurrenz; die Analyse der
Kundenstruktur (Kundentypologisierung). Zu jeder der
Teilanalysen werden nach der Darstellung der
theoretischen Grundlagen Hinweise und Anleitungen zur
praktischen Umsetzung und Durchf\"uhrung gegeben und
jeweils eine vertiefende Betrachtung angeschlossen.
Das Buch zielt insbesondere auf kleine und mittlere
Unternehmen (KMUs) ab, in denen keine gro\ss{}en
spezialisierten Marketingabteilungen existieren.
Verwendet werden Verfahren, bei denen sich zum einen
der zeitliche Aufwand f\"ur die Durchf\"uhrung in
vertretbaren Grenzen h\"alt, zum anderen Analysen, die
mit Hilfe von frei verf\"ugbarer Software oder frei
verf\"ugbaren Daten durchzuf\"uhren sind. F\"ur den
Statistikteil werden R-Skripte verwendet, die alle
frei von der Webseite des Autors heruntergeladen
werden k\"onnen. Es handelt sich dabei um Skripte zur
Berechnung des breaking-points nach Converse, zur
Berechnung der Einkaufswahrscheinlichkeit nach Huff
und zur Erstellung von Profildiagrammen im Rahmen von
SWOT-Analysen sowie von Imageprofilen. Im Kapitel zur
Kundentypologisierung wird die Durchf\"uhrung von
Cluster- und Faktoranalysen zur Typologisierung
erl\"autert und der Anhang gibt Hinweise zur
Installation und zum Einsatz von R f\"ur die
beschriebenen Analysen.},
language = {de},
publisherurl = {http://www.reymann.eu/index.php?option=com_content&task=view&id=10&Itemid=13},
url = {http://www.reymann.org/},
year = 2009
author = {Daniel B. Wright and Kamala London},
title = {Modern Regression Techniques Using {R}: A Practical
publisher = {SAGE},
year = 2009,
address = {London, UK},
isbn = {9781847879035},
publisherurl = {https://uk.sagepub.com/en-gb/eur/modern-regression-techniques-using-r/book233198},
abstract = {Techniques covered in this book include multilevel
modeling, ANOVA and ANCOVA, path analysis, mediation
and moderation, logistic regression (generalized
linear models), generalized additive models, and
robust methods. These are all tested out using a
range of real research examples conducted by the
authors in every chapter, and datasets are available
from the book's web page at
The authors are donating all royalties from the book
to the American Partnership for Eosinophilic Disorders.}
author = {Christian Ritz and Jens C. Streibig},
title = {Nonlinear Regression with R},
publisher = {Springer},
year = 2009,
address = {New York},
isbn = {978-0-387-09615-5},
publisherurl = {https://www.springer.com/978-0-387-09615-5},
abstract = {R is a rapidly evolving lingua franca of graphical
display and statistical analysis of experiments from
the applied sciences. Currently, R offers a wide
range of functionality for nonlinear regression
analysis, but the relevant functions, packages and
documentation are scattered across the R environment.
This book provides a coherent and unified treatment of
nonlinear regression with R by means of examples from
a diversity of applied sciences such as biology,
chemistry, engineering, medicine and toxicology. The
book starts out giving a basic introduction to fitting
nonlinear regression models in R. Subsequent chapters
explain the salient features of the main fitting
function nls(), the use of model diagnostics, how to
deal with various model departures, and how to carry out
hypothesis testing. In the final chapter grouped-data
structures, including an example of a nonlinear
mixed-effects regression model, are considered.},
orderinfo = {springer.txt}
author = {Andrea S. Foulkes},
title = {Applied Statistical Genetics with {R}: For
Population-Based Association Studies},
publisher = {Springer},
year = 2009,
series = {Use R},
isbn = {978-0-387-89554-3},
publisherurl = {https://www.springer.com/978-0-387-89554-3},
abstract = {In this introductory graduate level text, Dr.~Foulkes
elucidates core concepts that undergird the wide range
of analytic techniques and software tools for the
analysis of data derived from population-based genetic
investigations. Applied Statistical Genetics with R
offers a clear and cogent presentation of several
fundamental statistical approaches that researchers
from multiple disciplines, including medicine, public
health, epidemiology, statistics and computer science,
will find useful in exploring this emerging field.},
orderinfo = {springer.txt}
author = {Alain Zuur and Elena N. Ieno and Neil Walker and
Anatoly A. Saveiliev and Graham M. Smith},
title = {Mixed Effects Models and Extensions in Ecology with {R}},
publisher = {Springer},
year = 2009,
address = {New York},
isbn = {978-0-387-87457-9},
publisherurl = {https://www.springer.com/978-0-387-87457-9},
abstract = {Building on the successful Analysing Ecological Data
(2007) by Zuur, Ieno and Smith, the authors now
provide an expanded introduction to using regression
and its extensions in analysing ecological data. As
with the earlier book, real data sets from
postgraduate ecological studies or research projects
are used throughout. The first part of the book is a
largely non-mathematical introduction to linear mixed
effects modelling, GLM and GAM, zero inflated models,
GEE, GLMM and GAMM. The second part provides ten case
studies that range from koalas to deep sea research.
These chapters provide an invaluable insight into
analysing complex ecological datasets, including
comparisons of different approaches to the same
problem. By matching ecological questions and data
structure to a case study, these chapters provide an
excellent starting point to analysing your own data.
Data and R code from all chapters are available from
orderinfo = {springer.txt}
author = {Alain F. Zuur and Elena N. Ieno and Erik Meesters},
title = {A Beginner's Guide to {R}},
publisher = {Springer},
year = 2009,
series = {Use R},
isbn = {978-0-387-93836-3},
publisherurl = {https://www.springer.com/978-0-387-93836-3},
abstract = {Based on their extensive experience with teaching R
and statistics to applied scientists, the authors
provide a beginner's guide to R. To avoid the
difficulty of teaching R and statistics at the same
time, statistical methods are kept to a minimum. The
text covers how to download and install R, import and
manage data, elementary plotting, an introduction to
functions, advanced plotting, and common beginner
mistakes. This book contains everything you need to
know to get started with R. },
orderinfo = {springer.txt}
author = {Stefano M. Iacus},
title = {Simulation and Inference for Stochastic Differential
Equations: With {R} Examples},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-75838-1},
publisherurl = {https://www.springer.com/978-0-387-75838-1},
abstract = {This book is very different from any other publication
in the field and it is unique because of its focus on
the practical implementation of the simulation and
estimation methods presented. The book should be
useful to practitioners and students with minimal
mathematical background, but because of the many R
programs, probably also to many mathematically well
educated practitioners. Many of the methods presented
in the book have, so far, not been used much in
practice because of the lack of an implementation in a
unified framework. This book fills the gap. With the
R code included in this book, a lot of useful methods
become easy to use for practitioners and students. An
R package called `sde' provides functions with easy
interfaces ready to be used on empirical data from
real life applications. Although it contains a wide
range of results, the book has an introductory
character and necessarily does not cover the whole
spectrum of simulation and inference for general
stochastic differential equations. The book is
organized in four chapters. The first one introduces
the subject and presents several classes of processes
used in many fields of mathematics, computational
biology, finance and the social sciences. The second
chapter is devoted to simulation schemes and covers
new methods not available in other milestone
publications known so far. The third one is focused on
parametric estimation techniques. In particular, it
includes exact likelihood inference, approximated and
pseudo-likelihood methods, estimating functions,
generalized method of moments and other techniques.
The last chapter contains miscellaneous topics like
nonparametric estimation, model identification and
change point estimation. Readers who are not experts in the R
language will find a concise introduction to this
environment, focused on the subject of the book, which
should allow for instant use of the proposed material.
For each R function presented in the book, a
documentation page is available at the end of the book.},
orderinfo = {springer.txt}
author = {Simon Sheather},
title = {A Modern Approach to Regression with {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-09607-0},
publisherurl = {https://www.springer.com/978-0-387-09607-0},
abstract = {A Modern Approach to Regression with R focuses on
tools and techniques for building regression models
using real-world data and assessing their
validity. When weaknesses in the model are identified,
the next step is to address each of these
weaknesses. A key theme throughout the book is that it
makes sense to base inferences or conclusions only on
valid models. The regression output and plots that
appear throughout the book have been generated using
R. On the book website you will find the R code used
in each example in the text. You will also find
SAS code and STATA code to produce the equivalent
output on the book website. Primers containing
expanded explanations of R, SAS and STATA and their
use in this book are also available on the book
website. The book contains a number of new real data
sets from applications ranging from rating
restaurants, rating wines, predicting newspaper
circulation and magazine revenue, comparing the
performance of NFL kickers, and comparing finalists in
the Miss America pageant across states. One of the
aspects of the book that sets it apart from many other
regression books is that complete details are provided
for each example. The book is aimed at first year
graduate students in statistics and could also be used
for a senior undergraduate class.},
orderinfo = {springer.txt}
author = {Sarkar, Deepayan},
title = {Lattice: {M}ultivariate Data Visualization with {R}},
publisher = {Springer},
url = {http://lmdvr.r-forge.r-project.org},
year = 2008,
address = {New York},
isbn = {978-0-387-75968-5},
publisherurl = {https://www.springer.com/978-0-387-75968-5},
abstract = {R is rapidly growing in popularity as the environment
of choice for data analysis and graphics both in
academia and industry. Lattice brings the proven
design of Trellis graphics (originally developed for S
by William S. Cleveland and colleagues at Bell Labs)
to R, considerably expanding its capabilities in the
process. Lattice is a powerful and elegant high level
data visualization system that is sufficient for most
everyday graphics needs, yet flexible enough to be
easily extended to handle demands of cutting edge
research. Written by the author of the lattice
system, this book describes it in considerable depth,
beginning with the essentials and systematically
delving into specific low-level details as necessary.
No prior experience with lattice is required to read
the book, although basic familiarity with R is
assumed. The book contains close to 150 figures
produced with lattice. Many of the examples emphasize
principles of good graphical design; almost all use
real data sets that are publicly available in various
R packages. All code and figures in the book are also
available online, along with supplementary material
covering more advanced topics.},
orderinfo = {springer.txt}
author = {Roger S. Bivand and Edzer J. Pebesma and Virgilio
title = {Applied Spatial Data Analysis with {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-78170-9},
publisherurl = {https://www.springer.com/978-0-387-78170-9},
abstract = {Applied Spatial Data Analysis with R is divided into
two basic parts, the first presenting R packages,
functions, classes and methods for handling spatial
data. This part is of interest to users who need to
access and visualise spatial data. Data import and
export for many file formats for spatial data are
covered in detail, as is the interface between R and
the open source GRASS GIS. The second part showcases
more specialised kinds of spatial data analysis,
including spatial point pattern analysis,
interpolation and geostatistics, areal data analysis
and disease mapping. The coverage of methods of
spatial data analysis ranges from standard techniques
to new developments, and the examples used are largely
taken from the spatial statistics literature. All the
examples can be run using R contributed packages
available from the CRAN website, with code and
additional data sets from the book's own website.
This book will be of interest to researchers who
intend to use R to handle, visualise, and analyse
spatial data. It will also be of interest to spatial
data analysts who do not use R, but who are interested
in practical aspects of implementing software for
spatial data analysis. It is a suitable companion
book for introductory spatial statistics courses and
for applied methods courses in a wide range of
subjects using spatial data, including human and
physical geography, geographical information systems,
the environmental sciences, ecology, public health and
disease control, economics, public administration and
political science. The book has a website where
coloured figures, complete code examples, data sets,
and other support material may be found:
orderinfo = {springer.txt}
author = {Roger D. Peng and Francesca Dominici},
title = { Statistical Methods for Environmental Epidemiology
with {R}: A Case Study in Air Pollution and Health },
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-78166-2},
publisherurl = {https://www.springer.com/978-0-387-78166-2},
abstract = {Advances in statistical methodology and computing have
played an important role in allowing researchers to
more accurately assess the health effects of ambient
air pollution. The methods and software developed in
this area are applicable to a wide array of problems
in environmental epidemiology. This book provides an
overview of the methods used for investigating the
health effects of air pollution and gives examples and
case studies in R which demonstrate the application of
those methods to real data. The book will be useful
to statisticians, epidemiologists, and graduate
students working in the area of air pollution and
health and others analyzing similar data. The authors
describe the different existing approaches to
statistical modeling and cover basic aspects of
analyzing and understanding air pollution and health
data. The case studies in each chapter demonstrate
how to use R to apply and interpret different
statistical models and to explore the effects of
potential confounding factors. A working knowledge of
R and regression modeling is assumed. In-depth
knowledge of R programming is not required to
understand and run the examples. Researchers in this
area will find the book useful as a ``live''
reference. Software for all of the analyses in the
book is downloadable from the web and is available
under a Free Software license. The reader is free to
run the examples in the book and modify the code to
suit their needs. In addition to providing the
software for developing the statistical models, the
authors provide the entire database from the National
Morbidity, Mortality, and Air Pollution Study (NMMAPS)
in a convenient R package. With the database, readers
can run the examples and experiment with their own
methods and ideas.},
orderinfo = {springer.txt}
author = {Robert Gentleman},
title = {Bioinformatics with {R}},
publisher = {Chapman \& Hall/CRC},
year = 2008,
address = {Boca Raton, FL},
isbn = {1-420-06367-7}
author = {Robert Gentleman},
title = {{R} Programming for Bioinformatics},
publisher = {Chapman \& Hall/CRC},
year = 2008,
series = {Computer Science \& Data Analysis},
address = {Boca Raton, FL},
isbn = {9781420063677},
url = {http://master.bioconductor.org/help/publications/books/r-programming-for-bioinformatics/},
publisherurl = {http://www.crcpress.com/product/isbn/9781420063677},
abstract = {Thanks to its data handling and modeling capabilities
and its flexibility, R is becoming the most widely
used software in bioinformatics. R Programming for
Bioinformatics builds the programming skills needed to
use R for solving bioinformatics and computational
biology problems. Drawing on the author's experiences
as an R expert, the book begins with coverage on the
general properties of the R language, several unique
programming aspects of R, and object-oriented
programming in R. It presents methods for data input
and output as well as database interactions. The
author also examines different facets of string
handling and manipulations, discusses the interfacing
of R with other languages, and describes how to write
software packages. He concludes with a discussion on
the debugging and profiling of R code.},
orderinfo = {crcpress.txt}
author = {Phil Spector},
title = {Data Manipulation with {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-74730-9},
publisherurl = {https://www.springer.com/978-0-387-74730-9},
abstract = {Since its inception, R has become one of the
preeminent programs for statistical computing and data
analysis. The ready availability of the program,
along with a wide variety of packages and the
supportive R community make R an excellent choice for
almost any kind of computing task related to
statistics. However, many users, especially those
with experience in other languages, do not take
advantage of the full power of R. Because of the
nature of R, solutions that make sense in other
languages may not be very efficient in R. This book
presents a wide array of methods applicable for
reading data into R, and efficiently manipulating that
data. In addition to the built-in functions, a number
of readily available packages from CRAN (the
Comprehensive R Archive Network) are also covered.
All of the methods presented take advantage of the
core features of R: vectorization, efficient use of
subscripting, and the proper use of the varied
functions in R that are provided for common data
management tasks. Most experienced R users discover
that, especially when working with large data sets, it
may be helpful to use other programs, notably
databases, in conjunction with R. Accordingly, the
use of databases in R is covered in detail, along with
methods for extracting data from spreadsheets and
datasets created by other programs. Character
manipulation, while sometimes overlooked within R, is
also covered in detail, allowing problems that are
traditionally solved by scripting languages to be
carried out entirely within R. For users with
experience in other languages, guidelines for the
effective use of programming constructs like loops are
provided. Since many statistical modeling and
graphics functions need their data presented in a data
frame, techniques for converting the output of
commonly used functions to data frames are provided
throughout the book. Using a variety of examples
based on data sets included with R, along with easily
simulated data sets, the book is recommended to anyone
using R who wishes to advance from simple examples to
practical real-life data manipulation solutions.},
orderinfo = {springer.txt}
author = {Pfaff, Bernhard},
title = {Analysis of Integrated and Cointegrated Time Series
with {R}, Second Edition},
edition = {2nd},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-75966-1},
publisherurl = {https://www.springer.com/978-0-387-75966-1},
abstract = {The analysis of integrated and co-integrated time
series can be considered as the main methodology
employed in applied econometrics. This book not only
introduces the reader to this topic but enables him to
conduct the various unit root tests and co-integration
methods on his own by utilizing the free statistical
programming environment R. The book encompasses
seasonal unit roots, fractional integration, coping
with structural breaks, and multivariate time series
models. The book is enriched by numerous programming
examples to artificial and real data so that it is
ideally suited as an accompanying text book to
computer lab classes. The second edition adds a
discussion of vector auto-regressive, structural
vector auto-regressive, and structural vector
error-correction models. To analyze the interactions
between the investigated variables, further impulse
response function and forecast error variance
decompositions are introduced as well as forecasting.
The author explains how these model types relate to
each other. Bernhard Pfaff studied economics at the
universities of G{\"o}ttingen, Germany; Davis,
California; and Freiburg im Breisgau, Germany. He
obtained a diploma and a doctorate degree at the
economics department of the latter entity where he was
employed as a research and teaching assistant. He has
worked for many years as economist and quantitative
analyst in research departments of financial
institutions and he is the author and maintainer of
the contributed R packages ``urca'' and ``vars.''},
orderinfo = {springer.txt}
author = {Peter Dalgaard},
title = {Introductory Statistics with {R}},
edition = {2nd},
year = 2008,
publisher = {Springer},
isbn = {978-0-387-79053-4},
pages = 380,
publisherurl = {https://www.springer.com/gp/book/9780387790534},
orderinfo = {springer.txt},
abstract = {This book provides an elementary-level introduction to
R, targeting both non-statistician scientists in
various fields and students of statistics. The main
mode of presentation is via code examples with liberal
commenting of the code and the output, from the
computational as well as the statistical viewpoint. A
supplementary R package can be downloaded and contains
the data sets. The statistical methodology includes
statistical standard distributions, one- and
two-sample tests with continuous data, regression
analysis, one- and two-way analysis of variance,
analysis of tabular data, and
sample size calculations. In addition, the last six
chapters contain introductions to multiple linear
regression analysis, linear models in general,
logistic regression, survival analysis, Poisson
regression, and nonlinear regression.}
author = {Maria L. Rizzo},
title = {Statistical Computing with {R}},
publisher = {Chapman \& Hall/CRC},
year = 2008,
address = {Boca Raton, FL},
isbn = {9781584885450},
abstract = {This book covers the traditional core material of
computational statistics, with an emphasis on using
the R language via an examples-based approach.
Suitable for an introductory course in computational
statistics or for self-study, it includes R code for
all examples and R notes to help explain the R
programming concepts.},
orderinfo = {crcpress.txt}
author = {Keele, Luke},
title = {Semiparametric Regression for the Social Sciences},
publisher = {Wiley},
address = {Chichester, UK},
year = 2008,
isbn = {978-0470319918},
url = {http://lukekeele.com/},
publisherurl = {https://www.wiley.com/WileyCDA/WileyTitle/productCd-0470319917.html},
abstract = {Smoothing methods have been little used within the
social sciences. Semiparametric Regression for the
Social Sciences sets out to address this situation by
providing an accessible introduction to the subject,
filled with examples drawn from the social and
political sciences. Readers are introduced to the
principles of nonparametric smoothing and to a wide
variety of smoothing methods. The author also explains
how smoothing methods can be incorporated into
parametric linear and generalized linear models. The
use of smoothers with these standard statistical
models allows the estimation of more flexible
functional forms whilst retaining the interpretability
of parametric models. The full potential of these
techniques is highlighted via the use of detailed
empirical examples drawn from the social and political
sciences. Each chapter features exercises to aid in
the understanding of the methods and applications.
All examples in the book were estimated in R. The
book contains an appendix with R commands to introduce
readers to estimating these models in R. All the R
code for the examples in the book is available from
the author's website and the publisher's website.}
author = {Jonathan D. Cryer and Kung-Sik Chan},
title = {Time Series Analysis With Applications in {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-75958-6},
publisherurl = {https://www.springer.com/978-0-387-75958-6},
abstract = {Time Series Analysis With Applications in R, Second
Edition, presents an accessible approach to
understanding time series models and their
applications. Although the emphasis is on time domain
ARIMA models and their analysis, the new edition
devotes two chapters to the frequency domain and three
to time series regression models, models for
heteroscedasticity, and threshold models. All of the
ideas and methods are illustrated with both real and
simulated data sets. A unique feature of this edition
is its integration with the R computing environment.
The tables and graphical displays are accompanied by
the R commands used to produce them. An extensive R
package, TSA, which contains many new or revised R
functions and all of the data used in the book,
accompanies the written text. Script files of R
commands for each chapter are available for download.
There is also an extensive appendix in the book that
leads the reader through the use of R commands and the
new R package to carry out the analyses.},
orderinfo = {springer.txt}
author = {John M. Chambers},
title = {Software for Data Analysis: Programming with {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-75935-7},
url = {http://statweb.stanford.edu/~jmc4/Rbook/},
publisherurl = {https://www.springer.com/gp/book/9780387759357},
abstract = {The R version of S4 and other R techniques. This book
guides the reader in programming with R, from
interactive use and writing simple functions to the
design of R packages and intersystem interfaces.},
orderinfo = {springer.txt}
author = {Hrishikesh D. Vinod},
title = {Hands-on Intermediate Econometrics Using {R}:
Templates for Extending Dozens of Practical Examples},
publisher = {World Scientific},
address = {Hackensack, NJ},
year = 2008,
isbn = {10-981-281-885-5},
doi = {10.1142/6895},
abstract = {This book explains how to use R software to teach
econometrics by providing interesting examples, using
actual data applied to important policy issues. It
helps readers choose the best method from a wide array
of tools and packages available. The data used in the
examples along with R program snippets, illustrate the
economic theory and sophisticated statistical methods
extending the usual regression. The R program
snippets are included on a CD accompanying the book.
These are not merely given as black boxes, but include
detailed comments which help the reader better
understand the software steps and use them as
templates for possible extension and modification.
The book has received endorsements from top
econometricians.},
author = {G. P. Nason},
title = {Wavelet Methods in Statistics with {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-75960-9},
publisherurl = {https://www.springer.com/978-0-387-75960-9},
abstract = {Wavelet methods have recently undergone a rapid period
of development with important implications for a
number of disciplines including statistics. This book
fulfils three purposes. First, it is a gentle
introduction to wavelets and their uses in statistics.
Second, it acts as a quick and broad reference to many
recent developments in the area. The book
concentrates on describing the essential elements and
provides comprehensive source material references.
Third, the book intersperses R code that explains and
demonstrates both wavelet and statistical methods.
The code permits the user to learn the methods, to
carry out their own analyses and further develop their
own methods. The book is designed to be read in
conjunction with WaveThresh4, the freeware R package
for wavelets. The book introduces the wavelet
transform by starting with the simple Haar wavelet
transform and then builds to consider more general
wavelets such as the Daubechies compactly supported
series. The book then describes the evolution of
wavelets in the directions of complex-valued wavelets,
non-decimated transforms, multiple wavelets and
wavelet packets as well as giving consideration to
boundary conditions initialization. Later chapters
explain the role of wavelets in nonparametric
regression problems via a variety of techniques
including thresholding, cross-validation, SURE,
false-discovery rate and recent Bayesian methods, and
also consider how to deal with correlated and
non-Gaussian noise structures. The book also looks at
how nondecimated and packet transforms can improve
performance. The penultimate chapter considers the
role of wavelets in both stationary and non-stationary
time series analysis. The final chapter describes
recent work concerning the role of wavelets for
variance stabilization for non-Gaussian intensity
estimation. The book is aimed at final year
undergraduate and Masters students in a numerate
discipline (such as mathematics, statistics, physics,
economics and engineering) and would also suit as a
quick reference for postgraduate or research level
activity. The book would be ideal for a researcher to
learn about wavelets, to learn how to use wavelet
software and then to adapt the ideas for their own
use.},
orderinfo = {springer.txt}
author = {Clemens Reimann and Peter Filzmoser and Robert Garrett
and Rudolf Dutter},
title = {Statistical Data Analysis Explained: Applied
Environmental Statistics with {R}},
publisher = {Wiley},
address = {Chichester, UK},
year = 2008,
isbn = {978-0-470-98581-6},
url = {http://file.statistik.tuwien.ac.at/StatDA/},
publisherurl = {https://www.wiley.com/WileyCDA/WileyTitle/productCd-047098581X.html},
abstract = {Few books on statistical data analysis in the natural
sciences are written at a level that a
non-statistician will easily understand. This is a
book written in colloquial language, avoiding
mathematical formulae as much as possible, trying to
explain statistical methods using examples and
graphics instead. To use the book efficiently, readers
should have some computer experience. The book starts
with the simplest of statistical concepts and carries
readers forward to a deeper and more extensive
understanding of the use of statistics in
environmental sciences. The book concerns the
application of statistical and other computer methods
to the management, analysis and display of spatial
data. These data are characterised by including
locations (geographic coordinates), which leads to the
necessity of using maps to display the data and the
results of the statistical methods. Although the book
uses examples from applied geochemistry, and a large
geochemical survey in particular, the principles and
ideas equally well apply to other natural sciences,
e.g., environmental sciences, pedology, hydrology,
geography, forestry, ecology, and health
sciences/epidemiology. The book is unique because it
supplies direct access to software solutions (based on
R, the Open Source version of the S-language for
statistics) for applied environmental statistics. For
all graphics and tables presented in the book, the
R-scripts are provided in the form of executable
R-scripts. In addition, a graphical user interface
for R, called DAS+R, was developed for convenient,
fast and interactive data analysis. Statistical Data
Analysis Explained: Applied Environmental Statistics
with R provides, on an accompanying website, the
software to undertake all the procedures discussed,
and the data employed for their description in the
book.},
author = {Claude, Julien},
title = {Morphometrics with {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-77789-4},
publisherurl = {https://www.springer.com/978-0-387-77789-4},
abstract = {Quantifying shape and size variation is essential in
evolutionary biology and in many other disciplines.
Since the ``morphometric revolution of the 90s,'' an
increasing number of publications in applied and
theoretical morphometrics emerged in the new
discipline of statistical shape analysis. The R
language and environment offers a single platform to
perform a multitude of analyses from the acquisition
of data to the production of static and interactive
graphs. This offers an ideal environment to analyze
shape variation and shape change. This open-source
language is accessible for novices and for experienced
users. Adopting R gives the user and developer
several advantages for performing morphometrics:
evolvability, adaptability, interactivity, a single
and comprehensive platform, possibility of interfacing
with other languages and software, custom analyses,
and graphs. The book explains how to use R for
morphometrics and provides a series of examples of
codes and displays covering approaches ranging from
traditional morphometrics to modern statistical shape
analysis such as the analysis of landmark data, Thin
Plate Splines, and Fourier analysis of outlines. The
book fills two gaps: the gap between theoreticians and
students by providing worked examples from the
acquisition of data to analyses and hypothesis
testing, and the gap between user and developers by
providing and explaining codes for performing all the
steps necessary for morphometrics rather than
providing a manual for a given software or package.
Students and scientists interested in shape analysis
can use the book as a reference for performing applied
morphometrics, while prospective researchers will
learn how to implement algorithms or interface R for
new methods. In addition, adopting the R philosophy
will enhance exchanges within and outside the
morphometrics community. Julien Claude is
an evolutionary biologist and palaeontologist at the
University of Montpellier 2 where he got his Ph.D. in
2003. He works on biodiversity and phenotypic
evolution of a variety of organisms, especially
vertebrates. He teaches evolutionary biology and
biostatistics to undergraduate and graduate students
and has developed several functions in R for the
package APE.},
orderinfo = {springer.txt}
author = {Christian Kleiber and Achim Zeileis},
title = {Applied Econometrics with {R}},
publisher = {Springer},
year = 2008,
address = {New York},
isbn = {978-0-387-77316-2},
publisherurl = {https://www.springer.com/978-0-387-77316-2},
abstract = {This is the first book on applied econometrics using
the R system for statistical computing and graphics.
It presents hands-on examples for a wide range of
econometric models, from classical linear regression
models for cross-section, time series or panel data
and the common non-linear models of microeconometrics
such as logit, probit and tobit models, to recent
semiparametric extensions. In addition, it provides a
chapter on programming, including simulations,
optimization, and an introduction to R tools enabling
reproducible econometric research. An R package
accompanying this book, AER, is available from the
Comprehensive R Archive Network (CRAN) at
\url{https://CRAN.R-project.org/package=AER}. It
contains some 100 data sets taken from a wide variety
of sources, the full source code for all examples used
in the text plus further worked examples, e.g., from
popular textbooks. The data sets are suitable for
illustrating, among other things, the fitting of wage
equations, growth regressions, hedonic regressions,
dynamic regressions and time series models as well as
models of labor force participation or the demand for
health care. The goal of this book is to provide a
guide to R for users with a background in economics or
the social sciences. Readers are assumed to have a
background in basic statistics and econometrics at the
undergraduate level. A large number of examples should
make the book of interest to graduate students,
researchers and practitioners alike. },
orderinfo = {springer.txt}
author = {Benjamin M. Bolker},
title = {Ecological Models and Data in {R}},
year = 2008,
publisher = {Princeton University Press},
isbn = {978-0-691-12522-0},
pages = 408,
url = {http://ms.mcmaster.ca/~bolker/emdbook/},
publisherurl = {http://press.princeton.edu/titles/8709.html},
abstract = {This book is a truly practical introduction to modern
statistical methods for ecology. In step-by-step
detail, the book teaches ecology graduate students and
researchers everything they need to know in order to
use maximum likelihood, information-theoretic, and
Bayesian techniques to analyze their own data using
the programming language R. The book shows how to
choose among and construct statistical models for
data, estimate their parameters and confidence limits,
and interpret the results. The book also covers
statistical frameworks, the philosophy of statistical
modeling, and critical mathematical functions and
probability distributions. It requires no programming
background--only basic calculus and statistics.}
author = {W. John Braun and Duncan J. Murdoch},
title = {A First Course in Statistical Programming with {R}},
year = 2007,
publisher = {Cambridge University Press},
address = {Cambridge},
isbn = {978-0521872652},
pages = 362,
url = {http://rtricks4kids.ok.ubc.ca/wjbraun/other.php},
abstract = {This book introduces students to statistical
programming, using R as a basis. Unlike other
introductory books on the R system, this book
emphasizes programming, including the principles that
apply to most computing languages, and techniques used
to develop more complex projects.}
author = {Scott M. Lynch},
title = {Introduction to Applied Bayesian Statistics and
Estimation for Social Scientists},
publisher = {Springer},
year = 2007,
address = {New York},
isbn = {978-0-387-71265-9},
publisherurl = {https://www.springer.com/978-0-387-71265-9},
abstract = {Introduction to Bayesian Statistics and Estimation for
Social Scientists covers the complete process of
Bayesian statistical analysis in great detail from the
development of a model through the process of making
statistical inference. The key feature of this book
is that it covers models that are most commonly used
in social science research-including the linear
regression model, generalized linear models,
hierarchical models, and multivariate regression
models-and it thoroughly develops each real-data
example in painstaking detail. },
orderinfo = {springer.txt}
author = {Sandrine Dudoit and Mark J. {van der Laan}},
title = {Multiple Testing Procedures and Applications to
Genomics},
publisher = {Springer},
year = 2007,
series = {Springer Series in Statistics},
isbn = {978-0-387-49317-6},
publisherurl = {https://www.springer.com/978-0-387-49317-6},
abstract = {This book provides a detailed account of the
theoretical foundations of proposed multiple testing
methods and illustrates their application to a range
of testing problems in genomics.},
orderinfo = {springer.txt}
author = {Philip J. Boland},
title = {Statistical and Probabilistic Methods in Actuarial
Science},
publisher = {Chapman \& Hall/CRC},
year = 2007,
address = {Boca Raton, FL},
isbn = {9781584886952},
publisherurl = {http://www.crcpress.com/product/isbn/9781584886952},
abstract = {This book covers many of the diverse methods in
applied probability and statistics for students
aspiring to careers in insurance, actuarial science,
and finance. It presents an accessible, sound
foundation in both the theory and applications of
actuarial science. It encourages students to use the
statistical software package R to check examples and
solve problems.},
orderinfo = {crcpress.txt}
author = {Michael Greenacre},
title = {Correspondence Analysis in Practice, Second Edition},
publisher = {Chapman \& Hall/CRC},
year = 2007,
address = {Boca Raton, FL},
isbn = {9781584886167},
publisherurl = {https://www.taylorfrancis.com/books/correspondence-analysis-practice-michael-greenacre/10.1201/9781420011234},
abstract = {This book shows how the versatile method of
correspondence analysis (CA) can be used for data
visualization in a wide variety of situations. This
completely revised, up-to-date edition features a
didactic approach with self-contained chapters,
extensive marginal notes, informative figure and table
captions, and end-of-chapter summaries. It includes a
computational appendix that provides the R commands
that correspond to most of the analyses featured in
the book.}
author = {John Maindonald and John Braun},
title = {Data Analysis and Graphics Using {R}},
edition = {2nd},
year = 2007,
publisher = {Cambridge University Press},
address = {Cambridge},
isbn = {978-0-521-86116-8},
pages = 502,
url = {https://maths-people.anu.edu.au/~johnm/r-book/daagur3.html},
publisherurl = {https://www.cambridge.org/at/academic/subjects/statistics-probability/computational-statistics-machine-learning-and-information-sc/data-analysis-and-graphics-using-r-example-based-approach-3rd-edition?format=HB&isbn=9780521762939},
abstract = {Following a brief introduction to R, this has
extensive examples that illustrate practical data
analysis using R. There is extensive advice on
practical data analysis. Topics covered include
exploratory data analysis, tests and confidence
intervals, regression, generalized linear models,
survival analysis, time series, multi-level models,
trees and random forests, classification, and
ordination.},
author = {Jean-Michel Marin and Christian P. Robert},
title = {Bayesian Core: A Practical Approach to Computational
Bayesian Statistics},
publisher = {Springer},
year = 2007,
address = {New York},
isbn = {978-0-387-38979-0},
publisherurl = {https://www.springer.com/978-0-387-38979-0},
abstract = {This Bayesian modeling book is intended for
practitioners and applied statisticians looking for a
self-contained entry to computational Bayesian
statistics. Focusing on standard statistical models
and backed up by discussed real datasets available
from the book website, it provides an operational
methodology for conducting Bayesian inference, rather
than focusing on its theoretical justifications.
Special attention is paid to the derivation of prior
distributions in each case and specific reference
solutions are given for each of the models.
Similarly, computational details are worked out to
lead the reader towards an effective programming of
the methods given in the book. While R programs are
provided on the book website and R hints are given in
the computational sections of the book, The Bayesian
Core requires no knowledge of the R language and it
can be read and used with any other programming
language. },
orderinfo = {springer.txt}
author = {Dianne Cook and Deborah F. Swayne},
title = {Interactive and Dynamic Graphics for Data Analysis},
publisher = {Springer},
year = 2007,
address = {New York},
isbn = {978-0-387-71761-6},
publisherurl = {https://www.springer.com/978-0-387-71761-6},
abstract = {This richly illustrated book describes the use of
interactive and dynamic graphics as part of
multidimensional data analysis. Chapters include
clustering, supervised classification, and working
with missing values. A variety of plots and
interaction methods are used in each analysis, often
starting with brushing linked low-dimensional views
and working up to manual manipulation of tours of
several variables. The role of graphical methods is
shown at each step of the analysis, not only in the
early exploratory phase, but in the later stages, too,
when comparing and evaluating models. All examples
are based on freely available software: GGobi for
interactive graphics and R for static graphics,
modeling, and programming. The printed book is
augmented by a wealth of material on the web,
encouraging readers to follow the examples themselves.
The web site has all the data and code necessary to
reproduce the analyses in the book, along with movies
demonstrating the examples.},
orderinfo = {springer.txt}
author = {David Siegmund and Benjamin Yakir},
title = {The Statistics of Gene Mapping},
publisher = {Springer},
year = 2007,
address = {New York},
isbn = {978-0-387-49684-9},
publisherurl = {https://www.springer.com/978-0-387-49684-9},
abstract = {This book details the statistical concepts used in
gene mapping, first in the experimental context of
crosses of inbred lines and then in outbred
populations, primarily humans. It presents elementary
principles of probability and statistics, which are
implemented by computational tools based on the R
programming language to simulate genetic experiments
and evaluate statistical analyses. Each chapter
contains exercises, both theoretical and
computational, some routine and others that are more
challenging. The R programming language is developed
in the text.},
orderinfo = {springer.txt}
author = {Simon N. Wood},
title = {Generalized Additive Models: An Introduction with {R}},
publisher = {Chapman \& Hall/CRC},
year = 2006,
address = {Boca Raton, FL},
isbn = {9781584884743},
url = {https://CRAN.R-project.org/package=gamair},
abstract = {This book imparts a thorough understanding of the
theory and practical applications of GAMs and related
advanced models, enabling informed use of these very
flexible tools. The author bases his approach on a
framework of penalized regression splines, and builds
a well-grounded foundation through motivating
chapters on linear and generalized linear models.
While firmly focused on the practical aspects of GAMs,
discussions include fairly full explanations of the
theory underlying the methods. The treatment is rich
with practical examples, and it includes an entire
chapter on the analysis of real data sets using R and
the author's add-on package mgcv. Each chapter
includes exercises, for which complete solutions are
provided in an appendix.}
author = {Robert H. Shumway and David S. Stoffer},
title = {Time Series Analysis and Its Applications With {R}
Examples},
publisher = {Springer},
year = 2006,
address = {New York},
isbn = {978-0-387-29317-2},
publisherurl = {https://www.springer.com/978-0-387-29317-2},
abstract = {Time Series Analysis and Its Applications presents a
balanced and comprehensive treatment of both time and
frequency domain methods with accompanying theory.
Numerous examples using non-trivial data illustrate
solutions to problems such as evaluating pain
perception experiments using magnetic resonance
imaging or monitoring a nuclear test ban treaty. The
book is designed to be useful as a text for graduate
level students in the physical, biological and social
sciences and as a graduate level text in statistics.
Some parts may also serve as an undergraduate
introductory course. Theory and methodology are
separated to allow presentations on different levels.
Material from the earlier 1988 Prentice-Hall text
Applied Statistical Time Series Analysis has been
updated by adding modern developments involving
categorical time series analysis and the spectral
envelope, multivariate spectral methods, long memory
series, nonlinear models, longitudinal data analysis,
resampling techniques, ARCH models, stochastic
volatility, wavelets and Monte Carlo Markov chain
integration methods. These add to a classical
coverage of time series regression, univariate and
multivariate ARIMA models, spectral analysis and
state-space models. The book is complemented by
offering accessibility, via the World Wide Web, to the
data and an exploratory time series analysis program
ASTSA for Windows that can be downloaded as Freeware.},
orderinfo = {springer.txt}
author = {Peter J. Diggle and Paulo Justiniano Ribeiro},
title = {Model-based Geostatistics},
publisher = {Springer},
year = 2006,
isbn = {978-0-387-48536-2},
publisherurl = {https://www.springer.com/978-0-387-48536-2},
abstract = {Geostatistics is concerned with estimation and
prediction problems for spatially continuous
phenomena, using data obtained at a limited number of
spatial locations. The name reflects its origins in
mineral exploration, but the methods are now used in a
wide range of settings including public health and the
physical and environmental sciences. Model-based
geostatistics refers to the application of general
statistical principles of modeling and inference to
geostatistical problems. This volume is the first
book-length treatment of model-based geostatistics.},
orderinfo = {springer.txt}
author = {Nhu D. Le and James V. Zidek},
title = {Statistical Analysis of Environmental Space-Time
Processes},
publisher = {Springer},
year = 2006,
isbn = {978-0-387-35429-3},
publisherurl = {https://www.springer.com/978-0-387-35429-3},
abstract = {This book provides a broad introduction to the subject
of environmental space-time processes, addressing the
role of uncertainty. It covers a spectrum of technical
matters from measurement to environmental epidemiology
to risk assessment. It showcases non-stationary
vector-valued processes, while treating stationarity
as a special case. In particular, with members of
their research group, the authors developed, within a
hierarchical Bayesian framework, the new statistical
approaches presented in the book for analyzing,
modeling, and monitoring environmental spatio-temporal
processes. Furthermore they indicate new directions
for development.},
orderinfo = {springer.txt}
author = {Lothar Sachs and J{\"u}rgen Hedderich},
title = {{Angewandte Statistik. Methodensammlung mit R}},
year = 2006,
edition = {12th (completely revised)},
publisher = {Springer},
address = {Berlin, Heidelberg},
isbn = {978-3-540-32160-6},
publisherurl = {https://www.springer.com/978-3-540-32160-6},
abstract = {Die Anwendung statistischer Methoden wird heute in der
Regel durch den Einsatz von Computern unterst{\"u}tzt.
Das Programm R ist dabei ein leicht erlernbares und
flexibel einzusetzendes Werkzeug, mit dem der Prozess
der Datenanalyse nachvollziehbar verstanden und
gestaltet werden kann. Diese 12., vollst{\"a}ndig neu
bearbeitete Auflage veranschaulicht Anwendung und
Nutzen des Programms anhand zahlreicher mit R
durchgerechneter Beispiele. Sie erl{\"a}utert
statistische Ans{\"a}tze und gibt leicht fasslich,
anschaulich und praxisnah Studenten, Dozenten und
Praktikern mit unterschiedlichen Vorkenntnissen die
notwendigen Details, um Daten zu gewinnen, zu
beschreiben und zu beurteilen. Neben Hinweisen zur
Planung und Auswertung von Studien erm{\"o}glichen
viele Beispiele, Querverweise und ein
ausf{\"u}hrliches Sachverzeichnis einen gezielten
Zugang zur Statistik, insbesondere f{\"u}r Mediziner,
Ingenieure und Naturwissenschaftler.},
language = {de}
author = {Julian J. Faraway},
title = {Extending Linear Models with {R}: Generalized Linear,
Mixed Effects and Nonparametric Regression Models},
publisher = {Chapman \& Hall/CRC},
year = 2006,
address = {Boca Raton, FL},
isbn = {9781584884248},
url = {http://www.maths.bath.ac.uk/~jjf23/ELM/},
publisherurl = {https://www.taylorfrancis.com/books/extending-linear-model-julian-faraway/10.1201/b15416},
abstract = {This book surveys the techniques that grow from the
regression model, presenting three extensions to that
framework: generalized linear models (GLMs), mixed
effect models, and nonparametric regression
models. The author's treatment is thoroughly modern
and covers topics that include GLM diagnostics,
generalized linear mixed models, trees, and even the
use of neural networks in statistics. To demonstrate
the interplay of theory and practice, throughout the
book the author weaves the use of the R software
environment to analyze the data of real examples,
providing all of the R commands necessary to reproduce
the analyses.}
author = {Jana Jureckova and Jan Picek},
title = {Robust Statistical Methods with {R}},
publisher = {Chapman \& Hall/CRC},
year = 2006,
address = {Boca Raton, FL},
isbn = {9781584884545},
abstract = {This book provides a systematic treatment of robust
procedures with an emphasis on practical application.
The authors work from underlying mathematical tools to
implementation, paying special attention to the
computational aspects. They cover the whole range of
robust methods, including differentiable statistical
functions, distance of measures, influence functions,
and asymptotic distributions, in a rigorous yet
approachable manner. Highlighting hands-on problem
solving, many examples and computational algorithms
using the R software supplement the discussion. The
book examines the characteristics of robustness,
estimators of real parameter, large sample properties,
and goodness-of-fit tests. It also includes a brief
overview of R in an appendix for those with little
experience using the software.},
orderinfo = {crcpress.txt}
author = {Emmanuel Paradis},
title = {Analysis of Phylogenetics and Evolution with {R}},
publisher = {Springer},
year = 2006,
series = {Use R},
address = {New York},
isbn = {978-1-4614-1743-9},
publisherurl = {https://www.springer.com/978-1-4614-1743-9},
abstract = {This book integrates a wide variety of data analysis
methods into a single and flexible interface: the R
language, an open source language that is available for a
wide range of computer systems and has been adopted as
a computational environment by many authors of
statistical software. Adopting R as a main tool for
phylogenetic analyses eases the workflow in
biologists' data analyses, ensures greater scientific
repeatability, and enhances the exchange of ideas and
methodological developments.},
orderinfo = {springer.txt}
author = {Brian Everitt and Torsten Hothorn},
title = {A Handbook of Statistical Analyses Using {R}},
publisher = {Chapman \& Hall/CRC},
year = 2006,
address = {Boca Raton, FL},
isbn = {1-584-88539-4},
url = {https://CRAN.R-project.org/package=HSAUR},
abstract = {With emphasis on the use of R and the interpretation
of results rather than the theory behind the methods,
this book addresses particular statistical techniques
and demonstrates how they can be applied to one or
more data sets using R. The authors provide a concise
introduction to R, including a summary of its most
important features. They cover a variety of topics,
such as simple inference, generalized linear models,
multilevel models, longitudinal data, cluster
analysis, principal components analysis, and
discriminant analysis. With numerous figures and
exercises, A Handbook of Statistical Analysis using R
provides useful information for students as well as
statisticians and data analysts.},
orderinfo = {crcpress.txt}
author = {Richard C. Deonier and Simon Tavar{\'e} and Michael
S. Waterman},
title = {Computational Genome Analysis: An Introduction},
publisher = {Springer},
year = 2005,
isbn = {978-0-387-28807-9},
publisherurl = {https://www.springer.com/978-0-387-28807-9},
abstract = {Computational Genome Analysis: An Introduction
presents the foundations of key problems in
computational molecular biology and bioinformatics. It
focuses on computational and statistical principles
applied to genomes, and introduces the mathematics
and statistics that are crucial for understanding
these applications. All computations are done with
{R}.},
orderinfo = {springer.txt}
author = {Paul Murrell},
title = {R Graphics},
publisher = {Chapman \& Hall/CRC},
year = 2005,
address = {Boca Raton, FL},
isbn = {9781584884866},
publisherurl = {https://www.taylorfrancis.com/books/graphics-paul-murrell/10.1201/9781420035025},
url = {http://www.stat.auckland.ac.nz/~paul/RGraphics/rgraphics.html},
abstract = {A description of the core graphics features of R
including: a brief introduction to R; an introduction
to general R graphics features. The ``base'' graphics
system of R: traditional S graphics. The power and
flexibility of grid graphics. Building on top of the
base or grid graphics: Trellis graphics and
developing new graphics functions.},
orderinfo = {crcpress.txt}
author = {Michael J. Crawley},
title = {Statistics: An Introduction using {R}},
publisher = {Wiley},
year = 2005,
isbn = {0-470-02297-3},
url = {http://www.bio.ic.ac.uk/research/crawley/statistics/},
abstract = {The book is primarily aimed at undergraduate
students in medicine, engineering, economics and
biology --- but will also appeal to postgraduates who
have not previously covered this area, or wish to
switch to using R.}
author = {John Verzani},
title = {Using {R} for Introductory Statistics},
publisher = {Chapman \& Hall/CRC},
year = 2005,
address = {Boca Raton, FL},
isbn = {9781584884507},
publisherurl = {https://www.taylorfrancis.com/books/using-introductory-statistics-john-verzani/10.4324/9780203499894},
abstract = {There are few books covering introductory statistics
using R, and this book fills a gap as a true
``beginner'' book. With emphasis on data analysis and
practical examples, `Using R for Introductory
Statistics' encourages understanding rather than
focusing on learning the underlying theory. It
includes a large collection of exercises and numerous
practical examples from a broad range of scientific
disciplines. It comes complete with an online
resource containing datasets, R functions, selected
solutions to exercises, and updates to the latest
features. A full solutions manual is available from
Chapman \& Hall/CRC.}
author = {Fionn Murtagh},
title = {Correspondence Analysis and Data Coding with {JAVA}
and {R}},
publisher = {Chapman \& Hall/CRC},
year = 2005,
address = {Boca Raton, FL},
isbn = {9781420034943},
url = {http://www.cs.rhul.ac.uk/home/fionn/},
publisherurl = {http://www.crcpress.com/product/isbn/9781420034943},
abstract = {This book provides an introduction to methods and
applications of correspondence analysis, with an
emphasis on data coding --- the first step in
correspondence analysis. It features a practical
presentation of the theory with a range of
applications from data mining, financial engineering,
and the biosciences. Implementation of the methods is
presented using JAVA and R software.},
orderinfo = {crcpress.txt}
author = {Brian S. Everitt},
title = {An {R} and {S-Plus} Companion to Multivariate
Analysis},
publisher = {Springer},
year = 2005,
isbn = {978-1-84628-124-2},
url = {https://www.springer.com/978-1-84628-124-2},
abstract = {In this book the core multivariate methodology is
covered along with some basic theory for each method
described. The necessary R and S-Plus code is given
for each analysis in the book, with any differences
between the two highlighted.},
orderinfo = {springer.txt}
author = {Andreas Behr},
title = {Einf\"uhrung in die Statistik mit {R}},
series = {WiSo Kurzlehrb\"ucher},
year = 2005,
publisher = {Vahlen},
address = {M\"unchen},
note = {In German},
isbn = {3-8006-3219-5},
language = {de}
editor = {Robert Gentleman and Vince Carey and Wolfgang Huber
and Rafael Irizarry and Sandrine Dudoit},
title = {Bioinformatics and Computational Biology Solutions
Using {R} and {Bioconductor}},
publisher = {Springer},
year = 2005,
series = {Statistics for Biology and Health},
isbn = {978-0-387-29362-2},
publisherurl = {https://www.springer.com/978-0-387-29362-2},
abstract = {This volume's coverage is broad and ranges across most
of the key capabilities of the Bioconductor project,
including importation and preprocessing of
high-throughput data from microarray, proteomic, and
flow cytometry platforms.},
orderinfo = {springer.txt}
author = {S. Mase and T. Kamakura and M. Jimbo and K. Kanefuji},
title = {Introduction to Data Science for engineers--- Data
analysis using free statistical software {R} (in
Japanese)},
publisher = {Suuri-Kogaku-sha, Tokyo},
year = 2004,
month = {April},
isbn = {4901683128},
pages = 254
author = {Richard M. Heiberger and Burt Holland},
title = {Statistical Analysis and Data Display: An Intermediate
Course with Examples in {S-Plus}, {R}, and {SAS}},
publisher = {Springer},
year = 2004,
series = {Springer Texts in Statistics},
isbn = {978-1-4757-4284-8},
url = {http://astro.temple.edu/~rmh/HH},
abstract = {A contemporary presentation of statistical methods
featuring 200 graphical displays for exploring data
and displaying analyses. Many of the displays appear
here for the first time. Discusses construction and
interpretation of graphs, principles of graphical
design, and relation between graphs and traditional
tabular results. Can serve as a graduate-level
standalone statistics text and as a reference book for
researchers. In-depth discussions of regression
analysis, analysis of variance, and design of
experiments are followed by introductions to analysis
of discrete bivariate data, nonparametrics, logistic
regression, and ARIMA time series modeling. Concepts
and techniques are illustrated with a variety of case
studies. S-Plus, R, and SAS executable functions are
provided and discussed. S functions are provided for
each new graphical display format. All code,
transcript and figure files are provided for readers
to use as templates for their own analyses.},
publisherurl = {https://www.springer.com/978-1-4757-4284-8},
orderinfo = {springer.txt}
author = {Julian J. Faraway},
title = {Linear Models with {R}},
publisher = {Chapman \& Hall/CRC},
year = 2004,
address = {Boca Raton, FL},
isbn = {9781584884255},
url = {http://www.maths.bath.ac.uk/~jjf23/LMR/},
publisherurl = {https://www.taylorfrancis.com/books/linear-models-julian-faraway/10.4324/9780203507278},
abstract = {The book focuses on the practice of regression and
analysis of variance. It clearly demonstrates the
different methods available and in which situations
each one applies. It covers all of the standard
topics, from the basics of estimation to missing data,
factorial designs, and block designs, but it also
includes discussion of topics, such as model
uncertainty, rarely addressed in books of this type.
The presentation incorporates an abundance of examples
that clarify both the use of each technique and the
conclusions one can draw from the results.}
author = {Dubravko Dolic},
title = {Statistik mit {R}. Einf\"uhrung f\"ur Wirtschafts-
und Sozialwissenschaftler},
year = 2004,
publisher = {R. Oldenbourg},
address = {M\"unchen, Wien},
note = {In German},
isbn = {3-486-27537-2},
language = {de}
author = {Sylvie Huet and Annie Bouvier and Marie-Anne Gruet and
Emmanuel Jolivet},
title = {Statistical Tools for Nonlinear Regression},
publisher = {Springer},
year = 2003,
address = {New York},
isbn = {978-0-387-21574-7},
publisherurl = {https://www.springer.com/978-0-387-21574-7},
orderinfo = {springer.txt}
author = {Stefano Iacus and Guido Masarotto},
title = {Laboratorio di statistica con {R}},
year = 2003,
publisher = {McGraw-Hill},
address = {Milano},
isbn = {88-386-6084-0},
pages = 384
author = {Giovanni Parmigiani and Elizabeth S. Garrett and
Rafael A. Irizarry and Scott L. Zeger},
title = {The Analysis of Gene Expression Data},
publisher = {Springer},
year = 2003,
address = {New York},
isbn = {978-0-387-21679-9},
publisherurl = {https://www.springer.com/978-0-387-21679-9},
orderinfo = {springer.txt}
author = {William N. Venables and Brian D. Ripley},
title = {Modern Applied Statistics with {S}. Fourth Edition},
publisher = {Springer},
year = 2002,
address = {New York},
isbn = {978-0-387-21706-2},
url = {http://www.stats.ox.ac.uk/pub/MASS4/},
publisherurl = {https://www.springer.com/978-0-387-21706-2},
abstract = {A highly recommended book on how to do statistical
data analysis using R or S-Plus. In the first
chapters it gives an introduction to the S language.
Then it covers a wide range of statistical
methodology, including linear and generalized linear
models, non-linear and smooth regression, tree-based
methods, random and mixed effects, exploratory
multivariate analysis, classification, survival
analysis, time series analysis, spatial statistics,
and optimization. The `on-line complements' available
at the book's homepage provide updates of the book, as
well as further details of technical material. },
orderinfo = {springer.txt}
author = {John Fox},
title = {An {R} and {S-Plus} Companion to Applied Regression},
publisher = {Sage Publications},
year = 2002,
address = {Thousand Oaks, CA, USA},
isbn = {0-761-92279-2},
url = {https://socialsciences.mcmaster.ca/jfox/Books/Companion/index.html},
abstract = {A companion book to a text or course on applied
regression (such as ``Applied Regression, Linear
Models, and Related Methods'' by the same author). It
introduces S, and concentrates on how to use linear
and generalized-linear models in S while assuming
familiarity with the statistical methodology.}
author = {Manuel Castej{\'o}n Limas and Joaqu{\'\i}n Ordieres
Mer{\'e} and Fco. Javier de Cos Juez and Fco. Javier
Mart{\'\i}nez de Pis{\'o}n Ascacibar},
title = {Control de Calidad. Metodologia para el analisis
previo a la modelizaci{\'o}n de datos en procesos
industriales. Fundamentos te{\'o}ricos y aplicaciones
con R.},
publisher = {Servicio de Publicaciones de la Universidad de La
Rioja},
year = 2001,
isbn = {84-95301-48-2},
abstract = {This book, written in Spanish, is oriented to
researchers interested in applying multivariate
analysis techniques to real processes. It combines
the theoretical basis with applied examples coded in
{R}.},
author = {Frank E. Harrell},
title = {Regression Modeling Strategies, with Applications to
Linear Models, Survival Analysis and Logistic
Regression},
publisher = {Springer},
year = 2001,
isbn = {978-3-319-19425-7},
publisherurl = {https://www.springer.com/978-3-319-19425-7},
url = {https://hbiostat.org/doc/rms/},
abstract = {There are many books that are excellent sources of
knowledge about individual statistical tools (survival
models, general linear models, etc.), but the art of
data analysis is about choosing and using multiple
tools. In the words of Chatfield ``... students
typically know the technical details of regression for
example, but not necessarily when and how to apply it.
This argues the need for a better balance in the
literature and in statistical teaching between
techniques and problem solving strategies.'' Whether
analyzing risk factors, adjusting for biases in
observational studies, or developing predictive
models, there are common problems that few regression
texts address. For example, there are missing data in
the majority of datasets one is likely to encounter
(other than those used in textbooks!) but most
regression texts do not include methods for dealing
with such data effectively, and texts on missing data
do not cover regression modeling.},
orderinfo = {springer.txt}
author = {William N. Venables and Brian D. Ripley},
title = {S Programming},
publisher = {Springer},
year = 2000,
address = {New York},
isbn = {978-0-387-21856-4},
url = {http://www.stats.ox.ac.uk/pub/MASS3/Sprog/},
publisherurl = {https://www.springer.com/978-0-387-21856-4},
abstract = {This provides an in-depth guide to writing software in
the S language which forms the basis of both the
commercial S-Plus and the Open Source R data analysis
software systems.},
orderinfo = {springer.txt}
author = {Terry M. Therneau and Patricia M. Grambsch},
title = {Modeling Survival Data: Extending the {Cox} Model},
publisher = {Springer},
year = 2000,
series = {Statistics for Biology and Health},
isbn = {978-1-4757-3294-8},
publisherurl = {https://www.springer.com/978-1-4757-3294-8},
abstract = {This is a book for statistical practitioners,
particularly those who design and analyze studies for
survival and event history data. Its goal is to extend
the toolkit beyond the basic triad provided by most
statistical packages: the Kaplan-Meier estimator,
log-rank test, and Cox regression model.},
orderinfo = {springer.txt}
author = {Jose C. Pinheiro and Douglas M. Bates},
title = {Mixed-Effects Models in {S} and {S-Plus}},
publisher = {Springer},
year = 2000,
isbn = {978-0-387-22747-4},
publisherurl = {https://www.springer.com/978-0-387-22747-4},
abstract = {A comprehensive guide to the use of the `nlme' package
for linear and nonlinear mixed-effects models.},
orderinfo = {springer.txt}
author = {Deborah Nolan and Terry Speed},
title = {Stat Labs: Mathematical Statistics Through
Applications},
publisher = {Springer},
year = 2000,
series = {Springer Texts in Statistics},
isbn = {978-0-387-22743-6},
url = {https://www.stat.berkeley.edu/users/statlabs/},
publisherurl = {https://www.springer.com/978-0-387-22743-6},
abstract = {Integrates theory of statistics with the practice of
statistics through a collection of case studies
(``labs''), and uses R to analyze the data.},
orderinfo = {springer.txt}
author = {John M. Chambers},
title = {Programming with Data},
publisher = {Springer},
year = 1998,
address = {New York},
isbn = {978-0-387-98503-9},
publisherurl = {https://www.springer.com/978-0-387-98503-9},
abstract = {This ``\emph{Green Book}'' describes version 4 of S, a
major revision of S designed by John Chambers to
improve its usefulness at every stage of the
programming process.},
orderinfo = {springer.txt}
author = {John M. Chambers and Trevor J. Hastie},
title = {Statistical Models in {S}},
publisher = {Chapman \& Hall},
year = 1992,
address = {London},
isbn = {9780412830402},
publisherurl = {http://www.crcpress.com/product/isbn/9780412830402},
abstract = {This is also called the ``\emph{White Book}''. It
described software for statistical modeling in S and
introduced the S3 version of classes and methods.},
orderinfo = {crcpress.txt}
author = {Richard A. Becker and John M. Chambers and Allan
R. Wilks},
title = {The New {S} Language},
publisher = {Chapman \& Hall},
year = 1988,
address = {London},
abstract = {This book is often called the ``\emph{Blue Book}'',
and introduced what is now known as S version 3, or S3.}
This file was generated by bibtex2html 1.99. | {"url":"https://www.r-project.org/doc/bib/R-books_bib.html","timestamp":"2024-11-12T06:26:20Z","content_type":"text/html","content_length":"270514","record_id":"<urn:uuid:5a385c94-b1b3-44f6-a272-23e0bab645da>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00435.warc.gz"} |