https://gmatclub.com/forum/floyd-started-at-one-end-of-a-trail-at-8-07-a-m-and-hiked-the-entire-325203.html
# Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
Intern
Joined: 12 May 2020
Posts: 9
Location: India
Concentration: General Management, Healthcare
GPA: 4
WE: Corporate Finance (Consulting)
Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
26 May 2020, 10:31
Floyd started at one end of a trail at 8:07 a.m. and hiked the entire 5 miles to the other end of the trail at an average rate of 3 miles per hour. Connie started at the same end of the trail at 8:30 a.m. and hiked the entire length of the trail at an average rate of 2.5 miles per hour. Finally, Karen started her hike in the same place at 10:00 a.m. and hiked the entire length of the trail at an average rate of 4 miles per hour. During approximately what percent of the 4-hour period from 8:00 a.m. to noon were at least two of the three hikers on the trail?
(A) 34%
(B) 45%
(C) 55%
(D) 59%
(E) 68%
Source: Advanced Quant Manhattan Prep 2020
DS Forum Moderator
Joined: 19 Oct 2018
Posts: 1980
Location: India
Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
26 May 2020, 15:03
Floyd takes $$\frac{5}{3}*60 = 100$$ mins to cross the trail. So he is on the trail from 8:07 to 9:47.
Connie takes $$\frac{5}{2.5}*60 = 120$$ mins to cross the trail. So she is on the trail from 8:30 to 10:30.
Karen takes $$\frac{5}{4}*60 = 75$$ mins to cross the trail. So she is on the trail from 10:00 to 11:15.
From 8:00-8:30, none or only Floyd is on the trail. (30 minutes)
From 9:47-10:00, only Connie is on the trail. (13 minutes)
From 10:30-12:00, none or only Karen is on the trail. (1 hour and 30 minutes)
Total duration from 8:00 a.m. to noon when at least two of the three hikers were on the trail = 240 - 30 - 13 - 90 = 107 minutes.
Since 40% of 240 = 96 and 50% of 240 = 120, the total of 107 minutes falls between the two, and 107/240 ≈ 44.6%.
B
Senior Manager
Joined: 27 Feb 2017
Posts: 389
Location: United States (WA)
GMAT 1: 760 Q50 V42
GRE 1: Q169 V168
Re: Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
26 May 2020, 22:01
Well, it took me 3 minutes and 20 seconds to get the right answer.
I do not see a shortcut for this question, so one has to perform the following calculations.
(1) Floyd starts at 8:07. Duration: 1 hour and 40 minutes. So: 8:07-9:47.
(2) Connie starts at 8:30. Duration: 2 hours. So: 8:30-10:30.
(3) Karen starts at 10:00. Her hike lasts over 1 hour, but that is all we need, since Connie finishes at 10:30.
at least two of the three hikers on the trail
8:30-9:47 => 77 minutes; call it almost 75.
10:00-10:30 => 30 minutes.
75 + 30 = 105
105/240 ≈ 43.75%, closest to 45%. So, (B) it is.
CEO
Joined: 03 Jun 2019
Posts: 3182
Location: India
GMAT 1: 690 Q50 V34
WE: Engineering (Transportation)
Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
Updated on: 27 May 2020, 09:20
Given:
1. Floyd: starts at one end of the trail at 8:07 a.m., hikes the entire 5 miles at 3 miles per hour.
2. Connie: starts at the same end at 8:30 a.m., hikes the entire trail at 2.5 miles per hour.
3. Karen: starts at the same place at 10:00 a.m., hikes the entire trail at 4 miles per hour.
Asked: During approximately what percent of the 4-hour period from 8:00 a.m. to noon were at least two of the three hikers on the trail?
Attachment: Screenshot 2020-05-27 at 10.42.53 PM.png
Floyd and Connie were together for 9:47 - 8:30 = 1:17 hrs.
Connie and Karen were together for 10:30 - 10:00 = 0:30 hrs.
Floyd and Karen were never together.
The approximate percentage of the 4-hour period from 8:00 a.m. to noon when at least two of the three hikers were on the trail = 1:47/4:00 = 107/240 ≈ 44.58% ~ 45%.
IMO B
CEO
Joined: 03 Jun 2019
Posts: 3182
Location: India
GMAT 1: 690 Q50 V34
WE: Engineering (Transportation)
Re: Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
27 May 2020, 09:18
Hi nick1816
Karen is on trail from 10:00 to 11:15 hrs (75 mins)
nick1816 wrote:
Karen takes $$\frac{5}{4}*60 = 75$$ mins to cross the trail. So she is on the trail from 10:00 to 11:30.
DS Forum Moderator
Joined: 19 Oct 2018
Posts: 1980
Location: India
Re: Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
27 May 2020, 09:43
That was a typo. Edited!
Kinshook wrote:
Hi nick1816
Karen is on trail from 10:00 to 11:15 hrs (75 mins)
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 11036
Location: United States (CA)
Re: Floyd started at one end of a trail at 8:07 a.m. and hiked the entire
30 May 2020, 13:27
It takes Floyd 5/3 hours or 1 hour and 40 minutes to finish the trail. So he reaches the other end of the trail at 8:07 am + 1 hr 40 min = 9:47 am.
Similarly, it takes Connie 5/2.5 = 2 hours to finish the same trail. So she reaches the other end of the trail at 8:30 am + 2 hrs = 10:30 am.
Finally, it takes Karen 5/4 hours or 1 hour and 15 minutes to finish the trail. So she reaches the other end of the trail at 10 am + 1 hr 15 min = 11:15 am.
We can see that both Floyd and Connie are on the trail from 8:30 am to 9:47 am (i.e., 1 hour and 17 minutes), and both Connie and Karen are on the trail from 10 am to 10:30 am (i.e., 30 minutes). However, at no point are all three hikers on the trail simultaneously, since Floyd finishes his hike before Karen begins hers. Therefore, at least two of the three hikers are on the trail for a total of 1 hour and 47 minutes. Since 1 hour and 47 minutes = 107 minutes and 4 hours = 240 minutes, the percentage of time with at least two of the three hikers on the trail is 107/240 ≈ 0.4458, or approximately 45%. Answer: B.
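In compact form, the overlap arithmetic used in the solutions above:
$$\underbrace{(9{:}47 - 8{:}30)}_{\text{Floyd and Connie}} + \underbrace{(10{:}30 - 10{:}00)}_{\text{Connie and Karen}} = 77 + 30 = 107 \text{ min}, \qquad \frac{107}{240} \approx 44.6\% \approx 45\%.$$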
https://ch.gateoverflow.in/1185/gate-ch-2021-question-40
A binary liquid mixture consists of two species $1$ and $2$. Let $\gamma$ and $x$ represent the activity coefficient and the mole fraction of the species, respectively. Using a molar excess Gibbs free energy model, $\ln \gamma_1 \textit{ vs. } x_1$ and $\ln \gamma _2 \textit{ vs. } x_1$ are plotted. A tangent drawn to the $\ln \gamma_1 \textit{ vs. } x_1$ curve at a mole fraction of $x_1=0.2$ has a slope $=-1.728$.
The slope of the tangent drawn to the $\ln \gamma_2 \textit{ vs. } x_1$ curve at the same mole fraction is ___________ (correct to $3$ decimal places).
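A worked sketch of the approach (added for illustration; the exercise page gives only the question). At constant temperature and pressure, the Gibbs-Duhem relation ties the two activity-coefficient curves together:
$$x_1\,\frac{d\ln\gamma_1}{dx_1} + x_2\,\frac{d\ln\gamma_2}{dx_1} = 0 \quad\Longrightarrow\quad \frac{d\ln\gamma_2}{dx_1} = -\frac{x_1}{x_2}\,\frac{d\ln\gamma_1}{dx_1} = -\frac{0.2}{0.8}\times(-1.728) = 0.432.$$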
https://www.physicsforums.com/threads/circuit-analysis.841368/
# Homework Help: Circuit analysis
1. Nov 4, 2015
### Gbox
1. The problem statement, all variables and given/known data
2. Relevant equations
Thevenin theorem
3. The attempt at a solution
$R_{60+180}=240$
$R_{80||240}=\frac{240*80}{240+80}=60$
$R_{total}=60+20=80$
$i=\frac{480}{80000}=0.006A$
How should I proceed?
2. Nov 4, 2015
### phyzguy
I think you are a factor of 10 off, otherwise what you have done is OK to find i. Your next step is to find v0. How much current is flowing through the 180 KOhm resistor?
3. Nov 4, 2015
### Gbox
Sorry, fixed the current.
I know that the current needs to split twice before it gets to $R_{180}$.
4. Nov 4, 2015
### phyzguy
It only splits once. How much of the total current i goes through the 80 KOhm resistor, and how much through the 60 KOhm + 180 KOhm series combination?
5. Nov 4, 2015
### Gbox
$R_{80}$ gets 0.0045 A, so $R_{60+180}$ gets 0.0015 A; therefore the voltage across that section is 360 V?
6. Nov 4, 2015
### phyzguy
The 0.0015A is right. Given this, what is v0?
7. Nov 4, 2015
### Gbox
$V_{60}=0.0015*60000=90$, so $V_0=480-90-120=270V$?
Why couldn't I say that, because $v_0$ (or $R_{80}$) is connected in parallel to the battery, it has the same voltage?
8. Nov 4, 2015
### phyzguy
Yes, v0 = 270V is correct. This is not the same as the voltage across the 80 KOhm resistor, because some voltage is dropped across the 60 KOhm resistor (90V, as you calculated).
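As a sanity check of the numbers in this thread, here is a small MATLAB sketch (added for illustration; the topology is inferred from the discussion, since the original circuit diagram is not reproduced here: a 480 V source in series with the 20 kOhm resistor, feeding 80 kOhm in parallel with the 60 kOhm + 180 kOhm branch, with v0 taken across the 180 kOhm):
V = 480;                            % source voltage (V), inferred from i = 480/80000
R20 = 20e3; R80 = 80e3; R60 = 60e3; R180 = 180e3;
Rbranch = R60 + R180;               % series branch: 240 kOhm
Rpar = R80*Rbranch/(R80 + Rbranch); % 80k || 240k = 60 kOhm
Rtot = R20 + Rpar;                  % 80 kOhm total
i = V/Rtot                          % 0.006 A net current
Vpar = V - i*R20                    % 360 V across the parallel section
ibranch = Vpar/Rbranch              % 0.0015 A through 60k + 180k
v0 = ibranch*R180                   % 270 V, matching the accepted answer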
9. Nov 5, 2015
### Grim Arrow
Since both currents unite at the 20k resistor, you get 240k*80k/320k + 20k = 80,000 ohms, and V/R_total = 0.006 A net current. From there you get that 120 V falls across the 20k resistor, and the current through the 60k+180k branch is 0.0015 A. The remaining current should then be 0.0045 A.
10. Nov 5, 2015
### Grim Arrow
Isn't that right, mates?
11. Nov 9, 2015
Yes, thanks
https://scicomp.stackexchange.com/questions/33347/why-iterative-method-amg-preconditioned-pcg-is-slower-than-matlab-direct-method/33352
# Why iterative method: AMG preconditioned PCG is slower than Matlab direct method 'A\b'?
Recently, I came across the following saying:
"For large linear systems, iterative methods are required because of the memory problems of direct methods."
But when I ran some experiments in Matlab, I found a strange phenomenon. I discretized the Poisson equation in 2D with Q1 elements (grid input 10) and obtained a system $$Ax = f$$ where $$A$$ is $$1050625\times 1050625$$, which is large and sparse.
In principle, we should use iterative methods, such as Matlab's built-in PCG or MINRES, with an AMG preconditioner. But when I enter A\f in the command window, Matlab's direct method costs only $$4.588234$$ seconds, which is fast.
Then I wanted to test PCG with an AMG preconditioner, and I found the setup time of the AMG preconditioner to be very long. At first I thought this was because the system is not large enough, so I used grid input 11, but then the memory broke down and the system matrix $$A$$ could not even be generated. So on my PC I cannot obtain the result that iterative methods beat direct methods for large sparse systems. Why? Is the size simply not large enough? But a larger system cannot even be generated on a personal computer.
I also believe that iterative methods are necessary for large sparse systems, but my numerical results say the contrary: Matlab's A\b is very fast.
How should we understand the saying "iterative methods are better than direct methods for large sparse systems"? Could you please give me some suggestions?
• Seems to me like Matlab shouldn't even work with a matrix that large, but I'm not an experienced user. Are you sure the inverse is actually working? – EMP Sep 1 at 9:01
• Yes, I am sure. I ran these experiments with the IFISS software (personalpages.manchester.ac.uk/staff/david.silvester/ifiss/…), which covers finite element methods and fast iterative solvers in incompressible fluid dynamics. A very efficient package for iterative methods. – Zhen-Wei Sun Sep 1 at 9:32
• I don't know if I'd use "Matlab" and "fast and efficient" in the same sentence, but leaving that aside, I think we need more information. What are the various grid inputs, and what do they mean? Have you tested the iterative solver? Are you sure it works? And I still don't believe that Matlab's direct solve can actually handle a matrix that large that quickly. How do you know it properly inverted it? Did you test it? – EMP Sep 1 at 12:13
• @EMP side note: there is no matrix inversion happening during A\f in Matlab. – Anton Menshov Sep 1 at 15:13
• Anton, thanks for the info. I still have trouble believing that a matrix that large is being solved that fast in Matlab unless it has some verrrrry special properties. – EMP Sep 1 at 19:07
There are several things to consider in this experiment:
### Why Matlab sparse direct might be "so fast":
• In 2D (of course, problem-dependent), your matrix $$A$$ arising from the FEM discretization may, after some reordering, have a "close to banded" structure. The smaller the bandwidth of $$A$$, the more efficiently a sparse direct method can handle it. And if the bandwidth is small enough, Matlab would probably choose a specialized banded solver.
• I am not aware of which particular implementation of the algebraic multigrid (AMG) preconditioner you are using. Beyond possible internal performance problems inside it, AMG itself might be overkill for your problem.
• Sparse direct solvers have made a lot of progress in recent years, and at least part of the effect you are seeing should be attributed to that.
### Why one might want to use an iterative solver, no matter what:
• In general, even the best sparse direct solver during factorization would still generate the fill-in. Thus, the matrix after the factorization would take [significantly] more space compared to the original one, which is certainly an issue for large problems.
• Acceleration of iterative solvers is a more mature field. In different application areas, there is a multitude of fast algorithms (based on, say, Fast Fourier Transform – FFT, Fast Multipole Method – FMM, and others) which would accelerate the matrix-vector product, naturally fitting the iterative linear solver route.
### Things to try:
• Try an iterative solver with a vanilla diagonal preconditioner on your problem (a minimal sketch follows this list). See what the number of iterations is, and decide whether a better preconditioner (like AMG) is even worth considering.
• Consider a test in 3-D if this is in the interest of your application area.
• Check the bandwidth and sparsity pattern of your matrix $$A$$ arising from the 2-D Poisson problem (also sketched below). It might happen that you are solving a very special case, not a general sparse matrix.
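A minimal Matlab sketch of the first and third suggestions (assuming the matrix A and right-hand side f from the question are already in memory; the tolerance and iteration cap are illustrative choices, not recommendations):
% Vanilla diagonal (Jacobi) preconditioned CG:
N = size(A,1);
M = spdiags(diag(A), 0, N, N);        % diagonal preconditioner
[x,flag,relres,iter] = pcg(A, f, 1e-8, 1000, M);
fprintf('pcg: flag = %d, relres = %.2e, iterations = %d\n', flag, relres, iter);
% Bandwidth and sparsity pattern, before and after a cheap reordering:
p = symrcm(A);                        % reverse Cuthill-McKee
[lo1,up1] = bandwidth(A);
[lo2,up2] = bandwidth(A(p,p));
fprintf('bandwidth: %d -> %d after RCM\n', max(lo1,up1), max(lo2,up2));
spy(A(p,p));                          % visualize the reordered pattern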
### How to understand the "saying":
for large linear system: iterative methods are required because of memory problem of direct methods.
You may want to look at it critically, as you would at any general blanket statement.
While iterative methods have their advantages, certain problems still call for direct methods. Moreover, there are plenty of fast direct methods where the factorization itself is also accelerated (say, hierarchical matrices applied to FEM). Even unaccelerated sparse direct solvers have become much more capable in the last 15 years, I would say. So that saying might have been a dogma 20 years ago; now it is at least much weaker.
• Great answer. I also think a key to OP's numerical experiments is that he seems to be actually forming the matrix $A$ and computing the matrix-vector product directly. For a Poisson problem, matvecs can be done very quickly, and that is also ignoring other direct techniques like Fast Poisson Solvers that may work depending on his domain. – whpowell96 Sep 1 at 15:45
• Excellent answer, thanks all. @whpowell96, @Anton: the reason is the special sparsity pattern of the 2D Poisson matrix. I have tried 3D, and Matlab's A\b fails with out-of-memory, so an iterative method is necessary there. – Zhen-Wei Sun Sep 2 at 8:48
• From my personal experience, the Poisson problem is a very easy problem for both iterative and direct solvers. When moving on to harder problems, like structural problems, the comparison between direct and iterative solvers changes. – vydesaster Sep 4 at 17:32
• @vydesaster yeah, the Poisson matrix has a very special structure, so Matlab's A\b can solve it quickly in 2D, but it fails in 3D for very large system sizes. – Zhen-Wei Sun Sep 6 at 0:40
Thanks for all your attention. Below is the reply from a professor:
The MATLAB sparse solver is a very efficient way of solving linear systems associated with the two-dimensional Laplacian operator. One reason for this is that the CHOLMOD solver is very effectively multithreaded so it can use all available processors in the solution process. For example, my Apple laptop is an I9 six-core architecture and I can see that all six are fully used when I solve the problem you discuss below. In contrast the AMG grid setup is interpreted code and, as you have observed, is extremely slow in a MATLAB environment. It is, however, memory efficient.
I have tried the numerical experiments in 3D, using a finite-difference discretization of the Poisson equation (the 5-point stencil in 2D becomes a 7-point stencil in 3D):
$$\left\{\begin{array}{l}{-\Delta u=f}, \quad {(x, y,z) \in G=(-1,1)^3} , \\ {u=g,\quad (x, y,z) \in \partial G.}\end{array}\right.$$
When the system size becomes 1,000,000 x 1,000,000 (n = 100 grid points per dimension), the Matlab command A\b runs out of memory. The Matlab code is as follows (shown with n = 10):
%% Poisson matrices: 5-point stencil in 2D, 7-point stencil in 3D
clc; clear;
n = 10;                     % grid points per dimension; n = 100 gives the 10^6 x 10^6 3D system
e = ones(n,1);
B = [-1 2 -1].*e;           % 1D second-difference stencil
d = [-1 0 1];
Tn = spdiags(B,d,n,n);      % 1D Laplacian (sparse tridiagonal)
I = speye(n);
% 2D: 5-point Laplacian via Kronecker sums
Tn_I = kron(Tn,I);
I_Tn = kron(I,Tn);
A = Tn_I + I_Tn;
% 3D: 7-point Laplacian (overwrites the 2D matrix)
A = kron(Tn_I,I) + kron(I,Tn_I) + kron(I,I_Tn);
b = sum(A,2);               % right-hand side; the exact solution is the all-ones vector
tic;
A\b;
toc
https://deepai.org/publication/on-fair-and-efficient-allocations-of-indivisible-public-goods
# On Fair and Efficient Allocations of Indivisible Public Goods
We study fair allocation of indivisible public goods subject to cardinality (budget) constraints. In this model, we have n agents and m available public goods, and we want to select k ≤ m goods in a fair and efficient manner. We first establish fundamental connections between the models of private goods, public goods, and public decision making by presenting polynomial-time reductions for the popular solution concepts of maximum Nash welfare (MNW) and leximin. These mechanisms are known to provide remarkable fairness and efficiency guarantees in private goods and public decision making settings. We show that they retain these desirable properties even in the public goods case. We prove that MNW allocations provide fairness guarantees of Proportionality up to one good (Prop1), 1/n approximation to Round Robin Share (RRS), and the efficiency guarantee of Pareto Optimality (PO). Further, we show that the problems of finding MNW or leximin-optimal allocations are NP-hard, even in the case of constantly many agents, or binary valuations. This is in sharp contrast to the private goods setting that admits polynomial-time algorithms under binary valuations. We also design pseudo-polynomial time algorithms for computing an exact MNW or leximin-optimal allocation for the cases of (i) constantly many agents, and (ii) constantly many goods with additive valuations. We also present an O(n)-factor approximation algorithm for MNW which also satisfies RRS, Prop1, and 1/2-Prop.
## 1 Introduction
The problem of fair division was formally introduced by Steinhaus [32], and has since been extensively studied in economics and computer science [10, 28]. Recent work has focused on the problem of fair and efficient allocation of indivisible private goods; we label this setting the private goods model. Here, goods have to be partitioned among agents, and a good provides utility only to the agent who owns it. However, goods are not always private, and may provide utility to multiple agents simultaneously, e.g., books in a public library. The fair and efficient allocation of such indivisible public goods is an important problem.
In this paper we study the public goods setting, where a set of $n$ agents has to select a set of at most $k$ goods from a set of $m$ given goods. This simple cardinality constraint models several real-world scenarios. While previous work has largely focused on the case $k \le n$, e.g., for voting and committee selection [2, 13], there is much less work available for the case $k > n$. This setting is important in its own right. We present a few compelling examples.
###### Example 1.
A public library wants to buy books that adhere to the preferences of people who might use the library. Clearly, the number of books has to be much greater than the number of people using the library, hence $k \gg n$.
###### Example 2.
A family (or a group of friends) wants to decide on a list of movies to watch together for a few months. Here too, $k \gg n$. Another example of the same flavor is a committee tasked with inviting speakers to a year-long weekly seminar.
###### Example 3.
Another important example is that of diverse search results for a query. Given a query (say of "computer scientist images") on a database, we would like to output search results which reflect diversity in terms of specified features (like "gender, race and nationality"). Once again, $k \gg n$.
A related setting of public decision making [15] models the scenario in which agents are faced with $m$ issues with multiple alternatives per issue, and they must arrive at a decision on each issue. Conitzer et al. [15] showed that this model subsumes the private goods setting.
#### Connections between the models.
A central question motivating this work is:
###### Question 1.
Can we establish fundamental connections between the three models of private goods, public goods, and public decision making?
To answer this question, we first describe two well-studied solution concepts for allocating goods in the private goods and public decision making models, namely the maximum Nash welfare (MNW) and leximin mechanisms. These mechanisms have been shown to produce allocations that are fair and efficient in both of these models. The MNW mechanism returns an allocation that maximizes the geometric mean of agents' utilities, and the leximin mechanism returns an allocation that maximizes the minimum utility, and subject to this, maximizes the second minimum utility, and so on. We label the problems of computing the Nash-welfare-maximizing (resp. leximin-optimal) allocation in the three models as PrivateMNW, PublicMNW, and DecisionMNW (resp. PrivateLex, PublicLex, and DecisionLex).
We answer Question 1 positively by presenting novel polynomial-time reductions from the private goods model to the public goods model, and from the public goods model to public decision making, for the problem of computing a Nash welfare maximizing allocation.
PrivateMNW ≤ PublicMNW ≤ DecisionMNW (1)
More notably, these reductions also work for the MNW problem when restricted to binary valuations. Apart from establishing fundamental connections between these models, our reductions also determine the complexity of the MNW problem, as we detail below. We also develop similar reductions between the models for the leximin mechanism, showing:
PrivateLex ≤ PublicLex ≤ DecisionLex (2)
#### Fairness and efficiency considerations.
We next describe the fairness and efficiency properties that the MNW and leximin mechanisms have been shown to satisfy in the private goods and public decision making models.
The standard notion of economic efficiency is Pareto-optimality (PO). An allocation is said to be PO if no other allocation makes an agent better off without making anyone worse off. The classical fairness notion of proportionality requires that every agent gets her proportional value, i.e., a $\frac{1}{n}$-fraction of the maximum value she can obtain in any allocation. However, proportional allocations are not guaranteed to exist. [Footnote: Consider, for example, two agents $a$ and $b$ and six public goods $g_1,\dots,g_6$. Agent $a$ has value $1$ for each of $g_1,g_2,g_3$, and $b$ has value $1$ for each of $g_4,g_5,g_6$. All other valuations are $0$. Suppose we want to select three of these goods. The proportional share of both agents is $3/2$. However, in any allocation, the value of at least one agent is at most $1$, implying that proportional allocations need not exist.] Hence, we study the notion of Proportionality up to one good (Prop1) for public goods. We say an allocation is Prop1 if every agent who does not get her proportional value gets it after swapping some unselected good with a selected one. For private goods and public decision making, Prop1 is defined similarly - in the former, an agent is given an additional good [8, 26]; and in the latter, an agent is allowed to change the decision on a single issue [15]. While Prop1 is an individual fairness notion, it is still important for allocating public goods. For instance, in Example 1, we want allocations in which every agent has some books that cater to her taste, even if her taste differs from the rest of the agents. Likewise, in Example 2, a fair selection of movies must ensure that there are some movies every member can enjoy. We also consider the fairness notion of Round-Robin Share (RRS) [15], which demands that each agent receives at least the utility she would get if the agents were allowed to pick goods in a round-robin fashion, with the agent in question picking last.
In the private goods and public decision making models, an MNW allocation satisfies Prop1 in conjunction with PO [11, 15]. Similarly, in both these models the leximin-optimal allocation satisfies RRS and PO [15]. It is therefore natural to ask:
###### Question 2.
What guarantees of fairness and efficiency do the MNW and leximin mechanisms provide in the public goods model?
Answering this question, we show that an MNW allocation satisfies Prop1 and a $\frac{1}{n}$-approximation to RRS, and is PO. Further, a leximin-optimal allocation satisfies RRS, Prop1 and PO.
#### Complexity of computing MNW and leximin-optimal allocations.
Given the desirable fairness and efficiency properties of these mechanisms, we investigate the complexity of computing MNW and leximin-optimal allocations in the public goods model. It is known that PrivateMNW is NP-hard [25] (indeed, hard to approximate) and that DecisionMNW is NP-hard [15]. Likewise, PrivateLex is NP-hard too [9]. Therefore, we ask:
###### Question 3.
What is the complexity of PublicMNW and PublicLex?
Since PrivateMNW and PrivateLex are known to be NP-hard, our reductions (1) and (2) immediately show that PublicMNW and PublicLex are NP-hard. However, we show stronger results: PublicMNW and PublicLex remain NP-hard even when the valuations are binary. These results are in stark contrast to the private goods case, which admits polynomial-time algorithms under binary valuations [16, 20]. Further, our reductions from the public goods model to public decision making also directly enable us to show NP-hardness of DecisionMNW and DecisionLex. Moreover, a feature of our reductions (Observation 6) enables us to show that DecisionMNW is NP-hard even for binary valuations, highlighting the utility of our reductions. We also show that PublicMNW and PublicLex remain NP-hard even when there are only two agents. We note that for the case of two agents, the NP-hardness of PrivateMNW and PrivateLex does not imply NP-hardness of PublicMNW and PublicLex, because our reductions between the models do not preserve the number of agents. We summarize our results in Table 1.
In light of the above computational hardness, we turn to approximation algorithms, and to exact algorithms for special cases. We design a polynomial-time algorithm that returns an allocation which approximates the MNW to an $O(n)$ factor, and which is also Prop1 and satisfies RRS. Finally, we obtain pseudo-polynomial time algorithms for computing an exact MNW or leximin-optimal allocation for constantly many agents. These are essentially tight in light of the NP-hardness results for constantly many agents.
### 1.1 Other related work
Maximum Nash welfare. The problem of approximating maximum Nash welfare for private goods is well-studied; see e.g. [14, 6, 12, 21]. [18] showed that the MNW problem is NP-hard for allocating public goods subject to matroid or packing constraints. It has also been studied in the context of voting, or multi-winner elections [1]. Fluschnik et al. [19] studied the fair multi-agent knapsack problem, wherein each good has an associated cost and a set of goods is to be selected subject to a budget constraint. In this context, they studied the objective of maximizing the geometric mean of the agents' utilities. They showed that maximizing this objective is NP-hard, even for binary valuations or constantly many agents with equal budgets, and presented a pseudo-polynomial time algorithm for constantly many agents.
Leximin. Leximin was developed as a fairness notion in itself [30]. Plaut and Roughgarden [29] showed that for private goods, leximin can be used to construct allocations that are envy-free up to any good. Freeman et al. [20] showed that in the private goods model, the MNW and leximin-optimal allocations coincide when valuations are binary.
Core. The core is a strong property that enforces both PO and proportionality-like fairness guarantees for all subsets of agents. It is well-studied in many settings, including game theory and computer science [31, 24]. The core of an indivisible public goods instance might be empty. Fain et al. [18] proved that under matroid constraints, an additive approximation to the core exists. On an individual fairness level, the 1-additive core is weaker than Prop1 [18].
Participatory Budgeting. The participatory budgeting problem [3, 4] consists of a set of agents (or voters), a set of projects that require funds, a total available budget, and the preferences of the voters over the projects. The problem is to allocate the budget in a fair and efficient manner. Fain et al. [17] showed that a fractional core outcome is polynomial-time computable. This can be modeled as a public goods problem with the projects as goods.
Voting and Committee Selection. These settings involve selecting a set of members from a set of candidates based on the preferences of the agents; usually $k \le n$ here. The fairness notions studied are group notions like Justified Representation [2] and a core-like notion called stability [13].
## 2 Notation and Preliminaries
#### Problem setting.
For $t \in \mathbb{N}$, let $[t]$ denote $\{1,\dots,t\}$. An instance of the allocation problem is given by a tuple $(N, G, k, \{v_i\}_{i \in N})$ of a set $N$ of $n$ agents, a set $G$ of $m$ public goods, an integer $k \le m$, and a set of valuation functions, one per agent, where each $v_i : 2^G \to \mathbb{R}_{\ge 0}$. Unless specified, we assume that $k \ge n$. For a subset of goods $S \subseteq G$, $v_i(S)$ denotes the utility agent $i$ derives from the goods in $S$. Unless specified, we assume the valuations are additive. In this case, each $v_i$ is specified by $m$ non-negative integers $v_{ij}$, where $v_{ij}$ denotes the value of agent $i$ for good $j$. Then for $S \subseteq G$, $v_i(S) = \sum_{j \in S} v_{ij}$. We assume without loss of generality that for every agent $i$, there is at least one good $j$ with $v_{ij} > 0$. For brevity, we write $v_i(j)$ in place of $v_i(\{j\})$ for a single good $j$. An allocation $x$ is a subset of goods which satisfies the cardinality constraint $|x| \le k$.
#### Nash welfare.
The Nash welfare (NW) of an allocation $x$ is given by $\text{NW}(x) = \big(\prod_{i \in N} v_i(x)\big)^{1/n}$. An allocation with the maximum NW is called an MNW allocation or a Nash optimal allocation. [Footnote: If the NW is 0 for all allocations, MNW allocations are defined as those which give non-zero utility to a maximum number of agents, and then maximize the product of utilities for those agents. Note that if $k \ge n$, every agent positively values at least one selected good in some allocation, and thus MNW $> 0$.] We also refer to the product of the agents' utilities as the Nash product. An allocation $x$ approximates MNW to a factor of $\alpha \ge 1$ if $\text{NW}(x) \ge \frac{1}{\alpha}\cdot\text{NW}(x^*)$, where $x^*$ is an MNW allocation.
#### Leximin.
Given an allocation $x$, let $\vec{u}(x)$ denote the vector of agents' utilities under $x$, sorted in non-decreasing order. For two allocations $x$ and $y$, we say $x$ leximin-dominates $y$ if there exists an index $j$ such that $\vec{u}(x)_j > \vec{u}(y)_j$ and $\vec{u}(x)_t = \vec{u}(y)_t$ for all $t < j$. An allocation is leximin-optimal if no other allocation leximin-dominates it.
#### Fairness notions.
We now discuss fairness notions for the public goods setting. The proportional share of an agent $i$, denoted by $\text{Prop}_i$, is a $\frac{1}{n}$-share of the maximum value she can obtain from any allocation. Formally:
$$\text{Prop}_i = \frac{1}{n}\cdot \max_{x \subseteq G,\, |x| \le k} v_i(x).$$
The round-robin share of agent $i$, denoted by $\text{RRS}_i$, is the minimum value the agent can be guaranteed if the agents pick goods in a round-robin fashion, with agent $i$ picking last. Therefore, this value equals the maximum value of any $\lfloor k/n \rfloor$-sized subset of goods. Formally:
$$\text{RRS}_i = \max_{x \subseteq G,\, |x| \le \lfloor k/n \rfloor} v_i(x).$$
For $\alpha \in (0,1]$, an allocation $x$ is said to satisfy:
1. $\alpha$-Proportionality ($\alpha$-Prop) if for all $i \in N$, $v_i(x) \ge \alpha\cdot\text{Prop}_i$;
2. $\alpha$-Proportionality up to one good ($\alpha$-Prop1) if for all $i \in N$ with $v_i(x) < \alpha\cdot\text{Prop}_i$, there exist goods $g \in x$ and $g' \notin x$ such that $v_i((x \setminus \{g\}) \cup \{g'\}) \ge \alpha\cdot\text{Prop}_i$;
3. $\alpha$-RRS if for all agents $i$, $v_i(x) \ge \alpha\cdot\text{RRS}_i$.
Due to the cardinality constraint in the public goods model, the notion of Prop1 requires that for every agent there is a way to swap one preferred unpicked good with one picked good, after which the agent gets her proportional share. Since Prop1 in the private goods model requires only giving the agent an extra good, this makes the definition of Prop1 in the public goods model slightly more demanding than that in private goods.
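A tiny illustration of these shares (our own example, not from the paper): let $n = 2$, $m = 4$, $k = 2$, and suppose agent 1 values the four goods at $(3,1,1,1)$. Then
$$\text{Prop}_1 = \frac{1}{2}\,(3+1) = 2, \qquad \text{RRS}_1 = \max_{|x| \le \lfloor 2/2 \rfloor = 1} v_1(x) = 3,$$
so $\text{RRS}_1 > \text{Prop}_1$ here, while if $k < n$ the round-robin share would drop to $0$.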
#### Pareto-optimality.
An allocation $y$ is said to Pareto-dominate an allocation $x$ if for all agents $i$, $v_i(y) \ge v_i(x)$, with at least one of the inequalities being strict. We say $x$ is Pareto-optimal (PO) if there is no allocation that Pareto-dominates $x$.
#### Related models.
1. Private goods. The classic problem of private goods allocation concerns partitioning a set of goods among the set of agents. Thus, a feasible allocation $x = (x_1,\dots,x_n)$ is an $n$-partition of $G$, where agent $i$ is allotted $x_i$, and derives utility only from $x_i$.
2. Public decision making. In this model, a set of $n$ agents is required to make decisions on a set of $m$ issues. Each issue $j$ has a set of alternatives $A_j$. A feasible allocation or outcome $x = (x_1,\dots,x_m)$ comprises $m$ decisions, where $x_j \in A_j$ is the decision made on issue $j$. Assuming the valuations are additive, each agent $i$ has a value $v_i(j,c)$ for each alternative $c$ of issue $j$, and the valuation of agent $i$ for the outcome is then $v_i(x) = \sum_j v_i(j, x_j)$.
## 3 Relating the models
We first show rigorous mathematical connections between the private goods, public goods, and public decision making models w.r.t. computing optimal MNW and leximin allocations.
###### Theorem 4.
PublicMNW polynomial-time reduces to DecisionMNW.
###### Proof.
Let $I = (N, G, k, \{v_i\})$ be an instance of the public goods model. For $k = m$, the MNW problem is trivial, since we can select all the goods. For $k < m$, we can construct an instance $I'$ of public decision making from $I$ in polynomial time, such that given an MNW allocation of $I'$, we can compute an MNW allocation of $I$ in polynomial time. Let $V$ denote the largest value $v_{ij}$, and let $T$ be a sufficiently large integer, polynomial in the input size ($T = 2m^2 n \log(mV)$ suffices for the calculation below). We create $m$ public issues: corresponding to each good $j$, we create an issue $j$ with two alternatives $0$ and $1$. We create $n + mT$ agents. The first $n$ agents correspond to the agents in $I$. The last $mT$ agents are of two types: $kT$ agents of type 1, and $(m-k)T$ agents of type 2. The valuations are as follows: each agent $i \in [n]$ values alternative '1' of issue $j$ at $v_{ij}$, the agents of type 1 value only alternative '1', and the agents of type 2 value only alternative '0'. Formally, for an agent $i$ and an alternative $c$ of issue $j$:
$$v'_i(j,c) = \begin{cases} v_{ij}, & \text{if } c = 1 \text{ and } i \in [n];\\ 1, & \text{if } i \text{ is of type 1 and } c = 1;\\ 1, & \text{if } i \text{ is of type 2 and } c = 0;\\ 0, & \text{otherwise.} \end{cases}$$
Let $x'$ be an allocation for the instance $I'$. For $c \in \{0,1\}$, let $x'_c$ be the set of issues with decision $c$ in $x'$; that is, $x'_c = \{j : x'_j = c\}$. Let $k' = |x'_1|$. Then each type-1 agent gets utility $k'$ and each type-2 agent gets utility $m - k'$ under $x'$.
We now relate $x'$ to the instance $I$. The decision $x'_j = 1$ corresponds to selecting the public good $j$. Let $x$ be the corresponding set of public goods. Then for any $i \in [n]$ we have that $v_i(x) = v'_i(x')$, since $v_i(j) = v'_i(j,1)$ for every $j$. Thus:
$$\text{NW}(x') = \Big(\text{NW}(x)^n \cdot (k')^{kT} \cdot (m-k')^{(m-k)T}\Big)^{\frac{1}{n+mT}}. \tag{3}$$
We now have to prove that $x$ satisfies $|x| = k$. For $k' \in [m]$, let $W_{k'}$ denote the maximum Nash product of any allocation of exactly $k'$ goods for the instance $I$, and let $W = \max_{k'} W_{k'}$. Clearly, $W \le (mV)^n$. As $k \ge n$, $W_k \ge 1$, since we assume every agent has at least one good that she values positively. Define $g : [m] \to \mathbb{R}$ as $g(s) = s^k(m-s)^{m-k}$. Then if $x'$ is an MNW allocation for $I'$, (3) becomes:
$$\text{NW}(x') = \big(W_{k'} \cdot g(k')^T\big)^{1/(n+mT)}. \tag{4}$$
Let $G_1$ and $G_2$ denote the largest and second-largest values that $g$ attains over its domain. We observe that $g$ increases on $[0,k]$ and decreases on $[k,m]$. Hence $G_1 = g(k)$, implying:
$$G_1 = k^k(m-k)^{m-k}; \qquad G_2 = \max\big(g(k-1),\, g(k+1)\big).$$
We now show that:
###### Claim 1.
$G_1^T > W^m \cdot G_2^T$.
###### Proof.
Recall that $W_{k'}$ denotes the maximum Nash product of any allocation of exactly $k'$ goods for the instance $I$, for $k' \in [m]$. We have $W = \max_{k'} W_{k'} \le (mV)^n$, and we assume $k \ge n$. Recall that the function $g : [m] \to \mathbb{R}$ was defined as $g(s) = s^k(m-s)^{m-k}$.
Let $G_1$ and $G_2$ denote the largest and second-largest values that $g$ attains over its domain. We observe that $g$ increases on $[0,k]$, and decreases on $[k,m]$. Hence:
$$G_1 = g(k) = k^k(m-k)^{m-k}, \qquad G_2 = \max\big(g(k-1),\, g(k+1)\big).$$
Now observe that for $k \ge 2$:
$$\log g(k) - \log g(k-1) = k\big(\log k - \log(k-1)\big) + (m-k)\big(\log(m-k) - \log(m-k+1)\big) > k\cdot\frac{1}{k-\frac{1}{2}} + (m-k)\cdot\frac{-1}{m-k} = \frac{1}{2k-1} \ge \frac{1}{2m},$$
and for $k \le m-2$:
$$\log g(k) - \log g(k+1) = k\big(\log k - \log(k+1)\big) + (m-k)\big(\log(m-k) - \log(m-k-1)\big) > k\cdot\frac{-1}{k} + (m-k)\cdot\frac{1}{m-k-\frac{1}{2}} = \frac{1}{2(m-k)-1} \ge \frac{1}{2m},$$
using standard properties of logarithms. Thus:
$$\log G_1 - \log G_2 > \frac{1}{2m}.$$
Then, recalling the choice of $T$, we have
$$T\,(\log G_1 - \log G_2) > 2m^2 n \log(mV)\cdot\frac{1}{2m} = mn\log(mV) \ge \log(W^m),$$
which gives:
$$G_1^T > W^m \cdot G_2^T,$$
as required. Lastly, we consider the boundary cases $k = 1$ and $k = m-1$. In both cases, the excluded neighbor satisfies $g(0) = 0$ or $g(m) = 0$, which again gives $\log G_1 - \log G_2 > \frac{1}{2m}$, as claimed. ∎
Using Claim 1, we have for all $k' \ne k$:
$$W_k \cdot g(k)^T \ge G_1^T > W^m \cdot G_2^T \ge W_{k'} \cdot g(k')^T.$$
Hence, the quantity $W_{k'} \cdot g(k')^T$ is maximized when $k' = k$. Recalling (4), we conclude that for the MNW allocation $x'$ of $I'$, the corresponding set $x$ has cardinality exactly $k$. Further, $x$ also maximizes the NW among all allocations of the instance $I$ satisfying this cardinality constraint. Thus $x$ is in fact an MNW allocation for $I$. Finally, it is clear that this is a polynomial-time reduction. ∎
We next relate the MNW problem in the private goods model with the public goods model.
###### Theorem 5.
PrivateMNW polynomial-time reduces to PublicMNW.
###### Proof.
Let $I = (N, G, \{v_i\})$ be a private goods instance, using which we create a public goods instance $I'$ as follows. We create $n + 2m$ agents. The first $n$ agents correspond to the agents in $I$. The last $2m$ are dummy agents. We create $mn$ public goods: for each good $j$, we create a set of $n$ copies $j_\ell$, $\ell \in [n]$. We set $k = m$. The valuations, for a copy $j_\ell$ of good $j$, are:
$$v'_i(j_\ell) = \begin{cases} v_{ij}, & \text{if } i = \ell \text{ and } i \in [n];\\ 1, & \text{if } i \in \{n+2j-1,\, n+2j\};\\ 0, & \text{otherwise,} \end{cases}$$
i.e., each agent $i \in [n]$ values exactly one copy $j_i$ of each good $j$, at $v_{ij}$, and for each good $j$ there are exactly two dummy agents who value all copies of $j$.
We now state and use the following claim, proving it immediately after the present proof.
###### Claim 2.
Any MNW allocation of $I'$ does not select two goods from the same copy set $\{j_1,\dots,j_n\}$.
Consider any MNW allocation $x'$ of $I'$. We construct a partition $(x_1,\dots,x_n)$ of the goods of $I$ from it in the following way. For $i \in [n]$ and $j \in G$, define $x_{ij} = 1$ if $j_i \in x'$, and $0$ otherwise. Let $x_i = \{j : x_{ij} = 1\}$. Thus, the value that agent $i$ gets in $x$ is
$$v_i(x_i) = \sum_{j \in G} v_{ij}\, x_{ij}.$$
Thus, if $\text{NW}(x') > 0$, the partition corresponding to $x'$ as defined above gives an MNW solution for $I$. On the other hand, if $\text{NW}(x') = 0$, then $x'$ already gives non-zero value to all dummy agents by Claim 2. Thus, to maximize the total number of agents who get non-zero value, it maximizes the number of agents in $[n]$ who get non-zero value; call this set $P$. Thus the partition $x$ has the maximum number of agents getting non-zero value, and finally, it maximizes the Nash product over $P$. Claim 2 also implies that all dummy agents get non-zero value. Thus, even in this case the allocation $x$ corresponds to an MNW allocation in $I$. ∎
###### Proof of Claim 2.
Consider first the case where the maximum Nash welfare in $I'$ is positive. Suppose there is a $j$ for which two goods from the copy set of $j$ are selected. Since exactly $k = m$ goods are picked in $x'$, there is some $j'$ for which no copy $j'_\ell$ is picked, for any $\ell$. This implies that the dummy agents $n+2j'-1$ and $n+2j'$ get zero value in $x'$, making $\text{NW}(x') = 0$. However, choosing one good from each copy set gives non-zero value to all dummy agents. At the same time, since the maximum Nash welfare is positive, these goods can be chosen so that they give non-zero value to the agents in $[n]$ as well. This makes $\text{NW}(x') > 0$, contradicting the Nash optimality of $x'$.
Now suppose the Nash welfare of all allocations in $I$ is $0$. Consider any allocation $x'$, and suppose there is a $j$ for which two goods of its copy set are selected; then again for some $j'$, the dummy agents $n+2j'-1$ and $n+2j'$ get value 0, making $\text{NW}(x') = 0$. At the same time, even if $x'$ has goods from all different copy sets, since each selected copy gives value to only one agent in $[n]$ and every corresponding partition of $I$ has Nash welfare $0$, the Nash welfare is $0$ even in this case. Thus, all allocations have Nash welfare $0$ in $I'$ also, and the MNW allocation is the one that maximizes the number of agents who get non-zero value and then maximizes the product of values for these agents. Suppose the MNW allocation had two goods from the same copy set of some good $j$. Then there exists a $j'$ such that no good is selected from the copy set of $j'$. The two goods from the copy set of $j$ give value to exactly four agents: the two dummy agents of $j$, and the two agents in $[n]$ who receive their copy of good $j$. Instead, if we exchange one of these goods for a good from the copy set of $j'$, we give non-zero value to at least five agents: the four dummy agents involved and at least one of the agents in $[n]$. We did not change the value of any other agents in this process. Thus, we increase the number of agents who get non-zero value, contradicting the maximality of the MNW allocation. Thus, in both cases, all goods are picked from different copy sets. ∎
###### Observation 6.
A desirable feature of the above reductions for the MNW problem, from an instance $I$ to an instance $I'$, is that the reduction only creates instances which have 0 and 1 as the only potentially additional values as compared to $I$. We use this feature in establishing the computational complexity of computing an MNW allocation in the public decision making model with binary values; see Corollary 25.
We also show similar polynomial-time reductions between the three models for the problem of computing a leximin-optimal allocation.
###### Theorem 7.
PublicLex polynomial-time reduces to DecisionLex.
###### Proof.
Let $I = (N, G, k, \{v_i\})$ be an instance of the public goods model. For $k = m$, the leximin problem is trivial, since we can select all the goods. When $k < m$, we can construct an instance $I'$ of the public decision making model from $I$ in polynomial time, such that given a leximin allocation of $I'$, we can compute a leximin allocation of $I$ in polynomial time. To construct $I'$, we first create a set of $n+2$ agents. The first $n$ agents correspond to the agents in $I$. The last two agents are used in the construction, and ensure that exactly $k$ goods are selected in $I'$.
We next create $m$ public issues: for each good $j$, we create an issue $j$ with two alternatives $0$ and $1$.
The valuations are as follows: for an agent $i$, and an alternative $c$ of the issue $j$:
$$v'_i(j,c) = \begin{cases} v_{ij}, & \text{if } c = 1 \text{ and } i \in [n];\\ \alpha\cdot(m-k), & \text{if } i = n+1 \text{ and } c = 1;\\ \alpha\cdot k, & \text{if } i = n+2 \text{ and } c = 0;\\ 0, & \text{otherwise,} \end{cases}$$
where $\alpha$ is a sufficiently small positive constant. Essentially, each agent $i \in [n]$ values the '1' decision of issue $j$ at $v_{ij}$, agent $n+1$ values only the '1' decisions, and agent $n+2$ values only the '0' decisions.
Let $x'$ be a leximin allocation for the instance $I'$. Clearly $v'_i(x') > 0$ for all agents, since there is some allocation that gives positive utility to all agents, and the minimum utility only improves in the leximin solution. In particular, $v'_i(x') > 0$ for all $i \in [n]$. For $c \in \{0,1\}$, let $x'_c$ be the set of issues with decision $c$ in $x'$; that is, $x'_c = \{j : x'_j = c\}$. Let $k' = |x'_1|$. We note that $v'_{n+1}(x') = \alpha(m-k)\,k'$, and $v'_{n+2}(x') = \alpha k\,(m-k')$. Since $\alpha$ is sufficiently small, for each $i \in [n]$ we have $v'_i(x') > \max\big(v'_{n+1}(x'),\, v'_{n+2}(x')\big)$. Suppose $k' \ne k$. Then any allocation with exactly $k$ issues decided '1' gives both agents $n+1$ and $n+2$ the utility $\alpha k(m-k)$, which is a leximin improvement over $x'$, since $\min\big(\alpha(m-k)k',\, \alpha k(m-k')\big) < \alpha k(m-k)$ whenever $k' \ne k$. Hence $k' = k$.
We now explain how we can relate $x'$ to the public goods instance $I$. Intuitively, the decision $x'_j = 1$ corresponds to selecting the public good $j$, and $x'_j = 0$ corresponds to not selecting $j$. Let $x = x'_1$ be the corresponding set of public goods, of cardinality $k$. Then for any $i \in [n]$ we have that $v_i(x) = v'_i(x')$, since $v_i(j) = v'_i(j,1)$ for every $j$. Further, since $|x| = k$, $x$ is a feasible solution for $I$. Since $v_i(x) = v'_i(x')$ for all $i \in [n]$, $x$ is a leximin allocation for $I$.
Since the number of agents and goods created in the reduction is polynomial in the size of the instance $I$, and all other computations can also be carried out in polynomial time, this is a polynomial-time leximin-preserving reduction. ∎
###### Theorem 8.
PrivateLex polynomial-time reduces to PublicLex.
###### Proof.
The proof follows from essentially the same reduction used to show Theorem 5. ∎
## 4 Properties of MNW and Leximin
We prove that MNW and leximin-optimal allocations satisfy desirable fairness and efficiency properties in the public goods model as well. First, we show some interesting relations between our three fairness notions - Prop, Prop1 and RRS - in the model where $k \ge n$. [Footnote: Note that when $k < n$, $\text{RRS}_i$ is 0. Any agent, whatever value she gets, trivially satisfies RRS when $k < n$. On the other hand, the proportional value will be non-zero even when $k < n$ if the agent likes at least one good. Thus, there can be no multiplicative relation between RRS and Prop when $k < n$.] Our results are presented in the table below.
###### Lemma 9.
Any allocation that satisfies RRS also satisfies Prop1.
###### Proof.
Fix any agent $i$, and write $v$ for her valuation for brevity. Let $x$ be any allocation that satisfies RRS. Let $g_1,\dots,g_k$ denote the top $k$ goods for agent $i$, and let $h_1,\dots,h_k$ denote the goods of $x$. We assume that the goods both in $x$ and in the top-$k$ set are ordered in decreasing order of valuation according to agent $i$. Now, suppose that the top $\ell$ goods of $x$ match the top $\ell$ goods of agent $i$, i.e. $h_1 = g_1,\dots,h_\ell = g_\ell$ and $h_{\ell+1} \ne g_{\ell+1}$. Note that since $g_1,\dots,g_k$ are the top $k$ goods of agent $i$, we cannot have $v(h_t) > v(g_t)$ for any $t$. We want to prove that RRS implies Prop1. If $x$ already satisfies proportionality, the claim is obvious. If $\ell \ge k-1$, it is again easy to see that $x$ is Prop1. This is because, if $\ell = k$ then we already have the top $k$ goods, giving a proportional allocation. If $\ell = k-1$, then we can remove any good from $x$ and exchange it with $g_k$ to ensure proportionality, making the original allocation Prop1. Finally, if $n$ divides $k$ then we have proportionality implied by RRS, from Lemma 10.
Thus, we now assume that $\ell < k-1$, and that $x$ is not already a proportional allocation. We know that $v(h_1,\dots,h_\ell) = v(g_1,\dots,g_\ell)$ and $v(x) < \text{Prop}_i$. Thus,
$$v(h_{\ell+1},\dots,h_k) < \frac{1}{n}\, v(g_{\ell+1},\dots,g_k). \tag{5}$$
Now, $v(h_k) \le \frac{1}{k-\ell}\, v(h_{\ell+1},\dots,h_k)$. Thus,
$$v(h_k) \le \frac{1}{n\,(k-\ell)}\, v(g_{\ell+1},\dots$$
http://math.stackexchange.com/questions/94456/product-topology-of-discrete-sets
# Product Topology of Discrete Sets
If I am examining two sets $X$ and $Y$, each with the discrete topology, will $X\times Y$ have a discrete topology? My understanding is yes. I believe this because $X\times Y$ is the finite product of discrete spaces. Every point in $X$ is open and every point in $Y$ is open, and every point in $X\times Y$ is open. Thus $X\times Y$ has a discrete topology.
Is this understanding correct?
Yes. In fact, a product of $A$ discrete spaces, with the product topology, each with more than one point is discrete if and only if $A$ is finite. – David Mitra Dec 27 '11 at 16:00
You haven't really explained your argument. Why is it relevant that $X \times Y$ is the finite product of discrete spaces? Why is every point in $X \times Y$ open? – Chris Eagle Dec 27 '11 at 16:01
So, fundamentally, you are correct that to prove a topology on a set $Z$ is discrete, you only need to prove that singletons are open. Why is this enough? Because every subset of $Z$ is a union of singletons, and so if every singleton is open, then so is every subset of $Z$. But, as @ChrisEagle pointed out, you have to prove that singletons in $X\times Y$ are open - you can't just assert it. It depends on your definition of the product topology, however. – Thomas Andrews Dec 27 '11 at 16:25
Yes, this is correct. If you want a formal reasoning, it looks something like this.
Make a basis for the topologies of $X$ and $Y$ by letting each one-element (singleton) subset be a basis element. This would produce the discrete topology. Now every subset of $X\times Y$ containing a single point is a product of two basis elements, and thus a basis element itself, and hence we have discrete topology.
For clarity, I think you should distinguish between a point and a singleton set - that is, every singleton set of $X\times Y$ is a product of two basis elements. A point of $X\times Y$ is not a subset of $X\times Y$, so a point cannot be open, only a singleton set can be open. – Thomas Andrews Dec 27 '11 at 18:16
@ThomasAndrews This is very true, edited. – Arthur Dec 27 '11 at 23:42
Yes. Finite products of discrete spaces are again discrete. Both $X$ and $Y$ have as a base the singleton sets, and the product topology on $X\times Y$ will have as a base the products of singleton sets, meaning every singleton is open (and closed), and hence the topology on $X\times Y$ is discrete.
An infinite product of discrete spaces need not be discrete however.
A basis for the product topology on $X \times Y$ is obtained by taking bases $\left\{ U_\alpha\right\}$ and $\left\{ V_\beta \right\}$ of each space $X$ and $Y$, respectively: then, $\left\{ U_\alpha \times V_\beta\right\}$ is a basis of $X \times Y$. Since with the discrete topology you can take as a basis of each space all of the singletons of that space, a basis for $X\times Y$ consists of all singletons $\{(x,y)\}$ with $(x,y) \in X \times Y$. So $X \times Y$ has the discrete topology indeed.
Your only flaw is when you say "and every point in $X\times Y$ is open". This is true, but is what you need to justify. To do this, just say "take a point $(a,b)$ in the product, then $\{a\}$ is open in $X$ and $\{b\}$ is open in $Y$. By definition of the product topology, $\{a\}\times \{b\}=\{(a,b)\}$ is open in $X\times Y$."
Given that the projection functions $p_1:X\times Y\rightarrow X$ and $p_2:X\times Y\rightarrow Y$ are continuous, take $(x,y)\in X\times Y$. Clearly: $$\{(x,y)\} = p_1^{-1}(\{x\}) \cap p_2^{-1}(\{y\})$$
But since the topologies on $X$ and $Y$ are discrete, and $p_1$ and $p_2$ are continuous, this means that $p_1^{-1}(\{x\})$ and $p_2^{-1}(\{y\})$ are open in $X\times Y$.
So the singleton set $\{(x,y)\}$ is the intersection of two open sets, and hence is an open set.
Note, this makes clear why this works for finite products, but not infinite products - a singleton in an infinite product would be the infinite intersection of open sets, which is not guaranteed to be open.
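As a concrete illustration of the infinite case (a standard example, stated here as background): take $X_n=\{0,1\}$ with the discrete topology for each $n\in\mathbb{N}$ and give $X=\prod_{n\in\mathbb{N}}X_n$ the product topology. Every basic open set has the form $$U=\prod_{n\in\mathbb{N}}U_n,\qquad U_n=\{0,1\}\text{ for all but finitely many }n,$$ so any nonempty basic open set containing a point $x$ also contains points that differ from $x$ in a coordinate left unconstrained. Hence no singleton $\{x\}$ is open, and the product topology on $X$ is not discrete.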
https://www.scielosp.org/article/aiss/2013.v49n4/418-423/
La sanità tra ragione e passione
Giuseppe Traversa
Istituto Superiore di Sanità, Rome, Italy. giuseppe.traversa@iss.it
Roberto D'Amico, Marina Davoli, Luca De Fiore, Roberto Grilli and Paola Mosconi (Eds). Rome: Il Pensiero Scientifico Editore; 2013. 124 p.
ISBN 978-88-490-0466-3.
€ 22,00.
[Health service: between reason and passion]
Almost one year ago, in December 2012, a seminar was held in Bologna to commemorate Alessandro Liberati, who had died at the beginning of the year. Some of the most renowned experts in the area of evidence-based medicine, clinical research and public health, who had known Liberati for many years, convened to give a contribution. Now, one year later, a book that includes their interventions has been published: La sanità tra ragione e passione (Il Pensiero Scientifico Editore, Rome, 2013).
The book is inspired by the famous "Six memos for the next millennium", the last work by Italo Calvino, who was devising a series of lectures to be given at Harvard in 1985. He grouped his thoughts on literature under six headings (lightness, quickness, exactitude, visibility, multiplicity and consistency).
The participants of the meeting in Bologna were asked to make their contribution on clinical research and health care within each of the six headings. There were two risks. First, the "memos" might not apply to a different context; second, given the great number of participants, the global result could have been a lack of homogeneity. Indeed, both risks were avoided. The work done in preparing the meeting, together with the skillfulness of the contributors, makes this book a rare example of insights for researchers, clinicians and experts in health care organization.
It would be impossible to mention all 27 contributors. It is however possible to provide an idea of the views that are presented for further discussions. With regard to "lightness", Hazel Thornton reminds us of the risks of "heavy-handed regulations": "Policymakers concentrate on competencies, neglecting conduct; they consider contracts rather than cultivate covenants; they seek to strengthen State, rather than serve Society". Moreover, in making available to each person all interventions with a clearly positive benefit-risk profile, we should not forget that "More is not necessarily better; we must dare to think about doing less, adopting a lighter touch - advocate for less early detection; less weighty treatment".
Roberto Grilli discusses the risks of the positive attitude that surrounds "quickness" in the diffusion of medical technologies. There are many examples indicating that speed in the dissemination of a new technology is not necessarily associated with a positive outcome, and a more prudent approach might be wiser. Trevor Sheldon argues that "Of course there is often a trade-off between rapidity and quality, and trying to balance speed and reliability is very difficult; we do have to grasp this challenge, though". In this respect, we should recognize that we will almost never be in the position to wait until all the necessary evidence accumulates. Again, Sheldon adds that "We need to become more comfortable with decision making under uncertainty and find ways of helping decision makers make better use of uncertain evidence; anchoring or informing decisions with the evidence, rather than insisting it is completely based on the research evidence."
The fact that we cannot rely on "exactitude" as the only basis for clinical decisions does not lessen the commitment to reducing the areas of uncertainty. Iain Chalmers cites one of the Liberati's thoughts about the ethical requirement to publish all findings of already conducted studies and the insufficient obligation shown by researchers in trying to solve uncertainties that are relevant for patients. "Research results should be easily accessible to people who need to make decisions about their own health ... Why was I forced to make my decision knowing that information was somewhere but not available? ... How far can we tolerate the butterfly behaviour of researchers, moving on to the next flower well before the previous one has been fully exploited?".
There is a need for pursuing "visibility" in the doctor-patient relationship, especially in areas of uncertainty. As Michele Bellone argues, "Reassurance is of no help, since ignorant citizens would remain ignorant even after any reassurance". On the contrary, providing information implies keeping visible the level of uncertainty in the available evidence and helping people reach a decision. It is not only a quest for honesty in giving patients an accurate account of the efficacy and risks involved in any intervention. The fact is, as emphasized by Gianfranco Domenighetti, that when patients are correctly informed about the pros and cons involved in any decision, they are less prone to receiving useless interventions.
The issues raised in the area of visibility are also appreciated when dealing with the idea of "multiplicity". In a complex world, as is the case of health care and clinical research, it is preferable not to rely on simplistic explanations. In citing Carlo Emilio Gadda, Richard Smith reminds us that "Unexpected catastrophes are never the effect of a single cause. They are like a storm which is generated by a multitude of causes".
In the final chapter of the book, Rodolfo Saracci brilliantly deals with the last memo, "consistency". This is a word frequently encountered in epidemiology, with a meaning of both coherence and soundness. Indeed, there is an impressive lack of consistency between the areas of clinical research most frequently treated by researchers and patients' needs. In trying to change this situation, two different attitudes towards advocacy may be offered. There are experts who are deeply involved with an advocacy role in the same areas that are the object of their research activity. There is also what can be considered a more "objective" attitude. For instance, the view of the economist Von Mises may apply to the implications of clinical research: "Science never tells a man how he should act; it merely shows how a man must act if he wants to attain definite ends".
Everybody would likely agree that the role of research is to add relevant new evidence to the already available knowledge; in the health sector, this translates into the aim to provide more effective options in both prevention and clinical practice. Saracci goes further, and I believe that Liberati would have agreed, in suggesting that the final aim of epidemiology, in its application to research as well as to public health, consists of two related elements: the search for truth and the search for justice.
In the end, the book Health service: between reason and passion provides an overall reflection on the need to attain truth in research, in the best interest of patients. At the same time, the book also appeals to the heart of researchers and clinicians who share the idea that an effective health service is a value to be cultivated in everyday activity.
Edited by: Federica Napolitani Cheyne
Una via d'uscita. Per una critica della misura di sicurezza e della pericolosità sociale. L'esperienza dell'Ospedale Psichiatrico Giudiziario nello stato di Minas Gerais
Giorgio Bignami
formerly Istituto Superiore di Sanità, Rome, Italy. welin.bignami@mclink.net
Virgilio De Mattos. Merano/Meran: Edizioni alpha beta Verlag; 2012. 199 p. ISBN 978-88-7223-180-7. € 14,00.
[A way out. A critique on security measures and social dangerousness. The experience of the Psychiatric Unit in the State of Minas Gerais]
"...Si tibi ..compertum est Aelium Priscus in eo furore esse, ut continua mentis alienatione omni intellectu careat, ...diligentius custodiendus erit ac, si putabis, etiam vinculo coercendus, quod non tam ad poenam quam ad tutelam eius et securitatem proximorum pertinebit..." *
Marcus Aurelius, "Digestus", 1, 4, 18
The effective management of psychiatric patients who have violated the penal law (formerly the "criminally insane") is difficult for several reasons, including the survival of some obsolete norms in the penal codes of various countries. On one hand, these norms exempt the mentally incompetent defendant from the regular penalty (or reduce the penalty if the subject is "partially incompetent"). On the other hand, two millennia after Marcus Aurelius' Digestus, the norms prescribe the enforcement of security measures - generally the confinement in a psychiatric secure unit (in Italian Ospedale Psichiatrico Giudiziario, OPG) for a period of time proportional to the degree of "social dangerousness" (a concept with a rather shaky scientific basis) as assessed by psychiatric expertise. In the real world, this paves the way to a replication of the security measures, resulting in confinement for a much longer time (not infrequently for the rest of the subject's life) than that of the imprisonment of a "mentally competent" criminal. It is true that the laws of different countries often offer the possibility of alternative measures, e.g., the entrustment to a psychiatric service, to a community for the care of psychiatric patients, etc. Such an alternative, however, is seldom exploited for one or more of several reasons: the limitations in the services' resources; the reluctance of various parties (the family to start with) to take the responsibility for the management of a "socially dangerous" (or "potentially dangerous") person; the stigmatizing misconceptions of many people, who are scared to death by the firm belief that the "criminally insane" will reoffend (which of course can happen, but not more frequently, and perhaps less frequently - see later - than in the case of "sane" criminals after their release from jail); and so on and so forth.
At least in Italy, a remarkable exception concerns bosses of powerful and wealthy criminal organizations. These can pay some of the best lawyers and pressure and/or bribe selected forensic psychiatric consultants, who certify that the guy is mad and must be transferred from a jail to an OPG. Afterwards, additional "expertises" are aimed at obtaining the release from the OPG via an "alternative measure", e.g., the hospitalisation in a comfortable private clinic with only nominal surveillance, where bosses are practically free to resume the direction of their criminal organizations. And sooner or later the release is obtained upon certification of healing. Fiddling with such mechanisms can be dangerous for your health: e.g., in 1982, professor Aldo Semerari, director of the Institute of Forensic Psychopathology at the Rome University and a well-paid forensic psychiatric expert (e.g., he provided expertises in favour of several members of the notorious Roman "Gang of the Magliana"), was assassinated and beheaded in the surroundings of Naples. He had been found "guilty" of double-crossing; that is, taking money from two rival camorra gangs to provide expertises certifying the insanity of both the respective bosses.
The book by De Mattos belongs to the same "180" series of monographs mentioned in a recent review [Guarire si può. Persone e disturbo mentale. Ann Ist Super Sanità 2013;49(3):319-20; review by Giorgio Bignami, in English]; and this, thanks to the long-term relations between the author, a Brazilian socio-criminologist and political scientist, Italian psychiatrists of the Trieste group, and jurists and criminologists, particularly two of them to whom the book is dedicated: Alessandro Baratta († 2002) and Raffaele De Giorgi (University of Salento), both among the leaders respectively in the field of socio-criminological studies on deviance and the law, and in that of theories of social systems (De Giorgi also has a long experience of research and teaching in Latin America). In fact, increasingly significant exchanges between the Trieste group and Brazilian mental health workers were started by seminars and conferences held in Brazil by Franco Basaglia in 1978-9, i.e., shortly before his demise in August 1980 (see Franco Basaglia, Conferenze brasiliane, Roma, Raffaello Cortina, 2000). These relations were facilitated by the incorporation in the eighties of the Trieste Mental Health Service in the newly founded WHO Collaborating Center for Research and Training in Mental Health in Italy, whose first president was the late director of the Istituto Superiore di Sanità, Francesco Pocchiari, until his demise in 1989.
The reader must be warned that the original edition of this book was published in 2006, therefore one cannot expect to find updated information in the author's text. Additional information for the years 2005-2010 is given in the Introduction by the translator Ernesto Venturini, a former coworker of Basaglia, and also in many footnotes by the author, by Venturini, and by the editor of the Italian edition, Silvia D'Autilia. These additions provide a clear and well documented comparative analysis of the Italian and Brazilian histories and present situations in the mental health field, including OPGs. As concerns Italy, the inquiry of the Senate Public Health Commission led by professor Ignazio Marino, which documented unbelievable misdoings (part of the Commission's video can be viewed via http://www.youtube.com/watch?v=A535K-IjVjg), led to the approval of an ad hoc law (2012/9, modified by law 2013/57) which prescribes the closure of OPGs. The subsequent to-ing and fro-ing of national and regional ex lege provisions is not yet completed, therefore it is still impossible to foresee how many of the interested subjects will continue to be confined in regional mini-OPGs, or vice versa liberated thanks to alternative measures based on strong support by Mental Health Services, including personalised care, rehabilitation programmes and provisions aimed at solving a variety of problems, including housing and work. This problem is handled very clearly in De Mattos' book, since in Brazil it often happens that after a subject confined in an OPG is certified to be "not any more socially dangerous", he continues to be confined indefinitely in a different (generally smaller) structure with a different label (geriatric hospital, rehabilitation unit, therapeutical residence, etc.).
The first part of this work is a stimulating historical, cultural and political analysis of the progressive escalation of internment measures aimed at criminal, insane, criminally insane, socially dangerous, troublesome, unproductive and quite a few other types of subjects with undesirable profiles - all of them deprived of most or all of their citizens' rights (from this viewpoint the provisions in our 1948 Constitution and 1978/180 psychiatric law and in the Brazilian 1988 Constitution and 2001/10,216 psychiatric law are quite similar; in addition, the Brazilian Constitution forbids life sentences, of which life confinement in an OPG, or some substitute of it, is practically an equivalent). As concerns the destinies of mentally disturbed criminals, the judgment of the author on the role of psychiatry is even more drastic than that on the role of other apparently more responsible parties, including legislators and judges: e.g., "... since penal law groped its way concerning the concept of non-liability, whereas psychiatry, in its deliria of conquest, supported such a concept, it often happened that false arguments and false solutions were offered..." (p. 107); of this, quite a few examples are given, including those concerning recent neo-Lombrosian trends based on neuroscientific "evidence". The author's pessimism concerning not only the present but also the future is explicit: he wanted the book's title to be "Without a way out", but the Brazilian publisher imposed the more optimistic (and marketable) "A way out".
The second part of the book is devoted mainly to a critical analysis of ten representative cases, i.e., former OPG patients who were certified as not being any more "socially dangerous", but could not be liberated for the reasons outlined above (one of them, for example, came from a village where health services were totally absent, with the nearest psychiatrist 300 km away). A small special hospital was created for them, where conditions were practically indistinguishable from those of the OPG in which they had been previously confined. Pace the Constitution and the psychiatric law.
Some light at the distant end of this tunnel is provided by the description in the last part of the book (updated to 2009 in Venturini's introduction) of a special programme which started in the year 2000 in Belo Horizonte (about 2,500,000 inhabitants), the capital city of the State of Minas Gerais (about 20,000,000 inhabitants, i.e., about one tenth of the total population of Brazil); an initiative that gained official status after the coming into force of the aforementioned 2001 psychiatric law. This "Programme of Integral Attention" to the psychiatric patient guilty of a crime, launched by a Belo Horizonte court in collaboration with Mental Health Service workers and administrators, is aimed at terminating, whenever possible, the confinement of patients (or former patients) of OPGs, relying on a series of coordinated professional and other interventions. The results so far obtained are exceptionally good (see p. 31): in the first 10 years, 1058 cases examined; 755 patients enrolled in the programme; 489 already definitively acquitted; 266 still under the judge's surveillance, of which 210 already liberated and living with their families or in therapeutical residences; the remaining ones still under the judges' security measure and the Mental Health Service's intensive care; but with only 25 subjects out of 755 still in confinement. And last but not least, a recidivism rate of only 2%, a worldwide record both for this type of subjects and for "sane" criminals after their release from jail.
These results obtained with limited resources are a clear message for those Italian regional governments whose decrees arrange for the transfer of a high percentage of our OPG patients to regional mini-OPGs: i.e., secure units for 20 inmates each, but with the possibility of combining two or three units in the same building or compound, resulting in confinement structures substantially larger than a "cosy" therapeutic residence or community, and often quite far from the original residence of the inmate (e.g., the distance between the Northern part of the province of Viterbo and two of the new Latium mini-OPGs to be located in an old hospital in Subiaco is about 200 km, at least two and a half hours by car). Alas, "if we want things to stay as they are, things will have to change", as in the famous statement by the bold and shrewd Tancredi in Lampedusa's The Leopard.
* "...If your are certain that Aelius Priscus is in a state of furor, that because of a continuous mental alienation he lacks any understanding ... diliently he will have to be kept in custody, and if you think it appropriate also restrained by bonds, not so much as a penalty, but rather to protect im from himself and for the safety of his neighbors...".
Animal personalities. Behavior, physiology, and evolution
Claudio Carere, Dario Maestripieri (Eds). Chicago and London: The University of Chicago Press; 2013. 520 p. ISBN-13: 978-0-226-92197-6. ISBN-13: 978-0-226-92197-2. $110.00 (Cloth). $45.00 (Paper).
Personalities (called by some behavioural ecologists "behavioural syndromes") refer to consistent individual differences in inter-correlated suites of behaviour, independent of factors such as sex, age, or social status. Such differences, which make every individual unique in its overt phenotype, tend to be stable across time and contexts.
Personality differences are well-known features of humans - there are dedicated journals and societies of personality psychology and individual differences. No one would deny that each person has their own propensity to take risks, their own way of exploring or communicating, or of coping with a disease. Such variation between animals was long overlooked. In the last two decades, starting from pioneering studies on mice with aggressive and non-aggressive profiles (the so-called SAL, short attack latency, and LAL, long attack latency [1]) and great tits (a common songbird) with different exploratory behaviour (the so-called FAST and SLOW explorers [2]), it became progressively evident that such intra-specific variation is a fundamental aspect of the animal kingdom. Behavioural profiles involving "packages" of traits including boldness, aggressiveness and exploratory behaviour have been uncovered in hundreds of species, including mammals, birds, fish, and recently extending to invertebrates. Curiously, the first animal study in which the term personality was used was on octopuses [3].
Theoreticians developed models for a deeper understanding of why and how different personalities evolve and are maintained in natural populations. Field studies investigate the fitness consequences and the ecological factors related to personalities in natural populations. Proximate factors are analyzed using genetic and physiological methods. Unfortunately, despite being a topic at the frontiers of behavioural biology, animal personality research still has little connection to human personality research in psychology, economics and social, political and medical sciences. For example, medical science focuses on the relationship between personality and individually optimized treatment of disease ("personalized medicine"), while animal biologists focus on the evolution of personalities and their interaction with the environment. The methods for classifying personality differences are much more refined in the human sciences and their physiological substrates are much better known in humans than in any other animal species. Human personality differences may appear in a new light if similar differences are also found in non-human species, and open questions might be addressed in animal study systems.
In this timely volume, the first one synthesizing and integrating the research on animal personality, Claudio Carere (University of Tuscia, Viterbo, Italy) and Dario Maestripieri (University of Chicago, US), two recognized scholars of behavioural biology, provide a collection of essays diverse in biological approaches and levels of investigation as well as in species - from invertebrates to monkeys and apes, including humans. Thirty-five authors have been coordinated to assemble 15 chapters. Evolutionary biology provides the general framework for the study of animal personalities. Important contributions however, are made by ethology, ecology, genetics, endocrinology, neuroscience, and psychology. The editors made an effort to address both the how and why questions, and also to include descriptive, theoretical and experimental studies. The chapters illustrate how personalities vary along multiple dimensions; how they are influenced during ontogeny and in adulthood by genetic, physiological, and environmental factors; what is their functional significance, in terms of how they contribute to reproduction and survival; and what is their relevance for animal conservation in the wild and welfare in captivity and for human health. Three main questions permeate the content: why are there personality differences? How do genes and environment shape them? How do they evolve?
The volume is divided into four sections with a logical progression: Part 1 (Behavioral Characterization of Personalities across Animal Taxa) highlights the comparative/phylogenetic aspect. The paper by Mather and Logue does a good synthesis of the literature showing how subjects as diverse as squid, spiders and grasshoppers exhibit personalities, which is a remarkable accomplishment considering the number of invertebrate organisms (98% of animal species), often considered limited in behavioural and cognitive repertoires. Bell, Foster, and Wund examine personalities in stickleback fishes, focusing on the selective factors that can favour the evolution of fish personality with a comprehensive excursus on variation in one species. Van Oers and Naguib provide an overview of research on avian personality, including studies addressing behavioural variation, its underlying genetic bases, and its fitness consequences in natural populations. Weiss and Adams examine nonhuman primates and integrate the findings obtained with different approaches, such as ecology-based and life-history-based approaches, human personality assessment procedures, and multivariate and behaviour genetic approaches. In their chapter the comparative psychologists Gosling and Mehta convincingly argue that studying animal personalities may greatly inform personality studies in humans. These two chapters bridge different aspects of personality research in behavioural ecology and in psychology by discussing similarities and differences in research focus between the two, and argue that using multiple methods and approaches is most favourable for the field - a conclusion that has recently been drawn by several authors in the field.
Part II (Genetics, Ecology, and Evolution of Animal Personalities) includes theoretical and empirical studies, which address the relation between genetic variation, phenotypic plasticity, ecological factors, and the selective mechanisms favouring the evolution and maintenance of personalities in animals. Van Oers and Sinn address the quantitative and molecular genetics of animal personality, discussing the role of direct genetic effects, maternal effects, and gene-environment interactions in the evolution and manifestation of animal personalities and their heritability. The authors give an accurate and detailed tale of why the over-simplification given by one-to-one trait correlations is no longer tenable and provide some possible lines for taking explicitly into account the 'correlation structures' as main targets of the evolutionary game. Dingemanse and Reale focus on the role of natural selection and examine the reaction norm and character state view of animal personality with an inspiring review on principles of selection acting on personality and plasticity of traits. Sih's chapter focuses on the socio-ecology of animal personality, analysing how variation in animal personalities relates to predation, mating, and cooperation as well as how variation in social conditions (e.g., availability of different social partners) affects plasticity of personality, and constitutes a rich source of ideas that have received little attention so far. The chapter by Wolf, Van Doorn, Leimar, and Weissing provides a broad overview of the selective pressures favouring the evolution and the maintenance of animal personalities and how these pressures affect the structure (e.g., the type of phenotypic traits that cluster together) and the developmental stability of individual differences in personalities.
Part III (Development of Personalities and Their Underlying Mechanisms) addresses the ontogeny of personalities, how they arise as a result of parental influences, and how they are controlled and regulated by different neuroendocrine mechanisms. The chapters of this section integrate empirical research on developmental and physiological aspects conducted mainly in the laboratory. Curley and Branchi review studies of laboratory rodents illustrating the mechanisms through which stable individual differences emerge during development and addressing in particular the role of epigenetic mechanisms (also at molecular level) in personality development. Maestripieri and Groothuis explore maternal effects on offspring personality development and their underlying mechanisms in both oviparous vertebrates (fish, reptiles, and birds) and placental mammals (rodents and primates). In particular, they discuss how maternal behaviour, maternal stress, and prenatal exposure to maternal hormones can shape stable individual differences in offspring physiology and behaviour later in life. Caramaschi, Carere, Sgoifo, and Koolhaas review the relation between physiological and behavioural traits commonly considered in animal personality assessments, with particular regard to stress coping and the activity of the hypothalamic-pituitary-adrenal axis, the hypothalamic-pituitary-gonad axis, and the autonomic nervous system. They also discuss evidence linking the neurotransmitters serotonin and dopamine, as well as cortical brain structures such as the hippocampus, to variation in animal personality.
Part IV (Implications of Personality Research for Conservation Biology, Animal Welfare, and Human Health) moves to the applied side. The chapter by Smith and Blumstein represents a novel and useful contribution to the modernization of both conservation and behavioural biology, emphasizing how personality variation is an important component of biological diversity, which plays a significant role in the long-term persistence of animal populations. After documenting the main anthropogenic factors reducing the behavioural diversity of wild animals, they show that several conservation actions can also negatively affect the diversity of personalities, and they give concrete recommendations on how to manage the behavioural diversity of wild animals. The chapter of Huntingford, Mesquita, and Kadri deals with personality in fishes, particularly salmonids, and its implication for an appropriate culture strategy in terms of production and welfare. Finally, Cavigelli, Michael, and Ragan address the importance of research with rodent models of human personality in health and disease processes. Personality studies are increasingly affecting translational medicine. The authors review studies conducted with different strains of laboratory rodents to suggest which of them have behavioural and physiological traits that would permit certain personality types to be resilient or susceptible to specific disease processes. They also compare differential behavioural profiles associated with health trajectories in laboratory rodents to potentially analogous personality traits and associated health and disease trajectories in humans, suggesting associations between hostility and cardiac diseases, behavioural inhibition and allergy, sensation-seeking and drug abuse.
There is currently no other compilation of papers providing such a broad and updated overview of a subject at the forefront of science. Various research perspectives and approaches that grow in parallel under the framework of personality have been brought together, striving to develop new avenues of research. They include applied areas with an overall holistic approach to the subject, which makes the volume particularly valuable for a wide audience ranging from undergraduate students uncertain of their future choices to biologists of virtually all disciplines, medical researchers, veterinarians and psychologists.
References
1. Koolhaas JM, Korte SM, De Boer SF, Van der Vegt BJ, Van Reenen CG, Hopster H, De Jong IC, Ruis MAW, Blokhuis HJ. Coping styles in animals: current status in behavior and stress physiology. Neurosci Biobehav Rev 1999;23:925-35. DOI: 10.1016/S0149-7634(99)00026-3
2. Drent PJ, van Oers K, van Noordwijk AJ. Realised heritability of personalities in the great tit (Parus major). Proc Roy Soc B 2003;270:45-51. DOI: 10.1098/rspb.2002.2168
3. Mather JA, Anderson RC. "Personalities" of octopuses (Octopus rubescens). J Comp Psychol 1993;107:336-40. DOI: 10.1037//0735-7036.107.3.336
Enrico Alleva
Istituto Superiore di Sanita, Rome, Italy
alleva@iss.it
Coriandoli nel deserto
Federica Napolitani
Istituto Superiore di Sanità, Rome, Italy federica.napolitani@iss.it
Alessandra Arachi Milano: Feltrinelli Editore; 2012. 137 p.
ISBN 978-88-07-01897-8. € 10,00.
[Strips of paper in the desert]
The story of Enrico Persico, the Italian physicist who was a close friend and collaborator of Nobel laureate Enrico Fermi, emerges from the pages of Coriandoli nel deserto, the recent novel written by Alessandra Arachi, writer and journalist at the Corriere della Sera.
From his hospital bed, Enrico Persico retraces the steps of his existence. It is June 1969 and at the age of sixty-eight, he is afflicted by an unknown disease, perhaps a disease of the soul. These will be his last days. His memory slips in the recollection of his past life, an existence lived in the shadow of Enrico Fermi.
"Thirty-four years have passed since that morning (...). No one knows the true story of the origin of the atomic energy. It's a secret between him and me". It is a secret that Persico intends to reveal to the love of a lifetime: the Italian physicist Nella Mortara, the only woman from the Institute of Physics working with the group of young scientists led by Enrico Fermi, "The Via Panisperna boys", named after the street where the Institute was located.
"Dear Nella, your tears ... in our Laboratory of Physics are engraved with fire upon my heart. The same fire of those bombs .... I'll make you understand, I'll tell you what really happened that famous morning of Monday, October 22 1934 in the Laboratory". This is the element of mystery and suspense that accompanies the reader to the conclusion of the story. Persico had tried many times to write a letter to Nella, he wanted to tell her everything about his relationship with Fermi. He had been his high school mate and his best friend after the death of his brother Giulio. But then, he became the genius of modern physics, the Nobel laureate with whom it was impossible to compete, the person who "had cut him out of history. Of life. Of happiness. ... To obey him was my way of living. The only way I knew". Fermi had torn his hopes and his ambitions, throwing them to the wind like those strips of paper he used in Alamagordo on July 16 1945 to estimate the nuclear bomb's yield. "My executioner", says Persico, who on that famous morning of 1934 broke in the linen closet in the Institute of Physics interrupting his kiss, his dream, the hope of a life, snatching his future away from him.
Only about ten lines are dedicated in Wikipedia to Enrico Persico, who participated in the research on slowing down neutrons. Strips of paper in the desert tries to fill this gap, crossing the boundaries between fiction and science, between past and present. The story of the broken love between Enrico Persico and Nella Mortara - the only two scientists from the Group who never married - is seen within the context of the discovery of atomic energy, the finding of the importance of slow neutrons. This was the secret revealed to Nella in the last words of his imaginary letter.
"Doctor do you know the importance of slowness?" Asks Professor Persico to the Head of the Department of Infectious and Tropical Diseases at the Policlinics in Rome, while on a stretcher, being wheeled, he was accompanied to undertake his last tests.
Alessandra Arachi has the merit of having infused poetry into science, in this short and enjoyable book. "Nella beloved, when I leave this hospital, I'll take you for a walk to the Campo de' Fiori. I want also Giordano Bruno to listen, while I tell you the secret of atomic energy."
https://mathspace.co/textbooks/syllabuses/Syllabus-813/topics/Topic-18222/subtopics/Subtopic-248791/?textbookIntroActiveTab=overview
# 6.02 Substitution into expressions
Lesson
We have seen we can form expressions using numbers, mathematical operations and variables. If an expression contains a variable and we replace the variable with a particular value or expression, this is called substitution. For example, if we had $4$ full boxes of matches and $12$ additional loose matches, then the expression $4m+12$ would give us the total number of matches, where $m$ was the number of matches in a full box. If we were then told the additional information that there are $50$ matches in a full box, we could evaluate the expression to find the total number of matches by making the substitution $m=50$ in the expression:
$4m+12 = 4\times50+12 = 200+12 = 212$
#### Worked example
##### Example 1
If $x=3$, evaluate the expression $6x-4$.
Think: This means that everywhere the letter $x$x has been written, we will replace it with the number $3$3.
Do:
$6x-4 = 6\times3-4 = 18-4 = 14$
##### Example 2
If $x=6$ and $y=0.5$, evaluate the expression $6x-2y-12$.
Think: The same process applies even if there is more than one unknown value: we will replace the letter $x$ with the number $6$, and the letter $y$ with the number $0.5$. We also need to keep the order of operations in mind when we do these kinds of calculations!
Do:
$6x-2y-12 = 6\times6-2\times0.5-12$ (replacing $x$ with $6$, and $y$ with $0.5$)
$= 36-1-12$ (evaluating multiplication before subtraction)
$= 23$
##### Example 3
If $a=3$ and $b=-4$, evaluate the expression $a\left(10-2b\right)$.
Think: Just like before, we will replace the letter $a$ with the number $3$, and the letter $b$ with the number $-4$. To avoid confusion with the operations in the expression we will place the negative number within brackets.
Do:
$a\left(10-2b\right) = 3\left(10-2\times\left(-4\right)\right)$ (replace $a$ with $3$, and $b$ with $\left(-4\right)$)
$= 3\left(10+8\right)$ (simplify the terms within the bracket)
$= 3\left(18\right)$ (evaluate the bracket before multiplication)
$= 54$
Remember!
When making a substitution and evaluating an expression, be careful to follow the order of operations, just as we did in our first chapter.
When substituting a negative value, place brackets around the value so the sign is not confused with operations in the expression.
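Substitution is also exactly what happens in a programming language when we assign values to variables, so we can check the worked examples above with a few lines of Python (a quick illustrative sketch, using only the numbers from the examples):

```python
# Example 1: substitute x = 3 into 6x - 4
x = 3
print(6 * x - 4)            # 14

# Example 2: substitute x = 6 and y = 0.5 into 6x - 2y - 12
x, y = 6, 0.5
print(6 * x - 2 * y - 12)   # 23.0

# Example 3: substitute a = 3 and b = -4 into a(10 - 2b)
a, b = 3, -4
print(a * (10 - 2 * b))     # 54
```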
#### Practice questions
##### question 1
Evaluate $8x+4$ when $x=2$.
##### Question 2
If $m=-3$ and $n=4$, evaluate the following:
1. $mn-\left(m-n\right)$
2. $m^2+9n$
##### Question 3
Evaluate $\frac{2a\times9}{5b}$ when $a=25$ and $b=-2$.
1. Find the exact value in simplest form.
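For Question 3, Python's fractions module can confirm the exact value (again just an illustrative check, not part of the lesson's method):

```python
from fractions import Fraction

# substitute a = 25 and b = -2 into (2a * 9) / (5b), keeping the arithmetic exact
a, b = 25, -2
print(Fraction(2 * a * 9, 5 * b))  # -45
```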
### Outcomes
#### ACMEM035
substitute numerical values into algebraic expressions; for example, substitute different values of x to evaluate the expressions 3x/5, 5(2x-4)
https://www.aimsciences.org/article/doi/10.3934/cpaa.2016.15.261
January 2016, 15(1): 261-286. doi: 10.3934/cpaa.2016.15.261
## Blow-up scaling and global behaviour of solutions of the bi-Laplace equation via pencil operators
1 Universidad Carlos III de Madrid, Av. Universidad 30, 28911 Leganés, Spain; 2 Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, UK
Received March 2015 Revised July 2015 Published December 2015
As the main problem, the bi-Laplace equation \begin{eqnarray} \Delta^2 u=0 \quad (\Delta=D_x^2+D_y^2) \end{eqnarray} in a bounded domain $\Omega \subset R^2$, with inhomogeneous Dirichlet or Navier-type conditions on the smooth boundary $\partial \Omega$, is considered. In addition, there is a finite collection of curves \begin{eqnarray} \Gamma = \Gamma_1\cup...\cup\Gamma_m \subset \Omega, \end{eqnarray} on which we assume homogeneous Dirichlet conditions $u=0$, focusing at the origin $0 \in \Omega$ (the analysis would be similar for any other point). This makes the above elliptic problem overdetermined. Possible types of the behaviour of the solution $u(x,y)$ at the tip $0$ of such admissible multiple cracks, being a singularity point, are described, on the basis of blow-up scaling techniques and spectral theory of pencils of non-self-adjoint operators. Typical types of admissible cracks are shown to be governed by nodal sets of a countable family of harmonic polynomials, which are now represented as pencil eigenfunctions, instead of their classical representation via a standard Sturm--Liouville problem. Eventually, for a fixed admissible crack formation at the origin, this allows us to describe all boundary data which can generate such a blow-up crack structure. In particular, it is shown how the co-dimension of this data set increases with the number of asymptotically straight-line cracks focusing at 0.
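For orientation, the two kinds of boundary data mentioned above are commonly written as follows (these are the standard textbook forms, stated here only as background, not notation taken from the paper): \begin{eqnarray} \mbox{Dirichlet-type:} \quad u=f, \ \ \frac{\partial u}{\partial n}=g \quad \mbox{on } \partial \Omega; \qquad \mbox{Navier-type:} \quad u=f, \ \ \Delta u=g \quad \mbox{on } \partial \Omega. \end{eqnarray}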
Citation: Pablo Álvarez-Caudevilla, V. A. Galaktionov. Blow-up scaling and global behaviour of solutions of the bi-Laplace equation via pencil operators. Communications on Pure & Applied Analysis, 2016, 15 (1) : 261-286. doi: 10.3934/cpaa.2016.15.261
http://orbitize.info/en/latest/tutorials/MCMC_tutorial.html
|
# MCMC Introduction¶
by Jason Wang and Henry Ngo (2018)
Here, we will explain how to sample an orbit posterior using MCMC techniques. MCMC samplers take some time to fully converge on the complex posterior, but should be able to explore all posteriors in roughly the same amount of time (unlike OFTI). We will use the parallel-tempered version of the affine-invariant sampler from the ptemcee package, as the parallel tempering helps the walkers get out of local minima. Parallel-tempering can be disabled by setting the number of temperatures to 1, which reverts to the regular ensemble sampler from emcee.
## Read in Data and Set up Sampler¶
We use orbitize.driver.Driver to streamline the processes of reading in data, setting up the two-body interaction, and setting up the MCMC sampler.
When setting up the sampler, we need to decide how many temperatures and how many walkers per temperature to use. Increasing the number of temperatures further ensures your walkers will explore all of parameter space and will not get stuck in local minima. Increasing the number of walkers gives you more samples to use, and, for the affine-invariant sampler, a minimum number is required for good convergence. Of course, the tradeoff is that more walkers and temperatures mean more computation time. We find 20 temperatures and 1000 walkers to be reliable for convergence. Since this is a tutorial meant to run quickly, we use fewer walkers and temperatures here.
Note that we will only use the samples from the lowest-temperature walkers. We also assume that our astrometric measurements follow a Gaussian distribution.
orbitize can also fit for the total mass of the system and system parallax, including marginalizing over the uncertainties in those parameters.
### Python2 WARNING¶
When using Python2, there is an integer division bug in the ptemcee package. This results in no temperature adaptation. See Issue #128. Do not use parallel tempering (num_temps > 1) with Python2 until this is fixed.
[1]:
import numpy as np
import orbitize
from orbitize import driver
import multiprocessing as mp
# system parameters
num_secondary_bodies = 1
system_mass = 1.75 # [Msol]
plx = 51.44 # [mas]
mass_err = 0.05 # [Msol]
plx_err = 0.12 # [mas]
# MCMC parameters
num_temps = 5
num_walkers = 20
num_threads = mp.cpu_count() # or a different number if you prefer
filename = "{}/GJ504.csv".format(orbitize.DATADIR)  # sample data shipped with orbitize (path assumed from the official tutorial)
my_driver = driver.Driver(
filename, 'MCMC', num_secondary_bodies, system_mass, plx, mass_err=mass_err, plx_err=plx_err,
mcmc_kwargs={'num_temps': num_temps, 'num_walkers': num_walkers, 'num_threads': num_threads}  # pass the MCMC settings defined above
)
WARNING: KEPLER: Unable to import C-based Kepler's equation solver. Falling back to the slower NumPy implementation.
## Running the MCMC Sampler¶
We need to pick how many steps the MCMC sampler should take. Additionally, because successive samples are correlated, we often only save every nth sample. This thinning helps on long runs: saving every sample requires a lot of disk space, and correlated samples add little extra information.
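As a quick check on the bookkeeping (a small sketch added here; the values mirror the cell below):

total_orbits = 6000  # steps x walkers at the lowest temperature
num_walkers = 20
thin = 2
steps_per_walker = total_orbits // num_walkers  # 300 steps per walker
saved_per_walker = steps_per_walker // thin     # 150 saved samples per walker
print(saved_per_walker * num_walkers)           # 3000 saved posterior samples in total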
[2]:
total_orbits = 6000 # number of steps x number of walkers (at lowest temperature)
burn_steps = 10 # steps to burn in per walker
thin = 2 # only save every 2nd step
my_driver.sampler.run_sampler(total_orbits, burn_steps=burn_steps, thin=thin)
/Users/bluez3303/miniconda3/envs/python3.6/lib/python3.6/site-packages/orbitize/priors.py:163: RuntimeWarning: invalid value encountered in log
lnprob = -np.log((element_array*normalizer))
/Users/bluez3303/miniconda3/envs/python3.6/lib/python3.6/site-packages/orbitize/priors.py:269: RuntimeWarning: invalid value encountered in log
lnprob = np.log(np.sin(element_array)/normalization)
(the two warnings above are emitted several times during sampling; the repeats are omitted here)
Burn in complete
300/300 steps completed
Run complete
[2]:
<ptemcee.sampler.Sampler at 0x11df48a90>
After completing the samples, the run_sampler method also creates a Results object that can be accessed with my_driver.sampler.results.
## MCMC Diagnostics¶
The Sampler object also has two convenience functions to examine and modify the walkers in order to diagnose MCMC performance.
First, we can examine 5 randomly selected walkers for two parameters: semimajor axis and eccentricity. We requested 6,000 total orbits with 20 walkers, giving 300 steps per walker; since we thinned by a factor of 2, 150 steps per walker were saved.
[3]:
sma_chains, ecc_chains = my_driver.sampler.examine_chains(param_list=['sma1','ecc1'], n_walkers=5)
This method returns one matplotlib Figure object for each parameter. If no param_list is given, all parameters are plotted. Here, we told it to plot 5 randomly selected walkers, but we could have specified exactly which walkers with the walker_list keyword. The step_range keyword determines which steps in the chain are plotted (when nothing is given, the default is to plot all steps). These plots can also be generated automatically if we call run_sampler with examine_chains=True.
Note that this is just a convenience function. It is possible to recreate these chains from reshaping the posterior samples and selecting the correct entries.
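For instance, a minimal sketch of that reshaping (my addition; it assumes the posterior is flattened walker-major, so check your orbitize version's chain layout before relying on it):

import numpy as np

post = my_driver.sampler.results.post  # shape: (n_walkers * n_steps, n_params)
n_walkers = 20
n_steps = post.shape[0] // n_walkers
chains = post.reshape(n_walkers, n_steps, -1)  # one row of steps per walker
sma_walker0 = chains[0, :, 0]  # semimajor axis chain of walker 0 (sma1 is parameter 0 in the standard basis)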
The second diagnostic tool is chop_chains, which allows us to remove entries from the beginning and/or end of a chain. This updates the corresponding Results object stored in the sampler (in this case, my_driver.sampler.results). The burn parameter specifies the number of steps to remove from the beginning (i.e. to add a burn-in to your chain) and the trim parameter specifies the number of steps to remove from the end. If only one parameter is given, it is assumed to be a burn value. If trim is not zero, the sampler object is also updated so that the current position (sampler.curr_pos) matches the new end point. This allows us to continue MCMC runs at the correct position, even if we have removed the last few steps of the chain.
Let’s remove the first and last 25 steps, leaving 100 orbits (or steps) per walker
[4]:
my_driver.sampler.chop_chains(burn=25,trim=25)
Chains successfully chopped. Results object updated.
Now we can examine the chains again to verify what we did. Note that the number of steps removed from either end of the chain is uniform across all walkers.
[5]:
sma_chains, ecc_chains = my_driver.sampler.examine_chains(param_list=['sma1','ecc1'], n_walkers=5)
## Plotting Basics¶
We will make some basic plots to visualize the samples in my_driver.sampler.results. orbitize currently has two basic plotting functions, which return matplotlib Figure objects. First, we can make a corner plot (also known as a triangle plot, scatterplot matrix, or pairs plot) to visualize correlations between pairs of orbit parameters:
[6]:
corner_plot_fig = my_driver.sampler.results.plot_corner() # Creates a corner plot and returns Figure object
corner_plot_fig.savefig('my_corner_plot.png') # This is matplotlib.figure.Figure.savefig()
Next, we can plot a visualization of a selection of orbits sampled by our sampler. By default, the first epoch plotted is the year 2000 and 100 sampled orbits are displayed.
[7]:
epochs = my_driver.system.data_table['epoch']
orbit_plot_fig = my_driver.sampler.results.plot_orbits(
object_to_plot = 1, # Plot orbits for the first (and only, in this case) companion
num_orbits_to_plot= 100, # Will plot 100 randomly selected orbits of this companion
start_mjd=epochs[0] # Minimum MJD for colorbar (here we choose first data epoch)
)
orbit_plot_fig.savefig('my_orbit_plot.png') # This is matplotlib.figure.Figure.savefig()
For more advanced plotting options and suggestions on what to do with the returned matplotlib Figure objects, see the dedicated Plotting tutorial.
We will save the results in the HDF5 format. It will save two datasets: 'post' which will contain the posterior (the chains of the lowest temperature walkers) and 'lnlike' which has the corresponding probabilities. In addition, it saves 'sampler_name' as an attribute of the HDF5 root group.
[8]:
hdf5_filename='my_posterior.hdf5'
import os
# To avoid weird behaviours, delete saved file if it already exists from a previous run of this notebook
if os.path.isfile(hdf5_filename):
os.remove(hdf5_filename)
my_driver.sampler.results.save_results(hdf5_filename)
Saving sampler results is a good idea when we want to analyze the results in a different script or when we want to save the output of a long MCMC run to avoid having to re-run it in the future. We can then load the saved results into a new blank results object.
[9]:
from orbitize import results
loaded_results = results.Results() # Create blank results object for loading
loaded_results.load_results(hdf5_filename) # Load the saved posterior into it
Instead of loading results into an orbitize.results.Results object, we can also directly access the saved data using the 'h5py' python module.
[10]:
import h5py
filename = 'my_posterior.hdf5'
hf = h5py.File(filename,'r') # Opens file for reading
# Load up each dataset from hdf5 file
sampler_name = str(hf.attrs['sampler_name'])  # builtin str; np.str was removed in newer NumPy versions
post = np.array(hf.get('post'))
lnlike = np.array(hf.get('lnlike'))
hf.close() # Don't forget to close the file
https://proofwiki.org/wiki/Definition:Segment_of_Circle
Definition:Segment of Circle
Definition
In the words of Euclid:
A segment of a circle is the figure contained by a straight line and a circumference of a circle.
Base
The base of a segment of a circle is the straight line forming one of the boundaries of the segment.
In the above diagram, $AB$ is the base of the highlighted segment.
Angle of a Segment
In the words of Euclid:
An angle of a segment is that contained by a straight line and a circumference of a circle.
That is, it is the angle the base makes with the circumference where they meet.
It can also be defined as the angle between the base and the tangent to the circle at the end of the base:
Angle in a Segment
In the words of Euclid:
An angle in a segment is the angle which, when a point is taken on the circumference of the segment and straight lines are joined from it to the extremities of the straight line which is the base of the segment, is contained by the straight lines so joined.
And, when the straight lines containing the angle cut off a circumference, the angle is said to stand upon that circumference.
Such a segment is said to admit the angle specified.
Similar Segments
In the words of Euclid:
Similar segments of circles are those which admit equal angles, or in which the angles are equal to one another.
https://revise.im/chemistry/abg/group-2-group-7
# Group 2 and Group 7
The elements in group 2 all have alkaline hydroxides, which is why the group is commonly known as the alkaline earth metals. The group 2 elements have the following properties:
• High melting and boiling points.
• A giant metallic structure with strong forces between positive and negative ions.
• They are light metals with low densities which form colourless compounds.
• They all have their outermost electrons in the $s$ sub-shell, with two electrons more than the electron configuration of a noble gas. Due to this configuration, alkaline earth metals are strong reducing agents.
The general oxidation half-equation for a group 2 metal is:
$$M \rightarrow M^{2+} + 2e^{-}$$
Reactivity increases down the group due to the increasing ease of losing electrons.
### Reaction with oxygen
The group 2 elements react vigorously with oxygen in a redox reaction, forming an oxide with the general formula $MO$ where $M$ is the group 2 element. An example reaction is shown below:
$$2Ca_{(s)} + O_{2(g)} \rightarrow 2CaO_{(s)}$$
In this reaction, the $Ca$ is oxidised from 0 to +2. The $O$ has been reduced from 0 to -2.
### Reaction with water
Group 2 elements react with water to form hydroxides with the general formula $M(OH)_{2}$, along with hydrogen gas. For example:
$$Ca_{(s)} + 2H_{2}O_{(l)} \rightarrow Ca(OH)_{2(aq)} + H_{2(g)}$$
This is also a redox reaction with $Ca$ being oxidised from 0 to +2 and one hydrogen being reduced from +1 to 0. The further down the group, the more vigorously they react.
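To make the electron transfer explicit, the overall change can be split into oxidation and reduction half-equations (a standard decomposition added here for clarity; it is not part of the original notes):
$$Ca \rightarrow Ca^{2+} + 2e^{-}$$
$$2H_{2}O + 2e^{-} \rightarrow 2OH^{-} + H_{2}$$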
### Reaction with oxides and hydroxides
Group 2 oxides and hydroxides, formed with the reaction with oxygen and water are bases. These can neutralise acids to form a salt and water. For example:
$$MgO_{(s)} + 2HCl_{(aq)} \rightarrow MgCl_{2(s)} + H_{2}O_{(l)}$$
This is not a redox reaction however as the oxidation numbers remain unchanged.
### Reaction of group 2 oxides with water
Group 2 oxides react with water to form a solution of the metal hydroxide. These hydroxide solutions have a typical pH of 10-12. One such reaction is:
$$MgO_{(s)} + H_{2}O_{(l)} \rightarrow Mg(OH)_{2(aq)}$$
### Group 2 hydroxides
Group 2 hydroxides dissolve in water to form alkaline solutions. The solubility of the hydroxides increases down the group. The greater the solubility of the hydroxide, the more $OH^{-}$ ions are released into the solution and the more alkaline it becomes.
$$Ca(OH)_{2(aq)} + aq \rightarrow Ca^{2+}_{(aq)} + 2OH^{-}_{(aq)}$$
### Decomposition of group 2 carbonates
Group 2 carbonates can be decomposed by heat, forming an oxide and carbon dioxide. This type of reaction is called thermal decomposition. The further down the group, the more difficult it is to decompose.
$$MgCO_{3(s)} \rightarrow MgO_{(s)} + CO_{2(g)}$$
The same trend holds for the group 2 hydroxides: they too become more difficult to decompose further down the group.
### Uses of group 2 hydroxides
The alkaline nature of the group 2 hydroxides means that they are often used in order to combat acidity. Examples include:
• Calcium hydroxide, $Ca(OH)_{2}$ which is commonly used by farmers to neutralise acid soils.
• Magnesium hydroxide, $Mg(OH)_{2}$ is used as milk of magnesia to relieve indigestion by neutralising stomach acid.
## Group 7 Elements
The group 7 elements: fluorine ($F$), chlorine ($Cl$), bromine ($Br$), iodine ($I$) and astatine ($At$) are generally known as the halogens. They exist as simple diatomic molecules with weak van der Waals forces between them. This gives the halogens low melting and boiling points, which increase down the group because the larger molecules have more electrons and therefore stronger van der Waals forces.
Halogens have one electron less than the noble gas configuration making them oxidising agents as they remove electrons from a reaction. The oxidising power is a measure of the strength with which a halogen is able to capture an electron, forming a halide ion.
Down the group halogens become less reactive due to:
• Increased electron shielding
• While the nuclear charge also increases, the effect is largely cancelled out by the shielding and the greater atomic radius.
### Halogen Redox Reactions
The reactivity of halogens can be determined through displacement reactions where halide ions are mixed with aqueous solutions of halogens. A more reactive halogen is able to oxidise and displace a halide of a less reactive halogen.
Bromine is able to oxidise $I^{-}$ as it is the more reactive halogen.
$$Br_{2(aq)} + 2I^{-}_{(aq)} \rightarrow 2Br^{-}_{(aq)} + I_{2(aq)}$$
This reaction is a redox reaction as the $Br$ has been reduced from 0 to -1 while the $I$ has been oxidised from -1 to 0. $Br_{2}$ is unable to oxidise $Cl^{-}$ ions as chlorine is a more reactive halogen.
The result of the reactions can be determined by looking at the colour change, as the colour is determined by the halogen in the solution. Cyclohexane can be added to make this change more visible.
| Halogen | Water | Cyclohexane |
| --- | --- | --- |
| $Cl_{2}$ | Pale green | Pale green |
| $Br_{2}$ | Orange | Orange |
| $I_{2}$ | Brown | Violet |
## Disproportionation of chlorine
A disproportionation reaction is where the same element is both oxidised and reduced.
#### In water
Small amounts of chlorine are added to drinking water in order to kill bacteria, making it safer to drink. In water, chlorine forms hydrochloric acid, $HCl$, and chloric(I) acid, $HClO$.
$$Cl_{2(aq)} + H_{2}O_{(l)} \rightarrow HClO_{(aq)} + HCl_{(aq)}$$
#### In aqueous sodium hydroxide
Household bleach is formed when chlorine reacts with dilute aqueous sodium hydroxide at room temperature.
$$Cl_{2(aq)} + 2NaOH_{(aq)} \rightarrow NaCl_{(aq)} + NaClO_{(aq)} + H_{2}O_{(l)}$$
## Use of halide compounds
Ionic halides have a halide ion, $X^{-}$ and form compounds with group 1 or group 2 elements. Such compounds are:
• Sodium chloride, $NaCl$ is used as table salt
• Sodium fluoride, $NaF$ is added to toothpaste to combat tooth decay.
• Calcium fluoride, $CaF_{2}$ is used to make lenses to focus infrared light.
### Detecting halide ions
Halide ions can be detected by adding silver nitrate, $AgNO_{3(aq)}$ to the reaction mixture. The process is as follows:
1. The unknown halide is dissolved into water.
2. An aqueous solution of silver nitrate, $AgNO_{3(aq)}$ is added.
3. The silver ions, $Ag^{+}_{(aq)}$ react with any halide ions $X^{-}_{(aq)}$ forming a silver halide precipitate, $AgX_{(s)}$ .
4. These precipitates are coloured and can be examined to find the halide present.
The equation for this reaction is:
$$AgNO_{3(aq)} + MX_{(aq)} \rightarrow AgX_{(s)} + MNO_{3(aq)}$$
The ionic equation is:
$$Ag^{+}_{(aq)} + X^{-}_{(aq)} \rightarrow AgX_{(s)}$$
It can be difficult to determine the colour of the precipitate, so aqueous ammonia, $NH_{3(aq)}$, can be added. Each silver halide has a different solubility in ammonia, which helps to identify the halide present.
| Halide | $AgNO_{3(aq)}$ | Dilute $NH_{3}$ | Concentrated $NH_{3}$ |
| --- | --- | --- | --- |
| $Cl^{-}$ | White precipitate | Soluble | Soluble |
| $Br^{-}$ | Cream precipitate | Insoluble | Soluble |
| $I^{-}$ | Yellow precipitate | Insoluble | Insoluble |
This type of reaction is a precipitation reaction as a precipitate is formed.
https://blog.nkagami.me/posts/brief-on-blame-calculus/
A Brief on Blame Calculus
1. What is Blame Calculus and what is it for?
Blame calculus is an extension of a mixed-typed lambda calculus (i.e. containing both typed terms and untyped terms working on the catch-all type Dyn).
The purpose is to create a language where:
• Both untyped terms and typed terms co-exist, and typed terms are well-typed.
• There exists a decidable well-formedness check, and a semi-decidable well-typedness check.
• All well-typed terms (both typed and untyped) have a single type.
Additions to mixed-typed calculus
It features the following extensions to the type system:
• Refinement types: $\text{Nat} = \{x : \text{Int} \mid x \ge 0\}$, with runtime casting.
• A fully dynamic type Dyn, with compile-time upcasting and runtime downcasting.
It features the following extensions to the expression list:
• Casting operation: $\left<S\Leftarrow T\right>^{p} s$ is a cast from $s: T$ to $S$ (when $S \sim T$, i.e. $S$ is compatible with $T$), either
• succeeding returning $s: S$, or
• failing, returning $\Uparrow p$. We call this blaming on $p$.
• An internal transformation expression: $t \triangleright^p s_B$ where $t: \text{Bool}$, either
• succeeding with $s: \{B \mid t\}$ when $t = \text{True}$, or
• failing with $\Uparrow p$ otherwise.
A blame on $p$ can either be
• positive, where the cast fails due to the inner expression having an incompatible type with the destination type.
• negative, where the cast fails due to the surrounding context failing to comply with the destination type.
• This is possible only because of the delayed casting behavior of Blame Calculus:
$$(\left<S \rightarrow T \Leftarrow S' \rightarrow T'\right>^p f)(v : S) \longrightarrow \left<T \Leftarrow T'\right>^p\left(f \left(\left<S' \Leftarrow S\right>^{\overline{p}}v\right)\right)$$
(notice the flip from $p$ to $\overline p$ on the cast of $v$)
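To make the positive/negative distinction concrete, here is a minimal Python sketch of delayed higher-order casts with blame labels (my own illustration under simplified assumptions, not from the paper; Python classes stand in for base types, and a pair (S, T) stands for the function type S -> T):

class Blame(Exception):
    def __init__(self, label):
        super().__init__("blame " + label)
        self.label = label

def negate(label):
    # flip p <-> its complement (written here with a trailing "~")
    return label[:-1] if label.endswith("~") else label + "~"

def cast(value, src, dst, label):
    # Cast value from type src to type dst, blaming label on failure.
    if isinstance(dst, tuple):  # higher-order cast: delay it inside a wrapper
        (s_dst, t_dst), (s_src, t_src) = dst, src
        def wrapped(arg):
            inner = cast(arg, s_dst, s_src, negate(label))  # argument cast flips the label
            return cast(value(inner), t_src, t_dst, label)  # result cast keeps the label
        return wrapped
    if isinstance(value, dst):  # first-order cast: check immediately
        return value
    raise Blame(label)

f = cast(lambda x: x + 1, (int, int), (int, int), "p")
print(f(3))  # 4: the cast is transparent on good inputs
try:
    f("hi")  # the surrounding context supplies a bad argument
except Blame as b:
    print(b)  # blame p~ (negative blame)
g = cast(lambda x: "oops", (int, object), (int, int), "q")
try:
    g(3)  # the cast value itself breaks the promised result type
except Blame as b:
    print(b)  # blame q (positive blame)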
Subtyping in Blame Calculus
Instead of the standard subtyping relation, which is undecidable in Blame Calculus (due to refinement types), it opts for an equally powerful system of positive subtyping (casts that are safe from positive blame, denoted $S <:^+ T$), negative subtyping ($S <:^- T$) and naive subtyping ($S <:_n T$, which means $S <:^+ T$ and $T <:^- S$).
Unlike standard subtyping which is contravariant on the argument and covariant on the return type, naive subtyping is covariant on both sides.
Well-formedness
Two additional syntax extensions are made in order to mix typed and untyped expressions:
• $\lfloor M \rfloor$ casts the typed expression $M$ into an untyped expression. This is well-formed if and only if $M : \text{Dyn}$ (so a cast is most likely to happen somewhere).
• $\lceil M \rceil$ casts the untyped expression $M$ into a typed expression returning $\text{Dyn}$. This is well-formed if and only if
• all free variables in it have type $\text{Dyn}$, and
• all of its subterms have type $\text{Dyn}$.
2. Remarks
The central result of the paper, the Blame Theorem, states that:
Let $t$ be a well-typed term with a subterm $\left<T \Leftarrow S\right>^p s$ containing the only occurrences of $p$ in $t$.
Then:
• If $S <:^+ T$ then $t \not\longrightarrow^* \Uparrow p$.
• If $S <:^- T$ then $t \not\longrightarrow^* \Uparrow \overline p$.
• If $S <: T$ then $t$ will blame on neither $p$ nor $\overline p$.
In other words, from the decidable positive/negative subtyping relations, we can safely deduce which casts will not create a positive or negative blame.
This solidifies the intuition of upcasting/downcasting in the simple OOP sense, and allows us to extend this blame calculus with more structures, as long as the subtyping relations are still decidable.
3. Semantics
Please refer to the paper.
https://ltwork.net/reading-strategycreate-a-chart-like-the-one-belowlisting--9630556
Question: READING STRATEGY. Create a chart like the one below listing the importance of each of the terms in using latitude and longitude.
• DEGREE
• EQUATOR
• PRIME MERIDIAN
http://clay6.com/qa/37863/the-domain-of-the-function-f-x-large-frac-x-3-is
The domain of the function $f(x) =\large\frac{\sin ^{-1} (x-3)}{\sqrt {9-x^2}}$ is
$\begin{array}{1 1}(A)\;[2,3] \\(B)\;[2,3) \\(C)\;[1,2] \\ (D)\;[1,2) \end{array}$
Since $\sin^{-1}(x-3)$ requires $-1 \le x-3 \le 1$, i.e. $2 \le x \le 4$, and $\sqrt{9-x^{2}}$ requires $-3 < x < 3$, the domain of the function $f(x) =\large\frac{\sin ^{-1} (x-3)}{\sqrt {9-x^2}}$ is $[2,3)$, which is option (B).
https://www.snapxam.com/solver?p=%5Csin%5Cleft%28x%5Cright%29%5E2%2B%5Ccos%5Cleft%28x%5Cright%29%5E2%3D1
# Prove the trigonometric identity $\sin\left(x\right)^2+\cos\left(x\right)^2=1$
## Step-by-step Solution
Problem to solve:
$\sin\left(x\right)^2+\cos\left(x\right)^2=1$
1. Applying the Pythagorean identity $\sin^2\left(\theta\right)+\cos^2\left(\theta\right)=1$ to the left-hand side gives $1=1$.
2. Since both sides of the equality are equal, we have proven the identity.
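For a programmatic cross-check (my addition, using the sympy library rather than the site's own engine):

import sympy as sp

x = sp.symbols('x')
# simplify reduces sin(x)**2 + cos(x)**2 to 1 via the same Pythagorean identity
assert sp.simplify(sp.sin(x)**2 + sp.cos(x)**2) == 1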
### Main topic:
Trigonometric Identities
http://math.stackexchange.com/questions/103352/a-question-on-the-stirling-approximation-and-logn/103374
# A question on the Stirling approximation, and $\log(n!)$
In the analysis of an algorithm this statement has come up:$$\sum_{k = 1}^n\log(k) \in \Theta(n\log(n))$$ and I am having trouble justifying it. I wrote $$\sum_{k = 1}^n\log(k) = \log(n!), \ \ n\log(n) = \log(n^n)$$ and it is easy to see that $\log(n!) \in O(\log(n^n))$, but the "$\Theta$" part is still perplexing. I tried calculating the limit: $$\lim_{n \to\infty} \frac{\log(n!)}{\log(n^n)}$$but the only way that I thought of doing this was to substitute the Stirling approximation in the numerator, and it works, but this would only be justified if $$\lim_{n \to\infty}\frac{\log(n!)}{\log(\sigma(n))} = 1$$(with $\sigma(n) = \sqrt{2\pi} \cdot n^{\frac{1}{2} + n} \cdot e^{-n}$) and is it? It is certainly wrong that $$a_n \sim b_n \ \Rightarrow \ f(a_n) \in \Theta(f(b_n))$$ for a general (continuous) $f : \mathbb{R} \rightarrow \mathbb{R}$, since, for example, $a_n =n^2+n, b_n=n^2$ and $f(x) = e^x$ is a counterexample (since $\frac{n^2 + n}{n^2} \rightarrow 1$, but $\frac{e^{n^2+n}}{e^{n^2}} \rightarrow +\infty$). So here are my questions:
1. Is the statement true under certain hypothesis on $f$ (for example $f \in O(x)$ would be plausible), and in particular in the case of the $f(x) = \log(x)$?
2. If not, how can I proceed in the initial proof, using Stirling, or otherwise?
I do not know (and don't want to use) anything advanced about the Stirling approximation, such as error estimates; I know that $n! = \sigma(n)(1+ O(\frac{1}{n}))$, and not much more.
Any help is appreciated. Thank you.
-
All you need to do is bound the sum from the left and right by Riemann sums, then take the corresponding Riemann integrals. – Qiaochu Yuan Jan 28 '12 at 19:45
Thank you, this solves 2. avoiding Stirling. If you can provide some insight into 1. I will happily accept your answer. – Emilio Ferrucci Jan 28 '12 at 19:56
Related question: math.stackexchange.com/questions/93403/… – Jonas Meyer Jan 28 '12 at 19:57
@JonasMeyer I saw your comment on the accepted answer. Justifying that particular use of the Stirling approximation is exactly what I was struggling with before I posted the question. – Emilio Ferrucci Jan 28 '12 at 20:05
Actually, there is a more straight way of proving the approximation. Apply the trapezoid rule to get an approximation and then apply Euler-McLaurin's formula to bound that approximation. After taking appropriate limits, you might need to use the idea that the remainder $R$ in Euler-McLaurin's formula respect the relation $R_{n} = \lim_{n \to \infty} R_{n} + O \left( \frac{1}{n^m} \right)$, where $m$ is the power you are terminating. Taking limits and then exponentiating it, you will get the Stirling's formula! You don't need to find the error in approximation if you need $\Theta(n \log n)$ – Jalaj Jan 28 '12 at 20:13
Here's another answer to question 2. By the Stolz–Cesàro theorem,
$$\lim_{n\to\infty}\frac{\log(n!)}{\log(n^n)}=\lim_{n\to\infty}\frac{\log(n+1)}{\log(n+1)+\log\left(\left(1+\frac{1}{n}\right)^n\right)}=1.$$
For a partial answer to question 1, here's a way to see that $a_n\sim b_n$ implies $\log(a_n)\sim\log(b_n)$ (under reasonable hypotheses such as $|\log(b_n)|>\varepsilon$ for some fixed $\varepsilon>0$ and sufficiently large $n$). Note that
$$\frac{\log(a_n)}{\log(b_n)}=\frac{\log(b_n)+\log\left(\frac{a_n}{b_n}\right)}{\log(b_n)}.$$
This also implies that if $a_n\in\Theta(b_n)$ and $|\log(b_n)|\to\infty$, then $\log(a_n)\sim \log(b_n)$.
-
+1 for elegance of the proof! – Jalaj Jan 28 '12 at 20:28
Very nice! I hadn't heard about the Stolz-Cesaro Theorem; but it makes a lot of sense thinking of it as the discrete analog of l'Hopital's rule. – Emilio Ferrucci Jan 28 '12 at 20:36
Okay, lets work out the maths.
By the trapezoid rule, $\ln n! \approx \int_1^n \ln(x)\,{\rm d}x = n \ln n - n + 1$. Now we find the error in the approximation, which one can compute by Euler–Maclaurin formula. After some crude arithmetic, you can compute that
$$\ln (n!) - \frac{\ln n}{2}= n \ln n - n + 1 + \sum_{k=2}^{m} \frac{\mathcal{B}_k\,(-1)^k}{k(k-1)} \left( \frac{1}{n^{k-1}} - 1 \right) + S_{m,n},$$ where $\mathcal{B}_k$ denotes the $k$-th Bernoulli number.
Now taking limits, you can compute $$\lim_{n \to \infty} \left( \ln n! - n \ln n + n - \frac{\ln n}{2} \right) = 1 - \sum_{k=2}^{m} \frac{\mathcal{B}_k(-1)^k}{k(k-1)} + \lim_{n \to \infty} S_{m,n}$$.
Now since $S_{m,n}$ satisfies the following property, $$S_{m,n} = \lim_{n \to \infty} S_{m,n} + O \left( \frac{1}{n^m} \right),$$ we get the approximation in logarithmic form. Taking exponentials on both sides and plugging in $m=1$, we get the result: $$n! = \sqrt{2 \pi n}~{\left( \frac{n}{e} \right)}^n \left( 1 + O \left( \frac{1}{n} \right) \right).$$
If you don't care about the $\sqrt{2 \pi n}$ term, I guess you can just use the approximation $\ln n! \approx \sum_{j=1}^n \ln j$ as follows:
$$\sum_{j=1}^n \ln j \approx \int_1^n \ln x \,{\rm d}x = n\ln n - n + 1 = \Theta(n \ln n)$$.
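A quick numerical sanity check of this bound (my addition, not part of the original answer):

import math

def log_factorial(n):
    # sum of log k for k = 1..n, i.e. log(n!)
    return sum(math.log(k) for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, log_factorial(n) / (n * math.log(n)))
# the printed ratio climbs towards 1, consistent with log(n!) = Theta(n log n)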
-
Ah ok, this is a nice proof of Stirling's approximation. I don't know much about Bernoulli numbers; but this seems very interesting. However 2. of my question was already answered after the use of the trapezoidal rule, since $n \ln(n) - n +1 \in \Theta(n\ln(n))$ – Emilio Ferrucci Jan 28 '12 at 20:28
I know, but I thought I should pin down my thought process in the comment to your question more concretely. Otherwise, it sounded like a hand-waving argument and "obvious" and "easy to see" are two words that has no place in mathematics. – Jalaj Jan 28 '12 at 20:31
Yes, thanks; I was about to ask you to expand on the comment but you beat me to it by answering! I haven't quite figured out the details yet, but I will look at this more closely when I'm done with my upcoming exam. – Emilio Ferrucci Jan 28 '12 at 20:40
Put $s_n:=\sum_{k=1}^n\ln k$. Then $s_n\leq n\ln n$ and $$s_n=\sum_{k=1}^n\log (n-k+1)=n\log n+\sum_{k=1}^n\log\left(1-\frac{k-1}n\right)$$ and using $\log(1+t)\geq t-t^2/2$ we get
\begin{align*} s_n&\geq n\log n-\sum_{j=0}^{n-1}\left(\frac jn+\frac{j^2}{2n^2}\right)\\ &=n\log n-\frac{n-1}2-\frac{(n-1)(2n-1)}{6n}\\ &=n\log n-\frac{n-1}{2n}\left(n+\frac{2n-1}3\right)\\ &=n\log n-\frac{n-1}2\frac{5n-1}{3n}\\ &=n\log n-\frac{n-1}6\left(5-\frac 1n\right), \end{align*} so $$\frac{s_n}{n\log n}\geq 1-\left(1-\frac 1n\right)\frac 1{6\log n}\left(5-\frac 1n\right),$$ which is bounded below by a positive constant, for example $\frac 12$ for $n$ large enough.
-
Thank you, this answer is very helpful. If you can provide any suggestions on how to answer 1., I will accept it. – Emilio Ferrucci Jan 28 '12 at 20:03
Except if a misunderstand something, we have $e^{\frac 1{n^2}}\sim e^{\frac 1n}$ but not $\frac 1{n^2}\sim\frac 1n$. – Davide Giraudo Jan 28 '12 at 20:08
Should it not be $s_n \geq n \log n - \sum_{j=0}^{n-1} \frac{j}{n} + \frac{j^2}{2n^2}$ because $t$ in your case is $\frac{-j}{n}$? From there you can argue that the term in the negative $\leq \Theta(n)$ and hence $s_n = \Theta(n)$. – Jalaj Jan 28 '12 at 20:08
@DavideGiraudo What I meant in the counterexample was: $\frac{n^2+n}{n^2} \rightarrow 1$, but $\frac{e^{n^2+n}}{e^{n^2}} = \frac{e^{n^2} \cdot e^n}{e^{n^2}} = e^n \rightarrow \infty$ – Emilio Ferrucci Jan 28 '12 at 20:12
An answer to question 1. involves slowly/regularly varying functions, whose study is sometimes summarized as Karamata's theory. Slowly varying functions are positive functions $L$ such that, for every positive $\lambda$, $L(\lambda x)/L(x)\to1$ when $x\to+\infty$.
To see that such functions make the implication in question 1. true, assume that $f$ is slowly varying and furthermore, to simplify things, that $f$ is nondecreasing. Then, if $a_n\sim b_n$ and $a_n,b_n\to+\infty$, $\frac12a_n\leqslant b_n\leqslant 2a_n$ for every $n$ large enough, hence $$\frac{f(\frac12a_n)}{f(a_n)}\leqslant \frac{f(b_n)}{f(a_n)}\leqslant \frac{f(2a_n)}{f(a_n)}.$$ Since the lower bound and the upper bound both converge to $1$, this proves that $f(a_n)\sim f(b_n)$.
Thanks to Karamata's characterization theorem, the same kind of argument may be applied to the wider class of regularly varying functions. These are positive functions $L$ such that, for every positive $\lambda$, $L(\lambda x)/L(x)$ has a finite positive limit when $x\to+\infty$.
Hence the implication: $a_n\sim b_n$ implies $f(a_n)\sim f(b_n)$ if $a_n,b_n\to+\infty$, holds for every regularly varying function $f$.
Examples of slowly varying functions are the powers of $\log(x)$. Examples of regularly varying functions are the powers of $x$ (possibly, times a slowly varying function). Examples of non regularly varying functions are the exponentials of powers of $x$ (possibly, times a regularly varying function).
-
Thank you, this is very interesting; I hadn't heard of these definitions. Actually regularly varying functions are more what I was after, since I was only asking that $f(a_n) \in \Theta(f(b_n))$ (not $f(a_n) \sim f(b_n)$). So polynomials do work (as one would expect), since they are regularly varying. – Emilio Ferrucci Feb 6 '12 at 12:42
All $n$ terms are smaller than or equal to $\log(n)$ and at least half are greater than or equal to $\log\left(\frac{n}{2}\right)$. So the sum is between $\frac{n}{2} \log\left(\frac{n}{2}\right)$ and $n\log(n)$. This gives the answer immediately.
-
I seem to remember having seen this trick on the site not long ago... – Did Jan 14 '13 at 15:29
@did I just found the same argument at math.stackexchange.com/questions/93403/… , thanks. It is a completely standard method in undergrad algorithms 101 classes as well. – user54551 Jan 14 '13 at 18:26
Ok I came up with a (partial) answer to 1. It actually involves proving something stronger: $$\lim_{n \to \infty}(\log(b_n) - \log(a_n)) = \lim_{n \to \infty} \log \left(\frac{a_n}{b_n} \right) = \log \left(\lim_{n \to \infty} \frac{a_n}{b_n} \right) = \log(1) = 0$$ and when two sequences $x_n, y_n$, with $y_n \rightarrow +\infty$ (though eventually bounded away from $0$ would probably have been enough), have the property that $y_n - x_n \rightarrow 0$ (hence $|y_n - x_n| \rightarrow 0$), then: $$0 \leftarrow | y_n| \left|1 - \frac{x_n}{y_n} \right| \ \Rightarrow \ \frac{x_n}{y_n} \rightarrow 1$$ because if $$\exists \ \epsilon >0, \ \ \ \forall \ n_{\epsilon}, \ \ \exists \ n>n_{\epsilon}:\ \ \left|\frac{x_n}{y_n} - 1 \right| > \epsilon$$ then clearly $|y_n | \left|1 - \frac{x_n}{y_n} \right|$ could not converge to zero, since $|y_n| \rightarrow \infty$.
I think what made this proof possible is either the concavity of $\log$ or the fact that $\log \in O(x)$, but I get stuck when dealing with a general function $f$ with one of these properties. Anyway, the question is very much answered and I think Jonas' answer definitely deserves to be accepted :-)
-
https://rpg.stackexchange.com/questions/78733/is-there-any-power-that-allows-one-to-drain-pp
# Is there any power that allows one to drain PP?
I'm currently reading through the rules and wondering: is there any power that allows one to absorb/drain the PP (power points) of others?
For example, in some other games like World of Darkness and Part-Time Gods, some splats have the ability to devour one another's souls and thus increase in power, and I'm wondering whether such a power is also statted in M&M.
Not permanently, at least with anything from Green Ronin. Part of the foundation of the game is that you have the powers that you have, and that the GM gives you power points as you advance.
Now, that said, there are ways to get things like it. One way is to simply make an arrangement with your GM that, when he awards power points to the group for their actions, yours are sourced from whatever you have drained from others. You probably won't get a mechanical advantage from it, since most GMs increase the power of the group more or less in lockstep, but it would fit your theme. Secondly, you could get a Variable power with the descriptor being powers that you've gained from others. You still have to spend points on it, and you're limited to the "pool" of points there, and would have to reallocate them for new powers, but it's not the worst of solutions. Thirdly, players can use Hero Points to pay for Extra Effort, said Hero Points being gained from a Complication like "Power Source - Hero gains powers by absorbing the souls of his enemies and faces prejudice and fear as a result".
You didn't specify a system, but this does hold true for both 2E and 3E.
Of course, you can probably find a 3rd-party sourcebook that allows for such things, but that's outside of the official rules of the game, and I'd argue for simply talking with your GM to see if they're willing to houserule it in.
• It's 3rd edition for me. Sadly I didn't see any 3rd edition/2nd edition tag, so I could only use the "normal" tag, but then again the same question holds true for both versions anyway (and probably also the answer). And... I'm the GM, but when possible I try to stick to the rules, so I wanted to know if there is anything I'm overlooking or if there is any workaround. Apr 16 '16 at 19:49
• m-and-m-3e and m-and-m-2e FWIW. If you're the GM, then just impose your own system. As mentioned above, that can simply be how your players get power points. Apr 16 '16 at 21:10
• Thanks, I didn't see those (as the general one is a bit longer I expected them to be similar). Apr 17 '16 at 6:05
Use a Transform Affliction. Change the powers to Affects Others only, and use Quirks to make the exchange permanent. Done.
• I don’t believe that this would work the way the OP intended; the Affliction would be able to remove other people’s powers with a Transformed result, but it wouldn’t empower you when you use it. Dec 1 '19 at 4:37
http://mathoverflow.net/feeds/question/36549
# Coefficients from Stone Weierstrass versus Fourier Transform (MathOverflow)

Question (Dorian): Usually one shows the density of the functions $\sin(kx)$ in $L^2([0,1])$ using the Fourier transform. This in fact comes from the Stone-Weierstrass theorem, however, and then uses the density of continuous functions in $L^2([0,1])$.

However, the Stone-Weierstrass theorem can be used to show, for example, that the functions $e^{ikx}$ are dense in $C([0,1])$ and hence dense in $L^1([0,1])$ as well. So we obtain (not-necessarily-unique) coefficients $c_k$ such that $f_k(x) = c_k e^{ikx}$ converge to any given $f \in L^1([0,1])$. How should I think about these coefficients? How do they relate to the Fourier series of $f$ (with basis $e^{ikx}$)?

Answer (Otis Chodosh): Re-reading your question, I think that I see what you are asking. Per @Andrea Ferretti's comments, you have to be careful to distinguish between $\{e^{inx}\}$ and $\mathrm{span}\ \{e^{inx}\}$. You certainly are interested in the latter. Sorry if my comments were sloppy and confusing above.

So, I think that it goes like this: from some corollary of Stone-Weierstrass you can show that $\mathrm{span}\ \{e^{inx}\}$ is dense in $C(\mathbb{S}^1)$ with the supremum norm. We know that $C(\mathbb{S}^1)\hookrightarrow L^2([0,1])$ has its image a dense subset of $L^2([0,1])$, and that if $f_n \to f$ in the supremum topology on $C(\mathbb{S}^1)$, then the images also converge in $L^2([0,1])$.

Thus, by this reasoning, for $f\in L^2([0,1])$ we can find $f_n \in \mathrm{span}\ \{e^{inx}\}$ such that $f_n \to f$ in $L^2([0,1])$. Let us write $$f_n = \sum_{k\in \mathbb{Z}} c_k^{(n)} e^{ikx}$$ where all but finitely many of the $c_k^{(n)}$ are zero (because in the span of infinitely many objects we only take a finite number of them to add together).

Now, what I think you are asking is: what can we say about the coefficients $c_k^{(n)}$? The answer is that they converge to the $k$-th Fourier coefficient of $f$ as $n\to\infty$, because $$\hat f(k) = \langle f, e^{ikx} \rangle = \lim_{n\to\infty} \langle f_n ,e^{ikx}\rangle = \lim_{n\to\infty} c_k^{(n)}.$$

In fact, if the $c_k^{(n)}$ are arbitrary complex numbers and we define $f_n$ as above, then $$\Vert f - f_n \Vert_{L^2}^2 = \sum_{k\in \mathbb{Z}} |\hat f(k) - c_k^{(n)}|^2,$$ assuming convergence. Thus, if $(c_k^{(n)})_k \to (\hat f(k))_k$ as $n\to\infty$ in $\ell^2(\mathbb{Z})$, then $f_n\to f$ in $L^2$, which is a pretty weak condition.

Answer (Helge): Just a comment: if you choose coefficients $c_{k,n}$ such that $$\lim_{n\to\infty} \sum_{k=-n}^{n} c_{k,n} e^{2\pi i k x} = f(x)$$ in some sense, e.g. in $L^1$, then these are not unique. It is even known that the obvious choice $c_{k,n} = \hat{f}(k) = \int e^{-2\pi i k x} f(x)\, dx$ is not the best. It is much better to choose $$c_{k,n} = \left(1 - \frac{|k|}{n}\right) \hat{f}(k).$$ Then one Cesaro-sums the Fourier series, and this is known to converge.

As pointed out by Zen Harper below, I should mention that with the choice $c_{k,n} = \hat{f}(k)$ for $-n \leq k \leq n$, the Fourier series of an $L^1$ function need not converge; in fact it can diverge almost everywhere.

Having said these things, the obvious advantage of this approach is that everything is explicit and does not rely on any abstract hocus pocus.

One more thing: consider the case $f \in L^2$. Then the choice $c_{k,n} = \hat{f}(k)$ for $-n \leq k \leq n$ is optimal. This follows from easy Hilbert space theory!

Answer (Mike Hall): The Fourier transform is bounded from $L^1([0,1])$ to $\ell^\infty(\mathbb{Z})$ (with norm 1 if you expand in terms of $e^{2\pi ikx}$). If we take a sequence of finite sums of the form $f_n = \sum_k c_{n,k} e^{2\pi ikx}$, where for each $n$ only finitely many terms are nonzero ("trigonometric polynomials"), and $\|f_n -f\|_{L^1}\to 0$, then for all $k$ we have $c_{n,k} \to \hat f(k)$ as $n\to \infty$.

It may help to think of the kind of thing that does not happen in $L^2$: namely, Kolmogorov famously showed in his first publication that there exist $L^1$ functions $f$ whose Fourier series diverge almost everywhere (and I believe one can arrange for the divergence to be everywhere as well). If such a function $f$ could be written as an infinite sum $f(x) = \sum c_k e^{2\pi ikx}$, with the sum converging in $L^1$ and the coefficients not necessarily the Fourier coefficients, then the sum would have to converge almost everywhere, so it cannot be the Fourier series. By the first paragraph, it follows that no such representation as a convergent infinite sum is possible.

So what must happen as one uses density of $C(X)$ and Stone-Weierstrass to approximate such a function $f$ by trigonometric polynomials? If $\|f_n - f\|_{L^1} < \epsilon$, then for all $k$, $|\hat f_n(k) - \hat f(k)| < \epsilon$. One way to think about it is that $\hat f_n(k)$ is forced to be close to $\hat f(k)$ whenever the latter is large compared to $\epsilon$, while there is potentially up to an $\epsilon$ of leeway in every coefficient. It is clear that $f_n$ cannot just match $f$ in all the Fourier coefficients which are larger than $\epsilon$, though, or else the $L^1$ norm of the resulting sequence would diverge. So density and Stone-Weierstrass are somehow smart enough to use the (less than) $\epsilon$ of room to carry out the $L^1$ approximation.
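As a numerical aside (my addition, not part of the thread), Helge's point about the Cesaro/Fejer weights can be seen directly by comparing the plain truncated Fourier sum with the weighted one for a step function:

import numpy as np

N = 4096  # grid points on [0, 1)
n = 32    # truncation order
x = np.arange(N) / N
f = np.sign(x - 0.5)  # a step function with jumps at 0 and 1/2

fhat = np.fft.fft(f) / N  # fhat[k % N] approximates the coefficient of e^{2 pi i k x}
ks = np.arange(-n, n + 1)

def trig_sum(weights):
    # finite trigonometric sum with the given coefficient weights
    return np.real(sum(w * fhat[k % N] * np.exp(2j * np.pi * k * x)
                       for k, w in zip(ks, weights)))

dirichlet = trig_sum(np.ones(len(ks)))          # plain truncated Fourier series
fejer = trig_sum(1.0 - np.abs(ks) / (n + 1.0))  # Cesaro/Fejer weights

away = (np.abs(x - 0.5) > 0.05) & (x > 0.05) & (x < 0.95)  # stay clear of the jumps
print(np.abs(dirichlet - f)[away].max())  # ringing from the Gibbs phenomenon
print(np.abs(fejer - f)[away].max())      # noticeably smaller error, no overshoot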
https://www.math.uic.edu/seminars/view_seminar?id=4290
|
## Model Theory Seminar
Sherwood Hachtman
UIC
Block representability of $\ell_p$ in types
Abstract: We will go over Chapter 11 of Iovino's book, Applications of Model Theory to Functional Analysis.
Tuesday March 8, 2016 at 2:00 PM in SEO 427
https://math.eretrandre.org/tetrationforum/archive/index.php?thread-298.html
|
# Tetration Forum
Full Version: Laplace transform of tetration
Let $a$ be a base with $1 < a < e^{1/e}$. Then we can build a regular tetration function $tet_a$ around either of the fixed points. In either case, the function will be periodic, with period given by $per(a) = 2 \pi i / \log(\log(f))$ for $f$ the fixed point.
Thus the Laplace transform of sexp_a is:
$$tet_a(z) = \sum_{k \in \mathbb{Z}} e^{per(a) k z} c_k$$
Here, if we expand around the lower fixed point, all the positive coefficients will be zero, since the function tends to the fixed point at $+\infty$. Similarly, if we expand around the upper fixed point, all the negative coefficients will be zero. In either case, $c_0$ is the chosen fixed point.
Now from the equation above, we have $tet_a(z+1) = \sum_{k \in \mathbb{Z}} e^{per(a) k z} \left[ e^{per(a) k} c_k \right]$. But by definition, this is equal to $\exp_a(tet_a(z)) = \sum_{n \in \mathbb{N}} \frac{\left( \sum_{k \in \mathbb{Z}} e^{per(a) k z} c_k \log(a) \right)^n}{n!}$.
By equating the terms of the resulting Laplace series, we get the equation $c_k e^{per(a) k} = \sum_{n \in \mathbb{N}} \frac{(\log a)^n}{n!} \sum_{\Sigma k_i = k} \prod_{i=1}^n c_{k_i}$. The inner sum is over all integer sequences of length $n$ which sum to $k$. The finiteness of this sum is ensured by the fact that either all the positive or all the negative coefficients are zero.
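A quick sanity check of this recursion (an addition, not in the original post): take $k = 0$. If, say, all the positive coefficients vanish, the only length-$n$ integer sequences of non-positive terms summing to $0$ are $k_1 = \dots = k_n = 0$, so the equation collapses to
$$c_0 = \sum_{n \in \mathbb{N}} \frac{(\log a)^n}{n!} \, c_0^n = e^{c_0 \log a} = a^{c_0},$$
confirming that $c_0$ is a fixed point of $\exp_a$, exactly as stated above.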
(06/01/2009, 06:14 PM) BenStandeven wrote: Thus the Laplace transform of sexp_a is:
Isn't that the Fourier development?
Quote: By equating the terms of the resulting Laplace series, we get the equation $c_k e^{per(a) k} = \sum_{n \in \mathbb{N}} \frac{(\log a)^n}{n!} \sum_{\Sigma k_i = k} \prod_{i=1}^n c_{k_i}$. The inner sum is over all integer sequences of length $n$ which sum to $k$. The finiteness of this sum is ensured by the fact that either all the positive or all the negative coefficients are zero.
And actually the $c_k$ are the coefficients of the inverse Schröder power series.
Incidentally Dmitrii and I just finished an article about exactly that topic, which I append.
Wow, nice article! I wept.
I think one of the parts that was new to me was the proof that the tetrations developed at the fixed points 2 and 4 are different. You show that their periods are different, thus they must be different. So simple!
http://cloud.originlab.com/doc/Tutorials/OC-Call-NAG
|
# 12.2.6 Tutorial:Calling NAG Functions From Origin C
## Summary
Calling a NAG function from an Origin C function is very much like calling any other Origin C function. You must familiarize yourself with the desired NAG function to gain an understanding of what parameters the function requires to be passed as arguments and what parameters the function returns. Once familiar with the function, you must develop code that follows the function's requirements.
The NAG header file containing the function's prototype must be included; required parameters must be correctly declared, sized, and initialized; and the function call must follow the function's prototype as described in the header file. The objective of this tutorial is to demonstrate how to call a NAG function from an Origin C function.
Minimum Origin Version Required: Origin 8.1 SR1
## What you will learn
This tutorial will show you how to:
• Understand NAG functions
• Get ready to debug the sample code
• See the declaration of a NAG function
• Get NAG error codes
• Use function pointers
## Understand NAG Functions
The primary resource for understanding NAG functions is the NAG Library documentation, which is also available in the Origin C Reference. For example, for the d01sjc NAG function:
1. From the Origin menu select Help:Programming:OriginC. In the Origin C Reference book, expand the Global Functions book, expand the NAG Functions book, and choose Accessing NAG Functions Category and Help.
2. The selected page is a PDF file. Study the nag_1d_quad_gen_1 (d01sjc) function as needed to understand the description of the function, the function's prototype, and the description of all its arguments. Sample data and an example program calling the function are often included as well.
A secondary resource for understanding the Origin C NAG functions is the Examples book: from the Origin menu select Help:Programming:OriginC, expand the Examples book and then the Analysis book, and choose Accessing NAG Functions; it contains several examples showing how to call NAG functions from Origin C.
## Get Ready to Debug Sample Code
The best way to understand how to write an Origin C function that calls a NAG function is to step through an example function in Debug mode. Follow the steps below to set up Origin and Code Builder to execute such a sample Origin C function in Debug mode.
1. From the Code Builder menu, select File: New. This opens the New File dialog box.
2. In the File Name text box, type TestNAG and keep the Add to Workspace checkbox checked. Click OK. The file TestNAG.c is added to the workspace.
3. Select and copy the following function, and paste it into the TestNAG.c file. Be sure to paste the text below the line that reads "// Include your own header files here."
// Include your own header files here.
#include <OC_nag.h>

// NAG_CALL denotes the proper calling convention; this function is used as a callback.
static double NAG_CALL f_callback_ex(double x, Nag_User *comm)
{
    int *use_comm = (int *)comm->p; // user data passed through the Nag_User structure
    return (x*sin(x*30.0)/sqrt(1.0-x*x/(PI*PI*4.0)));
}

void nag_d01sjc_ex()
{
    double a = 0.0;
    double b = PI * 2.0; // integration interval

    double epsabs, abserr, epsrel, result;
    // You may use epsabs and epsrel to enhance your desired precision
    // when not enough precision is encountered.
    epsabs = 0.0;
    epsrel = 0.0001;

    // The max number of sub-intervals needed to evaluate the function in the integral.
    // The more difficult the integrand, the larger max_num_subint should be.
    // For most problems 200 to 500 is adequate and recommended.
    int max_num_subint = 200;

    Nag_QuadProgress qp; // receives the sub-interval work arrays allocated by d01sjc
    NagError fail;

    Nag_User comm;
    static int use_comm[1] = {1};
    comm.p = (Pointer)&use_comm;

    d01sjc(f_callback_ex, a, b, epsabs, epsrel, max_num_subint, &result, &abserr, &qp, &comm, &fail);

    // For errors other than the following three, which are due to bad input parameters
    // or allocation failure (NE_INT_ARG_LT, NE_BAD_PARAM, NE_ALLOC_FAIL),
    // you need to free the memory allocated by the routine before calling the
    // integration routine again, to avoid memory leaks.
    if (fail.code != NE_INT_ARG_LT && fail.code != NE_BAD_PARAM && fail.code != NE_ALLOC_FAIL)
    {
        NAG_FREE(qp.sub_int_beg_pts);
        NAG_FREE(qp.sub_int_end_pts);
        NAG_FREE(qp.sub_int_result);
        NAG_FREE(qp.sub_int_error);
    }

    printf("%10.6f", result);
}
#include <OC_nag.h>
This header file contains all the NAG function headers, together with all the type and error-code definitions, so including this single header is enough.
## How to See the Declaration of NAG Function
To see the declaration of a NAG function in the header files:
1. Activate the TestNAG.c file created in the section above and scroll to the line #include <OC_nag.h>.
2. Right-click anywhere in that line and select Open "OC_nag.h". This opens the NAG header file.
3. Scroll to the line #include <NAG_MARK25\oc_nag_all.h>, then right-click and select Open "NAG_MARK25\oc_nag_all.h". This opens the header file containing all NAG headers at Mark 25.
4. In the Search combo box, type nagd01.h and press ENTER to locate this line. The function d01sjc belongs to the d01 category, so the header file name is nagd01.h.
5. Right-click anywhere in this line and choose Open "nagd01.h". This opens the header file containing the prototypes of the d01 functions.
6. In the Search combo box, type d01sjc and press ENTER to go to the declaration of this function.
The function declarations can also be found in the NAG PDF documentation, as described under Understand NAG Functions above.
## How to Get NAG Error Code
1. Reactivate the TestNAG.c window in Code Builder. In this file, the NagError variable fail is defined and passed as the last argument to the function d01sjc.
2. A NAG function returns its error code in the code member of the NagError variable; in this example the NAG error code can be accessed as fail.code.
To find out which error codes a function may return:
1. Open the Origin C Help from the Origin menu Help: Programming: OriginC, expand the Origin C Reference book, the Global Functions book, and the NAG Functions book, and choose Accessing NAG Functions Category and Help.
2. In the Chapters of NAG C Library table, choose d01 to enter the Quadrature page, then select d01sjc in the table for this category to open the PDF help for this NAG function.
3. Scroll down to Section 6, Error Indicators and Warnings, which lists all the error codes for this function together with their descriptions. These error codes can be used directly in Origin C as long as the correct header file is included (<OC_nag.h> contains all the NAG headers), for example NE_INT_ARG_LT, NE_BAD_PARAM, and NE_ALLOC_FAIL in the TestNAG.c file.
## How to Use Function Pointer
1. Open the nagd01.h file from the Origin program path \OriginC\system\NAG_MARK25 folder.
2. In this file, find the declaration of the d01sjc function. The type of its first argument is NAG_D01SJC_FUN.
3. Double-click NAG_D01SJC_FUN to highlight it. From the menu choose Edit: Find in Files to open the Find in Files dialog, set the search options as needed, and click the Find button.
4. The search results are displayed in the Output window. Double-click the nag_types.h line to jump to that file, at the line typedef NAG_D01_TS_FUN NAG_D01SJC_FUN; the definition of NAG_D01_TS_FUN can be found nearby.
5. The definition of NAG_D01_TS_FUN is
typedef double (NAG_CALL * NAG_D01_TS_FUN)(double, Nag_User *comm);
A user-defined callback should keep the same return type and argument list as this definition. NAG_CALL denotes the proper calling convention and should be used in your own function.
6. Activate the TestNAG.c file. It contains a function named f_callback_ex, which is passed to d01sjc as its first argument, i.e. as a function pointer.
static double NAG_CALL f_callback_ex(double x, Nag_User *comm)
{
    int *use_comm = (int *)comm->p;
    return (x*sin(x*30.0)/sqrt(1.0-x*x/(PI*PI*4.0)));
}
http://vzfp.from.bz/delimiter-in-postgresql.html
|
Escaping delimiters in PostgreSQL. PostgreSQL is my go-to for evaluating interoperability with other analytics products such as R, SAS, Python, Tableau, and Power BI. Range types are a unique feature of PostgreSQL, managing two dimensions of data in a single column and allowing advanced processing. Lucky for us, PostgreSQL has a built-in tool called COPY that hammers data into and out of the database with one quick query, and it also comes with an easy-to-use export facility. When loading JSON this way, what we in fact want is to specify QUOTE and DELIMITER characters that cannot appear at all in the JSON. Splitting a string on a delimiter is a very common requirement for PostgreSQL database developers; for example, I needed to be able to accept a string of comma-delimited integers and then JOIN on those IDs.
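A minimal sketch of such an import, assuming a hypothetical people table and file path (both illustrative, not from the original page); DELIMITER, QUOTE, and NULL are the standard COPY options under discussion:

-- Hypothetical table and file path, for illustration only.
CREATE TABLE people (id integer, name text, city text);

-- Server-side import of a pipe-delimited file. QUOTE and NULL are chosen
-- so that neither can collide with characters inside the data values.
COPY people (id, name, city)
FROM '/tmp/people.txt'
WITH (FORMAT csv, DELIMITER '|', QUOTE '"', NULL '');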
Someday MySQL will have an array datatype like PostgreSQL's; note that the default delimiter in the text output of a PostgreSQL array (the text[] type) is a comma. Advanced InfoUnload for PostgreSQL (AIPG) allows PostgreSQL data to be unloaded in a multitude of formats, including XML, JSON, and delimited/non-delimited files. The COPY command moves data between PostgreSQL tables and standard file-system files; COPY TO copies the contents of a table to a file, so if you are using Postgres, COPY will export to CSV as well. On Heroku, find your psql URL from the Heroku Postgres admin panel and run it in your terminal: psql "dbname=foo host=bar" (the -U option names the user to connect with). For building delimited lists, MySQL offers GROUP_CONCAT([DISTINCT] exp [ORDER BY sorting] [SEPARATOR 'sep']), with a comma as the default separator; PostgreSQL's counterpart is the string_agg aggregate. (If you are choosing an admin GUI, pgAdmin 4 is the recommended download.)
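A sketch of such an export, again with a hypothetical table and output path; COPY TO is the server-side direction of the same command:

-- Server-side export of a query result to CSV, with a header row.
COPY (SELECT id, name, city FROM people ORDER BY id)
TO '/tmp/people_export.csv'
WITH (FORMAT csv, HEADER true);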
Using the COPY command in PostgreSQL (posted 08/19/2010 by Syed Ullah): Postgres has a very useful COPY command that can be used to transfer data between text files and database tables. Other systems have their own bulk loaders: Microsoft SQL Server uses the BULK INSERT SQL command, which is pretty similar to PostgreSQL's COPY; Oracle has a command-line tool called SQL*Loader; and MySQL supports CSV imports with the LOAD DATA INFILE command or the mysqlimport utility. For quoting function bodies, PostgreSQL uses dollar-quoted strings, most commonly $$ (MySQL scripts instead change the statement delimiter to something like // with the DELIMITER command). Complex regular expressions can also be used to extract the desired data from other kinds of text files. If you have a timestamp without time zone column and you are storing timestamps as UTC, you need to tell PostgreSQL that, and then tell it to convert the value to your local time zone; otherwise Postgres has no idea what time you actually stored and just treats it as the current server time zone. Finally, in Postgres there is a simple string function, SPLIT_PART, to get the n-th part of a given string that is split on a defined delimiter.
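Two quick sketches of the points just mentioned, using illustrative values and a hypothetical events table; split_part counts fields from 1, and the double AT TIME ZONE idiom first declares the stored value to be UTC and then converts it:

-- n-th field of a delimited string.
SELECT split_part('2019-11-22', '-', 2);   -- returns '11'

-- Interpret a timestamp-without-time-zone column as UTC, then render it
-- in a local zone.
SELECT created_at AT TIME ZONE 'UTC' AT TIME ZONE 'America/Vancouver' AS local_time
FROM events;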
To import with pgAdmin: click on your database, click the import button in the file menu, browse to and select the file that has the data to be imported, then click the Execute button. The equivalent SQL, after creating the target PostgreSQL table, is a single statement, for example: COPY vocabulary FROM 'vocabulary.csv' DELIMITER ',' CSV HEADER; Note that tab is the default delimiter of COPY's plain text format, so a tab-delimited file needs no DELIMITER clause there. PostgreSQL also has an aggregate function for building a comma-separated list of distinct values.
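A sketch of that aggregate, using the hypothetical players table mentioned elsewhere on this page (a Name column with repeated player names alongside their trophy names):

-- One row per player, trophies collapsed into a comma-separated list.
SELECT name,
       string_agg(DISTINCT trophy, ', ' ORDER BY trophy) AS trophies
FROM players
GROUP BY name;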
The server declaration in a foreign data wrapper setup is an opportunity to pass options that will apply to all tables created using the extension (the full three-step process is shown at the end of this page). For getting from CSV to a Postgres table, a useful technique is to use the COPY command to insert values directly into tables from external files, optionally generating a serial ID column that starts at 1 and counts upward as rows arrive. Encoding and NULLs deserve care: you can set the client encoding with set client_encoding to 'latin1' to match the source file, and when exporting, PostgreSQL represents a NULL field as \N, which is fine but may cause issues if you later import that data into, say, SQL Server. On the command line, the Unix cut utility uses TAB as its default delimiter, so for tab-separated data you can simply omit the -d switch.
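A sketch of such an import, with hypothetical names; listing only the data columns lets the serial sequence fill in the ID:

-- Table with an auto-numbered ID column.
CREATE TABLE import_demo (
    id   serial,     -- populated automatically, starting at 1
    name text,
    city text
);

SET client_encoding TO 'latin1';    -- match the encoding of the source file

COPY import_demo (name, city)       -- omit id so the sequence assigns it
FROM '/tmp/demo.csv'
WITH (FORMAT csv, HEADER true);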
I know that COPY will escape tabs (as \t), and we can use that from psql with the \copy command, but the plain text format does not include a header row of the column names (the CSV format's HEADER option does). If you need to export data from a PostgreSQL database in psql, there is a fairly easy way to do this without requiring superuser privileges, because \copy runs on the client side. Beware of stray delimiters in the data: if one sneaks into a field, all the fields get shifted down by one, with the result that (a) your COPY fails because of a data type mismatch, or (b) your data is silently accepted in a mangled state; this is why people ask for a unique delimiter that cannot occur in the values. After installing PostgreSQL on Windows, open psql from Program Files > PostgreSQL > SQL Shell (psql); for a server on a non-default port, use something like psql -p 5433 -U dba dbname.
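A sketch of the client-side export; the table and file names are again hypothetical, and the leading backslash marks this as a psql meta-command (written on one line) rather than server SQL:

-- Run inside psql; the file is written on the client machine, no superuser needed.
\copy (SELECT id, name, city FROM people ORDER BY id) TO 'people_export.csv' CSV HEADER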
Any CockroachDB SQL script executed by Flyway can be executed by the CockroachDB command-line tool and other PostgreSQL-compatible tools (after the placeholders have been replaced). Splitting delimited strings also comes up in reporting: for example, I may need to write a quick report that shows the users' names, their login IDs, and all of the roles that they fulfill, with three tables to reference: users, roles, and user_role_xref. In SQL Server 2005 and higher, splitting a comma-separated string into a table can be done using XML tricks, and SQL Server 2016 added the STRING_SPLIT table-valued function, handy when you have hundreds of IDs to pass to a stored procedure; in PostgreSQL the same job is done with string_to_array and unnest. For the rest of this tutorial you will need the PostgreSQL DBMS and, if you drive it from Python, the psycopg2 module.
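A PostgreSQL sketch of passing a delimited ID list and joining on it; the orders table and the literal list are hypothetical:

-- Turn '7,42,99' into rows of integers and join on them.
SELECT o.*
FROM orders AS o
JOIN unnest(string_to_array('7,42,99', ',')::int[]) AS ids(id)
  ON o.id = ids.id;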
The COPY command can quickly import data into a PostgreSQL database from TXT- or CSV-style files; it is well suited to bulk imports and is fast, but note again that COPY can only be used with tables, not views. A related client requirement is writing data out to a tab-delimited text (txt) file format. For JSON workloads, PostgreSQL's jsonb type lets you query for multiple specific elements inside a JSON array, rather than every approach inserting the entire JSON object as an opaque JSONB blob. On Azure, az postgres server georestore -g testgroup -n testsvrnew --source-server testsvr -l westus2 --sku-name GP_Gen5_2 geo-restores 'testsvr' into a new server 'testsvrnew', which may be in a different resource group from the source.
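A sketch of such a jsonb query, with a hypothetical docs table and tags column; @> is PostgreSQL's containment operator:

-- True for rows whose tags array contains both elements.
SELECT *
FROM docs
WHERE tags @> '["postgres", "copy"]'::jsonb;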
Is there any way to install only a PostgreSQL client, the terminal-based psql, on a CentOS 7 system without installing the complete PostgreSQL stack? (Yes; most distributions ship the client as a separate package.) For tab-delimited loads, the delimiter can be given as an escape string, as in COPY ... FROM '...' DELIMITER E'\t' CSV ENCODING 'UTF8'; the synpuf 5% data file, for example, is UTF-8 and has no header records. Fixed-width data is probably the most annoying data to import, because you need some mechanism to break the columns at the column boundaries; SQL Server's Books Online calls the variant where only the last column is delimited "ragged right", and choosing the wrong row delimiter (say {CR}{LF} when the file uses something else) yields errors such as "The column delimiter for column ... was not found". Two more PostgreSQL notes: the current implementation returns 't' for true and 'f' for false, and PostgreSQL 10 introduced identity columns, the SQL-standard-conforming variant of PostgreSQL's serial columns (patch committed by Peter Eisentraut on 6 April 2017).
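A sketch of the PostgreSQL 10 identity-column syntax; the table and column names are illustrative:

-- SQL-standard identity column, the successor to serial.
CREATE TABLE t_demo (
    id  integer GENERATED ALWAYS AS IDENTITY,
    val text
);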
You can use alternation to match a single regular expression out of several possible regular expressions. The greatest challenge with the pivot problem is to recognize it when you run into it; this is particularly true when dealing with the so-called entity-attribute-value (EAV) model, which does not look like a pivot problem but can nevertheless be solved in the very same way. To import from Amazon S3 into RDS for PostgreSQL, your database must be running PostgreSQL version 10 or later (for more information on storing data with Amazon S3, see Create a Bucket in the Amazon Simple Storage Service Getting Started Guide). Finally, using a foreign data wrapper in PostgreSQL is a three-step process: first you declare the extension that implements the data wrapper, then you declare a server carrying the options that apply to all of its tables, and then you define the foreign tables themselves, adding a user mapping when the wrapper connects to a remote server.
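A sketch of those three steps using file_fdw, the flat-file wrapper that ships with PostgreSQL; all names and paths here are illustrative:

-- 1. Declare the extension that implements the wrapper.
CREATE EXTENSION file_fdw;

-- 2. Declare the server; its options apply to all tables created through it.
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;

-- 3. Map a foreign table onto a delimited file.
CREATE FOREIGN TABLE people_ext (
    id   integer,
    name text,
    city text
) SERVER csv_files
  OPTIONS (filename '/tmp/people.csv', format 'csv', header 'true', delimiter ',');

-- For a remote wrapper such as postgres_fdw you would also create a user mapping:
-- CREATE USER MAPPING FOR current_user SERVER remote_pg OPTIONS (user 'app', password '...');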
http://mathematica.stackexchange.com/questions/42795/using-mathematica-in-conjunction-with-lyx-path-issues
|
# Using Mathematica in Conjunction with LyX. Path Issues?
I am attempting to make life easier: instead of running Mathematica alongside LyX (a front end for LaTeX), I was hoping to just use LyX with its CAS option to do some of the simple algebraic computations that Mathematica handles so powerfully.
Of course, this has been a headache for me, as I am not very comfortable with anything Unix related. I already know where to locate Mathematica, its absolute path name, and can put it inside the "PATH prefix" that LyX demands of me. But, even doing that, I cannot get an output on the simplest of computations.
Is anyone familiar with this? I'm up against a wall; I have no idea what to do, and I've been trying for hours.
By the way, as is evident from the picture, I am on a Mac; I don't know if that makes a significant difference.
Thanks!
Would you elaborate? Because I don't really see a question being asked here. Also, any kind of error log is more than welcome :) – Sektor Feb 23 at 21:51
Certainly. I'm wondering why LyX and Mathematica aren't communicating. I've pointed LyX (with what I assume is the correct path) towards Mathematica and basically said, 'OK, now do some basic calculations when I ask you to.' But, this is not the case, LyX and Mathematica don't seem to understand that I wish for them to interact. Whenever I make LyX do a calculation with Mathematica, nothing happens. LyX says that it is going to get Mathematica, it pops up saying "math-extern Mathematica", but nothing happens! Even though I've pointed LyX towards Mathematica! So that's where the issue lies. :) – Matthew Feb 23 at 22:05
These posts might be useful, they mention a batch script to set the path (they don't actually mention when to run it though). link – Ymareth Feb 24 at 9:08
There is a solution described here, where the relevant part is:
...define a wrapper script in your PATH, e.g., /usr/local/bin/math. In this executable script, I have only two lines:
#!/bin/sh
/Applications/Mathematica.app/Contents/MacOS/MathKernel "$@"
I have Mathematica 9.0.1 and Lyx 2.0.7.1 on OS X 10.9.1, and after creating the above "math" script in /usr/local/bin (BTW, I had to do "chmod 755 math" as well) I tested it using the following steps in Lyx:
1. Insert / Math / Inline Formula
2. Type "Expand[(x+y)^2]" - then leave the cursor just after the "]"
3. Edit / Math / Use Computer Algebra System / Mathematica
Lyx and Mathematica then cooperate to fill in the rest of the equation to give "Expand[(x+y)^2] = x^2 + 2 x y + y^2".
http://encyclopedie-energie.org/articles/canada%C2%A0-want-effective-climate-policy
|
Canada: Want an effective climate policy?
Wisely, Prime Minister Justin Trudeau resisted the temptation at the Paris climate summit in December 2015 to double down on Stephen Harper[1]’s 2030 target for Canadian carbon dioxide (CO2) emissions.
While future emissions promises are easily made, effective climate policy is devilishly difficult. To have any chance, one needs to stay wise — which starts by avoiding advice from technology and policy advocates who themselves avoid inconvenient evidence from leading climate policy research and real-world experience. What does this evidence tell us?
1. Limits of carbon pricing
For one thing, it’s a mistake to expect a big contribution from energy efficiency. For three decades, governments and utilities have made efficiency the focus of their emissions reduction efforts, with negligible results.
Yes, energy efficiency is always improving, and we can slightly accelerate that trend. But humans require energy for basic needs and, more important, we keep inventing frivolous devices that use more. (Need evidence? Stroll through your local big-box store.)
The reality is that significant emissions reductions will happen only if we rapidly switch to zero- and partially-zero-emissions technologies.
Fortunately, these are now commercially available. But they won’t be widely adopted unless technologies that burn coal, oil and natural gas are phased out by regulations or made costly to operate by carbon pricing (Read L’insertion des préoccupations environnementales dans les politiques de l’énergie paragraph 3.). The latter can be either a carbon tax, as in British Columbia, or the price of tradable CO2 permits under an emissions cap, as in Quebec.
2. The performance of regulations
This is why, if he has survival instincts, Trudeau won’t depend solely on carbon pricing. Instead, he will do what serious jurisdictions do: regulate. And this is what we’ve done in Canada, although many fail to see it.
2.1. The actual and unrecognized efficiency of regulations
When asked which climate policy in Canada reduced the most CO2 emissions over the last decade, many people guess BC’s well-publicized carbon tax. They’re wrong. It was Ontario’s ban on coal-fired power, which reduced annual emissions by 25 megatonnes (MT).
Surely, then, BC’s carbon tax must have caused the most reductions in that province. Wrong again. The 2007 “clean electricity” regulation forced BC Hydro to cancel two private coal plants and its own gas plant. This cut BC’s projected annual emissions in 2020 by 12 to 18 MT. The carbon tax is slated to reduce 2020 annual emissions by 3 to 5 MT.
It’s the same in any jurisdiction that has significantly reduced emissions. Experts show that the carbon pricing policy in California, which Quebec has now joined, will have almost no effect by 2020. Ninety percent of that state’s current and projected reductions are attributed to innovative, flexible regulations on electricity, fuels, vehicles, buildings, appliances, equipment and land use.
Even Scandinavian countries, famous for two decades of carbon taxes, mostly used regulations to reduce emissions. For example, the greatest CO2 reductions in Sweden happened when publicly owned district heat providers were forced to switch fuels.
2.2. Better social acceptance despite a higher implicit carbon price
Economists will point out that regulations may have high “implicit” carbon prices: this is the carbon price that would be required to cause the same amount of emissions reduction.
Indeed, I and other analysts have estimated an implicit carbon price for Ontario's coal phase-out of $100 to $130, for BC's clean electricity regulation of $80 to $120, and for California's vehicle emissions standard of over $100. But is this high implicit price a bad thing? If it is politically impossible to raise "explicit" carbon prices to $160 to achieve our Paris promise, then at least regulations can get to that price implicitly, today. And Trudeau might get to keep his job.
Many economists – who, I repeat, never face voters – argue that regulations are economically inefficient compared with carbon pricing. This is true, especially if the regulations are poorly designed. Too bad that so few of these economists are willing to apply their intelligence and creativity to the design of relatively efficient regulations that also overcome the huge blockade of political acceptability. Fortunately, some have.
2.3. The convincing example of California's vehicle emissions standard
The history of California’s vehicle emissions standard is illustrative. Since 1990, that state has required vehicle manufacturers and retailers to achieve rising market shares for zero-emissions vehicles (ZEVs) and partial-zero-emissions vehicles (PZEVs).
Thanks to the influence of some creative designers (perhaps renegade economists), that policy does not pick technology winners and losers. For the ZEV category, for example, it is agnostic (when it comes to CO2 emissions) between pure electric, hydrogen fuel cell and pure biofuel. So, as they compete with each other, auto manufacturers have strong incentives to innovate technologies and better understand emerging consumer preferences.
And they quietly cross-subsidize to achieve their sales targets, charging people who buy gas guzzlers a bit more per vehicle in order to reduce the sales price of the higher-cost ZEVs, which they must sell to achieve their market share targets. Conveniently, there is no cost to government. Nor is government blamed for levying a high tax on gas guzzlers.
The California PZEV-ZEV regulation also allows manufacturers to trade among themselves in meeting sales targets, again reducing economic inefficiency.
And the regulation is flexibly applied in response to new information. In the early 2000s, manufacturers convinced regulators to increase the PZEV target in compensation for delaying the ZEV mandate, the reason being that their incredible success with developing hybrid vehicles like the Toyota Prius was achieving the overall emissions objective, and they needed more time to develop consumer-attractive electric and hydrogen options for ZEVs.
Today, PZEVs and ZEVs are leading the way, notably thanks to electric cars and plug-in hybrids, the latter being particularly appreciated by drivers who are on the road throughout the day, within a city or from one city to another. Ultimately, it is therefore the market, not the regulators, that determines the relative contribution of each vehicle category to the total emissions reduction.
2.4. Benefits of regulations
What makes regulating CO2 emissions so convenient is that a few sectors and energy uses account for most emissions. Regulations on electricity generation, furnaces in buildings, boilers in industry, oil and gas production processes, and transportation propulsion systems, like the PZEV-ZEV program, can address 75 percent of Canada’s energy-related CO2 emissions.
A concern about regulations is that the implicit carbon price will differ from one sector of the economy to another, resulting again in economic inefficiency; but all real-world carbon pricing schemes conveniently ignore the same inconsistency. However, government can adjust the stringency of regulations in different sectors over time in order to align implicit carbon prices.
3. Suggestions for an effective Canadian policy
This brief description does not do justice to the creative potential for implicit carbon pricing that is politically acceptable without a big cost in terms of economic efficiency. But how can this help Trudeau, who seeks to develop a national climate plan with the premiers in March?
3.3. Electricity sector regulation
Another obvious sector to target is electricity. Again, Trudeau can turn up the dial on a soft Harper regulation so that, by 2030, all coal power plants must be closed or retrofitted with carbon capture and storage, and natural gas plants would be mostly restricted to backing up intermittent renewables like wind, solar and run-of-river hydro.
Because the impacts of the policy are not equal across the country, Trudeau might have to provide some help to Alberta, Saskatchewan and Nova Scotia, but it should not be much. Already Alberta and Nova Scotia have been working on major transitions away from coal. Trudeau’s regulation needs to make sure that the displaced coal is not all replaced with natural gas.
3.4. Other sectoral regulations
Non-auto transport (buses, trucks, trains, ships), industrial boilers, building space and water heating, and oil and gas are other obvious sectors and end uses for which it is possible to implement flexible, relatively efficient regulations.
My research group and I have found that modest carbon pricing by the provinces in concert with a targeted portfolio of such federal regulations, which are carefully monitored to approximately equate their implicit carbon prices, can enable Canada to achieve the Paris target without harming trade-exposed industries.
Conclusion
Although implementation of effective climate policies will never be easy, there are politically palatable options that have a proven track record of achieving reductions in Canada and abroad.
Yes, encourage emissions pricing which corresponds to the theoretical optimum. But heed the evidence on the effective and relatively efficient role that well-crafted regulations can play in driving the major technological and energy transition we so desperately need.
Rather than listen to those who ignore evidence, Justin Trudeau should focus on developing creative solutions in a second-best world.
Notes and references
[1] Stephen Harper was Prime Minister of Canada from 2006 to 2015.
Further reading
Jaccard Mark (2016). Want an effective climate policy? Heed the evidence. Policy Options, February 2. Available at: http://policyoptions.irpp.org/magazines/february-2016/want-an-effective-climatepolicy-heed-the-evidence/ [Accessed 20/09/2017].
Jaccard Mark (2016). Penny wise and pound foolish on climate policy? Policy Options, October 11. Available at: http://policyoptions.irpp.org/magazines/october-2016/penny-wise-and-pound-foolish-on-climate-policy/ [Accessed 20/09/2017].
Jaccard Mark (2016). Effective climate change regulation: Let's transform Canadian cars. Policy Options, May 31. Available at: http://policyoptions.irpp.org/magazines/may-2016/effective-climate-change-regulation-lets-transform-canadian-cars/ [Accessed 20/09/2017].
|
2018-06-24 18:52:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3345847427845001, "perplexity": 8083.211968283586}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867050.73/warc/CC-MAIN-20180624180240-20180624200240-00332.warc.gz"}
|
https://www.sparrho.com/item/a-riemann-roch-theorem-for-flat-bundles-with-values-in-the-algebraic-chern-simons-theory/94e3f5/
|
# A Riemann-Roch theorem for flat bundles, with values in the algebraic Chern-Simons theory
Research paper by Spencer Bloch, Hélène Esnault
Indexed on: 30 Apr '00. Published on: 30 Apr '00. Published in: Mathematics - Algebraic Geometry.
#### Abstract
Let $f: X \to S$ be a flat morphism over an algebraically closed field $k$ with a relative normal crossings divisor $Y\subset X$, and let $(E, \nabla)$ be a bundle with a connection with log poles along $Y$ and curvature with values in $f^*\Omega^2_{k(S)}$. Then the Gauß-Manin sheaf $R^if_*(\Omega^*_{X/S}({\rm log} Y)\otimes E)$ carries a Gauß-Manin connection $GM^i(\nabla)$. We establish a Riemann-Roch formula relating the algebraic Chern-Simons invariants of $\nabla$, $GM^i(\nabla)$ and the top Chern class of $\Omega^1_{X/S}({\rm log}Y)$.
|
2021-05-12 01:02:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104169607162476, "perplexity": 686.3905239860995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991693.14/warc/CC-MAIN-20210512004850-20210512034850-00620.warc.gz"}
|
https://www.physicsforums.com/threads/area-enclosed-by-the-curve.167280/
|
# Area enclosed by the curve
1. Apr 24, 2007
### Niskamies
The problem is the following:
Use a line integral to find the plane area enclosed by the curve r = a*(cos t)^3 + b*(sin t)^3, 0 < t < 2*pi.
I don't really have a clue how to solve that. The chapter we are right now in is about Green's theorem in the Plane, and all the example problems use x and y instead of t.
I would be very glad if somebody would tell me how to solve this problem.
2. Apr 24, 2007
### chaoseverlasting
I would integrate the curve r with respect to t, taking the limits to be 0 to 2*pi.
3. Apr 24, 2007
### lalbatros
This is easily done (I think) using polar coordinates.
If r(t) is the distance from the origin to the curve,
and if t is the polar angle around the origin,
then the surface is given by
Integral[r(t)²/2, {t, tmin, tmax}]
You have to be careful to decide the limits of integration,
and therefore you need to make a correct graphic of this function to understand the shape you need to evaluate.
There may be some calculations to perform ...
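For a quick numerical check of this formula, here is a minimal sketch (a = b = 1 is an assumed example; the warning about the limits of integration still applies):

```python
import numpy as np

# Polar-area formula: A = (1/2) * integral of r(t)^2 dt.
# a = b = 1 is assumed purely for illustration.
a, b = 1.0, 1.0

t = np.linspace(0.0, 2.0 * np.pi, 100001)
r = a * np.cos(t) ** 3 + b * np.sin(t) ** 3

# Blind trapezoidal integration over [0, 2*pi]; since r changes sign,
# the correct limits should be checked against a plot of the curve.
area = 0.5 * np.trapz(r ** 2, t)
print(area)
```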
Michel
Note:
On the lhs, you wrote r in bold.
Be careful, r is the distance in polar coordinates, it is not a vector and should not be written in bold.
Last edited: Apr 24, 2007
|
2017-07-20 16:41:41
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026819467544556, "perplexity": 737.0348155195406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423269.5/warc/CC-MAIN-20170720161644-20170720181644-00350.warc.gz"}
|
https://tex.stackexchange.com/questions/429297/add-urldate-to-bibliography?noredirect=1
|
# Add 'urldate' to bibliography
I'm using biblatex with a custom bibstyle based on the trad-standard style:
\RequireBibliographyStyle{trad-standard}
In my custom style I redefined some appearances like bold text etc. but now I want to add the urldate to the output if a URL is given.
I have for example a website as reference. Mendeley exported the resource as @misc, which seems to be correct, but the access date (saved as urldate) is not showing up. I searched the code of the trad-standard bibstyle and didn't find the urldate key, so it's quite logical that there is no output. How could I add the functionality of always displaying the urldate, if a URL and the urldate are given, no matter what entry type I'm in?
My guess is to renew the doi+eprint+url macro, as it is responsible for the output of the URL. So if I could append the urldate to the output, it should do what I want. The problem is that I have no idea how to append the urldate to the macro.
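Something along these lines is what I have in mind (a sketch modeled on the doi+eprint+url macro from biblatex's standard.bbx, which routes the URL through the url+urldate bibmacro; I am not sure this is the right way):

```latex
% Sketch only: modeled on the doi+eprint+url macro in standard.bbx.
% The url+urldate bibmacro prints the URL and, if present, the urldate.
\renewbibmacro*{doi+eprint+url}{%
  \iftoggle{bbx:doi}
    {\printfield{doi}}
    {}%
  \newunit\newblock
  \iftoggle{bbx:eprint}
    {\usebibmacro{eprint}}
    {}%
  \newunit\newblock
  \iftoggle{bbx:url}
    {\usebibmacro{url+urldate}}
    {}%
}
```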
Here is my custom .bbx file:
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is a custom bibliography style based on the trad-abbrv style
% by Marco Daniel (2012--2015) and Moritz Wemheuer (2016--).
%
% This package is released under the terms of the
% LaTeX Project Public License v1.3c or later
% See http://www.latex-project.org/lppl.txt
%
% Copyright (c) 2012 -- 2015 Marco Daniel
% 2016 -- Moritz Wemheuer
%
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%%%%%%%%%%%
% Declare Style
% %%%%%%%%%%%%%
\ProvidesFile{custom-abbrv.bbx}[2017/10/18 v1.0.0 Custom abbrv Bibliography Style]
% %%%%%%%%%%%%%%
% Require Styles
% %%%%%%%%%%%%%%
\RequireBibliographyStyle{trad-standard}
% %%%%%%%%%%%%%
% Basic Options
% %%%%%%%%%%%%%
\ExecuteBibliographyOptions{
abbreviate = false,
arxiv = false,
autolang = hyphen,
backref = false,
dateabbrev = true,
eprint = false,
hyperref = true,
labelnumber = true,
maxnames = 3,
minnames = 3,
sorting = none,
}
% %%%%%%%%%%%%%%%%%%%
% %%%%%%%%%%%%%%%%%%%
\@ifpackagelater{biblatex}{2016/03/01}{%
\@ifpackagelater{biblatex}{2016/05/14}{%
\DeclareNameFormat{abbrv}{%
\usebibmacro{name:given-family}%
{\namepartfamily}%
{\namepartgiveni}%
{\namepartprefixi}%
{\namepartsuffixi}%
\usebibmacro{name:andothers}%
}%
}{%
\DeclareNameFormat{abbrv}{%
\nameparts{#1}%
\usebibmacro{name:given-family}%
{\namepartfamily}%
{\namepartgiveni}%
{\namepartprefixi}%
{\namepartsuffixi}%
\usebibmacro{name:andothers}%
}%
}%
}{%
\DeclareNameFormat{abbrv}{%
\usebibmacro{name:first-last}{#1}{#4}{#6}{#8}%
\usebibmacro{name:andothers}%
}%
}%
\DeclareNameAlias{default}{abbrv}
\DeclareFieldFormat{bibentrysetcount}{\mkbibparens{\mknumalph{#1}}}
\DeclareFieldFormat{labelnumberwidth}{\mkbibbrackets{#1}}
\DeclareFieldFormat{shorthandwidth}{\mkbibbrackets{#1}}
\@ifpackagelater{biblatex}{2016/05/14}{%
\defbibenvironment{bibliography}%
{%
\list{%
\printtext[labelnumberwidth]{%
\printfield{labelprefix}%
\printfield{labelnumber}%
}%
}{%
\setlength{\labelwidth}{\labelnumberwidth}%
\setlength{\leftmargin}{\labelwidth}%
\setlength{\labelsep}{\biblabelsep}%
\setlength{\itemsep}{\bibitemsep}%
\setlength{\parsep}{\bibparsep}%
}%
\renewcommand*{\makelabel}[1]{\hss##1}%
}
{\endlist}
{\item}%
}{%
\defbibenvironment{bibliography}%
{%
\list{%
\printtext[labelnumberwidth]{%
\printfield{prefixnumber}%
\printfield{labelnumber}%
}%
}{%
\setlength{\labelwidth}{\labelnumberwidth}%
\setlength{\leftmargin}{\labelwidth}%
\setlength{\labelsep}{\biblabelsep}%
\setlength{\itemsep}{\bibitemsep}%
\setlength{\parsep}{\bibparsep}%
}%
\renewcommand*{\makelabel}[1]{\hss##1}%
}%
{\endlist}%
{\item}%
}
\defbibenvironment{shorthands}%
{%
\list{%
\printfield[shorthandwidth]{shorthand}%
}{%
\setlength{\labelwidth}{\shorthandwidth}%
\setlength{\leftmargin}{\labelwidth}%
\setlength{\labelsep}{\biblabelsep}%
\setlength{\itemsep}{\bibitemsep}%
\setlength{\parsep}{\bibparsep}%
\renewcommand*{\makelabel}[1]{\hss##1}%
}%
}%
{\endlist}%
{\item}%
\DeclareBibliographyDriver{set}%
{%
\entryset%
{%
\ifbool{bbx:subentry}%
{%
\printfield[bibentrysetcount]{entrysetcount}%
}%
{}%
}%
{}%
\newunit\newblock%
\usebibmacro{setpageref}%
\finentry%
}
% %%%%%%%%%%%%%%%%%
% Appearance Styles
% %%%%%%%%%%%%%%%%%
% Name
% Title
\DeclareFieldFormat*{title}{\enquote{#1}\isdot}
\DeclareFieldFormat{journaltitle}{\mkbibemph{#1},}
% Volume
\DeclareFieldFormat[article]{volume}{#1}
% Number
\DeclareFieldFormat*{number}{\mkbibparens{#1}}
% Pages
\DeclareFieldFormat*{pages}{\space#1\space}
% Year
\DeclareFieldFormat{date}{\textbf{#1}}
% URL
\DeclareFieldFormat{url}{URL:\space\url{#1}}
% ISSN / ISBN
\DeclareFieldFormat{issn}{ISSN:\space\url{#1}}
\DeclareFieldFormat{isbn}{ISBN:\space\url{#1}}
% DOI
\DeclareFieldFormat{doi}{%
  % Reconstructed along the lines of the default biblatex definition:
  % \ifhyperref needs both a <true> and a <false> branch.
  \ifhyperref
    {\href{https://doi.org/#1}{DOI:\space\nolinkurl{#1}}}
    {DOI:\space\nolinkurl{#1}}%
}
% Item Separation
\setlength\bibitemsep{.5\baselineskip}
% Alignment
\AtBeginBibliography{\raggedright}
% %%%%%%%%%%%%%%%%%%%%%%%
% Changes in the position
% %%%%%%%%%%%%%%%%%%%%%%%
% Change the position of the year to be in front of the volume
\renewbibmacro*{journal+issuetitle}{%
\usebibmacro{journal}%
\usebibmacro{date}% Added the date, right after the journal
\newcommaunit*%
\iffieldundef{series}
{}
{%
\newunit
\printfield{series}%
}%
\usebibmacro{volume+number+pages+eid}%
\newcommaunit
\usebibmacro{issue+date-parens}%
\usebibmacro{issue}%
\newunit%
}
% Remove the date after the volume
\renewbibmacro*{issue+date-parens}{%
\iffieldundef{issue}%
{%
% \usebibmacro{date}
}%
{%
\printfield{issue}%
\newcommaunit*%
% \usebibmacro{date}
}%
\newunit%
}
\endinput
• I think it is simpler. Does this tex.stackexchange.com/q/428959/105447 help? – gusbrs Apr 30 '18 at 10:44
• Can you give an example (i.e. an MWE/MWEB), please? The unmodified biblatex-trad styles always display URLs and URL dates for all entry types. If you or your customisation set the url=false option, you will get URLs and URL dates for only @online entries (all other types don't display URLs then). If you see a different behaviour even with an explicit url=true, your custom style modifies trad-standard in a way that it destroys the standard behaviour. – moewe Apr 30 '18 at 11:36
• @gusbrs That looks great, but unfortunately it has no effect on my output. – Sam Apr 30 '18 at 12:13
• @moewe URL is set to true and even if I change @misc to @online no date is output. – Sam Apr 30 '18 at 12:14
• Mhhh, the code looks OK and in fact works for me in a short document I created (see gist.github.com/moewew/4e100b2a1f044354ba07927ea6cfac6a). Can you show us a complete example along the line of my link, please? We need to see an example .bib file and how you load your custom style in your document. – moewe Apr 30 '18 at 12:21
As it turned out in the comments, the problem is actually a malformed urldate field as exported by Mendeley. All biblatex date fields must be written in YYYY-MM-DD format (ISO 8601/EDTF); DD.MM.YYYY is not acceptable input and generates a warning:
Invalid format '30.04.2018' of date field 'urldate' - ignoring
In the meantime, here is a fix with Biber's sourcemapping
\documentclass[british]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{babel}
\usepackage{csquotes}
\usepackage[style=authoryear, backend=biber]{biblatex}
\begin{filecontents}{\jobname.bib}
@misc{appleby,
author = {Humphrey Appleby},
title = {On the Importance of the Civil Service},
date = {1980},
url = {http://example.com/~sirhumphrey/cc.pdf},
urldate = {30.04.2018},
}
\end{filecontents}
\DeclareSourcemap{
\maps[datatype=bibtex]{
\map{
\step[fieldsource=urldate, match=\regexp{\A(\d{2})\.(\d{2})\.(\d{4})\Z}, replace={$3-$2-$1}]
}
}
}
\begin{document}
\cite{appleby}
\printbibliography
\end{document}
But don't just use it: Complain to the Mendeley people!
|
2020-01-22 11:21:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41703853011131287, "perplexity": 6936.886729262283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606975.49/warc/CC-MAIN-20200122101729-20200122130729-00248.warc.gz"}
|
https://math.stackexchange.com/questions/1225385/show-that-mathbbz-and-2-mathbbz-are-not-isomorphic-as-rings
|
# Show that $\mathbb{Z}$ and $2\mathbb{Z}$ are not isomorphic as rings.
Show that $\mathbb{Z}$ and $2\mathbb{Z}$ are not isomorphic as rings.
My attempt: Suppose $\mathbb{Z}$ and $2\mathbb{Z}$ are isomorphic as rings, Let $\phi: \mathbb{Z} \rightarrow 2\mathbb{Z}$ be the isomorphism. Then we have $\phi(4) = \phi(2) + \phi(2) = 2n + 2n = 4n\,$ and $\phi(4) = \phi(2)\phi(2) = 4n^2$ and so $n = 0$ or $n = 1$. If $n = 0$, then $\phi$ is not surjective, which contradicts the fact that $\phi$ is an isomorphism. If $n = 1$, then $\phi(3) = 3 \notin 2\mathbb{Z}$, which again gives us a contradiction.
Your reasoning is correct. However, you should say that you define $n=\phi(1)$. Also $n=1$ is already ruled out by $n=\phi(1)\in 2\mathbb Z$.
The more elegant approach to this problem would be to show that $2\mathbb Z$ has no multiplicative identity: if some $e \in 2\mathbb Z$ were an identity, then $2e = 2$ would force $e = 1 \notin 2\mathbb Z$.
An easy way for me:
Since $1$ is a generator of $\mathbb Z$ and $\phi$ is an isomorphism, $\phi(1)$ is a generator of $2\mathbb Z$.
Then $\phi(1)=2$ or $\phi(1)=-2$.
But $\phi(1)=\phi(1 \cdot 1)=\phi(1) \cdot \phi(1)=4$ in both cases.
Since $\phi(1)$ cannot equal both $\pm 2$ and $4$, we have a contradiction.
|
2019-08-25 11:14:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9826127290725708, "perplexity": 53.22751464568488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323328.16/warc/CC-MAIN-20190825105643-20190825131643-00044.warc.gz"}
|
https://goquady.pl/vertical/2021-Apr-7-1362.html
|
# Weight of a Ball Mill's Balls
## Weight of a Ball Mill - rejs.waw.pl
Ball mill - Wikipedia. A ball mill is a type of grinder used to grind or blend materials for use in mineral dressing processes, paints, pyrotechnics, ceramics, and selective laser sintering. It works on the principle of impact and attrition: size reduction is done by impact as the balls drop from near the top of the shell.
## Why use Different Size Balls in a Mill
2017-7-3 · The big end was 2 feet in diameter and the small end 1 foot. The ability of the mill to segregate the balls was demonstrated by tests. Grinding tests with several types of mills and ball loads led to the conclusion that advantages that had been gained were due more to the appropriate average size of balls than to the new design of mill.
## Bal-tec - Ball Weight and Density
The answer is calculated by multiplying the volume of the ball by the density of the material: $\text{Weight} = \text{Volume} \cdot \text{Density}$. For example, to calculate the weight of a two inch diameter lead ball: $\text{Volume} = \frac{4 \pi R^3}{3}$, where $\pi$ is a universal constant, $\pi \approx 3.1416$.
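As a worked version of that rule, here is a minimal sketch (the density of lead, roughly 0.409 lb per cubic inch, is an assumed value):

```python
import math

def ball_weight(diameter_in: float, density_lb_per_in3: float) -> float:
    """Weight = volume x density for a solid sphere."""
    radius = diameter_in / 2.0
    volume = 4.0 * math.pi * radius ** 3 / 3.0  # sphere volume, cubic inches
    return volume * density_lb_per_in3

# Two-inch-diameter lead ball, as in the example above:
print(round(ball_weight(2.0, 0.409), 3))  # about 1.713 lb
```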
## What Weight Medicine Ball Should I Get? - SET FOR SET
2021-1-27 · A medicine ball, also known as an exercise ball or a med ball, is a solid weighted ball that weighs anywhere from 1-50 pounds. They can be about as small as a softball or as large as a basketball. As for the shell of a medicine ball, it can be made from various materials, such as nylon, vinyl, kevlar, leather, polyurethane, or dense rubber.
## How Much Does a Golf Ball Weigh? We Weighed 100 and
2020-2-13 · Poor balls weighed 45.72, average balls weighed 45.73, good balls weighed 45.8 and new balls weighed 45.93 on average. If you play poor condition balls, you could see a weight difference of 3.3 grams from one ball to another. If you play average condition balls, they could differ up to 2.9 grams in weight from one ball to the next.
The starting point for ball mill media and liquid charging is generally as follows: 50% media charge. Assuming 26% void space between spherical balls (non-spherical, irregularly shaped and mixed-size media will increase or decrease the free space), 50% x 26% = 13% free space. Add to this another 45% to 50% above the ball charge for a total of 58% ...
## How to Choose the Right Slam Ball Weight for You ...
To choose the right slam ball weight for you, you’ll need to know your own strength. Typically speaking, beginners can use slam balls between 10 to 25 pounds, whereas intermediate and expert lifters can use slam balls over 40 pounds. You’ll need to consider your goals as well.
## The grinding balls bulk weight in fully unloaded mill
2017-4-11 · In the previous article we considered the method for determining the bulk weight of new grinding media. Determining the bulk weight of grinding balls directly operating in a ball mill becomes necessary in practice. It is done in order to accurately determine the grinding ball mass during measuring in a ball mill and to exclude the possibility of overloading the mill with grinding balls.
## Ball Mills - an overview | ScienceDirect Topics
8.3.2.2 Ball mills. The ball mill is a tumbling mill that uses steel balls as the grinding media. The length of the cylindrical shell is usually 1–1.5 times the shell diameter (Figure 8.11). The feed can be dry, with less than 3% moisture to minimize ball coating, or slurry containing 20–40% water by weight.
## Industrial Ball Mills: Steel Ball Mills and Lined Ball ...
Steel Ball Mills & Lined Ball Mills. Particle size reduction of materials in a ball mill with the presence of metallic balls or other media dates back to the late 1800’s. The basic construction of a ball mill is a cylindrical container with journals at its axis. The cylinder is filled with grinding media (ceramic or metallic balls
## Weight Of Soccer Ball | What Things Weigh
2021-8-21 · Soccer (association football) is the world’s most popular sport by far. It is estimated that there are over 4 billion soccer fans across the world. The governing body of soccer, FIFA, lays down strict guidelines for the weight of soccer balls. Weights of Soccer Balls. A size 3 ball is used by children under the age of 8. This soccer ball ...
## Ball Mill Design/Power Calculation
2015-6-19 · The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are: material to be ground, characteristics, Bond Work Index, bulk density, specific density, desired mill tonnage capacity DTPH, operating % solids or
## The Size and Weight Of A Golf Ball - EzineArticles
A smaller ball has a tendency to fly further than a larger ball due to less air resistance on a smaller object in flight, or in other words the smaller ball does not need to displace as much air as a larger ball. Keeping this in mind, most manufacturers will produce golf balls to the minimum size.
## Weight Limits for Stability Balls | Livestrong
While larger stability balls are often capable of supporting greater weight, it is important that the ball's dimensions remain a good fit for your frame. People shorter than 4 feet, 8 inches should use a ball measuring 45 cm in diameter, while those between 4'8" and 5'5" should use a ball measuring 55 cm.
## Amazon: weighted exercise balls
Mar 13, 2021 - 5 Recommendations. Weighted Ball Exercise is a great way to get your heart rate up, burn calories and get in shape. It’s also one of the best ways to improve your posture, balance, flexibility, coordination, core strength, hand-eye-coordination, muscle tone, bone density, blood circulation, mood, sleep quality, and more. If you ...
## How Many Balls in a Ball Mill? - JXSC Machine
2021-7-7 · How many balls in a ball mill can obtain the best grinding efficiency? Let's figure out. Ball mill steel balls are consumables of the grinding plant, milling balls need to be replenished at intervals to ensure the grinding quality and grinding rate.
## Ball Mill | Ball Mills | Wet & Dry Grinding | DOVE
DOVE supplies various types and sizes of Ball Mill Balls, including cast iron steel balls, forged grinding steel balls, and high chrome cast steel bars, with surface hardness of 60-68 HRC. DOVE Ball Mills achieve size reduction by impact and attrition. When the cylinder rotates, the balls are dragged to almost the top of the shell, and from there ...
## Ball Mill Balls factory, Buy good quality Ball Mill Balls ...
Ball Mill Balls factory, Buy good quality Ball Mill Balls products from China. Cr1.5 Wear Resistance Cast Iron 65 HRC Ball Mill Balls. Cast Iron Mining 62 HRC High Chrome Grinding Media Balls. 60Mn B2 B3 Materials Steel Grinding Media For Mining. Oil Quenching Forged Steel Grinding 65hrc Ball Mill Balls.
The starting point for ball mill media and solids charging is generally as follows: 50% media charge. Assuming 26% void space between spherical balls (non-spherical, irregularly shaped and mixed-size media will increase or decrease the free space), 50% x 26% = 13% free space. Add to this another 10%-15% above the ball
## Ball Mills - Mineral Processing & Metallurgy
2017-2-13 · CERAMIC LINED BALL MILL. Ball Mills can be supplied with either ceramic or rubber linings for wet or dry grinding, for continuous or batch type operation, in sizes from 15″ x 21″ to 8′ x 12′. High density ceramic linings of uniform hardness male
## Cast Grinding Steel Balls - 911Metallurgist
2021-8-18 · Impacts in primary ball mills are far more frequent but have less magnitude than those experienced in SAG mills. The increased frequency is due to the increase in charge volume (35–40% versus 5–10%), higher mill speeds, and the larger number of balls
## Weight of an Official Soccer Ball | SportsRec
2018-12-5 · Official Weight. FIFA-approved soccer balls come in two sizes, size 4 and size 5, according to FIFA's specification chart. Players older than 12 years of age use size 5 balls, while players aged 8 to 12 years use size 4 balls, according to Soccer Ball World. FIFA regulations state size 5 balls must weigh between 420 g and 450 g, while size 4 ...
## Ball Mills - Mine Engineer.Com
2012-11-26 · A Ball Mill grinds material by rotating a cylinder with steel grinding balls, causing the balls to fall back into the cylinder and onto the material to be ground. The rotation is usually between 4 and 20 revolutions per minute, depending upon the diameter of the mill. The larger the
## The Balls - Snooker rules and refereeing
The balls are 52.5mm with a tolerance of +/- 0.05mm, a little larger than 2 1/16". As the balls get played on the table, they will lose mass. This is especially true of the cue ball, which is frequently significantly lighter in older sets, which throws off the player's ability to judge positional play accurately.
|
2022-08-18 11:16:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45633354783058167, "perplexity": 4219.3567344960975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573193.35/warc/CC-MAIN-20220818094131-20220818124131-00389.warc.gz"}
|
https://cs.stackexchange.com/tags/xor/hot
|
# Tag Info
## Hot answers tagged xor
### Minimum XOR for queries
Using a trie data structure, you can solve this problem in $O(m + n)$ if we know that the values are machine integers (e.g. all 32-bit or 64-bit values). Let's say we know that all integers in $A$ are 32-...
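A hedged sketch of that idea in Python (a dictionary-based binary trie; the 32-bit width and the example set are assumptions made for illustration):

```python
BITS = 32  # assumed fixed width for all values

def build_trie(values):
    """Insert each value into a binary trie, most significant bit first."""
    root = {}
    for v in values:
        node = root
        for i in range(BITS - 1, -1, -1):
            node = node.setdefault((v >> i) & 1, {})
    return root

def min_xor(root, x):
    """Walk the trie preferring x's own bit, so each XOR bit stays 0 if possible."""
    node, best = root, 0
    for i in range(BITS - 1, -1, -1):
        bit = (x >> i) & 1
        if bit in node:
            node = node[bit]        # matching bit: this XOR bit is 0
        else:
            node = node[1 - bit]    # forced mismatch: XOR gains 2**i
            best |= 1 << i
    return best

A = [3, 10, 25]                     # invented example set
print(min_xor(build_trie(A), 8))    # 8 ^ 10 == 2 is the minimum
```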
### Subset of numbers whose XOR has least Hamming weight
Your problem is known as calculating the minimal distance of a (binary) linear code, and is NP-hard, as shown by Vardi. It is even NP-hard to approximate within any constant factor, as shown by Dumer, ...
### Can I use "XORing" in my thesis?
Some useful guidelines for writing are: Who is my audience? Will my audience understand what I am writing? Will my usage of language be distracting or otherwise take attention away from the message I'...
### Find number X by asking Hamming distance between X and Y in binary representation
One reasonable heuristic would be to use a greedy algorithm, which at each step picks a query that is likely to reduce the space of possibilities as much as possible. Suppose you care about the ...
### XOR two numbers
XOR does have meaning in how decimal numbers are stored, especially if you are considering using signed decimal notation. I think of XOR because it is useful in calculations requiring the 2's ...
|
2022-05-21 06:41:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30993637442588806, "perplexity": 1583.997328624489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662538646.33/warc/CC-MAIN-20220521045616-20220521075616-00738.warc.gz"}
|
https://physics.paperswithcode.com/paper/quantum-distributed-algorithm-for-triangle
|
# Quantum Distributed Algorithm for Triangle Finding in the CONGEST Model
21 Jan 2020 · Taisuke Izumi, François Le Gall, Frédéric Magniez
This paper considers the triangle finding problem in the CONGEST model of distributed computing. Recent works by Izumi and Le Gall (PODC'17), Chang, Pettie and Zhang (SODA'19) and Chang and Saranurak (PODC'19) have successively reduced the classical round complexity of triangle finding (as well as triangle listing) from the trivial upper bound $O(n)$ to $\tilde O(n^{1/3})$, where $n$ denotes the number of vertices in the graph.
# Categories
• QUANTUM PHYSICS
• DISTRIBUTED, PARALLEL, AND CLUSTER COMPUTING
|
2021-04-11 18:41:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6124860644340515, "perplexity": 10700.483424893468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064898.14/warc/CC-MAIN-20210411174053-20210411204053-00532.warc.gz"}
|
http://www.csam.or.kr/journal/view.html?doi=10.29220/CSAM.2018.25.1.001
|
On inference of multivariate means under ranked set sampling
Haresh Rochani (1,a), Daniel F. Linder (b), Hani Samawi (a), and Viral Panchal (b)
(a) Department of Biostatistics, Georgia Southern University, USA; (b) Department of Biostatistics, Augusta University, USA
Correspondence to: (1) Corresponding author: Jiann-Ping Hsu College of Public Health, Department of Biostatistics, Georgia Southern University, Statesboro, GA 30460, USA. Email: hrochani@georgiasouthern.edu
Received May 31, 2017; Revised September 27, 2017; Accepted December 8, 2017.
Abstract
In many studies, a researcher attempts to describe a population whose units are measured for multiple outcomes, or responses. In this paper, we present an efficient procedure based on ranked set sampling to estimate and perform hypothesis tests on a multivariate mean. The method is based on ranking on an auxiliary covariate, which is assumed to be correlated with the multivariate response, in order to improve the efficiency of the estimation. We show that the estimators developed under this sampling scheme are unbiased, have smaller variance in the multivariate sense, and are asymptotically Gaussian. We also demonstrate that the efficiency of the multivariate regression estimator can be improved by using ranked set sampling. A bootstrap routine is developed in the statistical software R to perform inference when the sample size is small. We use a simulation study to investigate the performance of the method under known conditions and apply the method to biomarker data collected in the China Health and Nutrition Survey (CHNS 2009).
Keywords: multivariate mean, ranked set sampling, hypothesis testing, regression estimator
1. Introduction
As the complexity and cost of biological experiments have grown considerably in recent years, partly due to technological advances (high-throughput technologies and more), there is an increasing need to design experiments that maximize the information content of the collected sample. For most standard statistical analyses, where the aim is to estimate some population parameter, maximizing information translates into minimizing the variance associated with a parameter's estimate. In many situations, researchers observe multiple outcomes for each unit in the sample and wish to make inferences on a parameter of the underlying population's joint distribution; routinely this is done via estimating the population mean vector. It is often the case that some or all of the individual components of this response vector are costly, risky (complications due to biopsy), or even destructive (requiring animal sacrifice). In such cases it may be desirable, for monetary or ethical reasons, to extract information from each unit that is sampled, without taking the exact measurement of the response of interest for each unit.
The most common data collection method for making inferences about a population parameter is the simple random sample (SRS). Even though each subject selected by SRS has an equal chance of being selected, there is no guarantee that a particular selected sample will truly represent the population. The only guarantee is that if the sampling process is repeated over and over again, the average of the attribute of interest across multiple SRSs provides a good estimator of the population value of that attribute. Ranked set sampling (RSS) (McIntyre, 1952) is a sampling scheme that allows researchers to use information from each unit in the sample without taking every unit's exact measurement. The overall goal of RSS is to obtain a sample that is more likely to span the full range of values in the population, and is therefore more representative than an SRS of similar size. Traditionally, RSS can be used provided there is a reliable ranking mechanism available for the response of interest, one that is cheaper or safer than exact measurement. The ranked but unmeasured units provide increased information over an SRS of the same size, improving parameter inference. The additional information provided by ranking is due to the fact that aspects of population structure are encoded through the order statistics: knowledge of an observation's order statistic together with exact measurements improves inference, since ranked units target different population attributes, unlike the identically distributed units of an SRS. This has been shown in many works to translate into improved parameter inference compared to simple random samples of the same size.
In many situations, the outcome of interest is correlated with some auxiliary variable which may be easier to measure than the outcome itself. For instance, weight may be correlated with fasting blood glucose and may be easily obtained, whereas a lab measurement would be necessary for blood glucose. Applications of RSS have appeared in a series of papers; see, for example, Chen (1999), Demir and Çıngı (2000), Huang et al. (2016), Jabrah et al. (2017), Kaur et al. (1996), and Samawi and Al-Sagheer (2001).
An outline of the paper is as follows. In Section 2 we introduce the necessary notation and prove that the RSS mean estimator is unbiased with smaller variance than under SRS; we also derive the limiting distribution of Hotelling's statistic (Q) as well as the multivariate regression estimator using RSS. In Section 3, we perform a simulation study to compare the performance of RSS to SRS in terms of estimation as well as hypothesis testing. In Section 4, we apply the method to a real data set in the context of public health. We give concluding remarks and future directions for the method in Section 5.
2. Multivariate mean estimation using ranked set sampling
### 2.1. Ranked set sampling procedure
In this section, we briefly describe how a ranked set sample may be collected for a univariate random variable. To select an RSS of size n based on the auxiliary variable (X), the following steps should be performed.
• Select an SRS of size r from the population based on the auxiliary variable (X). Here r is referred to as the set size, which is typically between 2 and 5, although any size is possible; sizes larger than 5 may become impractical (Takahasi and Wakimoto, 1968).
• Order the set on the auxiliary variable and choose the minimum, X(1). Measure the multivariate outcome of interest Y[1].
• Select another SRS of size r and order it on the auxiliary variable again. Choose the second smallest value, X(2), and measure the multivariate outcome of interest Y[2].
• Repeat this process until X(r) and Y[r] from the r-th independent SRS are obtained.
• The entire process of obtaining (X(1), X(2), …, X(r)) and (Y[1], Y[2], …, Y[r]) is called a cycle.
• Repeat m independent cycles to obtain an RSS of size n = rm.
Table 1 represents the structure of the RSS. For more details about RSS, see Jozani and Johnson (2011), Kowalczyk (2004), Patil et al. (1995), and Takahasi and Futatsuya (1998). A minimal simulation sketch of the procedure follows.
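In this sketch, a bivariate normal population with corr(X, Y) = 0.8 is assumed purely for illustration, and Y is kept univariate for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_srs(n):
    """Assumed population: auxiliary X and outcome Y with correlation 0.8."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=n)
    return z[:, 0], z[:, 1]

def ranked_set_sample(r, m):
    """m cycles; in each cycle the i-th ranked-on-X unit is taken from
    its own fresh SRS of size r, and that unit's Y is measured."""
    xs, ys = [], []
    for _ in range(m):
        for i in range(r):
            x, y = draw_srs(r)
            j = np.argsort(x)[i]  # ranking is done on X only
            xs.append(x[j])
            ys.append(y[j])
    return np.array(xs), np.array(ys)

x_rss, y_rss = ranked_set_sample(r=3, m=200)
print(y_rss.mean())  # unbiased for E[Y] = 0, with smaller variance than SRS
```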
### 2.2. Multivariate naive estimator
Our population of interest is a univariate auxiliary variable $X$ and a $d$-dimensional multivariate outcome $Y$, with the covariance of the joint distribution of $(Y, X)$ partitioned as $\Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}$. Assuming that we have collected $m$ RSS cycles of set size $r$, where the ranking has been done on $X$, we denote the data $(X_{(i)k}, Y_{[i]k})$, $i = 1, 2, \ldots, r$, $k = 1, 2, \ldots, m$. The parenthesized subscript on $X$ indicates that ranking has been done on $X$; the bracketed subscript on $Y$ indicates that ranking on $X$ may result in imperfect ranking on the elements of $Y$. The naive estimator is defined as $\hat{\mu}_y^{RSS} = \frac{1}{rm}\sum_{k=1}^{m}\sum_{i=1}^{r} Y_{[i]k}$. It is straightforward to show this is an unbiased estimator of the mean of $Y$: since $\sum_{i=1}^{r} f_{X_{(i)}}(x) = r f_X(x)$ (Dell and Clutter, 1972),
$$E\hat{\mu}_y^{RSS} = \frac{1}{rm}\sum_{k=1}^{m}\sum_{i=1}^{r}\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} y\, f_{Y \mid X = x}(y \mid x)\, f_{X_{(i)}}(x)\, dy\, dx = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} y\, f_{Y \mid X = x}(y \mid x)\, \frac{1}{r}\sum_{i=1}^{r} f_{X_{(i)}}(x)\, dy\, dx = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} y\, f_{Y \mid X = x}(y \mid x)\, f_X(x)\, dy\, dx = \mu_y.$$
Similarly for the variance (Dell and Clutter, 1972), by defining $\mu_{[i]} = E Y_{[i]}$ we have
$$\mathrm{Var}\left(\hat{\mu}_y^{RSS}\right) = \frac{1}{(rm)^2}\sum_{k=1}^{m}\sum_{i=1}^{r} E\left[\left(Y_{[i]k} - \mu_{[i]}\right)\left(Y_{[i]k} - \mu_{[i]}\right)^\top\right] = \frac{1}{(rm)^2}\sum_{k=1}^{m}\sum_{i=1}^{r}\left\{E\left[\left(Y_{[i]k} - \mu\right)\left(Y_{[i]k} - \mu\right)^\top\right] - \left(\mu_{[i]} - \mu\right)\left(\mu_{[i]} - \mu\right)^\top\right\} = \frac{1}{rm}\left(\Sigma_{11} - \frac{1}{r}\sum_{i=1}^{r}\left(\mu_{[i]} - \mu\right)\left(\mu_{[i]} - \mu\right)^\top\right),$$
where the last equality again uses $\sum_{i=1}^{r} f_{X_{(i)}}(x) = r f_X(x)$.
It is clear that $\sum_{i=1}^{r}(\mu_{[i]} - \mu)(\mu_{[i]} - \mu)^\top$ is positive semi-definite, since for every $u \in \mathbb{R}^d$ we have $u^\top(\mu_{[i]} - \mu)(\mu_{[i]} - \mu)^\top u = \left(u^\top(\mu_{[i]} - \mu)\right)^2 \geq 0$. Then for every $u \in \mathbb{R}^d$, $u^\top\left(\mathrm{Var}(\hat{\mu}_y^{SRS}) - \mathrm{Var}(\hat{\mu}_y^{RSS})\right)u \geq 0$, or equivalently $\mathrm{Var}(\hat{\mu}_y^{SRS}) \succeq \mathrm{Var}(\hat{\mu}_y^{RSS})$. Under the additional assumption that $r \geq d$ and $X$ is correlated with each component of $Y$, we have strict inequality.
### 2.3. Multivariate regression estimator
Regression estimators are used to increase precision in mean estimation by incorporating information from an auxiliary variable. In this case, we assume a linear regression of $Y$ on $X$,
$$Y = \mu_y + \beta(X - \mu_x) + \varepsilon,$$
where $X$ and $\varepsilon$ are independent and $\varepsilon$ is a mean-zero residual vector with covariance $\Sigma_\varepsilon$. Then the regression equation with the corresponding RSS data is
$$Y_{[i]k} = \mu_y + \beta\left(X_{(i)k} - \mu_x\right) + \varepsilon_{(i)k}, \qquad i = 1, 2, \ldots, r, \quad k = 1, 2, \ldots, m.$$
It is worth noting that the mean of $X$, $\mu_x$, is typically unknown. However, since the auxiliary variable $X$ may be much cheaper to measure, one may use the $r^2 m$ units collected in the first stage of sampling to estimate this quantity as $\bar{\mu}_x = \frac{1}{r^2 m}\sum_{k=1}^{m}\sum_{i=1}^{r}\sum_{j=1}^{r} X_{ijk}$.
Then the regression estimator for the mean of the response is given by
$$\bar{Y}_{reg} = \hat{\mu}_y^{RSS} + \hat{\beta}\left(\bar{\mu}_x - \hat{\mu}_x\right),$$
where
$$\hat{\mu}_x = \frac{1}{rm}\sum_{k=1}^{m}\sum_{i=1}^{r} X_{(i)k}, \qquad \hat{\beta} = \frac{\sum_{k=1}^{m}\sum_{i=1}^{r}\left(X_{(i)k} - \hat{\mu}_x\right)\left(Y_{[i]k} - \hat{\mu}_y^{RSS}\right)}{\sum_{k=1}^{m}\sum_{i=1}^{r}\left(X_{(i)k} - \hat{\mu}_x\right)^2}.$$
It is straightforward to show that $\bar{\mu}_x$ and $\hat{\mu}_x$ are unbiased estimates of $\mu_x$ using arguments similar to those in the previous section. When the linear model above holds, conditioning on $X$ implies that $E\hat{\beta} = \beta$ and $E\bar{Y}_{reg} = \mu_y$, so that the regression estimator based on RSS is unbiased. Also,
$$\mathrm{Var}\left(\bar{Y}_{reg}\right) = E_X \mathrm{Var}_Y\left(\bar{Y}_{reg} \mid X\right) + \mathrm{Var}_X E_Y\left(\bar{Y}_{reg} \mid X\right).$$
Since $E_Y(\bar{Y}_{reg} \mid X) = \mu_y + \beta(\bar{\mu}_x - \hat{\mu}_x)$, the second term above is $\frac{1}{r^2 m}\sum_{i=1}^{r}\sigma_{X_{(i)}}^2\, \beta\beta^\top$. For the first term, $\mathrm{Cov}(\hat{\mu}_y^{RSS}, \hat{\beta}(\bar{\mu}_x - \hat{\mu}_x) \mid X) = 0$, so that
$$E_X \mathrm{Var}_Y\left(\bar{Y}_{reg} \mid X\right) = E_X \mathrm{Var}_Y\left(\hat{\mu}_y^{RSS} \mid X\right) + E_X \mathrm{Var}_Y\left(\hat{\beta}(\bar{\mu}_x - \hat{\mu}_x) \mid X\right) = \frac{1}{(rm)^2}\sum_{k=1}^{m}\sum_{i=1}^{r}\Sigma_\varepsilon + E_X\left((\bar{\mu}_x - \hat{\mu}_x)^2\, \mathrm{Var}_Y(\hat{\beta} \mid X)\right) = \frac{1}{n}\Sigma_\varepsilon + \Sigma_\varepsilon\, E_X \frac{(\bar{\mu}_x - \hat{\mu}_x)^2}{\sum_{k=1}^{m}\sum_{i=1}^{r}\left(X_{(i)k} - \hat{\mu}_x\right)^2}.$$
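A hedged numerical sketch of $\bar{Y}_{reg}$ (univariate $Y$ for brevity; the data and the value passed for $\bar{\mu}_x$ are invented for illustration):

```python
import numpy as np

def regression_estimator(x_rss, y_rss, mu_bar_x):
    """Y_bar_reg = mu_y_hat + beta_hat * (mu_bar_x - mu_x_hat)."""
    mu_x_hat = x_rss.mean()
    mu_y_hat = y_rss.mean()
    beta_hat = (np.sum((x_rss - mu_x_hat) * (y_rss - mu_y_hat))
                / np.sum((x_rss - mu_x_hat) ** 2))
    return mu_y_hat + beta_hat * (mu_bar_x - mu_x_hat)

# Invented data standing in for an RSS sample ranked on x:
rng = np.random.default_rng(1)
x = rng.normal(size=60)
y = 0.5 * x + rng.normal(scale=0.5, size=60)
print(regression_estimator(x, y, mu_bar_x=0.0))  # mu_bar_x assumed estimated
```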
### 2.4. Testing for $H_0: \mu = \mu_0$
### Theorem 1
Let $\{Y_{[i]k}\}$, $i = 1, 2, \ldots, r$, $k = 1, 2, \ldots, m$, be an RSS sample from a normal population with mean vector $\mu$ and variance-covariance matrix $\Sigma_{11}$. Let
$$\bar{Y}_{rss} = \frac{1}{rm}\sum_{k=1}^{m}\sum_{i=1}^{r} Y_{[i]k}, \qquad S_{rss} = \frac{1}{rm-1}\sum_{k=1}^{m}\sum_{i=1}^{r}\left(Y_{[i]k} - \bar{Y}_{rss}\right)\left(Y_{[i]k} - \bar{Y}_{rss}\right)^\top,$$
$$Q = mr\left(\bar{Y}_{rss} - \mu_0\right)^\top S_{rss}^{-1}\left(\bar{Y}_{rss} - \mu_0\right).$$
Then, for large samples, the limiting distribution of $Q$ under the null hypothesis $\mu = \mu_0$ is the $\chi^2$-distribution with $d$ degrees of freedom.
Proof. Write
$$\bar{Y}_{rss} = \frac{1}{rm}\sum_{k=1}^{m}\sum_{i=1}^{r} Y_{[i]k} = \frac{1}{r}\sum_{i=1}^{r}\bar{Y}_{[i]}, \qquad \bar{Y}_{[i]} = \frac{1}{m}\sum_{k=1}^{m} Y_{[i]k}.$$
From the multivariate central limit theorem, $\sqrt{m}\left(\bar{Y}_{[i]} - \mu_{[i]}\right) \xrightarrow{d} N_d\left(0, \Sigma_{11[i]}\right)$ as $m \to \infty$, where $\Sigma_{11[i]}$ is the variance-covariance matrix of $Y_{[i]}$.
Since the $\bar{Y}_{[i]}$ are independent,
$$\sqrt{mr}\left(\bar{Y}_{rss} - \mu\right) \xrightarrow{d} N_d\left(0, \frac{1}{r}\sum_{i=1}^{r}\Sigma_{11[i]}\right) = N_d\left(0, \Sigma_{11R}\right),$$
where $\Sigma_{11R}$ denotes the limiting variance-covariance matrix of $\sqrt{mr}\,\bar{Y}_{rss}$. Therefore
$$\sqrt{mr}\,\Sigma_{11R}^{-\frac{1}{2}}\left(\bar{Y}_{rss} - \mu\right) \xrightarrow{d} N_d(0, I),$$
and hence, under $H_0$,
$$mr\left(\bar{Y}_{rss} - \mu_0\right)^\top \Sigma_{11R}^{-1}\left(\bar{Y}_{rss} - \mu_0\right) \xrightarrow{d} \chi^2_{(d)}.$$
Since $S_{rss}^{-1}\Sigma_{11R} \xrightarrow{p} I$ (see the Appendix for more detail), Slutsky's theorem gives
$$Q = mr\left(\bar{Y}_{rss} - \mu_0\right)^\top S_{rss}^{-1}\left(\bar{Y}_{rss} - \mu_0\right) \xrightarrow{d} \chi^2_{(d)}.$$
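For illustration, a minimal sketch of computing $Q$ and its large-sample p-value (the data are invented; SciPy's $\chi^2$ survival function supplies the reference distribution):

```python
import numpy as np
from scipy.stats import chi2

def q_test(y, mu0):
    """y: (n, d) array of RSS measurements; returns (Q, large-sample p-value)."""
    n, d = y.shape
    diff = y.mean(axis=0) - np.asarray(mu0)
    s = np.cov(y, rowvar=False)           # divisor n - 1, matching S_rss
    q = n * diff @ np.linalg.solve(s, diff)
    return q, chi2.sf(q, df=d)

rng = np.random.default_rng(2)
y = rng.multivariate_normal([0.3, 0.3, 0.3, 0.2], np.eye(4), size=120)
print(q_test(y, [0.0, 0.0, 0.0, 0.0]))   # invented data, testing mu = 0
```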
### 2.5. Testing for $H_0: \mu^{(1)} = \mu^{(2)}$
### Theorem 2
Let $\{Y_{[i]k}^{(t)}\}$, $i = 1, 2, \ldots, r_t$, $k = 1, 2, \ldots, m_t$, $t = 1, 2$, be two independent RSS samples from populations with mean vectors $\mu^{(1)}$ and $\mu^{(2)}$, respectively. Let
$$\bar{Y}_{rss}^{(1)} = \frac{1}{r_1 m_1}\sum_{k=1}^{m_1}\sum_{i=1}^{r_1} Y_{[i]k}^{(1)}, \qquad \bar{Y}_{rss}^{(2)} = \frac{1}{r_2 m_2}\sum_{k=1}^{m_2}\sum_{i=1}^{r_2} Y_{[i]k}^{(2)},$$
$$S_{rss} = \frac{1}{r_1 m_1 + r_2 m_2 - 2}\left[\sum_{k=1}^{m_1}\sum_{i=1}^{r_1}\left(Y_{[i]k}^{(1)} - \bar{Y}_{rss}^{(1)}\right)\left(Y_{[i]k}^{(1)} - \bar{Y}_{rss}^{(1)}\right)^\top + \sum_{k=1}^{m_2}\sum_{i=1}^{r_2}\left(Y_{[i]k}^{(2)} - \bar{Y}_{rss}^{(2)}\right)\left(Y_{[i]k}^{(2)} - \bar{Y}_{rss}^{(2)}\right)^\top\right].$$
Then
$$Q = \frac{r_1 m_1 \cdot r_2 m_2}{r_1 m_1 + r_2 m_2}\left(\bar{Y}_{rss}^{(1)} - \bar{Y}_{rss}^{(2)}\right)^\top S_{rss}^{-1}\left(\bar{Y}_{rss}^{(1)} - \bar{Y}_{rss}^{(2)}\right),$$
for large samples, has a limiting $\chi^2$ distribution with $d$ degrees of freedom under $H_0: \mu^{(1)} = \mu^{(2)}$.
Proof
The proof is similar to that of Theorem 1.
### 2.6. Small samples
For small to moderate samples under SRS, the $Q$ statistic under $H_0$ is distributed as $\frac{(N-1)d}{N-d} F_{d,N-d}$ (Seber, 2009). As an explicit distribution of the $Q$ statistic is not known for small or moderate RSS samples, we recommend performing hypothesis testing by the bootstrap method. A resampling method for RSS was proposed by Chen et al. (2004) and Modarres et al. (2006); they suggest a natural method that obtains bootstrap samples from each row (within cycle) of an RSS, as sketched below.
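A sketch of that row-wise resampling (the array shape convention, rank $i$ by cycle $k$ by dimension $d$, is assumed):

```python
import numpy as np

rng = np.random.default_rng(3)

def rss_row_bootstrap(y, n_boot=1000):
    """y: (r, m, d) RSS array; resample the m cycles within each rank row
    and return the bootstrap distribution of the mean vector."""
    r, m, d = y.shape
    out = np.empty((n_boot, d))
    for b in range(n_boot):
        resampled = np.stack([y[i, rng.integers(0, m, size=m)]
                              for i in range(r)])
        out[b] = resampled.mean(axis=(0, 1))
    return out

y = rng.normal(size=(3, 20, 4))   # invented RSS data: r=3, m=20, d=4
boot_means = rss_row_bootstrap(y)
print(boot_means.std(axis=0))     # bootstrap SE of each mean component
```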
3. Simulation
In this section, we conducted a simulation study to assess estimation of the multivariate outcome mean and the performance of hypothesis testing under the RSS scheme. We also studied the performance of testing the hypothesis of equality of multivariate outcome means for two groups. For estimation of $\alpha$ when testing $H_0: \mu = \mu_0$ vs. $H_a: \mu \neq \mu_0$, we considered four multivariate outcomes $Y_i$ ($i = 1, 2, 3, 4$) with $\mu = [0.3, 0.3, 0.3, 0.2]$, variances $\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = \sigma_4^2 = 4$, and covariances $\sigma_{12} = 2.39$, $\sigma_{13} = 1.59$, $\sigma_{14} = 2.83$, $\sigma_{23} = 3.19$, $\sigma_{24} = 1.18$, and $\sigma_{34} = 2.24$. The auxiliary covariate ($X$) was simulated with mean 0 and variance $\sigma_x^2 = 1$. For this simulation study, we considered an unstructured covariance among the multivariate outcomes $Y_i$, as shown below. Moreover, we used an autoregressive covariance structure between the auxiliary variable $X$ and $Y_i$ with correlation parameter $\rho$.
$$\mathrm{Cov}(X, Y_i) = \begin{bmatrix} 1 & 2\rho & 2\rho^2 & 2\rho^3 & 2\rho^4 \\ 2\rho & 4 & 2.39 & 1.59 & 2.83 \\ 2\rho^2 & 2.39 & 4 & 3.19 & 1.18 \\ 2\rho^3 & 1.59 & 3.19 & 4 & 2.24 \\ 2\rho^4 & 2.83 & 1.18 & 2.24 & 4 \end{bmatrix}.$$
The RSS samples of $X$ and $Y_i$ were simulated from a multivariate normal with mean $\mu$ and the above variance-covariance matrix, following the steps described in Section 2.1. To compare the estimates of $\alpha$ for SRS and RSS, different sample sizes ($n = rm$) were evaluated by varying $\rho$, the set size, and the cycle size. The entire process was repeated 2,000 times. For details of the parameter values, refer to Table 2. The results in Table 2 demonstrate that we can achieve the nominal value for $\alpha$ by using RSS with moderate to large samples; for smaller samples, bootstrap RSS sampling can achieve the nominal value for $\alpha$.
For estimation of the power of testing $H_0: \mu = 0$ vs. $H_a: \mu \neq 0$, similar simulation settings were considered as described above, except with $\mu = [0.6, 0.6, 0.6, 0.4]$. In addition, bootstrap power was calculated by taking 1,000 bootstrap samples for each simulated RSS. Furthermore, the MSE of SRS, the MSE of RSS, and the efficiency of the multivariate naive estimator were calculated. Table 3 reports the simulation results for the estimated power of the hypothesis test under various simulation settings. The power of the test increases as the set size increases with RSS, and RSS gives more power than SRS for hypothesis testing. As expected, Table 3 also shows that RSS provides more efficient estimates of the multivariate naive estimator in terms of smaller MSEs.
Furthermore, to evaluate the performance of testing the hypothesis of equality of multivariate outcome means for two groups, we simulated two groups with multivariate outcomes $Y_i$ ($i = 1, 2, 3, 4$), with mean $\mu_1 = [0.3, 0.3, 0.3, 0.2]$ for the first group and mean $\mu_2 = [0.6, 0.6, 0.6, 0.4]$ for the second group, using the covariance matrix of $Y$ described above ($\mathrm{Cov}(X, Y_i)$). Table 4 presents the estimated power for testing $H_0: \mu_1 = \mu_2$ vs. $H_a: \mu_1 \neq \mu_2$ with various values of $\rho$, set size, and cycle size. Overall, from Table 4, we can conclude that RSS is more powerful than SRS for testing the equality of multivariate outcome means for two groups.
We also conducted a simulation study to show that the multivariate regression estimator for RSS is more efficient than for SRS. We considered multivariate outcomes $Y$ with mean $\mu = (0.3, 0.3, 0.3, 0.2)$ and the variance-covariance matrix ($\mathrm{Cov}(X, Y_i)$) described above in this section. We also simulated a correlated auxiliary covariate ($X$) with mean 0 and variance 1. Table 5 shows that, for various parameter settings, RSS is more efficient than SRS in estimating the multivariate regression estimator.
4. Application to China Health and Nutrition Survey data
In this section, we illustrate the efficient ranked set sampling method, ranking on a baseline covariate, to estimate the multivariate outcome mean, investigate the performance of hypothesis testing for two groups, and estimate the multivariate regression estimator, using the China Health and Nutrition Survey (CHNS) for the year 2009. The CHNS is the only large-scale household-based survey in China (Yan et al., 2012). As part of the survey, anthropometric data were collected on 10,242 children and adults aged ≥ 7 in 2009, along with other demographic information. Only 9,986 individuals agreed to provide fasting blood samples, which were evaluated for many biomarkers of diabetes and cardio-metabolic risk factors. For illustration purposes, we focused on the age of the individuals as our ranking auxiliary variable and on cardio-metabolic biomarkers, for example, Apolipoprotein A, total cholesterol, and hemoglobin A1c. We treated the survey data as a population and selected the range of RSS sizes (N = set × cycle) shown in Table 6 by ranking on the baseline covariate age. An SRS of the same size N was also selected from the CHNS data to evaluate the performance of hypothesis testing and the efficiency of the sampling procedure relative to RSS in estimating the multivariate outcome mean. The correlations (ρ) between age and the biomarkers Apolipoprotein A, total cholesterol, and hemoglobin A1c are 0.12, 0.32, and 0.22, respectively. The means for Apolipoprotein A, total cholesterol, and hemoglobin A1c are 1.14 g/L, 4.78 mmol/L, and 5.67 mmol/L, respectively; for comparison purposes, these can be treated as the true parameters. Table 7 compares the power of RSS with SRS for the multivariate means of males and females, and shows that we can achieve more power with RSS than with SRS at similar sample sizes. Table 8 shows the results for multivariate regression estimation for the biomarker data. We also took 1,000 samples of SRS and RSS of sample size 80 (set = 4 and cycle = 20) and plotted the confidence regions shown in Figure 1, where the confidence region for SRS (blue nets) lies completely outside the confidence region for RSS (solid red).
5. Conclusion
In statistics, it is important to have a sampling method that is cost effective. RSS is one such method, which can be used to obtain a more efficient multivariate mean estimator than the most commonly used method, SRS. Samples taken by the RSS method are more representative due to the inherent structure imposed by ranking based on easily available covariates. In this paper, we demonstrated that RSS is more efficient in estimating the multivariate mean as well as in hypothesis testing for one and two independent samples. Simulation studies of the performance of hypothesis testing showed that RSS is more powerful than SRS. In general, in estimating the population mean, RSS improves the precision relative to SRS with the same sample size, n. This is true even if the correlation between the auxiliary variable X and the multivariate outcome Y is only moderate to high (±0.4 to ±0.8). However, when the correlation between X and Y is very low (such as ±0.001), RSS is equivalent to SRS and the ranking is no better than random. In practice, the key issue is whether the increase in precision is sufficient to justify the increased costs associated with the ranking process. In contrast, when the correlation between X and Y is very high (±0.9 or higher), the precision in estimating the population mean will be very high, as this improves the ranking of X on Y (Ridout, 2003).
Missing data is a very common problem in almost every research study and can have a significant impact on the inferences drawn from the collected data, such as biased estimation of population parameters and loss of statistical power (Little and Rubin, 2014). A valid statistical analysis, under appropriate assumptions on the missing data mechanism (missing completely at random, missing at random, or missing not at random), should be performed for both SRS and RSS. There is an extensive literature on how to deal with missing data for RSS in the auxiliary variable X and a univariate response Y (Bouza-Herrera, 2013). However, handling missing data in a multivariate Y with a monotone or arbitrary missing pattern is still an active area of research.
Figures
Fig. 1. Confidence regions for SRS (blue nets) and RSS (solid red) for China Health and Nutrition Survey data.
Tables
### Table 1
Structure of ranked set sampling
| | Rank 1 | Rank 2 | ⋯ | Rank r |
|---|---|---|---|---|
| Cycle 1 | (X(1)1, Y(1)1) | (X(2)1, Y(2)1) | ⋯ | (X(r)1, Y(r)1) |
| Cycle 2 | (X(1)2, Y(1)2) | (X(2)2, Y(2)2) | ⋯ | (X(r)2, Y(r)2) |
| ⋮ | ⋮ | ⋮ | ⋱ | ⋮ |
| Cycle m | (X(1)m, Y(1)m) | (X(2)m, Y(2)m) | ⋯ | (X(r)m, Y(r)m) |
### Table 2
Estimation of the α of testing Ho : μ = 0 vs. Ha : μ ≠ 0
| ρ | Cycle | Set 3 SRS | Set 3 RSS | Set 3 BSa | Set 4 SRS | Set 4 RSS | Set 4 BSa | Set 5 SRS | Set 5 RSS | Set 5 BSa |
|---|---|---|---|---|---|---|---|---|---|---|
| −0.8 | 5 | 0.0460 | 0.1375 | 0.0215 | 0.0375 | 0.1065 | 0.0475 | 0.0455 | 0.0795 | 0.0605 |
| −0.8 | 10 | 0.0455 | 0.0755 | 0.0450 | 0.0460 | 0.0530 | 0.0495 | 0.0410 | 0.0535 | 0.0620 |
| −0.8 | 20 | 0.0520 | 0.0545 | 0.0535 | 0.0485 | 0.0445 | 0.0545 | 0.0560 | 0.0410 | 0.0605 |
| −0.8 | 30 | 0.0555 | 0.0455 | 0.0560 | 0.0495 | 0.0430 | 0.0600 | 0.0480 | 0.0360 | 0.0530 |
| −0.6 | 5 | 0.0475 | 0.1580 | 0.0210 | 0.0480 | 0.1070 | 0.0470 | 0.0550 | 0.0710 | 0.0600 |
| −0.6 | 10 | 0.0460 | 0.0720 | 0.0455 | 0.0465 | 0.0590 | 0.0555 | 0.0490 | 0.0405 | 0.0500 |
| −0.6 | 20 | 0.0520 | 0.0475 | 0.0460 | 0.0450 | 0.0460 | 0.0520 | 0.0605 | 0.0395 | 0.0550 |
| −0.6 | 30 | 0.0450 | 0.0415 | 0.0500 | 0.0470 | 0.0385 | 0.0555 | 0.0505 | 0.0415 | 0.0595 |
| −0.4 | 5 | 0.0460 | 0.1400 | 0.0225 | 0.0540 | 0.1055 | 0.0440 | 0.0515 | 0.0840 | 0.0585 |
| −0.4 | 10 | 0.0580 | 0.0690 | 0.0395 | 0.0515 | 0.0555 | 0.0515 | 0.0450 | 0.0530 | 0.0655 |
| −0.4 | 20 | 0.0495 | 0.0500 | 0.0525 | 0.0495 | 0.0415 | 0.0510 | 0.0520 | 0.0360 | 0.0560 |
| −0.4 | 30 | 0.0595 | 0.0460 | 0.0530 | 0.0520 | 0.0360 | 0.0530 | 0.0560 | 0.0290 | 0.0515 |
| 0.4 | 5 | 0.0515 | 0.1530 | 0.0290 | 0.0615 | 0.0975 | 0.0425 | 0.0570 | 0.0800 | 0.0640 |
| 0.4 | 10 | 0.0445 | 0.0730 | 0.0405 | 0.0560 | 0.0495 | 0.0485 | 0.0495 | 0.0445 | 0.0540 |
| 0.4 | 20 | 0.0520 | 0.0420 | 0.0430 | 0.0505 | 0.0430 | 0.0575 | 0.0495 | 0.0310 | 0.0505 |
| 0.4 | 30 | 0.0525 | 0.0495 | 0.0570 | 0.0405 | 0.0385 | 0.0580 | 0.0495 | 0.0310 | 0.0505 |
| 0.6 | 5 | 0.0520 | 0.1545 | 0.0255 | 0.0545 | 0.0900 | 0.0380 | 0.0465 | 0.0785 | 0.0610 |
| 0.6 | 10 | 0.0540 | 0.0840 | 0.0525 | 0.0595 | 0.0660 | 0.0620 | 0.0475 | 0.0535 | 0.0595 |
| 0.6 | 20 | 0.0440 | 0.0495 | 0.0490 | 0.0470 | 0.0430 | 0.0555 | 0.0525 | 0.0400 | 0.0555 |
| 0.6 | 30 | 0.0555 | 0.0455 | 0.0535 | 0.0475 | 0.0340 | 0.0510 | 0.0555 | 0.0295 | 0.0465 |
| 0.8 | 5 | 0.0575 | 0.1365 | 0.0195 | 0.0465 | 0.0970 | 0.0450 | 0.0470 | 0.0840 | 0.0580 |
| 0.8 | 10 | 0.0520 | 0.0750 | 0.0455 | 0.0520 | 0.0730 | 0.0680 | 0.0495 | 0.0470 | 0.0580 |
| 0.8 | 20 | 0.0495 | 0.0590 | 0.0620 | 0.0535 | 0.0360 | 0.0470 | 0.0580 | 0.0350 | 0.0505 |
| 0.8 | 30 | 0.0560 | 0.0450 | 0.0520 | 0.0495 | 0.0405 | 0.0580 | 0.0500 | 0.0400 | 0.0570 |
SRS = simple random sample; RSS = ranked set sampling; BSa = Bootstrap α.
### Table 3
Estimation of power of testing Ho : μ = 0 vs. Ha : μ ≠ 0
| Set | ρ | Cycle | Power (SRS) | Power (RSS) | Power (BS) | MSE (SRS) | MSE (RSS) |
|---|---|---|---|---|---|---|---|
| 3 | 0.4 | 5 | 0.0900 | 0.2255 | 0.0445 | 3.88E–05 | 2.07E–05 |
| 3 | 0.4 | 10 | 0.1645 | 0.2120 | 0.1425 | 2.53E–06 | 1.37E–06 |
| 3 | 0.4 | 20 | 0.3445 | 0.3490 | 0.3510 | 1.79E–07 | 8.11E–08 |
| 3 | 0.4 | 30 | 0.4880 | 0.5280 | 0.5630 | 3.07E–08 | 1.69E–08 |
| 3 | 0.6 | 5 | 0.0850 | 0.2260 | 0.0440 | 4.93E–05 | 2.58E–05 |
| 3 | 0.6 | 10 | 0.1400 | 0.1990 | 0.1345 | 2.97E–06 | 1.61E–06 |
| 3 | 0.6 | 20 | 0.2945 | 0.2980 | 0.3030 | 1.94E–07 | 9.52E–08 |
| 3 | 0.6 | 30 | 0.4300 | 0.4540 | 0.4860 | 4.31E–08 | 2.30E–08 |
| 3 | 0.8 | 5 | 0.0995 | 0.2560 | 0.0505 | 2.28E–05 | 1.18E–05 |
| 3 | 0.8 | 10 | 0.2055 | 0.2640 | 0.1735 | 1.48E–06 | 8.17E–07 |
| 3 | 0.8 | 20 | 0.4140 | 0.4415 | 0.4465 | 1.17E–07 | 4.64E–08 |
| 3 | 0.8 | 30 | 0.5950 | 0.6805 | 0.7080 | 1.96E–08 | 9.07E–09 |
| 4 | 0.4 | 5 | 0.1100 | 0.1895 | 0.0950 | 1.16E–05 | 5.45E–06 |
| 4 | 0.4 | 10 | 0.2000 | 0.2365 | 0.2255 | 7.51E–07 | 3.29E–07 |
| 4 | 0.4 | 20 | 0.4490 | 0.4580 | 0.5125 | 5.32E–08 | 2.22E–08 |
| 4 | 0.4 | 30 | 0.6165 | 0.6600 | 0.7265 | 1.00E–08 | 4.16E–09 |
| 4 | 0.6 | 5 | 0.1015 | 0.1770 | 0.0865 | 1.53E–05 | 6.32E–06 |
| 4 | 0.6 | 10 | 0.1875 | 0.2020 | 0.1850 | 1.08E–06 | 4.01E–07 |
| 4 | 0.6 | 20 | 0.3565 | 0.3560 | 0.4000 | 6.00E–08 | 2.54E–08 |
| 4 | 0.6 | 30 | 0.5515 | 0.6285 | 0.6960 | 1.10E–08 | 4.65E–09 |
| 4 | 0.8 | 5 | 0.1125 | 0.2255 | 0.1060 | 8.08E–06 | 3.59E–06 |
| 4 | 0.8 | 10 | 0.2500 | 0.2940 | 0.2820 | 5.32E–07 | 2.15E–07 |
| 4 | 0.8 | 20 | 0.4950 | 0.5620 | 0.6150 | 2.61E–08 | 1.27E–08 |
| 4 | 0.8 | 30 | 0.7435 | 0.8290 | 0.8630 | 6.97E–09 | 2.63E–09 |
| 5 | 0.4 | 5 | 0.1440 | 0.1920 | 0.1475 | 5.22E–06 | 1.86E–06 |
| 5 | 0.4 | 10 | 0.2575 | 0.2830 | 0.3080 | 3.01E–07 | 1.13E–07 |
| 5 | 0.4 | 20 | 0.5325 | 0.5810 | 0.6565 | 2.10E–08 | 7.11E–09 |
| 5 | 0.4 | 30 | 0.7355 | 0.7910 | 0.8455 | 4.41E–09 | 1.54E–09 |
| 5 | 0.6 | 5 | 0.1200 | 0.1690 | 0.1285 | 6.41E–06 | 2.36E–06 |
| 5 | 0.6 | 10 | 0.2110 | 0.2140 | 0.2430 | 3.87E–07 | 1.45E–07 |
| 5 | 0.6 | 20 | 0.4800 | 0.4930 | 0.5820 | 2.70E–08 | 9.50E–09 |
| 5 | 0.6 | 30 | 0.6445 | 0.6960 | 0.7795 | 5.27E–09 | 1.63E–09 |
| 5 | 0.8 | 5 | 0.1505 | 0.2160 | 0.1625 | 2.96E–06 | 1.20E–06 |
| 5 | 0.8 | 10 | 0.3080 | 0.3310 | 0.3625 | 2.05E–07 | 6.88E–08 |
| 5 | 0.8 | 20 | 0.6425 | 0.7230 | 0.7935 | 1.28E–08 | 4.78E–09 |
| 5 | 0.8 | 30 | 0.8360 | 0.9060 | 0.9490 | 2.44E–09 | 8.07E–10 |
| 3 | −0.4 | 5 | 0.1360 | 0.3185 | 0.0790 | 2.34E–04 | 1.35E–04 |
| 3 | −0.4 | 10 | 0.2635 | 0.3865 | 0.2970 | 1.35E–05 | 8.05E–06 |
| 3 | −0.4 | 20 | 0.6155 | 0.6740 | 0.6760 | 9.20E–07 | 5.14E–07 |
| 3 | −0.4 | 30 | 0.8125 | 0.8450 | 0.8625 | 1.94E–07 | 1.01E–07 |
| 3 | −0.6 | 5 | 0.1210 | 0.3260 | 0.0815 | 6.66E–04 | 3.72E–04 |
| 3 | −0.6 | 10 | 0.2670 | 0.3685 | 0.2845 | 4.59E–05 | 2.30E–05 |
| 3 | −0.6 | 20 | 0.5460 | 0.5945 | 0.5950 | 2.55E–06 | 1.39E–06 |
| 3 | −0.6 | 30 | 0.7690 | 0.7840 | 0.8100 | 5.50E–07 | 2.61E–07 |
| 3 | −0.8 | 5 | 0.1235 | 0.3075 | 0.0815 | 1.20E–03 | 6.60E–04 |
| 3 | −0.8 | 10 | 0.2550 | 0.3735 | 0.2915 | 7.74E–05 | 4.49E–05 |
| 3 | −0.8 | 20 | 0.5505 | 0.5840 | 0.5835 | 4.62E–06 | 2.18E–06 |
| 3 | −0.8 | 30 | 0.7840 | 0.8160 | 0.8375 | 9.25E–07 | 4.42E–07 |
| 4 | −0.4 | 5 | 0.1830 | 0.3020 | 0.1775 | 8.33E–05 | 3.32E–05 |
| 4 | −0.4 | 10 | 0.4050 | 0.4740 | 0.4600 | 5.39E–06 | 2.19E–06 |
| 4 | −0.4 | 20 | 0.7870 | 0.8130 | 0.8365 | 3.10E–07 | 1.33E–07 |
| 4 | −0.4 | 30 | 0.9150 | 0.9505 | 0.9610 | 5.80E–08 | 2.81E–08 |
| 4 | −0.6 | 5 | 0.1675 | 0.3075 | 0.1735 | 2.34E–04 | 8.93E–05 |
| 4 | −0.6 | 10 | 0.3410 | 0.4435 | 0.4255 | 1.41E–05 | 6.17E–06 |
| 4 | −0.6 | 20 | 0.7095 | 0.7420 | 0.7725 | 8.19E–07 | 3.46E–07 |
| 4 | −0.6 | 30 | 0.9015 | 0.9105 | 0.9335 | 1.66E–07 | 7.02E–05 |
| 4 | −0.8 | 5 | 0.1565 | 0.3065 | 0.1860 | 3.50E–04 | 1.65E–04 |
| 4 | −0.8 | 10 | 0.3635 | 0.4240 | 0.4095 | 1.95E–05 | 1.05E–05 |
| 4 | −0.8 | 20 | 0.7605 | 0.7605 | 0.7935 | 1.41E–06 | 6.53E–07 |
| 4 | −0.8 | 30 | 0.8985 | 0.8970 | 0.9210 | 2.95E–07 | 1.25E–07 |
| 5 | −0.4 | 5 | 0.2555 | 0.3640 | 0.3100 | 3.33E–05 | 1.20E–05 |
| 5 | −0.4 | 10 | 0.5135 | 0.5645 | 0.5920 | 2.18E–06 | 8.53E–07 |
| 5 | −0.4 | 20 | 0.8590 | 0.8865 | 0.9185 | 1.08E–07 | 4.18E–08 |
| 5 | −0.4 | 30 | 0.9775 | 0.9785 | 0.9865 | 2.34E–08 | 8.02E–09 |
| 5 | −0.6 | 5 | 0.2120 | 0.3210 | 0.2605 | 9.69E–05 | 3.36E–05 |
| 5 | −0.6 | 10 | 0.4630 | 0.5105 | 0.5400 | 5.91E–06 | 2.21E–06 |
| 5 | −0.6 | 20 | 0.8155 | 0.8235 | 0.8565 | 3.88E–07 | 1.31E–07 |
| 5 | −0.6 | 30 | 0.9530 | 0.9500 | 0.9650 | 7.24E–08 | 2.69E–08 |
| 5 | −0.8 | 5 | 0.2115 | 0.3435 | 0.2920 | 1.49E–04 | 5.84E–05 |
| 5 | −0.8 | 10 | 0.4595 | 0.5270 | 0.5595 | 9.85E–06 | 3.53E–06 |
| 5 | −0.8 | 20 | 0.8080 | 0.8135 | 0.8575 | 6.00E–07 | 2.14E–07 |
| 5 | −0.8 | 30 | 0.9485 | 0.9525 | 0.9680 | 1.23E–07 | 4.38E–08 |
SRS = simple random sample; RSS = ranked set sampling; MSE = mean square error.
### Table 4
Estimation of power of testing Ho : μ1 = μ2 vs. Ha : μ1 ≠ μ2
| ρ | Cycle | Set 3 SRS | Set 3 RSS | Set 3 BSa | Set 4 SRS | Set 4 RSS | Set 4 BSa | Set 5 SRS | Set 5 RSS | Set 5 BSa |
|---|---|---|---|---|---|---|---|---|---|---|
| −0.4 | 10 | 0.3055 | 0.3740 | 0.4058 | 0.3190 | 0.3545 | 0.4227 | 0.3905 | 0.4015 | 0.4354 |
| −0.4 | 20 | 0.3975 | 0.3990 | 0.4021 | 0.4665 | 0.4852 | 0.4973 | 0.4675 | 0.4840 | 0.5175 |
| −0.4 | 30 | 0.4785 | 0.4885 | 0.4800 | 0.5000 | 0.5245 | 0.5127 | 0.5865 | 0.6035 | 0.6131 |
| −0.4 | 40 | 0.5710 | 0.5605 | 0.5824 | 0.6150 | 0.6505 | 0.6421 | 0.6480 | 0.6570 | 0.6491 |
| −0.6 | 10 | 0.2710 | 0.3610 | 0.3812 | 0.3470 | 0.3630 | 0.4080 | 0.3639 | 0.3900 | 0.4128 |
| −0.6 | 20 | 0.3950 | 0.4160 | 0.4210 | 0.4240 | 0.4125 | 0.4357 | 0.4515 | 0.4970 | 0.5087 |
| −0.6 | 30 | 0.4570 | 0.4470 | 0.4424 | 0.4825 | 0.4915 | 0.4879 | 0.5285 | 0.5675 | 0.5564 |
| −0.6 | 40 | 0.5555 | 0.5535 | 0.5542 | 0.5745 | 0.6210 | 0.6321 | 0.6180 | 0.6475 | 0.6427 |
| −0.8 | 10 | 0.2820 | 0.3550 | 0.3829 | 0.3160 | 0.3565 | 0.4186 | 0.3670 | 0.3855 | 0.4210 |
| −0.8 | 20 | 0.3785 | 0.3990 | 0.4021 | 0.4135 | 0.4500 | 0.4610 | 0.4475 | 0.5035 | 0.5142 |
| −0.8 | 30 | 0.4675 | 0.4625 | 0.4610 | 0.4810 | 0.5270 | 0.5287 | 0.5165 | 0.5525 | 0.5641 |
| −0.8 | 40 | 0.5275 | 0.5280 | 0.5195 | 0.5635 | 0.6035 | 0.5987 | 0.6040 | 0.6335 | 0.6289 |
| 0.4 | 10 | 0.3485 | 0.4480 | 0.4845 | 0.4025 | 0.4445 | 0.4975 | 0.4715 | 0.4775 | 0.5012 |
| 0.4 | 20 | 0.5075 | 0.5580 | 0.5641 | 0.5700 | 0.6305 | 0.6441 | 0.6475 | 0.6625 | 0.6951 |
| 0.4 | 30 | 0.6520 | 0.6520 | 0.6641 | 0.7325 | 0.7825 | 0.7888 | 0.7430 | 0.8065 | 0.8125 |
| 0.4 | 40 | 0.6895 | 0.7265 | 0.7248 | 0.8105 | 0.8605 | 0.8589 | 0.8600 | 0.9210 | 0.9287 |
| 0.6 | 10 | 0.3305 | 0.4130 | 0.4965 | 0.4055 | 0.4270 | 0.5102 | 0.4380 | 0.4680 | 0.5354 |
| 0.6 | 20 | 0.4745 | 0.5020 | 0.5214 | 0.5360 | 0.6040 | 0.6214 | 0.5985 | 0.6610 | 0.6698 |
| 0.6 | 30 | 0.5970 | 0.6265 | 0.6369 | 0.6620 | 0.7135 | 0.7125 | 0.7420 | 0.8105 | 0.8214 |
| 0.6 | 40 | 0.6585 | 0.7145 | 0.7235 | 0.7495 | 0.8020 | 0.7985 | 0.8300 | 0.8920 | 0.8879 |
| 0.8 | 10 | 0.3700 | 0.4490 | 0.5035 | 0.4665 | 0.5155 | 0.5210 | 0.5140 | 0.5450 | 0.5621 |
| 0.8 | 20 | 0.5495 | 0.5865 | 0.6089 | 0.6490 | 0.7020 | 0.7124 | 0.7275 | 0.7975 | 0.8213 |
| 0.8 | 30 | 0.7040 | 0.7415 | 0.7358 | 0.7745 | 0.8555 | 0.8614 | 0.8395 | 0.9255 | 0.9159 |
| 0.8 | 40 | 0.8055 | 0.8530 | 0.8521 | 0.8755 | 0.9295 | 0.9124 | 0.9320 | 0.9725 | 0.9800 |
SRS = simple random sample; RSS = ranked set sampling; BSa = Bootstrap α.
### Table 5
Estimation of multivariate regression estimator
| ρ | Cycle | Set 3 MSE (SRS) | Set 3 MSE (RSS) | Set 4 MSE (SRS) | Set 4 MSE (RSS) | Set 5 MSE (SRS) | Set 5 MSE (RSS) |
|---|---|---|---|---|---|---|---|
| 0.4 | 5 | 0.0078 | 0.0010 | 0.0024 | 0.0002 | 0.0010 | 6.49E–05 |
| 0.4 | 10 | 0.0004 | 4.40E–05 | 0.0001 | 1.22E–05 | 4.51E–05 | 3.29E–06 |
| 0.4 | 20 | 2.35E–05 | 2.73E–06 | 5.23E–06 | 6.57E–07 | 2.46E–06 | 2.30E–07 |
| 0.4 | 30 | 4.02E–06 | 5.61E–07 | 1.23E–06 | 1.26E–07 | 5.43E–07 | 4.17E–08 |
| 0.6 | 5 | 0.0098 | 0.0011 | 0.0032 | 0.0003 | 0.0011 | 7.69E–05 |
| 0.6 | 10 | 0.0004 | 5.77E–05 | 0.0001 | 1.16E–05 | 6.31E–05 | 4.72E–06 |
| 0.6 | 20 | 2.61E–05 | 3.60E–06 | 7.21E–06 | 7.73E–07 | 3.39E–06 | 2.90E–07 |
| 0.6 | 30 | 4.92E–06 | 6.95E–07 | 1.52E–06 | 1.72E–07 | 5.99E–07 | 7.68E–08 |
| 0.8 | 5 | 0.0109 | 0.0006 | 0.0036 | 0.0001 | 0.0012 | 5.70E–05 |
| 0.8 | 10 | 0.0005 | 2.96E–05 | 0.0002 | 7.87E–06 | 7.10E–05 | 2.62E–06 |
| 0.8 | 20 | 2.98E–05 | 1.74E–06 | 1.08E–05 | 4.59E–07 | 3.68E–06 | 1.57E–07 |
| 0.8 | 30 | 5.27E–06 | 3.37E–07 | 1.64E–06 | 8.36E–08 | 6.44E–07 | 2.75E–08 |
| −0.4 | 5 | 0.0133 | 0.0057 | 0.0039 | 0.0013 | 0.0014 | 0.0005 |
| −0.4 | 10 | 0.0006 | 0.0003 | 0.0002 | 5.74E–05 | 8.44E–05 | 2.53E–05 |
| −0.4 | 20 | 3.64E–05 | 1.52E–05 | 1.25E–05 | 4.58E–06 | 4.25E–06 | 1.58E–06 |
| −0.4 | 30 | 7.16E–06 | 3.44E–06 | 2.69E–06 | 9.54E–07 | 7.72E–07 | 3.07E–07 |
| −0.6 | 5 | 0.0270 | 0.0145 | 0.0070 | 0.0035 | 0.0028 | 0.001197 |
| −0.6 | 10 | 0.0013 | 0.0007 | 0.0003 | 0.0002 | 0.0001 | 7.06E–05 |
| −0.6 | 20 | 6.85E–05 | 4.62E–05 | 1.82E–05 | 1.51E–05 | 8.30E–06 | 4.00E–06 |
| −0.6 | 30 | 1.15E–05 | 9.00E–06 | 4.13E–06 | 2.45E–06 | 1.63E–06 | 8.03E–07 |
| −0.8 | 5 | 0.0485 | 0.0273 | 0.0114 | 0.0074 | 0.0038 | 0.0024 |
| −0.8 | 10 | 0.0017 | 0.0015 | 0.0005 | 0.0004 | 0.0002 | 0.0001 |
| −0.8 | 20 | 7.85E–05 | 7.46E–05 | 2.69E–05 | 2.11E–05 | 1.17E–05 | 8.28E–06 |
| −0.8 | 30 | 1.85E–05 | 1.73E–05 | 5.90E–06 | 4.77E–06 | 2.42E–06 | 1.32E–06 |
MSE = mean square error; SRS = simple random sample; RSS = ranked set sampling.
### Table 6
Multivariate mean estimation and MSEs for China Health and Nutrition Survey data
| Set | Cycle | MSE (SRS) | MSE (RSS) | Efficiency |
|---|---|---|---|---|
| 3 | 5 | 3.07E–05 | 3.04E–05 | 1.01 |
| 3 | 10 | 3.88E–06 | 3.41E–06 | 1.14 |
| 3 | 20 | 4.66E–07 | 4.43E–07 | 1.05 |
| 3 | 30 | 1.38E–07 | 1.26E–07 | 1.09 |
| 4 | 5 | 1.31E–05 | 1.17E–05 | 1.12 |
| 4 | 10 | 1.65E–06 | 1.47E–06 | 1.12 |
| 4 | 20 | 2.01E–07 | 1.81E–07 | 1.11 |
| 4 | 30 | 5.80E–08 | 5.43E–08 | 1.07 |
| 5 | 5 | 6.67E–06 | 5.85E–06 | 1.14 |
| 5 | 10 | 8.00E–07 | 7.08E–07 | 1.13 |
| 5 | 20 | 1.01E–07 | 9.06E–08 | 1.11 |
| 5 | 30 | 2.94E–08 | 2.57E–08 | 1.14 |
SRS = simple random sample; RSS = ranked set sampling; MSE = mean square error.
### Table 7
Estimation of power of testing for Biomarker data for gender
| Cycle | Set 3 SRS | Set 3 RSS | Set 4 SRS | Set 4 RSS | Set 5 SRS | Set 5 RSS |
|---|---|---|---|---|---|---|
| 10 | 0.2531 | 0.3414 | 0.2875 | 0.3397 | 0.3155 | 0.3625 |
| 20 | 0.3353 | 0.3663 | 0.3722 | 0.3968 | 0.4017 | 0.4265 |
| 30 | 0.3893 | 0.4066 | 0.4243 | 0.4526 | 0.4691 | 0.4890 |
| 40 | 0.4324 | 0.4425 | 0.4823 | 0.5017 | 0.5374 | 0.5415 |
SRS = simple random sample; RSS = ranked set sampling.
### Table 8
Multivariate regression estimation for China Health and Nutrition Survey data
| Cycle | Set 3 MSE (SRS) | Set 3 MSE (RSS) | Set 4 MSE (SRS) | Set 4 MSE (RSS) | Set 5 MSE (SRS) | Set 5 MSE (RSS) |
|---|---|---|---|---|---|---|
| 5 | 4.87E–05 | 3.03E–05 | 2.08E–05 | 1.89E–05 | 1.00E–05 | 5.82E–06 |
| 10 | 5.40E–06 | 4.84E–06 | 2.26E–06 | 1.49E–06 | 1.12E–06 | 9.91E–07 |
| 20 | 7.11E–07 | 5.13E–07 | 2.56E–07 | 2.25E–07 | 1.64E–07 | 1.09E–07 |
| 30 | 2.07E–07 | 1.59E–07 | 7.41E–08 | 5.58E–08 | 3.92E–08 | 3.31E–08 |
MSE = mean square error; SRS = simple random sample; RSS = ranked set sampling.
References
1. Bouza-Herrera, CN (2013). Handling Missing Data in Ranked Set Sampling. Heidelberg: Springer
2. Chen, Z (1999). Density estimation using ranked-set sampling data. Environmental and Ecological Statistics. 6, 135-146.
3. Chen, Z, Bai, Z, and Sinha, B (2004). Ranked Set Sampling: Theory and Applications. New York: Springer Science & Business Media
4. Dell, TR, and Clutter, JL (1972). Ranked set sampling theory with order statistics background. Biometrics. 28, 545-555.
5. Demir, S, and Çıngı, H (2000). An application of the regression estimator in ranked set sampling. Hacettepe Bulletin of Natural Sciences and Engineering, Series B. 29, 93-101.
6. Huang, Y, Samawi, HM, Vogel, R, Yin, J, Gato, WE, and Linder, DF (2016). Evaluating the efficiency of treatment comparison in crossover design by allocating subjects based on ranked auxiliary variable. Communications for Statistical Applications and Methods. 23, 543-553.
7. Jabrah, R, Samawi, HM, Vogel, R, Rochani, HD, Linder, DF, and Klibert, J (2017). Using ranked auxiliary covariate as a more efficient sampling design for ANCOVA model: analysis of a psychological intervention to buttress resilience. Communications for Statistical Applications and Methods. 24, 241-254.
8. Jozani, MJ, and Johnson, BC (2011). Design based estimation for ranked set sampling in finite populations. Environmental and Ecological Statistics. 18, 663-685.
9. Kaur, A, Patil, GP, Shirk, SJ, and Taillie, C (1996). Environmental sampling with a concomitant variable: a comparison between ranked set sampling and stratified simple random sampling. Journal of Applied Statistics. 23, 231-256.
10. Kowalczyk, B (2004). Ranked set sampling and its applications in finite population studies. Statistics in Transition. 6, 1031-1046.
11. Little, RJA, and Rubin, DB (2014). Statistical Analysis with Missing Data. New York: John Wiley & Sons
12. McIntyre, GA (1952). A method for unbiased selective sampling, using ranked sets. Australian Journal of Agricultural Research. 3, 385-390.
13. Modarres, R, Hui, TP, and Zheng, G (2006). Resampling methods for ranked set samples. Computational Statistics & Data Analysis. 51, 1039-1050.
14. Patil, GP, Sinha, AK, and Taillie, C (1995). Finite population corrections for ranked set sampling. Annals of the Institute of Statistical Mathematics. 47, 621-636.
15. Ridout, MS (2003). On ranked set sampling for multiple characteristics. Environmental and Ecological Statistics. 10, 255-262.
16. Samawi, HM, and Al-Sagheer, OAM (2001). On the estimation of the distribution function using extreme and median ranked set sampling. Biometrical Journal. 43, 357-373.
17. Seber, GAF (2009). Multivariate Observations. New York: John Wiley & Sons
18. Takahasi, K, and Futatsuya, M (1998). Dependence between order statistics in samples from finite population and its application to ranked set sampling. Annals of the Institute of Statistical Mathematics. 50, 49-70.
19. Takahasi, K, and Wakimoto, K (1968). On unbiased estimates of the population mean based on the sample stratified by means of ordering. Annals of the Institute of Statistical Mathematics. 20, 1-31.
20. Yan, S, Li, J, Li, S, Zhang, B, Du, S, Gordon-Larsen, P, Adair, L, and Popkin, B (2012). The expanding burden of cardiometabolic risk in China: the China Health and Nutrition Survey. Obesity Reviews: An Official Journal Of The International Association For The Study Of Obesity. 13, 810-821.
http://www.hindawi.com/isrn/mathematical.analysis/2013/549876/
ISRN Mathematical Analysis, Volume 2013 (2013), Article ID 549876, 5 pages. http://dx.doi.org/10.1155/2013/549876
Research Article
## On Singular Dissipative Fourth-Order Differential Operator in Lim-4 Case
Department of Mathematics, Ankara University, Tandoğan, 06100 Ankara, Turkey
Received 26 June 2013; Accepted 25 July 2013
Academic Editors: D. D. Hai and W. Shen
Copyright © 2013 Ekin Uğurlu and Elgiz Bairamov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
#### Abstract
A singular dissipative fourth-order differential operator in the lim-4 case is considered. To investigate the spectral analysis of this operator, we pass to its inverse with the help of Everitt's method. Finally, using Lidskiĭ's theorem, it is proved that the system of all eigen- and associated functions of this operator (and of the boundary value problem) is complete.
#### 1. Introduction
In 1910, Weyl showed [1] that singular second-order differential operators fall into two classes: operators in the limit-circle case and operators in the limit-point case. Operators in the limit-circle case have solutions that all lie in the square-integrable space. However, in the limit-point case, only one linearly independent solution can lie in the square-integrable space. The development of this theory is due to Titchmarsh [2]. After these fundamental works, second-order singular differential operators were developed by many authors (e.g., see [3–6]).
Following the methods of Weyl and Titchmarsh, the theory for higher-order equations and Hamiltonian systems was constructed in [7–19].
In [7], a regular self-adjoint fourth-order boundary value problem was investigated; further, Green's function and the resolvent operator were constructed. In [8, 13], such a construction was carried out for the singular self-adjoint fourth-order boundary value problem. In [9, 10], higher-order differential equations whose coefficients are complex-valued were studied. In 1974, Walker [20] showed that an arbitrary-order self-adjoint eigenvalue problem can be represented as an equivalent self-adjoint Hamiltonian system. Further developments in the theory of singular self-adjoint Hamiltonian systems were given in [14–19].
On the other hand, an important class of nonself-adjoint operators is the class of dissipative operators [21]. It is known that all eigenvalues of dissipative operators lie in the closed upper half-plane, but this analysis alone is rather weak. There are several methods for completing the analysis of dissipative operators; some of them are Livšic's, Krein's, and Lidskiĭ's theorems and the functional model [21, 22]. These methods were used in the literature for second-order differential operators (see [22–26]). In this paper a singular dissipative fourth-order differential operator in the lim-4 case is investigated. In particular, using Lidskiĭ's theorem, it is shown that the system of all eigen- and associated functions is complete in .
#### 2. Preliminaries
Let denote the linear nonself-adjoint operator in the Hilbert space with the domain . The element , , is called a root function of the operator corresponding to the eigenvalue , if all powers of are defined on this element and for some .
The functions are called the associated functions of the eigenfunction if they belong to and the equalities , , hold.
The completeness of the system of all eigen- and associated functions of is equivalent to the completeness of the system of all root functions of this operator.
If, for the operator with dense domain in , the inequality () holds, then is called dissipative.
Theorem 1 (see [26]). Let be an invertible operator. Then, is dissipative if and only if the inverse operator of is dissipative.
It is known that a kernel of an integral operator defined by where , is called a Hilbert-Schmidt kernel if is integrable on
A kernel satisfying the property is called a Hermitian kernel. Properties of Hermitian kernels and related integral operators can be found in [27] (see further [28]). Now we shall recall some results. Let us consider the equation which is related to (1) as
The function which differs from zero is called a characteristic function of which corresponds to the characteristic value . If a Hermitian Hilbert-Schmidt kernel is not null, then it possesses at least one characteristic value, and this characteristic value (every characteristic value) is real. Further, there is a finite orthonormal base of characteristic functions for each characteristic value of . Using the union of these bases, one obtains an orthonormal system of characteristic functions of the kernel . If are distinct members of such a system belonging respectively to the characteristic values (not necessarily all different), then for , is an orthonormal system of two variables. For such a system the equivalence holds. Moreover if , , then the equivalence holds. Further, the series in (5) are relatively uniformly absolutely convergent. However, equivalence can be replaced by equality. Before showing this, let us consider the equality Following the same idea of [27, page 22], if is an -kernel such that is continuous and is an -function, then is continuous.
Hence if is a continuous Hermitian -kernel (see [27, page 127]) and , , then the equality holds and the series are uniformly absolutely convergent. In this case, taking , the series given in (4) converge to the continuous kernel uniformly in the variable for every . Hence from (4) one can get Integrating both sides from to , we obtain Consequently, if is integrable on , then the series converge. This implies that is of trace class (nuclear). For the definition and properties of trace-class operators see, for example, [21].
The above arguments given in [27] hold for Hermitian kernels. However, from arbitrary -kernels one can pass to Hermitian kernels. For example, and are Hermitian kernels. These Hermitian kernels are continuous and -kernels if is so. Hence the above arguments hold for and if is a continuous -kernel such that is integrable on . Further
Hence if and are trace-class kernels, then so is . However, this result has been given in [29, page 526] as a definition.
Lidskiĭ’s Theorem (see [21, page 231]). If the dissipative operator is the trace class operator, then its system of root functions is complete in the Hilbert space .
#### 3. Statement of the Problem
The fourth-order differential expression is considered as where , is the regular point and is the singular point for , and is a real-valued, Lebesgue measurable, and locally integrable function on .
Let be the Hilbert space consisting of all functions such that with the inner product .
Let where denotes the set of all locally absolutely continuous functions on . For arbitrary , Green’s formula is obtained as where and . Green’s formula implies the fact that if and both satisfy the equation for the same value of , then is independent of and depends only on . Further for arbitrary , at singular point , the limits and exist and are finite. The latter also follows from Green’s formula. In fact, it is sufficient to get the second factors with their complex conjugates on the left-hand side in Green’s formula.
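The displayed formula did not survive extraction; as a sketch, assuming the expression is $l(y)=y^{(4)}+q(x)y$ with real $q$, Green's formula takes the standard fourth-order form

```latex
\int_{a}^{x}\bigl(l(y)\,\overline{z}-y\,\overline{l(z)}\bigr)\,dt
  = [y,z](x)-[y,z](a),
\qquad
[y,z] := y'''\,\overline{z}-y''\,\overline{z}'+y'\,\overline{z}''-y\,\overline{z}''',
```

so the bilinear concomitant $[y,z]$ carries all the boundary information, which is what the limits at the singular point refer to.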
In this paper it is assumed that satisfies the lim-4 case conditions at (see [30] and references therein). Lim-4 case is also known as Weyl’s limit-circle case [8].
Let us consider the solutions , , , and of the equation where is some complex parameter, satisfying the conditions where all , , , and are real numbers such that and . For the existence of these solutions, see [7, 8, 13]. Since the lim-4 case holds for , all solutions and belong to . It is clear that , where is the Kronecker delta, and .
Let us set and , with . Then and become the real solutions of . Further, they belong to . Hence for arbitrary , the values , , , and exist and are finite.
Let be the set of all functions satisfying the boundary conditions where and are real numbers given as previously stated and and are some complex numbers such that and with , .
It should be noted that for any solutions of (14), conditions (16) and (17), respectively, can be written as
The operator is defined on as , , . The main aim of this paper is to investigate the spectral analysis of the operator (the boundary value problem (14)–(19)).
#### 4. Completeness Theorems
Let be any solutions of . The notation denotes the Wronskian of order of this set of functions [7, 8, 31].
It is known that the following equality holds [7, 8, 13]: This equation also shows that of any four solutions of is independent of and depends only on [7, 8].
Now consider the solutions and of the equation , , satisfying the conditions For the existence of these solutions given with the previous conditions, see [13]. Clearly these solutions satisfy conditions (18) and (19), respectively. Now let us set Then becomes an entire function, and the zeros of coincide with the eigenvalues of the operator [13]. This implies that all zeros of (all eigenvalues of ) are discrete and that possible limit points of these zeros (eigenvalues of ) can only occur at infinity.
Using (22), we immediately have Hence, the Plücker identity for the fourth-order case is obtained (see [13, p. 435]): where .
Theorem 2. The operator is dissipative in .
Proof. For , we have Since , a direct calculation shows that Further, using (26) and conditions (18) and (19) one obtains Substituting (28) and (29) into (27) it is obtained that and this completes the proof.
Theorem 2 shows that all eigenvalues of lie in the closed upper half-plane.
Theorem 3. The operator has no real eigenvalue.
Proof. For , a direct calculation shows that Now let be a real eigenvalue of , and let be the corresponding eigenfunction. Then (31) and (30) give . Using these equalities in (18) and (19), one gets that .
Let us consider the solution . Hence, using (26) it is obtained that This contradiction completes the proof.
From Theorem 3 it is obtained that all eigenvalues of lie in the open upper half-plane. In particular zero is not an eigenvalue of . Hence, the operator exists.
Consider the solutions , , and , where and . and satisfy conditions (16) and (17), respectively and and satisfy conditions (18) and (19), respectively.
The equation , , , is equivalent to the nonhomogeneous differential equation subject to the boundary conditions (compare the boundary conditions at with (20) and (21)). Using Everitt’s method [7] (further see [8, 13]) the general solution is obtained as where and
The operator defined by where , is the inverse operator of . Hence the completeness of the system of all eigen- and associated functions of is equivalent to the completeness of those of in .
Since is a continuous Hilbert-Schmidt kernel and is integrable on , the operator is of trace class.
Let us consider the operator . Since is dissipative in is also dissipative in . Thus all conditions are satisfied for Lidskiĭ’s theorem. So we have the following.
Theorem 4. The system of all root functions of (also ) is complete in .
Since the completeness of the system of root functions (eigen- and associated functions) of is equivalent to the completeness of those of , we obtain the following.
Theorem 5. All eigenvalues of the problem (14)–(19) lie in the open upper half-plane and they are purely discrete. The system of all eigen- and associated functions of the problem (14)–(19) is complete in .
#### References
1. H. Weyl, “Über gewöhnliche Differentialgleichungen mit Singularitäten und die zugehörigen Entwicklungen willkürlicher Funktionen,” Mathematische Annalen, vol. 68, no. 2, pp. 220–269, 1910.
2. E. C. Titchmarsh, Eigenfunction Expansions Associated with Second Order Differential Equations, Part 1, Oxford University Press, 2nd edition, 1962.
3. E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, NY, USA, 1955.
4. M. A. Naimark, Linear Differential Operators, 2nd edition, Nauka, Moscow, Russia, 1969; English translation: Ungar, New York, NY, USA, part 1, 1967; part 2, 1968.
5. I. M. Glazman, Direct Methods of Qualitative Spectral Analysis of Singular Differential Operators, Israel Program for Scientific Translations, Jerusalem, Israel, 1965.
6. F. V. Atkinson, Discrete and Continuous Boundary Problems, Academic Press, New York, NY, USA, 1964.
7. W. N. Everitt, “The Sturm-Liouville problem for fourth-order differential equations,” The Quarterly Journal of Mathematics, vol. 8, pp. 146–160, 1957.
8. W. N. Everitt, “Fourth order singular differential equations,” Mathematische Annalen, vol. 149, pp. 320–340, 1963.
9. W. N. Everitt, “Singular differential equations. I. The even order case,” Mathematische Annalen, vol. 156, pp. 9–24, 1964.
10. W. N. Everitt, “Singular differential equations. II. Some self-adjoint even order cases,” The Quarterly Journal of Mathematics, vol. 18, pp. 13–32, 1967.
11. W. N. Everitt, D. B. Hinton, and J. S. W. Wong, “On the strong limit-$n$ classification of linear ordinary differential expressions of order $2n$,” Proceedings of the London Mathematical Society, vol. 29, pp. 351–367, 1974.
12. V. I. Kogan and F. S. Rofe-Beketov, “On square-integrable solutions of symmetric systems of differential equations of arbitrary order,” Proceedings of the Royal Society of Edinburgh A, vol. 74, pp. 5–40, 1974.
13. C. T. Fulton, “The Bessel-squared equation in the lim-2, lim-3 and lim-4 cases,” The Quarterly Journal of Mathematics, vol. 40, no. 160, pp. 423–456, 1989.
14. A. M. Krall, “$M\left(\lambda \right)$ theory for singular Hamiltonian systems with one singular point,” SIAM Journal on Mathematical Analysis, vol. 20, no. 3, pp. 664–700, 1989.
15. A. M. Krall, “$M\left(\lambda \right)$ theory for singular Hamiltonian systems with two singular points,” SIAM Journal on Mathematical Analysis, vol. 20, no. 3, pp. 700–715, 1989.
16. D. B. Hinton and J. K. Shaw, “On Titchmarsh-Weyl $M\left(\lambda \right)$-functions for linear Hamiltonian systems,” Journal of Differential Equations, vol. 40, no. 3, pp. 316–342, 1981.
17. D. B. Hinton and J. K. Shaw, “Hamiltonian systems of limit point or limit circle type with both endpoints singular,” Journal of Differential Equations, vol. 50, no. 3, pp. 444–464, 1983.
18. D. B. Hinton and J. K. Shaw, “Parameterization of the $M\left(\lambda \right)$ function for a Hamiltonian system of limit circle type,” Proceedings of the Royal Society of Edinburgh A, vol. 93, no. 3-4, pp. 349–360, 1983.
19. D. B. Hinton and J. K. Shaw, “On boundary value problems for Hamiltonian systems with two singular points,” SIAM Journal on Mathematical Analysis, vol. 15, no. 2, pp. 272–286, 1984.
20. P. W. Walker, “A vector-matrix formulation for formally symmetric ordinary differential equations with applications to solutions of integrable square,” Journal of the London Mathematical Society, vol. 9, no. 2, pp. 151–159, 1974/75.
21. I. C. Gohberg and M. G. Kreĭn, Introduction to the Theory of Linear Nonselfadjoint Operators, American Mathematical Society, Providence, RI, USA, 1969.
22. B. S. Pavlov, “Spectral analysis of a dissipative singular Schrödinger operator in terms of a functional model,” in Partial Differential Equations, vol. 65 of Itogi Nauki i Tekhniki. Seriya Sovremennye Problemy Matematiki. Fundamental'nye Napravleniya, pp. 95–163, 1991, English translation in Partial Differential Equations VIII, vol. 65 of Encyclopaedia of Mathematical Sciences, pp. 87–163, 1996.
23. G. Guseinov, “Completeness theorem for the dissipative Sturm-Liouville operator,” Doga. Turkish Journal of Mathematics, vol. 17, no. 1, pp. 48–54, 1993.
24. E. Bairamov and A. M. Krall, “Dissipative operators generated by the Sturm-Liouville differential expression in the Weyl limit circle case,” Journal of Mathematical Analysis and Applications, vol. 254, no. 1, pp. 178–190, 2001.
25. E. Uğurlu and E. Bairamov, “Dissipative operators with impulsive conditions,” Journal of Mathematical Chemistry, vol. 51, pp. 1670–1680, 2013.
26. Z. Wang and H. Wu, “Dissipative non-self-adjoint Sturm-Liouville operators and completeness of their eigenfunctions,” Journal of Mathematical Analysis and Applications, vol. 394, no. 1, pp. 1–12, 2012.
27. F. Smithies, Integral Equations, Cambridge University Press, New York, NY, USA, 1958.
28. F. Riesz and B. Sz-Nagy, Functional Analysis, Frederick Ungar, New York, NY, USA, 6th edition, 1972.
29. E. Prugovečki, Quantum Mechanics in Hilbert Space, vol. 92, Academic Press, New York, NY, USA, 2nd edition, 1981.
30. M. S. P. Eastham, “The limit-$2n$ case of symmetric differential operators of order $2n$,” Proceedings of the London Mathematical Society, vol. 38, no. 2, pp. 272–294, 1979.
31. E. L. Ince, Ordinary Differential Equations, London, UK, 1927.
https://publikationen.bibliothek.kit.edu/1000122943
# Influence of the vicinal substrate miscut on the anisotropic two-dimensional electronic transport in Al$_{2}$O$_{3}$–SrTiO$_{3}$ heterostructures
Wolff, K.; Schäfer, R.; Arnold, D.; Schneider, R.; Le Tacon, M.; Fuchs, D.
## Abstract (English):
The electrical resistance of the two-dimensional electron system (2DES) which forms at the interface of SrTiO3 (STO)-based heterostructures displays anisotropic transport with respect to the direction of current flow at low temperature. We have investigated the influence of terraces at the surface of the STO substrates from which the 2DES is prepared. Such terraces are always present in commercially available STO substrates due to the tolerance of surface preparation, which results in small miscut angles of the order of γ ≈ 0.1° with respect to the surface normal. By a controlled increase of the substrate miscut, we could systematically reduce the width of the terraces and thereby increase the density of substrate surface steps. The in-plane anisotropy of the electrical resistance was studied as a function of the miscut angle γ and found to be mainly related to interfacial scattering arising from the substrate surface steps. However, the influence of γ was notably reduced by the occurrence of step-bunching and lattice dislocations in the STO substrate material. Magnetoresistance (MR) depends on the current orientation as well, reflecting the anisotropy of the carrier mobility.
- Affiliated institution(s) at KIT: Institute for Quantum Materials and Technologies (IQMT)
- Publication type: Journal article
- Publication month/year: 08.2020
- Language: English
- Identifiers: ISSN 0021-8979, 1089-7550; KITopen-ID: 1000122943
- HGF program: 43.21.01 (POF III, LK 01) Quantum Correlations in Condensed Matter
- Published in: Journal of Applied Physics, American Institute of Physics (AIP), Volume 128, Issue 8, Art. no. 085302
- Published online in advance: 24.08.2020
- Indexed in: Dimensions
https://prove-me-wrong.com/2021/06/15/linearity-testing-checking-proofs-for-the-lazy-person/
## Linearity testing: Checking proofs for the lazy person
Taking a final exam in a course is not really a cause for celebration for most students, but being on the other side, as an exam checker, is not much fun either. It can easily take several days to grade the answers to the same question for hundreds of students. The exam checkers would probably tell you that by reading just a few lines of an answer they know whether it is going to be (mostly) correct or not; however, they still need to read all of it just to be sure. These few lines can still be misleading – a good start can still lead to a complete failure, and a bad start might eventually, somehow, get most of the details right.
So maybe when checking exams we cannot really do it, but are there problems where reading only a few lines from a proof is enough to determine its correctness? As it turns out, certain problems have innate structures that make them more “stable” than others – if you write a “wrong” solution for them, the error shows up almost everywhere, so reading just a few lines is enough.
One of the best examples of this kind of problem is linearity testing. Suppose that someone claims that a function $f:\mathbb{F}_2^n \to \mathbb{F}_2$ is linear, and to prove it they show that $f(u)+f(v)=f(u+v)$ for all $u,v \in \mathbb{F}_2^n$. What Blum, Luby, and Rubinfeld (BLR) showed is that, in a sense, it is enough to know that one of these conditions (chosen randomly) is true to conclude that all of them are true with high probability. In this post we will try to see what is so important about linearity, and which structures hide in this problem, to make this miracle happen.
## Probabilistic proofs
As is usually the case, our story starts with the famous linearity testers – Alice and Bob. Bob has a function $f:\mathbb{F}_2^n \to \mathbb{F}_2$, which he claims to be linear, namely $f(u+v)=f(u)+f(v)$ for any two vectors $u,v\in \mathbb{F}_2^n$. Alice, who needs some convincing, chooses two such vectors $u_0,v_0$ randomly and checks that $f(u_0+v_0) = f(u_0)+f(v_0)$. If true, she will accept the function as linear, and otherwise reject it.
Of course, if $f(u_0+v_0) \neq f(u_0)+f(v_0)$, then $f$ is not a linear function, so Alice is right to reject it. However, $f$ might still not be linear even if $f(u_0+v_0) = f(u_0)+f(v_0)$, so Alice might be wrong in claiming that it is. The question is what the probability is that she is wrong in this case.
Suppose Bob actually computed a linear function $f$, but made a few mistakes along the way, so that the new function $\tilde{f}$ is not linear, but is very “close” to being linear. In this case, the chance that Alice chooses one of these corrupted values is very small, so even though $\tilde{f}$ is not linear, with high probability Alice will say that it is. This is a problem that we can't really overcome, and in general it doesn't matter too much in practice. Alice will probably not use all of the values of the function, so there is a good chance that she will not even see the corrupted values. Furthermore, if $\tilde{f}$ is close enough to a linear function, Alice might be able to fix its problematic values.
The question becomes interesting when Bob doesn't really start from a linear function, and his function $\tilde{f}$ is far from any linear function (for example, this can happen if Bob didn't really study for the test and just chose the function randomly). If this is the case, what is the probability that Alice is going to reject? For example, if for every linear function $f$, Bob's function $\tilde{f}$ differs from $f$ on at least 5% of its values, can we expect Alice to have at least roughly a 5% chance of rejecting the function?
This miracle property is called the Soundness of the problem. Let’s give a more formal definition.
Definition: Given two functions $f,g:\mathbb{F}_2^n \to \mathbb{F}_2$, we define their Hadamard distance by $dist(f,g) = {\displaystyle \frac{1}{2^n}|\{ v\in \mathbb{F}_2^n : f(v)\neq g(v)\}|}$. In other words, $dist(f,g)$ measures the chance of choosing a vector on which $f$ and $g$ disagree, and in particular $f=g$ if and only if $dist(f,g)=0$.
For a set $W$ of functions, we write $dist(f,W)= \min_{g\in W} dist(f,g)$.
With this definition, we want to show that following:
Theorem: Let $Linear$ be the set of linear functions. Then for any $f\notin Linear$, the probability $Prob_{u,v}(f(u+v)\neq f(u)+f(v))$ is at least $dist(f,Linear)$.
In other words, if $f$ is far from being a linear function, then there is a good chance that Alice will notice. Note that Alice can always repeat this protocol to increase her chances of rejecting the function in case it is not linear. Even if she is wrong with probability $0.95$, she can run the protocol 100 times to lower her chance of being wrong to less than $0.01$ (since $0.95^{100}<0.01$, and 100 checks is usually much less than the full $2^{2n}$ checks).
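For concreteness, here is a minimal sketch of Alice's test with this repetition built in; the function `f`, the tuple encoding, and all names here are our own illustrative choices rather than anything from BLR.

```python
import itertools, random

def blr_test(f, n, repetitions=100, rng=random):
    """Accept f as (probably) linear iff every random check passes."""
    for _ in range(repetitions):
        u = tuple(rng.randrange(2) for _ in range(n))
        v = tuple(rng.randrange(2) for _ in range(n))
        uv = tuple((a + b) % 2 for a, b in zip(u, v))
        if (f(u) + f(v)) % 2 != f(uv):
            return False          # a witness of non-linearity: reject
    return True                   # all checks passed: accept

# Example: a genuinely linear f always passes; a random f is rejected w.h.p.
linear_f = lambda x: sum(x) % 2                      # f(x) = x_1 + ... + x_n
table = {x: random.randrange(2) for x in itertools.product((0, 1), repeat=4)}
random_f = lambda x: table[x]
print(blr_test(linear_f, 4), blr_test(random_f, 4))  # True, (almost surely) False
```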
## From simple to complicated
Many times, when facing a new problem, one of the biggest roadblocks is where to start analyzing it. One of the approaches that I have found useful over the years is to start with a more simplified version, get some intuition, and then add the structure back until returning to the original problem. This can be done, for example, by looking at a specific simple case of the problem, or, as I will do here, by simplifying the language that we use to study it.
### Set Theory
In mathematics, one of the simplest languages is that of set theory. In this world of sets, and in particular of finite sets, the first question we should try to answer is what the sizes of the sets are. Our problem contains two sets – the set of all functions, and the linear functions
$\Omega = \{f:\mathbb{F}_2^n \to \mathbb{F}_2\}$ ,
$\Omega_0 = \{f \mid \;f(u+v)=f(u)+f(v) \; \; \forall u,v\}$.
Let us start with studying $\Omega$ and then move to $\Omega_0$.
In general, determining the size of sets is not an easy task. However, in our case the set $\Omega$ has a very special structure. It is the set of all functions from $\mathbb{F}_2^n$ to $\mathbb{F}_2$. This means that its size is $|\mathbb{F}_2|^{|\mathbb{F}_2^n|}$, so if we know the sizes of both the source and target sets, we can compute the size of $\Omega$. Note that $\mathbb{F}_2^n$ is itself also a set of functions (from $\{1,2,...,n\}$ to $\mathbb{F}_2$), so we can use the same trick here as well and conclude that:
$|\mathbb{F}_2^n|=2^n \; \; \; ; \; \; \; |\mathbb{F}_2|=2 \; \; \; ; \; \; \; |\Omega|=2^{(2^n)}.$
### Algebra
After set theory, the next language we should use is that of linear algebra. While it can be very technical at times, it is quite easy to work with (and there is a reason that it is usually studied in the first semester). Our three sets from above, $\mathbb{F}_2^n, \mathbb{F}_2$ and $\Omega$, are actually vector spaces over $\mathbb{F}_2$. When you start seeing vector spaces, you should always look for their dimensions as well, which in a sense, together with the field, determine the vector space completely. In particular, over the field with two elements, the dimension is just $\log_2(|V|)$, so we get that:
$dim(\mathbb{F}_2^n)=n \; \; \; ; \; \; \; dim(\mathbb{F}_2)=1 \; \; \; ; \; \; \; dim(\Omega)=(2^n).$
### Symmetry and Geometry
Next in our journey, we will look for all sorts of symmetries that our objects have. To get some intuition and be able to visualize the objects, let's restrict our problem to the $n=2$ case. In this world, our first set is just $\{(0,0), (0,1), (1,0), (1,1)\}$, which is just the set of vertices of the unit square. We can visualize the target set $\mathbb{F}_2=\{0,1\}$ as the black and white colors, so that a function $f\in \Omega$ is simply a colored square:
The importance of this visualization is that these sets have a lot of interesting natural symmetries like the rotations or reflections of the square, and these are closely connected to our objects.
Since our sets are actually vector spaces, they come with the natural addition operations which act on them. For example, in our black\white visualization of the set $\mathbb{F}_2$, adding $0$ does nothing, while adding $1$ flips between the colors. The addition in $\mathbb{F}_2^2$ in the geometric presentation, acts on the vertices. Here too adding $(0,0)$ does nothing, but for example adding $(1,0)$ switches between $(0,0) \leftrightarrow (1,0)$ and $(0,1) \leftrightarrow (1,1)$, and in our square presentation this is simply the left to right reflection:
Similarly, adding $(0,1)$ is the up-down reflection, while adding $(1,1)$ is the rotation of 180 degrees.
Once we have these symmetries on $\mathbb{F}_2$ (switch colors) and on $\mathbb{F}_2^2$ (reflections and rotation), we can apply them to our colored squares in $\Omega$ to get, for example, this:
This is already a lot of interesting information, and we haven’t even started to look at the linear functions. The hope is that we can use each of these worlds of set theory, algebra and symmetry on these linear functions and combine it with what we saw on this larger set of all binary functions to produce some interesting results.
## Describing the linear functions
### Set theory of Linear functions
The set of linear function is defined by the linearity condition, namely
$\Omega_0 := \{f: \mathbb{F}_2^n\to \mathbb{F}_2 \mid \;f(u+v)=f(u)+f(v)\;\forall u,v\in\mathbb{F}_2^n\}$.
Unlike the set of all functions, we cannot easily use this presentation to find the size of this set. However, this definition of linear functions is not arbitrary; it is rooted deeply within the algebraic structures of all of our sets. In particular, one of the first results that we learn about such functions is that linear functions are determined completely by their values on a given basis of the vector space. Alternatively, every function from a basis can be extended uniquely to a linear function. Putting it more formally, if $\mathcal{B}\subseteq \mathbb{F}_2^n$ is a basis, then the restriction $f\to f\mid_\mathcal{B}$ to this basis, which is a map $\Omega_0 \to \{f:\mathcal{B}\to\mathbb{F}_2\}$, is a one-to-one and onto function. This new set is a “set of all functions”, so we can compute its size
$|\Omega_0| = |\mathbb{F}_2|^{|\mathcal{B}|} = |\mathbb{F}_2|^{dim(\mathbb{F}_2^n)}=2^n$.
### Algebra of Linear functions
The size of $\Omega_0\subseteq \Omega$ equals exactly the dimension of $\Omega$, which immediately raises the question of whether it is a basis for $\Omega$ – which would be nice to know in the land of algebra. However, we can quickly discard this idea, since the set of linear functions is closed under addition; in other words, it is a subspace (while in a sense a basis is a set “as far away as possible” from being closed under addition). This might be a bit disappointing, but the property of being a subspace comes with its own advantages, and this coincidence of having the same size as a basis might (and will!) come in handy in the future.
### The symmetries of Linear functions
Returning to our $n=2$ case, we know that there are $4=2^2$ linear functions, and we can find and visualize them as colored squares (white=0, black=1):
What happens if we apply our reflections and rotation on these linear functions? No matter what we do to the full white square (the trivial function) it remains in place. However, if we apply them to the second colored square, the up-to-down reflection keeps it in place, but the left-to-right reflection and the 180 degrees rotation do not. But not all is lost – these two transformations are the same as simply switching the colors (which is the symmetry which comes from the addition in $\mathbb{F}_2$!). The same happens with the other two linear functions. So in a sense, we should probably look at the pairs of functions:
In mathematics, when we have some action on a set, as above, and an element doesn’t change under this action (like the black\white square), we say that it is invariant under this action. The nontrivial linear functions are not invariant, but they are really close – they are invariant up to the color change. These “almost invariant” elements tend to appear in many places in mathematics, from simple examples like these rotating squares to the most advanced parts of mathematics.
If you are reading this post, then there is a good chance you already know some of these “almost invariant” objects really well. Indeed, maybe the best example of this behavior is when a linear transformation $T$ acts on a vector space $V$. The simplest behavior we can hope for is to find $v\in V$ such that $T(v)=v$, since then, for example, we can immediately compute $T^k(v)$. However, since this is a very restrictive condition, we look for a more general definition where we only require that $T(v)=\lambda\cdot v$ for some constant $\lambda$ – in other words, we look for eigenvectors. These “almost invariant” vectors are simple enough to work with, yet the condition is not too restrictive, so they actually exist, and as we shall soon see, this is exactly what we have in our problem.
### The eigenvectors
In our problem, the possible eigenvalues are $\{0,1\}=\mathbb{F}_2$, so eigenvectors are either sent to $(0,0)$, or are completely invariant, which is not good enough to describe our linear functions. The almost invariance up to the color switch means that $f$ is either sent to itself, or to the function $v\mapsto f(v)+1$, so we add the scalar rather than multiply by it. Luckily for us, this is not too much of a problem, since we can use the standard trick to move from addition to multiplication – apply the exponent function.
Instead of working with $\mathbb{F}_2=\{0,1\}$, we think of these two elements as $(-1)^0$ and $(-1)^1$ respectively, to get the set $\{+1,-1\} \subseteq \mathbb{R}$. Converting our addition to multiplication, our sets become
$\Omega \to \tilde{\Omega} = \{f:\mathbb{F}_2^n \to \{\pm 1\} \}$
$\Omega_0 \to \tilde{\Omega}_0:=\{ f:\mathbb{F}_2^n \to \{\pm 1\} \mid f(u+v)=f(u)\cdot f(v) \; ;\; \forall u,v\}$,
both of which are contained in the $\mathbb{R}$-vector space
$W:=\{f:\mathbb{F}_2^n \to \mathbb{R}\}$
of dimension $2^n$. At least in the $n=2$ case, our arguments from above show that $\tilde{\Omega}_0$ are eigenvectors for all of our square symmetries (reflections and rotation). For the more general case:
Definition: For $v_0 \in \mathbb{F}_2^n$, define the linear transformation $T_{v_0}:W\to W$ by $T_{v_0}(f) (u):= f(v_0+u)$.
Claim: For any $v_0\in\mathbb{F}_2^n$, and a multiplicative linear function $f\in\tilde{\Omega}_0$ we have that $T_{v_0}(f)=f(v_0) \cdot f$ – namely $f$ is an eigenvector with eigenvalue $f(v_0)$.
Proof: Follows immediately from
$T_{v_0}(f)(u) = f(v_0+u) =f(v_0) \cdot f(u) \; \; \Rightarrow \; \; T_{v_0}(f) = f(v_0)\cdot f$.
Now we are at a point where we have a set $\tilde{\Omega}_0$ consisting only of eigenvectors, which has exactly the size of a basis. This begs the question – is it a basis (over $\mathbb{R}$), therefore making our symmetries diagonalizable?
### Orthogonality
To prove that $\tilde{\Omega}_0$ is a basis, it is enough to show that it is an independent set. We can show this directly for any fixed $n$ by doing some manual computations, but that looks like too much work. Instead, let's try to extract some more information from our new visualization:
We already mentioned that these colored squares are almost invariant, but there is actually an even simpler property that we can see. The number of black and white vertices is always even, and while in the trivial function all the vertices are white, in the rest the number of black vertices equals to the number of white vertices. Now, after we moved to the $\pm 1 \in \mathbb{R}$ values instead of $\{0,1\} = \mathbb{F}_2$, this is equivalent to having the sum of the values equal zero – exactly two $+1$ values and two $-1$ values.
What can we say about such vectors? For $n=2$ we are in a $4=2^2$-dimensional space, which is a bit harder to visualize, so let's go even smaller, to $n=1$, in which case our two linear functions are the trivial function, which has values $(1,1)$, and the only nontrivial linear function, which has values $(1,-1)$. In the plane they look like this:
The two vectors representing the functions have the same length, but more importantly they are orthogonal to each other. If we can show that in general the linear functions form an orthogonal set, then we automatically get that they are independent, and since it is a set of the size of the dimension of our vector space, they must form a basis.
As we shall soon see, “moving” from one linear function $\varphi$ to another $\psi$ corresponds to their ratio $\frac{\psi}{\varphi}$, which is itself (multiplicative) linear. Measuring the cosine of the angle amounts to summing the values of this ratio, so unless $\varphi=\psi$ we expect to get 0, so that the functions must be orthogonal.
### Orthogonality formalization
To prove this claim, we start with the definition of our inner product.
Definition: For two functions $f,g:\mathbb{F}_2^n \to \mathbb{R}$ we define their inner product as
${\displaystyle \left\langle f,g\right\rangle = \frac{1}{2^n} \sum_{v\in \mathbb{F}_2^n} f(v)g(v)}$.
With this definition in mind, let's see what we can say about functions from $\tilde{\Omega}$ which have values in $\pm 1$, and in particular about linear functions.
The first immediate result is that $\left\Vert f \right\Vert :=\sqrt{\left\langle f,f \right\rangle} = 1$, so they all lie on the unit sphere. Next, by definition $\left\langle f,g \right\rangle= \left\langle 1,f\cdot g \right\rangle$. If both $f, g \in \tilde{\Omega}$, then so is their product $f\cdot g$, and the same holds for multiplicative linear functions in $\tilde{\Omega}_0$. In particular, this product is trivial, $f\cdot g=1$, if and only if $f=g$. You can either try to prove this directly, or note that it is exactly equivalent to $\Omega$ and $\Omega_0$ being vector spaces (closed under addition, over the field $\mathbb{F}_2$).
The inner product $\left\langle 1, f \right\rangle$ is simply $\frac{1}{2^n}\sum_{v\in\mathbb{F}_2^n} f(v)$, the sum over all the values of $f$ up to our normalization. If $f=1$, then it is simply 1. However, if $f\neq 1$ is multiplicative linear, it has exactly two values $\pm 1$, and each one appears exactly half of the time (since $f$ is a group homomorphism), so that $\left\langle 1, f \right\rangle = 0$. Putting all of this together we get:
Claim: Let $f,g:\mathbb{F}_2^n\to \{\pm 1\}$ be any two functions. Then
1. $\left\Vert f \right\Vert = 1$, and
2. If both functions are multiplicative linear, then
$\left\langle f,g\right\rangle =\begin{cases} 1 & f=g\\0 & else\end{cases}$
Corollary: The set of multiplicative linear functions $\tilde{\Omega}_0$ forms an orthonormal set in the set of all functions $\mathbb{F}_2^n \to \mathbb{R}$ and therefore forms a basis.
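To see the claim in action, here is a quick numerical sanity check, not part of the proof itself (the helper names `chi` and `inner` are my own), verifying the orthonormality relations for $n=3$:

```python
import itertools
import numpy as np

n = 3
vecs = list(itertools.product([0, 1], repeat=n))

def chi(S):
    # the multiplicative linear function indexed by S: v -> (-1)^(S . v)
    return np.array([(-1) ** (sum(si * vi for si, vi in zip(S, v)) % 2) for v in vecs])

def inner(f, g):
    # the normalized inner product <f, g> = 2^(-n) * sum_v f(v) g(v)
    return (f * g).mean()

for S in vecs:
    for T in vecs:
        assert inner(chi(S), chi(T)) == (1.0 if S == T else 0.0)
```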
## Back to linearity testing
Now that we have all this new information about binary functions and linear functions, we can return to the original problem of linearity testing.
In the original problem, we wanted to show that if a binary function $f:\mathbb{F}_2^n\to\{\pm 1\}$ is far from any (multiplicative) linear function in the Hadamard distance (that is, the fraction of inputs on which the two functions disagree), then with high probability, when choosing $u,v \in \mathbb{F}_2^n$, we will get that $f(u+v)\neq f(u)\cdot f(v)$, and then Alice will reject the function. Both the distance and the condition are essentially described using the standard basis. Defining
$e_{v}(u)=\begin{cases}1 & u=v\\0 & else \end{cases}$,
we think of $\{e_v \mid v\in \mathbb{F}_2^n\}$ as the standard basis, and we can write $f = \sum a_v e_v$ (where $a_v \in \{\pm1\}$).
Combination of the standard basis’ elements where 1=white, 0=green and -1=black.
Note that this basis is almost orthonormal: its elements are perpendicular to one another, and all have length $\frac{1}{2^{n/2}}$. Now, the Hadamard distance basically asks how many of the coefficients differ in this presentation. More formally, if $g = \sum b_v e_v$ with $b_v \in \{\pm1 \}$, then
${\left\Vert f-g \right\Vert}^2 = \sum_v {\left\Vert (a_v-b_v) e_v \right\Vert}^2 = \frac{1}{2^n} \cdot 4\cdot |\{v \mid\; f(v)\neq g(v)\}| = 4\cdot dist(f,g)$.
Similarly, the linearity condition is "defined" using this basis, since we look at the coefficients of $e_u, e_v$ and $e_{u+v}$.
However, we just saw that there is another basis, which consists of linear functions, so we can write our function as a combination of these vectors:
Let's denote these linear functions by $\{\varphi_1,...,\varphi_{2^n}\}$, so we can write $f=\sum_i \alpha_i \varphi_i$. If the function $f$ were a linear function, then the coefficients would all be zero, except one which would be $1$. Using the orthonormality to our advantage, we also get that for any binary function
$1 = \left\langle f,f \right\rangle = \sum_i \alpha_i^2 \left\langle \varphi_i,\varphi_i \right\rangle = \sum_i \alpha_i^2$.
It follows that if the function $f$ is not linear, then all of the coefficients are in $[-1,1)$. What can we say about the distance of the function from some linear function? While the distance was measured using the Hadamard distance, any orthonormal basis is just as good, so
$4\cdot dist(f,\varphi_i) = {\left\Vert f-\varphi_i \right\Vert}^2 = (\sum_{j\neq i} \alpha_j^2) + (\alpha_i-1)^2 = (\sum_j \alpha_j^2) -2\alpha_i + 1 = 2(1-\alpha_i)$.
In other words, saying that a function is far from all of the linear functions is exactly saying that all the $\alpha_i$ are far from $1$, or more formally
$dist(f,Linear) = \frac{1}{2}\min_i (1-\alpha_i)$.
We already see that in this new basis our distance function has a much simpler interpretation, and the main difficulty is to move from one basis to the other, though this becomes simple if we can describe our expressions using inner products.
What about the probability for Alice to reject the function? If we can somehow rewrite it using the language of the inner product, we could move to this basis as well, and in this basis all the functions are linear, so it would be a much easier place to work in. And indeed we can rewrite it in the new basis and get that Alice will reject the function with probability at least $\frac{1}{2}(1-\max_i \alpha_i)= dist(f,Linear)$. For readability, I will put the proof in a new section, though it is very similar to what we saw here, just with extra details. The details themselves are less important than the idea of moving from a basis in which it is easier to describe the problem and the protocol to a basis which is easier to work with (and I encourage you to try to prove it by yourselves).
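To make the basis change concrete before diving into the proof, here is a small Python sketch (again my own illustration, with arbitrary helper names) that computes the coefficients $\alpha_i$ of a random binary function and checks the distance formula above against a direct count of disagreements:

```python
import itertools, random
import numpy as np

n = 4
vecs = list(itertools.product([0, 1], repeat=n))

def phi(S):
    # the multiplicative linear function v -> (-1)^(S . v)
    return np.array([(-1) ** (sum(si * vi for si, vi in zip(S, v)) % 2) for v in vecs])

f = np.array([random.choice([1, -1]) for _ in vecs])   # a random binary function
alphas = [(f * phi(S)).mean() for S in vecs]           # coefficients <f, phi_S>

dist_via_alphas = 0.5 * min(1 - a for a in alphas)
dist_direct = min((f != phi(S)).mean() for S in vecs)  # fraction of disagreements
assert np.isclose(dist_via_alphas, dist_direct)
```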
### The linearity condition
We would like to count how many pairs will lead Alice to reject the function. For a single $(u,v)$ pair, it is not hard to see that
$\frac{1-f(u)f(v)f(u+v)}{2}=\begin{cases}1 & f(u+v)\neq f(u)f(v)\\0 & else\end{cases}$,
so we want to sum over these expressions. Then the probability to reject will be
$Prob(reject)=\frac{1}{2^{2n}}\sum_{u,v} \frac{1-f(u)f(v)f(u+v)}{2} =\frac{1}{2}( 1 - \frac{1}{2^{2n}}\sum_{u,v} f(u)f(v)f(u+v)) .$
The term in the sum looks similar to an inner product, but it has three factors instead of two, and a double summation instead of a single one. As it turns out, this is because we have, in a sense, two inner products. First, we can define $F(u) = \frac{1}{2^n}\sum_v f(v)f(u+v)$, so that
$\frac{1}{2^{2n}}\sum_{u,v} f(u)f(v)f(u+v) = \frac{1}{2^{n}}\sum_u f(u)\cdot F(u) = \left\langle f, F \right\rangle$.
We would much prefer to work with the coefficients of $f$ in the basis of linear functions, since there we can see clearly the distance from linear functions. As for $F$, since $+v=-v$ (we are in a vector space over $\mathbb{F}_2$), we can write it as $F(u) = \frac{1}{2^n}\sum_v f(v)f(u-v)$, an expression that appears all over mathematics, called a convolution and denoted by $f*f$. This convolution behaves like a function product in the sense that it is associative and distributive with respect to function addition. More importantly, the coefficient of $\varphi_i$ in the expansion of $f*g$ is the product of the coefficient of $\varphi_i$ in the expansion of $f$ times the coefficient in the expansion of $g$ (try to prove it!). Putting all of this together, we get that
$\left\langle f, F \right\rangle = \sum_i \alpha_i^3 \leq \max_i \alpha_i \sum_i \alpha_i ^2 = \max_i \alpha_i.$
This means that the probability for Alice to reject is
$Prob(reject) = \frac{1}{2}(1 - \frac{1}{2^{2n}}\sum_{u,v} f(u)f(v)f(u+v) )\geq \frac{1}{2}(1-\max_i\alpha_i)=dist(f,Linear) ,$
which is exactly what we wanted to show.
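As a sanity check of the whole bound (my own addition, not part of the original argument), the following Python snippet enumerates all pairs $(u,v)$ for a random binary function on $\mathbb{F}_2^3$ and confirms that the rejection probability is at least the distance to the nearest linear function:

```python
import itertools, random
import numpy as np

n = 3
vecs = list(itertools.product([0, 1], repeat=n))
idx = {v: i for i, v in enumerate(vecs)}

def phi(S):
    return np.array([(-1) ** (sum(si * vi for si, vi in zip(S, v)) % 2) for v in vecs])

f = np.array([random.choice([1, -1]) for _ in vecs])

# Prob(reject) = Pr_{u,v}[ f(u) f(v) f(u+v) = -1 ], by full enumeration
reject = np.mean([
    f[idx[u]] * f[idx[v]] * f[idx[tuple((a + b) % 2 for a, b in zip(u, v))]] == -1
    for u in vecs for v in vecs
])
dist = min((f != phi(S)).mean() for S in vecs)         # dist(f, Linear)
assert reject >= dist - 1e-12
```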
## In conclusion: So what did we do?
Trying to analyze our problem, we looked for simple ways to describe its objects. We started by considering them as sets, then added the addition operation to get algebraic objects, and then used these addition operations to find symmetries and actions on our sets. Applying the same process to the linear functions, we concluded that they form an orthonormal set of eigenvectors, which led us to an interesting new basis.
Then the main trick was moving from one basis to the other. The standard basis is what we usually work with, and it is easy to describe the problem in this basis (for example, the distance is measured by simply counting the number of different coefficients). However, the second basis is far more related to the essence of the problem – it is exactly the set of linear functions. Both bases were orthonormal (up to a uniform normalization), so when the expressions were transformed to use inner products, it was very easy to move from one to the other. Once we moved all of our notation to this new basis, the problem became almost trivial.
Along the way we also encountered the convolution operation, which behaved nicely with respect to this basis change. You might have seen this sort of behavior in another place: the Fourier transform. This is not surprising, since this basis change is the Fourier transform, and this interesting result about the convolution is one of the main reasons that the Fourier transform is so important.
All of these different structures, the bases, and the conversion between them were miraculously combined together to show that if Bob didn't really make an effort to create a linear function, then Alice would have a good chance of detecting it. It does, however, rely on the fact that Bob is honest. If Alice asked Bob for the values of $f$ on $u, v$ and $u+v$, Bob could reply with any three values of the form $a, b$ and $a+b$ (instead of $f(u), f(v)$ and $f(u+v)$) so that Alice will always accept. For this post, we will assume Bob's honesty, but if we really wanted, we could change the protocol and have Bob commit to the answers before Alice asks for these values; we will leave that for another post.
As a final thought, I would like to point out that this miraculous combination happens a lot. After all, it is the basis for the Fourier transform that every engineer knows by heart, known in mathematics as group representation theory. This is a subject on its own, but once you understand its basics, results like those in this post become much more natural.
https://tkchen.wordpress.com/tag/shortcut/
# Tag Archives: Shortcut
## Approximating Aggregate Losses
An aggregate loss $S$ is the sum of all losses in a certain period of time. There are an unknown number $N$ of losses that may occur and each loss is an unknown amount $X$. $N$ is called the frequency random variable and $X$ is called the severity. This situation can be modeled using a compound distribution of $N$ and $X$. The model is specified by:
$\displaystyle S = \sum_{n=1}^N X_n$
where $N$ is the random variable for frequency and the $X_n$‘s are IID random variables for severity. This type of structure is called a collective risk model.
An alternative way to model aggregate loss is to model each risk using a different distribution appropriate to that risk. For example, in a portfolio of risks, one may be modeled using a pareto distribution and another may be modeled with an exponential distribution. The expected aggregate loss would be the sum of the individual expected losses. This is called an individual risk model and is given by:
$\displaystyle S = \sum_{i=1}^n X_i$
where $n$ is the number of individual risks in the portfolio and the $X_i$'s are random variables for the individual losses. The $X_i$'s are NOT necessarily identically distributed (though they are independent), and $n$ is known.
Both of these models are tested in the exam; however, the individual risk model is usually tested in combination with the collective risk model. An example of a problem structure that combines the two is given below.
Example 1: Your company sells car insurance policies. The in-force policies are categorized into high-risk and low-risk groups. In the high-risk group, the number of claims in a year is Poisson with a mean of 30. The number of claims for the low-risk group is Poisson with a mean of 10. The amount of each claim is Pareto distributed with $\theta = 200$ and $\alpha = 2$.

Analysis: Being able to see the structure of the problem is a very important first step in being able to solve it. In this situation, you would model the aggregate loss as an individual risk model. There are 2 individual risks: high and low risk. For each group, you would model the aggregate loss using a collective risk model. For the high-risk group, the frequency is Poisson with mean 30 and the severity is Pareto with $\theta = 200$ and $\alpha = 2$. For the low-risk group, the frequency is Poisson with mean 10 and the severity is Pareto with the same parameters.
For these problems, you will need to know how to:
1. Find the expected aggregate loss.
2. Find the variance of aggregate loss.
3. Approximate the probability that the aggregate loss will be above or below a certain amount using a normal distribution.
Example: what is the probability that aggregate losses are below $5,000?

4. Determine how many risks would need to be in a portfolio for the probability of aggregate loss to reach a given level of certainty for a given amount.

Example: how many policies should you underwrite so that the aggregate loss is less than the expected aggregate loss with a 95% degree of certainty?

5. Determine how long your risk exposure should be for the probability of aggregate loss to reach a given level of certainty for a given amount.

Problems that require you to determine probabilities for the aggregate loss will usually state that you should use a normal approximation. This will require the calculation of the expected aggregate loss and the variance of the aggregate loss.

MEMORIZE

Expected aggregate loss for a collective risk model is given by:

$E[S] = E[N]E[X]$

For the individual risk model, it is

$\displaystyle E[S] = \sum_{i=1}^n E[X_i]$

Variances under the collective risk model are conditional variances:

$Var(S) = E[Var(X|I)] + Var(E[X|I])$

When frequency and severity are independent, the following shortcut is valid and is called a compound variance:

$Var(S) = E[N]Var(X) + Var(N)E[X]^2$

Variance under the individual risk model is additive (assuming the individual risks are independent):

$\displaystyle Var(S) = \sum_{i=1}^n Var(X_i)$

Example 2: Continuing from Example 1, calculate the mean and variance of the aggregate loss. Assume frequency and severity are independent.

Answer: This is done by

1. Calculating the expected aggregate loss and variance in the high-risk group.
2. Calculating the expected aggregate loss and variance in the low-risk group.
3. Adding the expected values from both groups to get the total expected aggregate loss.
4. Adding the variances from both groups to get the total variance.

I will use subscripts $H$ and $L$ to denote the high- and low-risk groups respectively.

$E[S_H] = E[N_H]E[X_H] = 30\times 200 = 6,000$

$\begin{array}{rll} Var(S_H) &=& E[N_H]Var(X_H) + Var(N_H)E[X_H]^2 \\ &=& 30 \times 40,000 + 30 \times 200^2 \\ &=& 2,400,000 \end{array}$

$E[S_L] = E[N_L]E[X_L] = 10 \times 200 = 2,000$

$\begin{array}{rll} Var(S_L) &=& 10 \times 40,000 + 10 \times 200^2 \\ &=& 800,000 \end{array}$

Add expected values to get $E[S] = 6,000 + 2,000 = 8,000$. Add variances to get $Var(S) = 2,400,000 + 800,000 = 3,200,000$.

Once the mean and variance of the aggregate loss have been calculated, you can use them to approximate probabilities for aggregate losses using a normal distribution.

Example 3: Continuing from Example 2, use a normal approximation for aggregate loss to calculate the probability that losses exceed $12,000.
Answer: To solve this, you will need to calculate a $z$ value for the normal distribution using the expected value and variance found in Example 2.
$\begin{array}{rll} \Pr(S > 12,000) &=& 1- \Pr(S< 12,000) \\ \\ &=& \displaystyle 1-\Phi\left(\frac{12,000 - 8,000}{\sqrt{3,200,000}}\right) \\ \\ &=& 1 - \Phi(2.24) \\ \\ &=& 0.0125 \end{array}$
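If you want to double-check these numbers, here is a short Python sketch (my own verification aid, not exam material) that reproduces Examples 2 and 3:

```python
from math import sqrt, erf

def Phi(z):
    # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

# (E[N], Var(N), E[X], Var(X)) for the high-risk and low-risk groups
groups = [(30, 30, 200, 40_000), (10, 10, 200, 40_000)]

ES = sum(en * ex for en, vn, ex, vx in groups)                   # 8,000
VarS = sum(en * vx + vn * ex ** 2 for en, vn, ex, vx in groups)  # 3,200,000

print(ES, VarS)
print(1 - Phi((12_000 - ES) / sqrt(VarS)))                       # about 0.0127
```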
CONTINUITY CORRECTION
Suppose in the above examples the severity $X$ is discrete; for example, $X$ is Poisson, so the aggregate loss $S$ is integer-valued. Under this specification, we need to add 0.5 to 12,000 in the calculation for $\Pr(S > 12,000)$, so we would instead calculate $\Pr(S > 12,000.5)$. This is called a continuity correction and is used whenever we approximate a discrete random variable with a normal distribution. If we were interested in $\Pr(S<12,000)$, we would subtract 0.5 instead. The correction has a greater effect when the domain of possible values is smaller.
Another type of problem I’ve encountered in the samples is constructed as follows:
Example 4: You drive a 1992 Honda Prelude Si piece-of-crap-mobile (no, that's my old car and you are driving it because I sold it to you to buy my Mercedes). The failure rate per year is Poisson with mean 2. The average cost of repair for each instance of breakdown is $500 with a standard deviation of $1,000. How many years do you have to continue driving the car so that the probability of the total maintenance cost exceeding 120% of the expected total maintenance cost is less than 10%? (Assume the car is so crappy that it cannot deteriorate any further, so the failure rates and average repair costs remain constant every year.)

Answer: For a single year,

$E[S_1] = 1,000$
$\begin{array}{rll} Var(S_1) &=& 2 \times 1,000^2 + 2 \times 500^2 \\ &=& 2,500,000 \end{array}$
For $n$ years, we have
$E[S] = 1,000n$
$Var(S) = 2,500,000n$
According to the problem, we are interested in $S$ such that $\Pr(S > 1,200n) = 0.1$. Under normal approximation, this implies
$\begin{array}{rll} \Pr(S>1,200n) &=& 1-\Pr(S<1,200n) \\ \\ &=& \displaystyle 1- \Phi\left(\frac{1,200n - 1,000n}{\sqrt{2,500,000n}}\right) \end{array}$
Which implies
$\displaystyle \Phi\left(\frac{200n}{\sqrt{2,500,000n}}\right) = 0.9$
The probability $0.9$ corresponds to a $z$ value of 1.28. This implies
$\displaystyle \frac{200n}{\sqrt{2,500,000n}} = 1.28$
Solving for $n$ we have $n \approx 102.4$, so about 103 years. LOL!
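The same computation in a short Python sketch (my own check; variable names are arbitrary):

```python
# solve 0.2 * E[S_1] * n / sqrt(Var(S_1) * n) = 1.28 for n
E1, V1, z = 1_000, 2_500_000, 1.28
n = (z * V1 ** 0.5 / (0.2 * E1)) ** 2
print(n)  # 102.4, so about 103 years
```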
## The Bernoulli Shortcut
If $X$ has a standard Bernoulli distribution, then it can only take the values 1 or 0, with probabilities $q$ and $1-q$ respectively. Any random variable that can take only 2 values is a scaled and translated version of the standard Bernoulli distribution.
Expected Value and Variance:
For a standard bernoulli distribution, $E[X] = q$ and $Var(X) = q(1-q)$. If $Y$ is a random variable that can only have values $a$ and $b$ with probabilities $q$ and $(1-q)$ respectively, then
$\begin{array}{rl} Y &= (a-b)X +b \\ E[Y] &= (a-b)E[X] +b \\ Var(Y) &= (a-b)^2Var(X) \\ &= (a-b)^2q(1-q) \end{array}$
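A quick numerical confirmation of the shortcut (my own sketch, with arbitrary values $a=5$, $b=2$, $q=0.3$):

```python
a, b, q = 5, 2, 0.3

EY = (a - b) * q + b               # shortcut: E[Y] = (a-b) E[X] + b
VarY = (a - b) ** 2 * q * (1 - q)  # shortcut: Var(Y) = (a-b)^2 q(1-q)

EY_direct = a * q + b * (1 - q)    # direct two-point computation
VarY_direct = q * (a - EY_direct) ** 2 + (1 - q) * (b - EY_direct) ** 2
assert abs(EY - EY_direct) < 1e-12 and abs(VarY - VarY_direct) < 1e-12
```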
## Functions and Moments
Some distribution functions:
Survival function
$\displaystyle S(x) = 1-F(x) = \Pr(X>x)$
where $F(x)$ is a cumulative distribution function.
Hazard rate function
$\displaystyle h(x) = \frac{f(x)}{S(x)} = -\frac{d\ln{S(x)}}{dx}$
where $f(x)$ is a probability density function.
Cumulative hazard rate function
$\displaystyle H(x) =\int_{-\infty}^x{h(t)dt} = -\ln{S(x)}$
The following relationship is often useful:
$S(x) = \displaystyle e^{-\int_{-\infty}^x{h(t)dt}}$
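As a sanity check of this relationship (my own sketch, using a Weibull distribution as the example, whose hazard is zero below 0 so the integral starts there), the following Python snippet integrates the hazard rate numerically and compares $e^{-H(x)}$ with the survival function:

```python
from math import exp

k, lam = 2.0, 1.5                      # Weibull shape and scale (arbitrary)
S = lambda x: exp(-((x / lam) ** k))   # survival function
f = lambda x: (k / lam) * (x / lam) ** (k - 1) * S(x)  # density
h = lambda x: f(x) / S(x)              # hazard rate

x, steps = 2.0, 100_000
dt = x / steps
H = sum(h((i + 0.5) * dt) * dt for i in range(steps))  # midpoint rule for H(x)
assert abs(exp(-H) - S(x)) < 1e-6      # S(x) = exp(-H(x))
```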
Expected Value:
$\displaystyle E[X] = \int_{-\infty}^\infty{xf(x)dx}$
Or more generally,
$\displaystyle E[g(X)] = \int_{-\infty}^\infty{g(x)f(x)dx}$
When $g(X) = X^n$, the expected value of such a function is called the nth raw moment and is denoted by $\mu'_n$. Let $\mu$ be the first raw moment. That is, $\mu = E[X]$. $E[(X-\mu)^n]$ is called an nth central moment.
Moments are used to generate some statistical measures.
Variance $\sigma^2$
$\displaystyle Var(X) = E[(X-\mu)^2] = E(X^2) - E(X)^2$
The coefficient of variation is $\displaystyle \frac{\sigma}{\mu}$.
Skewness $\gamma_1$
$\displaystyle \gamma_1 = \frac{\mu_3}{\sigma^3}$
Kurtosis $\gamma_2$
$\displaystyle \gamma_2 = \frac{\mu_4}{\sigma^4}$
Covariance of two distribution functions
$\displaystyle Cov(X,Y) = E[(X-\mu_x)(Y-\mu_Y)] = E[XY] - E[X]E[Y]$
*Note: if $X$ and $Y$ are independent, $Cov(X,Y)=0$
Correlation coefficient $\rho_{XY}$
$\displaystyle \rho_{XY} = \frac{Cov(X,Y)}{\sigma_X\sigma_Y}$
All of the above definitions should be memorized. Some things that might be tested in the exam are:
• Given a particular distribution function, what happens to skewness or kurtosis in the limit of a certain parameter?
• What is the expected value, variance, skewness, kurtosis of a given distribution function?
• What is the covariance or correlation coefficient of two distribution functions?
Central moments can be calculated using raw moments. Know how to calculate raw moments using the statistics function on the calculator; this can be a useful timesaver in the exam. Expanding $E[(X-\mu)^n]$ binomially gives an expression for $\mu_n$ in terms of the raw moments $\mu'_k$ and $\mu$, with alternating positive and negative binomial coefficients.
Example:
$\mu_4 = \mu'_4 - 4\mu'_3\mu + 6\mu'_2\mu^2 - 4\mu'_1\mu^3 + \mu^4$
Since $\mu'_1 = \mu$, the two terms on the end simplify to $-3\mu^4$. The result is
$\mu_4 = \mu'_4 - 4\mu'_3\mu + 6\mu'_2\mu^2 - 3\mu^4$
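Here is a quick Python check of this expansion on a small sample (my own sketch; the data values are arbitrary):

```python
data = [1.0, 2.0, 2.0, 3.0, 7.0]                # arbitrary sample
n = len(data)
raw = lambda k: sum(x ** k for x in data) / n   # k-th raw moment

mu = raw(1)
mu4_formula = raw(4) - 4 * raw(3) * mu + 6 * raw(2) * mu ** 2 - 3 * mu ** 4
mu4_direct = sum((x - mu) ** 4 for x in data) / n
assert abs(mu4_formula - mu4_direct) < 1e-9
```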
Moment Generating Function:
If the moment generating function $M(t)$ is known for random variable $X$, its nth raw moment can be found by taking the nth derivative of $M(t)$ and evaluating at 0. Moment generating functions take the form:
$M(t) = \displaystyle E[e^{tx}]$
If $Z = X + Y$ with $X$ and $Y$ independent, then $M_Z(t) = M_X(t)\cdot M_Y(t)$.
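A quick numerical confirmation of the product rule (my own sketch) using two independent Bernoulli variables, whose MGFs are known in closed form:

```python
from math import exp

t, p, q = 0.7, 0.3, 0.6
MX = (1 - p) + p * exp(t)   # MGF of Bernoulli(p)
MY = (1 - q) + q * exp(t)   # MGF of Bernoulli(q)

# MGF of Z = X + Y by enumerating the four independent outcomes
MZ = sum(px * py * exp(t * (x + y))
         for x, px in [(0, 1 - p), (1, p)]
         for y, py in [(0, 1 - q), (1, q)])
assert abs(MZ - MX * MY) < 1e-12
```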
https://msberends.github.io/AMR/reference/random.html
These functions can be used for generating random MIC values and disk diffusion diameters, for AMR data analysis practice. By providing a microorganism and antimicrobial agent, the generated results will reflect reality as much as possible.
Usage
random_mic(size = NULL, mo = NULL, ab = NULL, ...)
random_disk(size = NULL, mo = NULL, ab = NULL, ...)
random_rsi(size = NULL, prob_RSI = c(0.33, 0.33, 0.33), ...)
Arguments
size
desired size of the returned vector. If used in a data.frame call or dplyr verb, will get the current (group) size if left blank.
mo
any character that can be coerced to a valid microorganism code with as.mo()
ab
any character that can be coerced to a valid antimicrobial agent code with as.ab()
...
ignored, only in place to allow future extensions
prob_RSI
a vector of length 3: the probabilities for "R" (1st value), "S" (2nd value) and "I" (3rd value)
Value
class <mic> for random_mic() (see as.mic()) and class <disk> for random_disk() (see as.disk())
Details
The base R function sample() is used for generating values.
Generated values are based on the EUCAST 2022 guideline as implemented in the rsi_translation data set. To create specific generated values per bug or drug, set the mo and/or ab argument.
Examples
random_mic(25)
#> Class <mic>
#> [1] 2 64 16 >=256 64 128 <=0.001 0.5 8
#> [10] 16 2 8 8 2 >=256 32 64 0.5
#> [19] >=256 16 0.5 0.0625 64 0.01 0.125
random_disk(25)
#> Class <disk>
#> [1] 41 20 7 37 10 50 20 23 38 32 44 6 34 26 19 46 43 11 32 50 18 12 32 19 37
random_rsi(25)
#> Class <rsi>
#> [1] S R R R I S I S I R I I S S R R R I R R R I S R R
# \donttest{
# make the random generation more realistic by setting a bug and/or drug:
random_mic(25, "Klebsiella pneumoniae") # range 0.0625-64
#> Class <mic>
#> [1] 8 32 8 0.005 4 8 0.025 >=64 0.5 0.001 0.025 0.001
#> [13] 0.125 >=64 0.125 0.002 0.01 16 >=64 8 >=64 0.5 0.125 32
#> [25] 4
random_mic(25, "Klebsiella pneumoniae", "meropenem") # range 0.0625-16
#> Class <mic>
#> [1] 16 4 8 4 16 8 16 <=1 4 2 4 4 4 2 8 4 <=1 16 16
#> [20] 2 <=1 <=1 4 16 4
random_mic(25, "Streptococcus pneumoniae", "meropenem") # range 0.0625-4
#> Class <mic>
#> [1] 0.5 0.125 0.25 0.125 0.125 0.125 4 2 0.0625 0.125
#> [11] 0.0625 2 0.5 1 1 2 2 4 1 0.125
#> [21] 2 2 1 0.25 0.5
random_disk(25, "Klebsiella pneumoniae") # range 8-50
#> Class <disk>
#> [1] 26 40 17 35 19 24 48 16 44 33 47 16 49 50 37 40 41 22 29 46 47 38 22 49 30
random_disk(25, "Klebsiella pneumoniae", "ampicillin") # range 11-17
#> Class <disk>
#> [1] 16 13 14 15 14 16 17 16 14 11 16 17 12 14 14 12 11 17 13 12 12 11 14 14 15
random_disk(25, "Streptococcus pneumoniae", "ampicillin") # range 12-27
#> Class <disk>
#> [1] 23 23 21 23 20 23 18 20 18 27 25 17 16 26 22 20 17 20 19 23 20 24 24 18 25
# }
https://icsehelp.com/periodic-table-class-10-goyal-brothers-icse-chemistry-solutions/
Periodic Table Class-10 Goyal Brothers ICSE Chemistry Solutions Chapter 1. Step-by-step solutions of Exercise-1, Exercise-2 and the Objective Type Questions. Solved questions of Goyal Brothers Prakashan Chemistry Chapter-1, Periodic Table Properties, for ICSE Class 10. Visit the official CISCE website for detailed information about ICSE Board Class-10 Chemistry.
## Periodic Table Class-10 Goyal Brothers ICSE Chemistry Solutions
## Exercise-1
Question-1
(a) Who prepared the modern periodic table?
(b) Define: (i) Modern Periodic Law (ii) Modern Periodic Table.
(a) H.G.J. Moseley prepared the modern periodic table.
(b)
(i) Modern Periodic Law: The modern periodic law states: "The physical and chemical properties of the elements are periodic functions of their atomic numbers."
(ii) Modern Periodic Table: The modern periodic table organizes all the known elements, arranged in order of increasing atomic number. Each element is represented by its chemical symbol, and the number above each symbol is its atomic number.
Question-2
With reference to the long form of the periodic table, fill in the blank spaces:
(a) The chemical properties of elements are a periodic function of their atomic number.
(b) The serial number of an element in the periodic table is also its atomic number.
(c) The number of electrons in the valence shell of an atom represents its valency.
(d) The number of electron shells around the nucleus of an atom represents its period in the periodic table.
(e) Alkali metals and alkaline earth metals are placed in the groups on the left-hand side of the periodic table.
(f) Inert elements are placed on the right-hand side of the periodic table.
(g) Elements occupying the left and right wing vertical columns are called normal elements.
(h) Noble gases are placed in group 18, the last group of the periodic table.
(i) Transition elements are accommodated in the middle of the periodic table.
(j) The first period has 2 elements and is called the very short period.
(k) The second and third periods have 8 elements each and are called short periods.
(l) The fourth and fifth periods have 18 elements each and are called long periods.
Question-3
Give the name and symbol of the elements which occupy the following positions:
(a) period 2, group 13
(b) period 3, group 17
(a) Boron (B)
(b) Chlorine (Cl)
Question-4
Name two alkali metals. To which group do they belong?
Lithium and sodium are alkali metals; they belong to group 1.
Question-5
Name two alkaline earth metals. To which group do they belong?
Beryllium (Be) and magnesium (Mg) are alkaline earth metals; they belong to group 2.
Question-6
Name two elements of group 17. State the common name of this group of elements.
Two elements of group 17 are chlorine and bromine; the common name of this group is the halogens.
Question-7
Show that silicon (at. no. 14) and phosphorus (at. no. 15) belong to the same period of the periodic table.
Silicon (at. no. 14): 2,8,4. Phosphorus (at. no. 15): 2,8,5. Both atoms have three occupied electron shells, so both occur in the third period.
### Exercise-2
Question-1
Answer the following questions with respect to the long form of the periodic table:
(i) What do you understand by the term "period"?
(ii) What is the total number of periods in this table?
(iii) Which period is the shortest? Name the elements in this period.
(iv) How many periods are called short periods?
(v) How many periods are called long periods? How many elements do they contain?
(vi) How many periods are called very long periods?
(i) A period is a horizontal row in the periodic table.
(ii) The total number of periods in this table is seven.
(iii) The first period is the shortest; it contains hydrogen and helium.
(iv) The second and third periods are called short periods.
(v) The 4th and 5th periods are called long periods, with 18 elements in each.
(vi) The 6th and 7th periods are called very long periods.
Question-2
(a) What do you understand by the term "group" in the periodic table?
(b) How many groups are in the long form of the periodic table?
(a) A group, in chemistry, is a set of chemical elements in the same vertical column of the periodic table. The elements in a group have similarities in the electronic configuration of their atoms, and thus they exhibit somewhat related physical and chemical properties.
(b) There are 18 groups.
Question-3
Metallic property changes to non-metallic property from left to right in a period. Explain.
Moving from left to right in a period, the effective nuclear charge increases, so the tendency of an element to lose electrons decreases. This results in a decrease in metallic character. Furthermore, the tendency of an element to gain electrons increases with the increase in effective nuclear charge, so non-metallic character increases on moving from left to right in a period.
Question-4
(a) The bigger the atomic size, the more metallic the element. Explain.
(b) Name:
(i) the most metallic element
(ii) the least metallic element
(a) Metallic character increases as you move down a group because the atomic size increases. Electron shielding causes the atomic radius to increase, so the outer electrons ionize more readily than electrons in smaller atoms.
(b)
(i) Most metallic: caesium
(ii) Least metallic: boron
Question-5
Write the names of the elements in ascending order of their atomic size:
(a) group 1
(b) group 17
(a) group 1: H, Li, Na, K, Rb, Cs, Fr
(b) group 17: F, Cl, Br, I, At, Ts
Question-6
(a) What do you mean by the term "electronegativity"?
(b) Write the names of all the elements in the third period of the long form of the periodic table.
(a) The tendency of an atom in a molecule to attract the shared pair of electrons towards itself is known as electronegativity.
(b) Na, Mg, Al, Si, P, S, Cl, Ar
Question-7
Among Be, B, C, N, O, F, Ne, Na, Mg, Al, Si, P, S, Cl, Ar, K, Ca, pick the elements which are:
(i) most electropositive (ii) most electronegative (iii) noble gases
(i) most electropositive: K
(ii) most electronegative: F
(iii) noble gases: Ne and Ar
Question-8
How is the atomic size of sodium related to (i) magnesium (ii) potassium?
Sodium is larger than magnesium (atomic size decreases across period 3) and smaller than potassium (atomic size increases down group 1).
Question-9
Arrange the following sets of elements in increasing order of their atomic size:
(a) K, Li, Na (b) O, C, N
(a) Li, Na, K
(b) O, N, C
Question-10
Pick the element with the smallest atomic size, with reason: Na, Cl, Si, Ar.
Cl. Atomic size decreases from left to right in a period, the inert gas being the exception.
Question-11
Explain why halogens have a very strong electron affinity.
Question-12
Explain why the reducing power of elements increases on moving down a group.
Question-13
Explain why the reducing power of elements decreases on moving from left to right in a period.
Question-14
Explain why elements lying on the extreme right of the periodic table are strong non-metals, while those lying on the extreme left are strong metals.
Question-15
Explain why the metallic character of an element increases while moving down a group.
Question-16
Explain why the non-metallic character of the elements increases on moving from left to right in the periodic table.
Question-17
(a) What do you understand by the term "electron affinity"?
(b) Does electron affinity represent energy absorbed or energy released?
(c) Name the element having the highest electron affinity.
(d) Arrange Cl, F, I, Br in increasing order of electron affinity.
(a) Electron affinity is defined as the change in energy (in kJ/mole) of a neutral atom (in the gaseous phase) when an electron is added to the atom to form a negative ion; in other words, the neutral atom's likelihood of gaining an electron.
(b) Electron affinity represents energy released.
(c) Chlorine (Cl).
(d) I, Br, F, Cl (chlorine has a higher electron affinity than fluorine because of fluorine's small, compact size).
### Objective Type Questions
#### I. Multiple Choice Questions
1. (c) Remain same
2. (c) -1
3. (a) metals
4. (d) Francium
5. (c) Fluorine
6. (a) Seven
7. (b) third period, seventeenth group
8. (c) Phosphorus
9. (d) Chlorine
10. (a) Same number of electron shells
11. (c) electronegative character increases
12. (d) All of these
13. (b) increase
14. (a) remain same
15. (a) increase
16. (b) decrease
17. (a) increase
18. (d) both (a) and (b)
19. (b) increase
20. (a) decrease
21. (a) increase
22. (c) electron affinity
23. (c) decrease
24. (a) increase
25. (c) decrease
26. (d) F,H,E and G
#### II. Fill in the blank spaces with the choices given in brackets
1. atomic number
2. middle
3. right
4. end
5. normal
6. group
7. horizontal
8. seven
9. shell number
10. alkali metal
11. halogen
12. third
13. 13
14. first period
15. same
16. decrease
17. more
18. decrease
19. increase
20. increase
21. decrease
22. increase
23. decrease
24. least
25. decrease
#### III. Choose from the following list what matches each description given below
1. modern periodic law
2. period
3. group
4. normal
5. rare gases
6. alkali metal
7. halogen
8. transition
9. lanthanide
10. actinide
11. ionization potential
12. electronegativity
13. electron affinity
–: End of Goyal Brothers Periodic Table Properties ICSE Class-10 :–
http://www.ams.org/mathscinet-getitem?mr=1838507
MathSciNet bibliographic data MR1838507 (2002m:46087) 46L05. Kusuda, Masaharu. Three-space problems in discrete spectra of $C^*$-algebras and dual $C^*$-algebras. Proc. Roy. Soc. Edinburgh Sect. A 131 (2001), no. 3, 701–707.
https://matheducators.stackexchange.com/questions/1260/puzzles-for-logic-courses-featuring-propositional-logic-and-set-theory/2182
# Puzzles for Logic Courses featuring propositional logic and set theory?
Puzzles are an interesting form of exercise. They help students learn the material in a fun way. In logic particularly, puzzles can be very useful for showing the complexity of the subject.
Question: What are good examples of logic puzzles for undergraduate logic courses, focused on propositional logic and set theory?
• I feel that this is too broad. It is asking for any example at all of logical puzzles, which is a very large field. It's like asking for a good example of optimization. Perhaps you should choose one element from each of your parenthesis (e.g. propositional logic in set theory at the high-school level) to make it more focused. – Brian Rushton Apr 3 '14 at 17:02
• @BrianRushton I edited the post. – user230 Apr 3 '14 at 17:10
• Check out Jason Rosenhouse's "problem of the week page". They are all logic puzzles: educ.jmu.edu/~rosenhjd/POTW/Spring14/homepage.html – Brendan W. Sullivan Apr 3 '14 at 18:57
• @brendansullivan07 Thanks for the link. – user230 Apr 3 '14 at 22:52
• Disclaimer: I know it only from reviews, but the Ergo card game might be of use. See e.g. boardgamegeek.com/boardgame/55279/ergo . – mbork May 6 '14 at 10:05
For propositional logic (especially truth tables and implication) I suggest Raymond Smullyan's wonderful book, What is the name of this book?.

From memory: one example has a prosecutor in a court say to the defendant: 'If you committed the crime, then you must have had an accomplice'. The defendant hotly denies this. But $A \implies B$ is false only if $A$ is true and $B$ is false, so his denial is also an admission of guilt.
Pirates (disjunctive/conjunctive normal form). Another puzzle I like has five pirates, who want to padlock their treasure chest so that it can be opened if and only if (at least) four of them are present. How many padlocks are needed?
Writing $p_i$ for the proposition 'Pirate $i$ is present', the condition is
$$(p_1 \wedge p_2 \wedge p_3 \wedge p_4) \vee (p_1 \wedge p_2 \wedge p_3 \wedge p_5) \vee \cdots \vee (p_2 \wedge p_3 \wedge p_4 \wedge p_5).$$
This holds if and only if no two pirates are both absent, so a logically equivalent condition is
$$(p_1 \vee p_2) \wedge (p_1 \vee p_3) \wedge \cdots \wedge (p_4 \vee p_5)$$
in conjunctive normal form. The corresponding (minimal) solution puts 10 padlocks on the chest, labelled by the $2$-subsets of $\{1,2,\ldots, 5\}$, and gives the keys to the padlock labelled $\{r,s\}$ to pirates $r$ and $s$.
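For the skeptical, a brute-force check (a quick sketch of my own, not part of the original puzzle) that this 10-padlock scheme opens the chest exactly when at least four pirates are present:

```python
from itertools import combinations

pirates = range(1, 6)
padlocks = list(combinations(pirates, 2))   # one padlock per 2-subset

def opens(present):
    # padlock {r, s} can be opened iff pirate r or pirate s is present
    return all(r in present or s in present for r, s in padlocks)

for k in range(6):
    for P in combinations(pirates, k):
        assert opens(set(P)) == (k >= 4)
```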
• (+1) Nice answer. Thanks. – user230 Apr 6 '14 at 11:16
One puzzle I rather like comes from a website (I forget where) that gives a problem about four cards. Each card contains a letter on one side and a number on the other. Also, each card ostensibly has the property that **if one side contains a vowel, then the other side contains an even number**. Then, you're given four cards, with the following faces showing:
• 7
• 2
• E
• S
and asked which cards you would need to turn over to be sure that each card satisfies the bolded property. It turns out that you need to flip 7 and E (because those are the only ones that could possibly have a vowel on one side and an odd number on the other side), but most people who don't know this beforehand would get something else.
https://gamedev.stackexchange.com/questions/115180/can-you-use-a-standard-multi-thread-if-only-reading-unity-objects
# Can you use a standard multi-thread if only reading unity objects?
First of all I know about coroutines and how to use them (they're awesome).
A friend of mine was telling me about the way the he implemented his saving system in a game he was working on, after asking him what he was using I found out that he is using system.threading and not coroutines to save data.
I know that Unity isn't thread safe, but apparently he has had no issues with saving, which leads me to wonder: is it OK to use normal threads as long as you're only reading data and no pre-existing data in the game world is modified? Would this affect the game at all (other than some moving objects being slightly ahead of where they should be, because of the time it took to get to that object and save it)?
• Look at this answers.unity3d.com/questions/908054/… – Savlon Jan 19 '16 at 12:14
• I read that before asking this question and it only really talks about generating new terrain (manipulating data) what i want to know is if i can get away with just reading unity objects as long as i don't change any values from the other thread... – user3797758 Jan 19 '16 at 12:21
• What data types are you intending on using in these other threads? – Savlon Jan 19 '16 at 12:26
• Possible duplicate of How to not freeze the main thread on unity? – DMGregory Jan 19 '16 at 18:07
Saving in another thread while the game keeps running is dangerous. When the gamestate changes while the savegame file gets written, you will write a mixed gamestate consisting partly of the old state and partly of the new state. This can cause all kinds of weird and impossible to reproduce bugs when that savegame is loaded.
For example, suppose you have a script which destroys one game object and instantiates another in its place. Now consider what happens if the saving takes place exactly between these two operations. There are two possible outcomes: either both will exist when the savegame is loaded or neither will. Either case might break the game in a way that the player cannot continue. You have created the worst nightmare of every gamer: a broken savegame.
The deviousness of such problems is that they will appear completely random. Your friend's savegame system might work perfectly fine in 1000 tests, but at the 1001st time it might run into such a race-condition and screw up. And then never again until after release when the user-reviews of your game are suddenly full of people whining about their corrupted savegames and your testers fall into despair trying to reproduce the problem.
If you want to save in a separate thread, you need to write the complete gamestate to an in-memory data structure first and then start the thread to save that structure while your game keeps running with the original gamestate.
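As a minimal, engine-agnostic sketch of that pattern (in Python rather than Unity's C#, purely for illustration; the state layout is made up):

```python
import copy, json, threading

game_state = {"player": {"hp": 73, "pos": [12.0, 4.5]}, "inventory": ["sword"]}

def save_game(path):
    # 1. Snapshot on the main thread; this is the only step that must block.
    snapshot = copy.deepcopy(game_state)
    # 2. Serialize the snapshot (never the live state) on a worker thread.
    def write():
        with open(path, "w") as fh:
            json.dump(snapshot, fh)
    threading.Thread(target=write).start()

save_game("slot1.json")
```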
• I understand that this would happen (in my friend's case the new thread is created when the game is basically idle), but what I want to know is whether the game itself would crash as a result of running a read-only threaded operation (such as saving). Would it crash if the game is changing a variable and the other thread wants to read that variable at the same time? – user3797758 Jan 19 '16 at 15:51
• @user3797758 A crash of the main thread is unlikely, but the saving thread could crash under some conditions. Still, it's a very bad idea to have the saving-thread access the actively changed data, as I already explained. – Philipp Jan 19 '16 at 17:21
• @Phillipp Under what conditions? – user3797758 Jan 19 '16 at 18:01
• @user3797758 For example, when a complex data-structure is changed while the thread is reading it, it might get a null reference where there used to be a proper reference before leading to a null reference exception. For example: if (foo.ref != null) { foo.ref.getBar() }. foo.ref might have a value during the check but then gets set to null by the main thread before foo.ref.getBar() gets called. – Philipp Jan 19 '16 at 18:04
• @user3797758 And before you ask: yes, there are solutions for this, but they are easy to forget and also come with their own set of problems (like deadlocks). – Philipp Jan 19 '16 at 18:09
https://codegolf.stackexchange.com/questions/22252/finding-local-extremes/22358
# Finding Local Extremes
Write a function or program that takes in a list and produces a list of the local extremes.
In a list [x_0, x_1, x_2...] a local extreme is an x_i such that either x_(i-1) < x_i and x_(i+1) < x_i (a local maximum) or x_(i-1) > x_i and x_(i+1) > x_i (a local minimum). Notice that the first and last elements of the list can never be local extremes.
So for some examples
local_extremes([1, 2, 1]) = [2]
local_extremes([0, 1, 0, 1, 0]) = [1, 0, 1]
local_extremes([]) = []
This is code golf so the shortest code wins!
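For reference, a straightforward ungolfed Python solution (added for clarity, not a competing entry):

```python
def local_extremes(xs):
    # x_i is an extreme iff both neighbours lie on the same side of it
    return [y for x, y, z in zip(xs, xs[1:], xs[2:]) if (x - y) * (z - y) > 0]

assert local_extremes([1, 2, 1]) == [2]
assert local_extremes([0, 1, 0, 1, 0]) == [1, 0, 1]
assert local_extremes([]) == []
```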
• To make sure I understand correctly: Numbers greater than the numbers on either side? – undergroundmonorail Feb 26 '14 at 21:57
• @undergroundmonorail Greater or less than. So it either has to be a local minimum, where it's neighbors are both greater, or a maximum where they're both smaller – jozefg Feb 26 '14 at 21:57
• Oh, I see. I misread it – undergroundmonorail Feb 26 '14 at 21:58
• and what about sequence 1 2 2 1 shouldn't those 2 be considered as extremes too? - I know, this would make the solution much more difficult... – V-X Feb 27 '14 at 13:08
# Mathematica 66 58 51
## Current Solution
Shortened thanks to a contribution by Calle.
Cases[Partition[#,3,1],{a_,b_,c_}/;(a-b) (b-c)<0⧴b]&
Partition[#,3,1] finds the triples.
(a-b) (b-c)<0 is true if and only if b is below both a and c, or above both, i.e. exactly when b is a local extreme.
Examples
Cases[Partition[#, 3, 1], {a_, b_, c_} /; (a - b) (b - c) < 0 :> b] &[{1, 2, 1}]
Cases[Partition[#, 3, 1], {a_, b_, c_} /; (a - b) (b - c) < 0 :> b] &[{0, 1, 0, 1, 0}]
Cases[Partition[#, 3, 1], {a_, b_, c_} /; (a - b) (b - c) < 0 :> b] &[{}]
Cases[Partition[#, 3, 1], {a_, b_, c_} /; (a - b) (b - c) < 0 :> b] &[{9, 10, 7, 6, 9, 0, 3, 3, 1, 10}]
{2}
{1, 0, 1}
{}
{10, 6, 9, 0, 1}
## Earlier Solution
This examines all triples (generated by Partition) and determines whether the middle element is less than both its neighbours or greater than both.
Cases[Partition[#,3,1],{a_,b_,c_}/;(b<a∧b<c)∨(b>a∧b>c)⧴b]& ;
## First Solution
This finds the triples and looks at the signs of the differences. A local extreme will return either {-1,1} or {1,-1}.
Cases[Partition[#,3,1],x_/;Sort@Sign@Differences@x=={-1,1}⧴x[[2]]]&
Example
Cases[Partition[#,3,1],x_/;Sort@Sign@Differences@x=={-1,1}:>x[[2]]]&[{9, 10, 7, 6, 9, 0, 3, 3, 1, 10}]
{10, 6, 9, 0, 1}
Analysis:
Partition[{9, 10, 7, 6, 9, 0, 3, 3, 1, 10}, 3, 1]
{{9, 10, 7}, {10, 7, 6}, {7, 6, 9}, {6, 9, 0}, {9, 0, 3}, {0, 3, 3}, {3, 3, 1}, {3, 1, 10}}
% refers to the result from the respective preceding line.
Differences/@ %
{{1, -3}, {-3, -1}, {-1, 3}, {3, -9}, {-9, 3}, {3, 0}, {0, -2}, {-2, 9}}
Sort@Sign@Differences@x=={-1,1} identifies the triples from {{9, 10, 7}, {10, 7, 6}, {7, 6, 9}, {6, 9, 0}, {9, 0, 3}, {0, 3, 3}, {3, 3, 1}, {3, 1, 10}} such that the sign (-, 0, +) of the differences consists of a -1 and a 1. In the present case those are:
{{9, 10, 7}, {7, 6, 9}, {6, 9, 0}, {9, 0, 3}, {3, 1, 10}}
For each of these cases, x, x[[2]] refers to the second term. Those will be all of the local maxima and minima.
{10, 6, 9, 0, 1}
• Your Mathematica style is much more concise than mine. When do we start calling it "Wolfram Language"? – Michael Stern Feb 27 '14 at 1:20
• I see this !Mathematica graphics – Dr. belisarius Feb 27 '14 at 1:24
• Michael Stern, I suspect Wolfram Language will only become official at version 10, some form of which already available on Raspberry Pi. – DavidC Feb 27 '14 at 1:29
• BTW, someone inserted a line of code that converts the Math ML to graphics. I'm not sure why. – DavidC Feb 27 '14 at 1:38
• I'm not sure why he did it. I can't see any differences in the "modified" code – Dr. belisarius Feb 27 '14 at 2:09
# J - 19 char
Couldn't help it ;)
(}:#~0,0>2*/\2-/\])
Explanation follows:
• 2-/\] - Over each pair of elements in the argument (each 2-item long infix), take the difference.
• 2*/\ - Now over each pair of the new list, take the product.
• 0> - Test whether each result is less than 0. This only happens if the multiplicands had alternating signs, i.e. it doesn't happen if they had the same sign or either was zero.
• 0, - Declare that the first element isn't an extreme element.
• }: - Cut off the last element, because that can't possibly be an extreme either.
• #~ - Use the true values on the right side to pick items from the list on the left side.
Usage:
(}:#~0,0>2*/\2-/\]) 1 2 1
2
(}:#~0,0>2*/\2-/\]) 0 1 0 1 0
1 0 1
(}:#~0,0>2*/\2-/\]) i.0 NB. i.0 is the empty list (empty result also)
(}:#~0,0>2*/\2-/\]) 3 4 4 4 2 5
2
• Umm, this may not work if the input is, say, 3, 4, 4, 4, 4, 5, i.e. you may get a zero in the "0=" step if 0 is added to 0. – Lord Soth Feb 27 '14 at 1:19
• Also, I do not know about this language, but, instead of taking the signum in the first step, you can leave the difference as it is. Then, in the second step, multiply the elements instead, and in the third you may check if the product is negative (this also avoids that 0 problem). Perhaps this may result in a shorter code. – Lord Soth Feb 27 '14 at 1:21
• Good catch, and yes, this saves two characters. Updating. – algorithmshark Feb 27 '14 at 1:50
# Javascript - 62 45 Characters
f=a=>a.filter((x,i)=>i&&i<a.length-1&&(a[i-1]-x)*(a[i+1]-x)>0)
Edit
f=a=>a.filter((x,i)=>(a[i-1]-x)*(a[i+1]-x)>0)
# Ruby, 83 70 60 55 49 characters
f=->a{a.each_cons(3){|x,y,z|p y if(x-y)*(z-y)>0}}
Prints all local extremes to STDOUT.
Uses the <=> "spaceship" operator, which I really like. (It returns 1 if the first thing is greater than the second, -1 if it's less, and 0 if equal. Therefore, if they add to -2 or 2, that means the middle is an extreme.)
Not anymore, as @daniero pointed out that the "obvious" way is actually shorter!
Changed yet again! Now it uses the awesome algorithm found in MT0's answer (+1 to him!).
Also, I like each_cons which selects each n groups of consecutive elements in an array. And trailing if is interesting too.
Overall, I just like how elegant it looks.
Some sample runs:
irb(main):044:0> f[[1,2,1]]
2
=> nil
irb(main):045:0> f[[1,0,1,0,1]]
0
1
0
=> nil
irb(main):046:0> f[[]]
=> nil
irb(main):047:0> f[[1,2,3,4,5,4,3,2,1]]
5
=> nil
irb(main):048:0> f[[1,1,1,1,1]]
=> nil
irb(main):049:0> f[[10,0,999,-45,3,4]]
0
999
-45
=> nil
• It's shorter to unpack x into 3 variables: f=->a{a.each_cons(3){|x,y,z|p y if((x<=>y)+(z<=>y)).abs==2}} – daniero Feb 27 '14 at 1:58
• @daniero Thanks; I didn't even know you could do that! Edited – Doorknob Feb 27 '14 at 2:03
• really? :D Btw, now that each term is 3 character shorter, it's overall cheaper to do x>y&&y<z||x<y&&y>z (even tho the spaceship operator is very pretty) ;) – daniero Feb 27 '14 at 2:20
• also... !((x..z)===y) is even shorter though not as clever – Not that Charles Feb 27 '14 at 2:25
• @Charles That fails when x < z. – Doorknob Feb 27 '14 at 2:31
# C++ - 208 chars
Longest solution again:
#include<iostream>
#include<deque>
using namespace std;
int main(){deque<int>v;int i;while(cin){cin>>i;v.push_back(i);}for(i=0;i<v.size()-2;)if(v[++i]>v[i-1]&v[i]>v[i+1]|v[i]<v[i-1]&v[i]<v[i+1])cout<<v[i]<<' ';}
To use, enter your integers, then any character that will crash the input stream - any non-number characters should work.
Input: 0 1 0 x
Output: 1
• You can use a deque instead of a vector to gain 2 characters. – Morwenn Feb 27 '14 at 10:01
• Also, instead of using i and j, you can declare int i; right after the collection and use in is the two loops instead of declaring two variables. – Morwenn Feb 27 '14 at 10:02
• Finally, you can probably get get rid of the increment i++ in your for loop and begin your condition by if(v[++i]>[i-1]... in order to gain one character again. – Morwenn Feb 27 '14 at 10:04
# Matlab - 45 bytes
x=input('');y=diff(x);x(find([0 y].*[y 0]<0))
# Python 2.7 - 75 bytes

e=lambda l:[l[i]for i in range(1,len(l)-1)if(l[i]-l[i-1])*(l[i]-l[i+1])>0]
Not too impressive (look at every element of the list except the first and last, and see whether it's larger or smaller than both of its neighbours). I'm mostly only posting it because not everyone knows you can do x<y>z and have it work. I think that's kind of neat.
Yes, x<y>z is a cool feature of python, but it's not actually optimal in this case. Thanks to V-X for the multiplication trick, that didn't occur to me at all. Wrzlprmft reminded me that declaring an anonymous function is less keystrokes than def x(y):.
• if(l[i]-l[i-1])*(l[i]-l[i+1])>0 would reduce the code by 11 characters... – V-X Feb 27 '14 at 13:25
• @wrz Ah, you're right. I was thrown off by the fact that def e(l):\n is the same number of characters as e=lambda l:, but I forgot that you don't need to use the return keyword. Thanks! – undergroundmonorail Feb 27 '14 at 13:53
• @v-x Oh, I like that a lot. Thank you :) edit: On reflection the >0 has to stay: the product is negative (and therefore truthy) on strictly monotonic runs, so letting Python interpret it as a bool would report false extremes. – undergroundmonorail Feb 27 '14 at 13:53
• @wrz I can't figure out how to edit a comment that's already been edited (the pencil icon seems to replace the edit button. is this by design?). I just wanted to add that, if I was smart, I'd have realized that my one-line function didn't need the \n in the declaration at all! That would have saved two characters, but the inclusion of return still makes it not worth it. – undergroundmonorail Feb 27 '14 at 14:04
# Haskell
f a=[x|(p,x,n)<-zip3 a(tail a)(drop 2 a),x>p&&x>n]
• this only check local maximum, for minumum need to add || x<min p n – karakfa Feb 27 '14 at 19:46
• x>p&&x>n has one less character than x>max p n :-) – yatima2975 Feb 27 '14 at 20:25
• space after , is not necessary either. – karakfa Feb 27 '14 at 21:51
• change x>p&&x>n to (x>p)==(x>n) for local minimums too, adds 4 more chars. – karakfa Jul 9 '16 at 13:24
# Jelly, 8 bytes
IṠIỊ¬T‘ị
Try it online!
## Explanation
IṠIỊ¬T‘ị
I Differences between adjacent elements {of the input}
Ṡ Take the sign of each difference
I Differences between adjacent difference signs
Ị Mark the elements that are in the range -1..1 inclusive
¬ not
T Take the indexes of the marked elements
‘ with an offset of 1
ị Index back into the original list
An element is only a local extreme if its difference with its left neighbour has an opposite sign to its difference with its right neighbour, i.e. the signs of the differences differ by 2 or -2. Jelly has a number of useful primitives for dealing with "find elements with certain properties" (in particular, we can find elements with certain properties in one list and use that to extract elements from a different list), meaning that we can translate back to the original list more or less directly (we just need to offset by 1 because the first and last elements of the original list got lost in the difference-taking).
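For readers who don't speak Jelly, here is a rough Python rendering of the same sign-of-differences idea (the function name is just for illustration):

def local_extrema(xs):
    sign = lambda d: (d > 0) - (d < 0)  # -1, 0 or 1
    signs = [sign(b - a) for a, b in zip(xs, xs[1:])]
    # a sign change of magnitude 2 at position i marks xs[i+1] as an extreme
    return [xs[i + 1] for i, (s, t) in enumerate(zip(signs, signs[1:])) if abs(t - s) == 2]

print(local_extrema([10, 0, 999, -45, 3, 4]))  # [0, 999, -45]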
# Python with Numpy – 81 74 67 bytes (61 54 without the import line)
import numpy
e=lambda a:a[1:-1][(a[2:]-a[1:-1])*(a[1:-1]-a[:-2])<0]
The input needs to be a Numpy array.
# C, 87
x,y,z,n;main(){while(scanf("%d",&x)){if(n++>1&&(y-z)*(y-x)>0)printf("%d ",y);z=y;y=x;}}
The n counter skips the first two reads, so the zero-initialised history can no longer produce a false extreme at the first element.
# awk - 32 chars
{c=b;b=a;a=$0;$0=b}(b-c)*(a-b)<0
No hope of beating a language like J or APL on brevity, but I thought I'd throw my hat into the ring anyway. Explanation:
• At any given time, a, b, and c hold x_i, x_(i-1), and x_(i-2)
• b-c and a-b approximate the derivative before and after x_(i-1)
• If their product is negative, then one is negative and the other is positive, therefore x_(i-1) is a local extreme, so print it (a rough Python equivalent follows this list)
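As promised, a rough Python equivalent of the same streaming window, reading one integer per line from stdin (a sketch, not a golfed submission):

import sys

a = b = c = None
for line in sys.stdin:
    c, b, a = b, a, int(line)  # slide the three-value window
    if c is not None and (b - c) * (a - b) < 0:
        print(b)  # b is x_(i-1), a local extreme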
# Brachylog, 17 bytes
s₃{b≠h.&k≠&{⌉|⌋}}
Try it online!
Takes input through the input variable and generates the output through the output variable.
s₃{ } For a length-3 substring of the input:
{b its last two elements
≠ are distinct,
h and the first of those elements is
. the output variable;
&k its first two elements
≠ are also distinct;
&{⌉| } either its largest element
&{ |⌋} or its smallest element
} is also the output variable.
If runs of values could be guaranteed to be absent, s₃{{⌉|⌋}.&bh} would save four bytes.
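The same reading of the window, sketched in Python for comparison (the helper name is made up):

def extrema(xs):
    # the middle of each length-3 window is an extreme iff it differs from
    # both neighbours and is the max or the min of the window
    return [y for x, y, z in zip(xs, xs[1:], xs[2:])
            if x != y != z and y in (max(x, y, z), min(x, y, z))]

print(extrema([1, 0, 1, 0, 1]))  # [0, 1, 0]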
# Perl 5 -p, 49 bytes
s/\d+ (?=(\d+) (\d+))/say$1if($1-$&)*($1-$2)>0/ge

Try it online!

# Wolfram Language (Mathematica), 43 42 bytes

#~Pick~ArrayFilter[#[[2]]!=Median@#&,#,1]&

Try it online!

I guess Nothing is too long...

#~Pick~                              &   (* select elements of the input where,    *)
       ArrayFilter[             ,#,1]    (* when considering the block of length 1 *)
                                         (* on either side of that element,        *)
                   #[[2]]!=Median@#&     (* its median is not that element         *)

# 05AB1E, 11 10 bytes

¥.±¥Ä2Q0šÏ

Explanation:

¥      # Get the forward differences (deltas) of the (implicit) input-list
       #  i.e. [9,10,7,6,9,0,3,3,1,10] → [1,-3,-1,3,-9,3,0,-2,9]
 .±    # Get the signum of each delta (-1 if neg.; 0 if 0; 1 if pos.)
       #  → [1,-1,-1,1,-1,1,0,-1,1]
   ¥   # Get the forward differences of that list again
       #  → [-2,0,2,-2,2,-1,-1,2]
Ä      # Convert each integer to its absolute value
       #  → [2,0,2,2,2,1,1,2]
2Q     # And now check which ones are equal to 2 (1 if truthy; 0 if falsey)
       #  → [1,0,1,1,1,0,0,1]
0š     # Prepend a 0
       #  → [0,1,0,1,1,1,0,0,1]
Ï      # And only leave the values in the (implicit) input-list at the truthy indices
       #  → [10,6,9,0,1]
       # (after which the result is output implicitly)

## PHP, 116 114 113

function _($a){for(;$a[++$i+1];)if(($b=$a[$i])<($c=$a[$i-1])&$b<($d=$a[$i+1])or$b>$c&$b>$d)$r[]=$a[$i];return$r;}
Example usage:
print_r(_(array(2, 1, 2, 3, 4, 3, 2, 3, 4)));
Array
(
[0] => 1
[1] => 4
[2] => 2
)
# Haskell
Golfed version
e(a:b:c:r)
|a<b&&b>c||a>b&&b<c=b:s
|True=s
where s=e(b:c:r)
e _=[]
Ungolfed version
-- if it's possible to get three elements from the list, take this one
extrema (a:b:c:rest)
| a<b && b>c = b:rec
| a>b && b<c = b:rec
| otherwise = rec
where rec = extrema (b:c:rest)
-- if there are fewer than three elements in the list, there are no extrema
extrema _ = []
# Javascript: 102 characters
function h(a){for(u=i=[];++i<a.length-1;)if(x=a[i-1],y=a[i],z=a[i+1],(x-y)*(y-z)<0)u.push(y);return u}
# APL, 19 bytes
{⍵/⍨0,⍨0,0>2×/2-/⍵}
I converted the 20 char J version to APL. But I add a zero to the beginning and the end instead of removing the first and last element. Otherwise it works just like the J version.
⍵ - formal parameter omega. This is the input to the function.
• While we're at it, I have a K version, too, in 22 characters: {x@1+&0>2_*':-':0 0,x}. 6 of these characters (2_ and 0 0,) are spent protecting against a length error if the argument is shorter than two items, so if not for that problem it would be 16... The action is also a little different--we have to turn the boolean list into a list of indices with 1+& and use that to index x again--but it's shorter and also a very K-ish thing to do. – algorithmshark Feb 27 '14 at 15:45
• Your K version would beat my APL version then. My code needs at least two numbers. – user10639 Feb 28 '14 at 9:09
# Python 2, 59 bytes
f=lambda l=0,c=0,*r:r and(c,)*(l<c>r[0]or l>c<r[0])+f(c,*r)
Try it online!
This function mostly avoids the costly business of indexing by taking the elements of the list as arguments instead of the list itself. While there is more than one element left, we recursively build up the result tuple, checking whether the current element is an extremum at each step.
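For example, unpacking a list into the argument list (the l=0, c=0 defaults keep inputs of fewer than three elements safe):

print(f(*[10, 0, 999, -45, 3, 4]))  # (0, 999, -45)
print(f(*[1, 1, 1, 1, 1]))          # ()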
https://abigaillalonde.com/wayne-johnson-ndwaef/article.php?tag=8784d9-decimal-to-fraction-calculator
decimal to fraction calculator
Sometimes, repeating decimals are indicated by a line over the digits that repeat. The decimal to fraction calculator supports proper and improper fractions, mixed numbers, and both terminating and repeating decimals, and it shows the work for each conversion. If your calculator can't convert a decimal to a fraction, the steps below do the job by hand.

To convert a terminating decimal such as 0.05 to a fraction:

Step 1: Write down the number as a fraction of one: 0.05 = 0.05 / 1.

Step 2: Multiply both top and bottom by 10 for every number after the decimal point. (For example, if there are two numbers after the decimal point, multiply by 100; if there are three, by 1000.) Here 0.05 / 1 becomes 5 / 100.

Step 3: Find the greatest common divisor (GCD) of the numerator and the denominator and reduce the fraction by dividing both by it: 5 / 100 = 1 / 20.

So 0.05 = 1/20 as a fraction. Likewise 1.625 = 1625/1000; the greatest common factor of 1625 and 1000 is 125, so 1.625 = 13/8, or the mixed number 1 5/8.

A repeating decimal cannot be handled this way; use algebra instead:

1. Create an equation (1) such that x equals the repeating decimal number.
2. Count the number of repeating digits, y, and create a second equation (2) by multiplying both sides of (1) by 10^y.
3. Subtract equation (1) from equation (2), so the repeating tails cancel.
4. Solve for x and reduce the resulting fraction.

For example, for x = 2.666..., taking the repeating block as 666 (y = 3): multiplying by 10^3 = 1000 gives 1000x = 2666.666...; subtracting the first equation leaves 999x = 2664, so x = 2664/999 = 8/3 after dividing both by the GCF 333.

To convert a percent to a fraction, first divide the percentage by 100 to get a decimal, then use the same procedure as for converting a decimal to a fraction; for instance, 11% = 0.11 = 11/100. If the given percent value is greater than 100%, the result is an improper fraction or a mixed number.

Note that the calculator converts exactly what you type, so a decimal that is itself a rounded value gives a different fraction than the exact value would: entering the commonly published rounded value 0.984 returns 123/125, whereas the exact value 0.984375 reduces to 63/64. This is due to the divisional limits that the fraction's denominator places on its numerator.

For measurements, a companion calculator converts decimal feet or inches to inch-and-fraction format at a chosen precision (16ths, 32nds, 64ths or 100ths of an inch), increasing the precision/denominator when the rounded usable fraction comes out to 1.
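If you would rather script the conversion, Python's standard fractions module handles the terminating case directly, and the repeating case follows the algebra above (a sketch; the helper name is made up):

from fractions import Fraction

print(Fraction("0.05"))                            # 1/20, reduced by the GCD
print(Fraction("1.625"))                           # 13/8
print(Fraction("0.984375").limit_denominator(64))  # 63/64, nearest 64th of an inch

def repeating_to_fraction(non_rep, rep):
    # 0.<non_rep><rep><rep>...  e.g. repeating_to_fraction("1", "6") -> 1/6
    n, r = len(non_rep), len(rep)
    numerator = int(non_rep + rep) - (int(non_rep) if non_rep else 0)
    return Fraction(numerator, 10**(n + r) - 10**n)

print(repeating_to_fraction("", "3"))              # 1/3, i.e. x = 0.333...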
https://electronics.stackexchange.com/questions/464509/initial-value-for-32-bit-register-made-using-d-f-f-in-verilog
# Initial value for 32 bit register made using D F/F in verilog
I am trying to make a 32-bit register using 32 negative edge triggered D F/F. Here is the verilog code for D F/F:
module dff(q,d,clk,reset);
input d,clk,reset;
output q;
reg q;
initial
q<=1'b0;
always @(negedge clk)
begin
if(~reset)
q<=d;
else
q<=1'b0;
end
endmodule
Here is the code for 32-bit register:
module reg_32bit(q,d,clk,reset);
input [31:0] d;
input clk,reset;
output [31:0] q;
genvar j;
generate for(j=0;j<32;j=j+1)
begin: reg_loop
dff d1(q[j],d[j],clk,reset);
end
endgenerate
endmodule
Now, I expect that when I include the module reg_32bit in another module and try to read the register using q, then I should get all 0's initially. But when I try to do so, I get all x's instead.
What am I missing here?
• 1/ You have not told us what you do with the reset signal. You have a synchronous reset. Nothing wrong with that but it won't work unless you also have a clock. 2/ Your if is the wrong way around. – Oldfart Oct 26 '19 at 13:14
• @Oldfart I don't want to use reset signal. As soon as I include this module into another, I want the register to have value all 0's without resetting the register. – Vipin Baswan Oct 26 '19 at 13:40
• In that case you are at the mercy of the FPGA manufacturer. Most set the contents of registers to zero, but I never rely on that. – Oldfart Oct 26 '19 at 13:52
By the way, I think there is an error in your first if...else block. Check the order of the statements.
https://blog.acolyer.org/2018/01/23/why-is-random-testing-effective-for-partition-tolerance-bugs/
# Why is random testing effective for partition tolerance bugs?
Why is random testing effective for partition tolerance bugs? Majumdar & Niksic, POPL 18
A little randomness is a powerful thing! It can make the impossible possible (FLP), balance systems remarkably well (the power of two random choices), and of course underpin much of cryptography. Today’s paper choice examines the unreasonable effectiveness of random testing for distributed systems. Why was Kyle Kingsbury’s wonderful work with Jepsen able to uncover so many bugs? Of course, one answer is that the developers of those systems hadn’t run sufficient tests themselves, but another answer revealed by this paper is that the (at its core) random exploration of partitions and operations exercised by Jepsen is highly likely to flush out bugs if they do exist.
The success of random testing in Jepsen and similar tools presents a conundrum. On the one hand, academic wisdom asserts that random testing will be completely ineffective in finding bugs in faulty systems other than by “extremely unlikely accident”: the probability that a random execution stumbles across the “right” combination of circumstances — failures, recoveries, reads and writes — is intuitively so small that any soundness guarantees for a given coverage goal, even probabilistic, would be minuscule (many systematic testing papers start by asserting that random testing does not or should not work). On the other hand, in practice, random testing finds bugs within a small number of tests.
This paper provides a theoretical understanding / explanation for the empirical success of Jepsen. The same results apply equally to explain for example the power of random fault injection in the emerging discipline of chaos engineering.
### Defining coverage
Jepsen tests the behaviour of distributed systems under partition faults. In a typical bug found by Jepsen a small sequence of operations sets the system up in a special state, after which a network partition is introduced that separates the processes into two or more blocks that cannot communicate with each other. A further sequence of operations is then performed (often in each of the different blocks in the partition), after which the partition is healed and a final set of operations is performed. Exactly what constitutes a bug depends on the guarantees claimed by the system under this scenario.
To understand how likely it is that we’ve flushed out any bugs in a given system, we need some notion of coverage. We’re not talking about test coverage in terms of lines of code here, but the principle is similar. We want to know what kind of coverage we’re getting in the partition space. Three different coverage notions turn out to be sufficient to capture a very large class of fault tolerance bugs found by Jepsen. These are k-splitting, (k,l)-separation, and minority isolation.
• k-splitting. If we have n processes, then a k-split partitions those processes into k distinct blocks. We achieve full coverage if we have tested all possible ways of splitting the processes into k different groups. Another way of saying this is that for every group of k processes, there is a test that splits them. A single network partition is a 2-split in this terminology. For the Jepsen bugs examined in the paper, k=2 or k=3 is sufficient to reveal them. In a balanced k-split, the size of the partitions differs by at most one. (A toy sampler for random k-splits appears just after this list.)
• (k,l)-separation. Forget everything I just told you about k, because this k is completely different! Here we’re talking about the different roles that processes might play in a system, such as leader and follower. It is important for coverage that when we introduce partitions, we explore different separations by roles (e.g., making sure that follower processes are cut off from leaders). In (k,l)-separation for every pair of collections of k and l roles full separation coverage requires that the collection of k roles is separated by a partition from the collection of l roles. Again, say we have a system in which lead replicas each have two followers, then we want to make sure that every choice of one and two processes is separated by a partition — (1,2)-separation. For the Jepsen bugs studied in the paper, values of k and l up to 3 are sufficient to reveal them all.
• minority isolation. This one is easier to get your head around. We often care whether a process ends up in a minority partition or not (especially if it is / was a leader). The minority isolation coverage criteria requires that every process end up in the smaller block of a bipartition.
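To make k-splitting concrete, here is a toy Python sampler for the kind of random partition a Jepsen-style harness might draw (illustrative only; this is not Jepsen's actual code):

import random

def random_k_split(processes, k):
    # put each process in one of k blocks, retrying until no block is empty
    assert 0 < k <= len(processes)
    while True:
        blocks = [[] for _ in range(k)]
        for p in processes:
            blocks[random.randrange(k)].append(p)
        if all(blocks):
            return blocks

print(random_k_split(["n1", "n2", "n3", "n4", "n5"], 2))
# e.g. [['n1', 'n4'], ['n2', 'n3', 'n5']]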
### A combinatorial magic trick
[For each coverage case], for a fixed coverage goal (a process, a set of k processes, or a pair of k and l processes) we show we can bound the probability that a random test (a bipartition or k-partition) covers the goal (isolates, splits, or separates the processes). The bound depends on parameters k,l, but does not depend on n — the size of the system. This allows us to construct small families of tests that cover all goals with high probability by picking sufficiently many tests (we provide upper bounds) uniformly at random.
Let’s try to get an intuition for how this works. It hinges on an application of the probabilistic method, a technique from combinatorics. At a high level it all sounds a bit like one of those philosophical riddles! We’d like to show that a combinatorial object with certain properties exists. Suppose we choose an object from a suitable probability space at random. If there is a greater-than-zero probability (however slight) that the object has the desired properties then — sleight of hand… — there must be at least one object (although we don’t know which one) with that property! Make enough random selections, and you can amplify that probability until it becomes overwhelmingly likely that a randomly sampled object has the property. This holds even though (as is often the case) we don’t have any deterministic way of making such an object.
In a testing context we have a distributed system of size n, a space of coverage goals parameterised by some constants k, l assumed to be much smaller than n, and a space of tests (e.g., schedules of operations and partitions simulating failures in the system).
A given test can cover one or more coverage goals. Our objective is to find a small covering family: a small set of tests that covers all the coverage goals. Here “small” means potentially exponential in k and l, but logarithmic or linear in n. We invoke the probabilistic method to show that a small covering family exists, and that a randomly chosen small set of tests is overwhelmingly likely to be a covering family.
The reasoning goes like this:
• Suppose a randomly chosen test satisfies the goal with probability $p > 0$.
• Then the probability that such a test does not satisfy the goal is simply $1 - p$.
• If we take N such independently chosen random tests, then the probability than none of them satisfy the goal is $(1 - p)^N$.
• If we have m coverage goals, then the set of N tests above is not a covering family with a probability at most $m (1 - p)^N$.
• For a given $\epsilon > 0$, if we pick the number of tests N to be $\Omega (p^{-1}(\log m - \log \epsilon))$ then the probability that this is not a covering family can be made less than $\epsilon$ (see the sketch just after this list).
• And therefore, we have a covering family of tests with probability at least $1 - \epsilon$.
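To make the last two bullets concrete, here is a small back-of-the-envelope helper in Python (the example numbers are invented, not taken from the paper):

from math import ceil, log

def tests_needed(p, m, eps):
    # smallest N with m * (1 - p)**N <= eps: run N i.i.d. random tests and all
    # m coverage goals are covered with probability at least 1 - eps
    return ceil((log(m) - log(eps)) / -log(1 - p))

print(tests_needed(p=0.05, m=100, eps=0.05))  # 149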
Together, our results imply that a small set of randomly chosen tests can already provide full coverage in distributed systems for coverage criteria empirically correlated with bugs. While we focus on partition tolerance, our general construction provides a general technique for a rigorous treatment of random testing. Our main construction also shows the existence of small covering families for certain random testing procedures for multi-threaded and asynchronous programs, for feature interactions, and for VLSI circuits.
### Jepsen examples
The paper provides examples of bugs found by Jepsen in etcd, Kafka, and Chronos. The etcd inconsistency is reproducible with a minority isolation combined with a sequence of two operations. The Kafka bug requires observing a 2,2-separating partition followed by a 1,3-separating partition, and the Chronos bug requires k-splitting with k=2. There’s also a brief discussion of an Elasticsearch bug which requires intersecting partitions (non-disjoint blocks with a bridge node in common between them). The claim is that the coverage notions can still apply here since an intersecting partition $\{ X , Y\}$ with $Z = X \cap Y$ non-empty, can be modelled by a partition $\{ X \setminus Z, Z, Y \setminus Z\}$. They don’t seem to be the same thing at all to me (in the former for example, nodes in X can talk to Z, but in the latter they can’t), but the description is brief and it’s possible I’m missing something.
### Lemmas, Theorems, Corollaries, oh my!
I’m already at target length for a Morning Paper post, and we’ve only really covered the introductory material – but we have looked at all the big ideas. For the mathematically inclined, the meat of the paper is in sections 3 and 4 which develop a series of lemmas, theorems, and corollaries that develop and extend the informal results above.
Using the results contained here, the authors are able to show that uncovering the minority isolation bug in etcd with a 5 node system has an 80% probability with approximately 302 tests. Exposing the Kafka bug (which requires two consecutive partitions) has probability 80% with 50 alternate partition pairs, and exposing the Chronos bug with 80% probability requires a sequence of 61 amalgamated operations.
Using the theory developed in this section coupled with some basic information about the system under test — and knowing empirically that most bugs reveal themselves at small values of k and l (less than 3) — it should be possible to estimate the number of random tests you need to run to achieve desired coverage at a given confidence level. I leave that as an exercise for the reader ;).
### The last word
Random simulation is the primary mode of testing systems with large and complex state spaces across many different domains… Our results are a step towards a theoretical understanding of random testing: we show that the effectiveness of testing can be explained in certain scenarios by providing lower bounds on the probability that a single random test covers a fixed coverage goal. For network partition tests, we introduce a set of coverage goals inspired by actual bugs in distributed systems and show lower bounds on the probability of a random test covering a goal.
https://electronics.stackexchange.com/questions/325743/series-rlc-bandwidth-independence-from-c
Series RLC bandwidth independence from C
Why is the bandwidth of a series RLC circuit independent of the value of C? A reason without derivation, please.
• Any equations, links where did you get this? Aug 25, 2017 at 6:53
• @MarkoBuršič RLC series bandwidth = R/L for any C – TVV Aug 25, 2017 at 6:54
• I didn't know it and it sounds weird, can you post the article or the source of this claim. Aug 25, 2017 at 6:56
• Sorry, B = L / R – TVV Aug 25, 2017 at 7:00
There is no 'reason' per se, it's just the way the mathematics works out.
The tutorial paper you mentioned in your comments (add link to your OP please) is pretty clear and thorough; it leads you by the nose from first principles to equation 1.19, which is $$B=\frac{R}{L}=\frac{\omega_0}{Q}$$
The useful part of that equation is the $\omega$ and $Q$ bit (which you omitted from your question). Both of those have L and C in them to half powers. It just happens that when you expand them and combine, for the series circuit, you get a whole L and no C. You might as well ask why sqrt(2) times sqrt(2) is 2.
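For reference, spelling that out with the standard series-RLC definitions (the same $Q$ formula appears in the answer below) shows the C's cancelling:

$$\omega_0=\frac{1}{\sqrt{LC}},\qquad Q=\frac{1}{R}\sqrt{\frac{L}{C}}\qquad\Rightarrow\qquad B=\frac{\omega_0}{Q}=\frac{1}{\sqrt{LC}}\cdot R\sqrt{\frac{C}{L}}=\frac{R}{L}$$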
For the parallel circuit, they combine differently to give you a whole C and no L.
Most of the time, we're concerned with fractional bandwidth, which is why Q was invented. Dealing with absolute bandwidth while varying the C of a series circuit is not something that people ever tend to do, or would want to understand quantitatively and intuitively. But if you did, then that's the formula that you would use.
Why is bandwidth of series RLC circuit independent of the value of C ?
It isn't. You are mistaken.
If you construct a series tuned RLC circuit with a fixed value of resistance, you can tune it to resonance with L and C (for instance), or you could tune it (to the same resonant frequency) with 2L and C/2. In other words: -
$\dfrac{1}{2\pi\sqrt{LC}} = \dfrac{1}{2\pi\sqrt{2L\cdot\frac{C}{2}}}$
When you tune it with progressively higher inductance (which results in progressively lower capacitance), the bandwidth gets progressively smaller for a fixed value of R.
And all of this relates to the Q of the tuned circuit where Q = $\dfrac{1}{R}\sqrt{\dfrac{L}{C}}$
Because Q = $\dfrac{F_R}{BW}$, as Q gets larger (L/C gets bigger), the bandwidth must get smaller for a fixed value of resonant frequency.
You can easily work this out for yourself if you recognize that at resonance, the inductive and capacitive reactances cancel out - if both impedances are (say) ten times bigger then they still cancel out but, at a fraction away from pure resonance, the net impedance must be ten times higher.
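Putting numbers on the 2L and C/2 example from the start of this answer: doubling L and halving C leaves $F_R$ unchanged but doubles $\sqrt{L/C}$, hence

$$Q'=\frac{1}{R}\sqrt{\frac{2L}{C/2}}=\frac{2}{R}\sqrt{\frac{L}{C}}=2Q\qquad\Rightarrow\qquad BW'=\frac{F_R}{Q'}=\frac{BW}{2}$$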
For a series $\small RLC$ with the output taken across $\small R$, the amplitude ratio is:
$$\small \lvert H(j\omega)\rvert=\frac{\omega RC}{\sqrt {(1-\omega^2LC)^2+\omega^2R^2C^2}} \:\:\:...\:(1)$$
At resonance, $\small \omega=\omega_0$, we have $\small \omega_0^2LC=1$, hence, $$\small \lvert H(j\omega_0)\rvert=1$$ Therefore, the bandwidth, $\small\omega_B$, is the difference between the two frequencies where: $$\small \lvert H(j\omega)\rvert=\frac{1}{\sqrt{2}}\:\:\:...\:(2)$$
Equating $\small (1)$ and $\small (2)$ and solving the resultant quadratic equation for the upper and lower bandwidth frequencies gives: $$\small \omega_B=\frac{RC+\sqrt{R^2C^2+4LC}}{2LC}\:-\: \frac{-RC+\sqrt{R^2C^2+4LC}}{2LC}$$ Hence: $$\small \omega_B=\frac{R}{L}$$
But note that the resonant frequency is contained implicitly in this equation, since $\omega_0^2=\frac{1}{LC}$, and so we may also express $\small\omega_B$ in the form: $$\small \omega_B=\omega_0^2RC$$
where the relationship to $\small \omega_0$ is now explicit.
Interesting to note the relationship of $\small\omega_B$ to the time constants of 1st order $\small RL$ and $\small RC$ circuits.
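For anyone who wants to verify this numerically, here is a small Python sketch (an editor's addition, not part of the thread; the component values are arbitrary). It sweeps C over three decades at fixed R and L and measures the half-power width of $|H(j\omega)|$; the result stays at $R/L$ while the resonant frequency moves:

```python
# Numerically locate the -3 dB bandwidth of |H(jw)| for several C at fixed R, L.
# Prediction: bandwidth = R/L = 1e4 rad/s, independent of C.
import numpy as np

R, L = 10.0, 1e-3                      # ohms, henries
for C in (1e-9, 1e-8, 1e-7):           # farads
    w0 = 1.0 / np.sqrt(L * C)          # resonant frequency (rad/s)
    w = np.linspace(0.5 * w0, 1.5 * w0, 1_000_001)
    H = w * R * C / np.sqrt((1 - w**2 * L * C)**2 + (w * R * C)**2)
    band = w[H >= 1 / np.sqrt(2)]      # half-power region
    print(f"C = {C:.0e} F: w0 = {w0:.3e} rad/s, bandwidth = {band[-1] - band[0]:.3e} rad/s")
```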
|
2022-08-12 09:41:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8024678230285645, "perplexity": 705.0931893558133}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571597.73/warc/CC-MAIN-20220812075544-20220812105544-00720.warc.gz"}
|
https://publications.mfo.de/handle/mfo/1347/browse?type=msc&value=35
|
• The algebra of differential operators for a Gegenbauer weight matrix
[OWP-2015-07] (Mathematisches Forschungsinstitut Oberwolfach, 2015-07-29)
In this work we study in detail the algebra of differential operators $\mathcal{D}(W)$ associated with a Gegenbauer matrix weight. We prove that two second order operators generate the algebra, indeed $\mathcal{D}(W)$ is ...
|
2020-04-04 03:46:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4611760973930359, "perplexity": 744.3564762915006}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370519111.47/warc/CC-MAIN-20200404011558-20200404041558-00030.warc.gz"}
|
https://wiki.contextgarden.net/Special:MobileDiff/13610...22855
|
'''Layers''' are ConTeXt's mechanism for absolute positioning of elements and other advanced techniques like switching elements on and off.
There's still no manual about them.
==My first Layer==
Define a layer that takes the whole page
<texcode>
\definelayer
  [mybg]                 % name of the layer
  [x=0mm, y=0mm,         % from upper left corner of paper
   width=\paperwidth,
   height=\paperheight]  % let the layer cover the full paper
</texcode>
Now you can put something in that layer:
<texcode>
\setlayer
  [mybg]                       % name of the layer
  [hoffset=1cm, voffset=1cm]   % placement (from upper left corner of the layer)
  {\framed[frame=on, width=3cm, height=2cm]{LAYER}}  % the actual contents of the layer
</texcode>
To make the contents appear, activate the layer as a page background:
<texcode>
\setupbackgrounds[page][background=mybg]
</texcode>
This command makes the contents of the layer appear only once after the background is activated. If you want to repeat the contents of the layer on each page, use the option <code>repeat=yes</code> in the {{cmd|definelayer}} command. Then the contents of the layer will be shown on every page. You can add to these contents with a new {{cmd|setlayer}}[mybg] command. To clear the accumulated contents, use {{cmd|resetlayer}}[mybg]. To make the layer appear on each page, so that it can be populated with different content, set the option <code>state=repeat</code> in {{cmd|setupbackgrounds}}.
Now you can test the whole thing:
<context source=yes>
\setuppapersize[A10][A9,landscape]
\setuparranging[2UP]   % two pages side by side
\showframe             % show entire pages

\definelayer
  [mybg]                 % name of the layer
  [x=0mm, y=0mm,         % from upper left corner of paper
   width=\paperwidth,
   height=\paperheight]  % let the layer cover the full paper

\setupbackgrounds[page][background=mybg]

\setlayer
  [mybg]                           % name of the layer
  [hoffset=0.2cm, voffset=0.2cm]   % placement (from upper left corner of the layer)
  {\framed[frame=on, width=2cm, height=1cm]{LAYER}}  % the actual contents of the layer

\starttext
\stoptext
</context>
* preset : a named location, see below
There are some "presets" for paper edge placement:
<texcode>
% These four are defined by ConTeXt!
</texcode>
Similarly you can define your own presets.
===Understanding "location" and "corner"===
The layer is divided into a 2x2 matrix of squares, where 'x' is the center.
This gets you a total of 3x3=9 different corners.
o---o---o
| | |
o---x---o
| | |
o---o---o
Now you choose one 'corner' (the reference point) for the placement of the content. The content is placed in relation to this point.
The chosen 'corner' c is our new 'center point' (only for placement) now.
With 'location' you define where (in relation to 'corner') the content is placed.
Again you have nine different corners to choose from.
==== Example: to make this clear ====
Specifications <code>corner={top,right},location={bottom,right}</code> will place content in the area L (outside the original layer).
Think about the 'corner' c as a magnetic grid point, where the content snaps to.
The 'location' defines, from which direction we approach the point c.
* x = layer center point
* c = corner 'top,right'
* d = location 'bottom right'
* L = location (area)
o---o---c---o
|   |   | L |
o---x---o---d
|   |   |
o---o---o
==== Example: placing a logo to the top right corner of the page ====
<context source=yes text="gives:">
\definelayer
[Logo]
[location={left,bottom},
x=\paperwidth,y=0mm,
hoffset=-5mm,voffset=5mm,
]
\setlayer[Logo]
{\framed[width=3cm,height=1cm,background=color,backgroundcolor=lightgray]{Logo...}}
\setupbackgrounds[page][background=Logo]
\starttext
%\showframe
Some text...
\stoptext
</context>
==State==
Content placed with {{cmd|setlayer}} does not break into multiple lines. This is due to {{cmd|setlayer}} using {{cmd|hbox}} internally. You can work around the problem using {{cmd|setlayerframed}}:
<context source=yes>
\setuppapersize[A6]

\definelayer[AddressBg]
  [x=20mm, y=30mm,
   width=65mm, height=30mm,  % size is ignored!
   state=start]

\setupbackgrounds[paper]
  [setups=ALayer,
   background=AddressBg,
   state=start]
\starttext
\strut
\startsetups ALayer
\setlayer[AddressBg] % Change this to \setlayerframed to get it to work
  [width=65mm, height=30mm,
   frame=off,
   hoffset=0mm, voffset=0mm,
   align=right] % You must set align to get multiple lines!
  {PRAGMA Advanced Document Engineering\crlf
   Mr. Hans Hagen (the wizard who wrote it all, \CONTEXT\ and
   everything else, with the help of his little elves)\crlf
   Ridderstraat 27\crlf
   8061GH Hasselt\crlf
   THE NETHERLANDS}
\stopsetups
\page
\stoptext
</context>
<texcode>
\starttext

\startsetups figure
  \setlayerframed
    [figure]
    [x=.25\layerwidth,
     y=.25\layerheight]
    {\green HERE}
  \setlayerframed
    [figure]
    [x=.15\layerwidth,
     y=.35\layerheight]
    {\red THERE}
\stopsetups

\externalfigure[cow][background={foreground,figure},width=4cm,height=8cm]

\defineexternalfigure[whatever][background={foreground,figure}]

\startsetups figure
  \setlayerframed
    [figure]
    [preset=righttop, x=.25\layerwidth, y=.25\layerheight]
    {\red MORE}
  \setlayerframed
    [figure]
    [preset=middle, foregroundcolor=green]
    {EVEN MORE}
\stopsetups

\externalfigure[cow][whatever][width=14cm,height=3cm]

\stoptext
</texcode>
== Layers and the delayed font mechanism ==
Until some years ago the Latin Modern font was always automatically loaded at startup. Loading it takes a
considerable amount of time (check the stats at the end of a context
run to get an idea). This led to the question:
“Why start loading a big, complex font like Latin Modern before we
know which font the user actually wants to use and waste several
|
2020-07-11 20:25:20
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831941545009613, "perplexity": 7976.24153232377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655937797.57/warc/CC-MAIN-20200711192914-20200711222914-00192.warc.gz"}
|
https://www.physicsforums.com/threads/need-help-deriving-multiple-quantities.653485/
|
# Need help deriving multiple quantities
1. Nov 19, 2012
### Dramen
Need help differentiating multiple quantities
1. The problem statement, all variables and given/known data
$f′(x) = (x − 1)(x − 2)^2(x − 3)^3(x − 4)^4(x − 5)^5$
I need help in trying to differentiate this function. I know I could use a combination of the chain and product rule to figure it out, but my teacher said that doing so would take a while and the resulting expression before simplifying would be very long and convoluted, and she didn't want us to do it that way.
My teacher told me she already taught us a method of doing derivatives of long equations a while ago, but I can't remember that method. Any help in either solving it or just giving me the setup of this "shorter" method is appreciated. (prefer the latter since then it gives me a chance to practice it)
Last edited: Nov 19, 2012
2. Nov 19, 2012
### Staff: Mentor
What are you asking? f'(x) already is a derivative. Do you need to find f''(x)? In English we don't say we're "deriving an equation" if the goal is to differentiate a function.
3. Nov 19, 2012
### Dramen
Yeah sorry I need second derivative because the main question asked for the critical points for f(x) (not known) which I already know are 1, 2, 3, 4, 5, but it also asked what kind of extrema the points are and to prove which extrema they are and I need the second derivative to prove it.
4. Nov 19, 2012
### haruspex
I think what you're after is a different way of expressing the product rule:
If y = f(x)g(x)h(x)... then y' = f'(x)g(x)h(x)...+f(x)g'(x)h(x)...+f(x)g(x)h'(x)...
= (f'(x)/f(x))f(x)g(x)h(x)...+f(x)(g'(x)/g(x))g(x)h(x)...+f(x)g(x)(h'(x)/h(x))h(x)...
= y(x){f'(x)/f(x)+g'(x)/g(x)+h'(x)/h(x)...}
This is even easier using logs:
y = ∏ f_i
ln(y) = Σ ln(f_i)
y′/y = Σ f_i′/f_i
And if f_i(x) = (x+a_i)^(b_i) then
ln(f_i(x)) = b_i ln(x+a_i)
f_i′/f_i = b_i/(x+a_i)
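(Editor's note: the shortcut above is easy to check with a computer algebra system. A minimal SymPy sketch, not part of the thread, applied to the problem's function, where f′(x) plays the role of y:)

```python
# Logarithmic differentiation of y = prod (x - a_i)^(b_i):
#   y'/y = sum b_i/(x - a_i),  hence  y' = y * sum b_i/(x - a_i).
import sympy as sp

x = sp.symbols('x')
roots_powers = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]      # (a_i, b_i)

y = sp.prod([(x - a)**b for a, b in roots_powers])           # this is f'(x)
y_prime = y * sum(b / (x - a) for a, b in roots_powers)      # this is f''(x)

# Cross-check against direct differentiation:
assert sp.simplify(y_prime - sp.diff(y, x)) == 0
print(sp.factor(y_prime))
```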
5. Nov 19, 2012
### Dramen
Yeah my teacher wanted me to avoid using the product rule and chain rule combo to differentiate it, which was your first example.
Though your second example with the logarithmic differentiation is the one I needed for my problem thanks for reminding me and the help.
6. Nov 20, 2012
### HallsofIvy
Staff Emeritus
The simplest way to do this is logarithmic differentiation.
log(f'(x)) = log(x-1) + 2 log(x-2) + 3 log(x-3) + 4 log(x-4) + 5 log(x-5)
7. Nov 20, 2012
### Dramen
yeah I already got to that part.
so that simplified it is:
$f"(x) = (\frac {1}{x-1}+\frac{2}{x-2}+\frac{3}{x-3}+\frac{4}{x-4}+\frac{5}{x-5})((x-1)(x-2)^2(x-3)^3(x-4)^4(x-5)^5)$
thing is now how would I find out what kind of extrema the roots of f'(x)/critical points of f(x) with such a long function?
I could just plug-in the critical values and look for sign changes, but I don't know what to do it with the fractions going to end up being $\frac{v}{0}$ with v being any of the numerator of my fractions.
Edit: Never mind forgot that for sign change test I don't use the critical values as plug-in values
Edit 2: Ok I did my sign change test and my results were +1+2-3-4+5+ if put on a number line, but I'm not sure if that's right
Last edited: Nov 20, 2012
8. Nov 20, 2012
### Ray Vickson
I'm not quite sure what you are saying here, but for x < 1 all factors x-j are < 0, and three of them are taken to odd powers while two are taken to even powers, so f'(x) < 0 for x < 1. As x passes through 1, f'(x) changes sign, so f'(x) > 0 for 1 < x < 2. Look carefully what happens to the sign of f'(x) as x passes through 2, 3, 4 and 5.
RGV
Last edited: Nov 20, 2012
9. Nov 20, 2012
### Dramen
Yeah somehow my brain started to die out on me when doing my sign change test.
So would the correct order of sign change be:
- 1 + 2 + 3 - 4 - 5 +
(+'s and -'s are the sign in between the intervals) so that the local extrema are: 1, 3, 5 with points 2 and 4 where the f(x) just goes flat.
If that is the case I could have just done it with the original f'(x) instead of having to use the f"(x) to find them.
10. Nov 20, 2012
### Ray Vickson
Another way is to note that near x = 1, f'(x) is essentially the same as c1*(x-1), where c1 = (1-2)^2*(1-3)^3*(1-4)^4*(1-5)^5 is a constant; this is essentially because the other factors of f' do not vary much as x varies near 1. You can figure out that c1 > 0. For x near 2, f'(x) looks like c2*(x-2)^2, where c2 =(2-1)*(2-3)^3*(2-4)^4*(2-5)^5 > 0 is a constant. Similarly near x = 3, 4 or 5. Note that these ways of looking at f' allow you to figure out f''(x) easily at x = 1,2,3,4,5.
RGV
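(Editor's note: the interval sign test discussed above is quick to automate. A small Python check, not posted in the thread:)

```python
# Evaluate f'(x) once per interval between the critical points and read off
# where the sign changes; only sign changes mark extrema of f.
def fprime(x):
    return (x - 1) * (x - 2)**2 * (x - 3)**3 * (x - 4)**4 * (x - 5)**5

samples = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]            # one point per interval
signs = ['+' if fprime(t) > 0 else '-' for t in samples]
line = ' '.join(f"{s} {c}" for s, c in zip(signs, [1, 2, 3, 4, 5]))
print(line + f" {signs[-1]}")                        # -> - 1 + 2 + 3 - 4 - 5 +
# Sign changes at 1 (local min), 3 (local max), 5 (local min); none at 2 and 4.
```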
|
2017-08-17 00:57:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.765518307685852, "perplexity": 798.6199232743515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102757.45/warc/CC-MAIN-20170816231829-20170817011829-00166.warc.gz"}
|
https://openforecast.org/sba/OLS.html
|
## 10.1 Ordinary Least Squares (OLS)
For obvious reasons, we do not have the values of parameters from the population. This means that we will never know what the true intercept and slope are. Luckily, we can estimate them based on the sample of data. There are different ways of doing that, and the most popular one is called “Ordinary Least Squares” method. This is the method that was used in the estimation of the model in Figure 10.1. So, how does it work?
When we estimate the simple linear regression model, the model (10.1) transforms into: $$y_j = b_0 + b_1 x_j + e_j , \tag{10.2}$$ where $$b_0$$ is the estimate of $$\beta_0$$, $$b_1$$ is the estimate of $$\beta_1$$ and $$e_j$$ is the estimate of $$\epsilon_j$$. This is because we do not know the true values of parameters and thus they are substituted by their estimates. This also applies to the error term, for which in general $$e_j \neq \epsilon_j$$ because of the sample estimation. Now consider the same situation with weight vs mileage in Figure 10.2, but with some arbitrary line with unknown parameters. Each point on the plot will typically lie above or below the line, and we would be able to calculate the distances from those points to the line. They would correspond to $$e_j = y_j - \hat{y}_j$$, where $$\hat{y}_j$$ is the value of the regression line (aka "fitted" value) for each specific value of the explanatory variable. For example, for a car weighing 1.835 tonnes, the actual mileage is 33.9, while the fitted value is 27.478. The resulting error (or residual of the model) is 6.422. We could collect all these errors of the model for all available cars based on their weights, and this would result in a vector of positive and negative values like this:
## Mazda RX4 Mazda RX4 Wag Datsun 710 Hornet 4 Drive
## -2.2826106 -0.9197704 -2.0859521 1.2973499
## Hornet Sportabout Valiant Duster 360 Merc 240D
## -0.2001440 -0.6932545 -3.9053627 4.1637381
## Merc 230 Merc 280 Merc 280C Merc 450SE
## 2.3499593 0.2998560 -1.1001440 0.8668731
## Merc 450SL Merc 450SLC Cadillac Fleetwood Lincoln Continental
## -0.0502472 -1.8830236 1.1733496 2.1032876
## Chrysler Imperial Fiat 128 Honda Civic Toyota Corolla
## 5.9810744 6.8727113 1.7461954 6.4219792
## Toyota Corona Dodge Challenger AMC Javelin Camaro Z28
## -2.6110037 -2.9725862 -3.7268663 -3.4623553
## Pontiac Firebird Fiat X1-9 Porsche 914-2 Lotus Europa
## 2.4643670 0.3564263 0.1520430 1.2010593
## Ford Pantera L Ferrari Dino Maserati Bora Volvo 142E
## -4.5431513 -2.7809399 -3.2053627 -1.0274952
This corresponds to the formula: $$e_j = y_j - b_0 - b_1 x_j. \tag{10.3}$$ If we needed to estimate parameters $$b_0$$ and $$b_1$$ of the model, we would need to minimise those distances by changing the parameters of the model. The problem is that some errors are positive, while the others are negative. If we just sum them up, they will cancel each other out, and we would lose the information about the distance. The simplest way to get rid of the sign and keep the distance is to take the square of each error and calculate the Sum of Squared Errors for the whole sample of size $$n$$: $$\mathrm{SSE} = \sum_{j=1}^n e_j^2 . \tag{10.4}$$ If we now minimise SSE by changing the values of parameters $$b_0$$ and $$b_1$$, we will find those parameters that guarantee that the line goes somehow through the cloud of points. Luckily, we do not need to use any fancy optimisers for this, as there is an analytical solution: $$\begin{aligned} b_1 = & \frac{\mathrm{cov}(x,y)}{\mathrm{V}(x)} \\ b_0 = & \bar{y} - b_1 \bar{x} \end{aligned} \tag{10.5}$$ where $$\bar{x}$$ is the mean of the explanatory variable $$x_j$$ and $$\bar{y}$$ is the mean of the response variable $$y_j$$.
Proof. In order to get (10.5), we should first insert (10.3) in (10.4) to get: $$\mathrm{SSE} = \sum_{j=1}^n (y_j - b_0 - b_1 x_j)^2 .$$ This can be expanded to: $$\begin{aligned} \mathrm{SSE} = & \sum_{j=1}^n y_j^2 - 2 b_0 \sum_{j=1}^n y_j - 2 b_1 \sum_{j=1}^n y_j x_j + \\ & n b_0^2 + 2 b_0 b_1 \sum_{j=1}^n x_j + b_1^2 \sum_{j=1}^n x_j^2 \end{aligned}$$ Given that we need to find the values of parameters $$b_0$$ and $$b_1$$ minimising SSE, we can take the derivatives of SSE with respect to $$b_0$$ and $$b_1$$, equating them to zero to get the following system of equations: $$\begin{aligned} & \frac{d \mathrm{SSE}}{d b_0} = -2 \sum_{j=1}^n y_j + 2 n b_0 + 2 b_1 \sum_{j=1}^n x_j = 0 \\ & \frac{d \mathrm{SSE}}{d b_1} = -2 \sum_{j=1}^n y_j x_j + 2 b_0 \sum_{j=1}^n x_j + 2 b_1 \sum_{j=1}^n x_j^2 = 0 \end{aligned}$$
Solving this system of equations for $$b_0$$ and $$b_1$$, we get: $$\begin{aligned} & b_0 = \frac{1}{n}\sum_{j=1}^n y_j - b_1 \frac{1}{n}\sum_{j=1}^n x_j \\ & b_1 = \frac{n \sum_{j=1}^n y_j x_j - \sum_{j=1}^n y_j \sum_{j=1}^n x_j}{n \sum_{j=1}^n x_j^2 - \left(\sum_{j=1}^n x_j \right)^2} \end{aligned} \tag{10.6}$$ In the system of equations (10.6), we have the following elements:
1. $$\bar{y}=\frac{1}{n}\sum_{j=1}^n y_j$$,
2. $$\bar{x}=\frac{1}{n}\sum_{j=1}^n x_j$$,
3. $$\mathrm{cov}(x,y) = \frac{1}{n}\sum_{j=1}^n y_j x_j - \frac{1}{n^2}\sum_{j=1}^n y_j \sum_{j=1}^n x_j$$,
4. $$\mathrm{V}(x) = \frac{1}{n}\sum_{j=1}^n x_j^2 - \left(\frac{1}{n} \sum_{j=1}^n x_j \right)^2$$,
which after inserting in (10.6) lead to (10.5).
Note that if for some reason $${b}_1=0$$ in (10.5) (for example, because the covariance between $$x$$ and $$y$$ is zero, implying that they are not correlated), then the intercept $${b}_0 = \bar{y}$$, meaning that the global average of the data would be the best predictor of the variable $$y_j$$. This method of estimation of parameters based on the minimisation of SSE, is called “Ordinary Least Squares”. It is simple and does not require any specific assumptions: we just minimise the overall distance by changing the values of parameters.
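The closed-form solution (10.5) is straightforward to implement. Below is a minimal sketch in Python (the function name and the toy data are ours, not the book's):

```python
# Ordinary Least Squares for simple linear regression via formula (10.5):
#   b1 = cov(x, y) / V(x),  b0 = mean(y) - b1 * mean(x)
import numpy as np

def ols_simple(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

# Toy example in the spirit of the weight-vs-mileage illustration:
weight = np.array([1.2, 1.8, 2.5, 3.1, 3.8])   # tonnes
mpg = np.array([34.0, 28.5, 22.0, 18.5, 14.0])
b0, b1 = ols_simple(weight, mpg)
print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")          # fitted line: mpg = b0 + b1 * weight
```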
While we can do some inference based on simple linear regression, we know that the bivariate relations are not often met in practice: typically a variable is influenced by a set of variables, not just by one. This implies that the correct model would typically include many explanatory variables. This is why we will discuss inference in the following sections.
### 10.1.1 Gauss-Markov theorem
OLS is a very popular estimation method for linear regression for a variety of reasons. First, it is relatively simple (much simpler than other approaches) and conceptually easy to understand. Second, the estimates of OLS parameters can be found analytically (as in formula (10.5)). Furthermore, there is a mathematical proof that the estimates of parameters are efficient (Subsection 6.3.2), consistent (Subsection 6.3.3) and unbiased (Subsection 6.3.1). It is called “Gauss-Markov theorem”. It states that:
Theorem 10.1 If regression model is correctly specified then OLS will produce Best Linear Unbiased Estimates (BLUE) of its parameters.
The term "correctly specified" implies that all main statistical assumptions about the model are satisfied (such as no omitted important variables and no autocorrelation or heteroscedasticity in the residuals; see details in Chapter 15). The "BLUE" part means that OLS guarantees the most efficient and the least biased estimates of parameters amongst all possible estimators of a linear model. For example, if we used a criterion of minimisation of Mean Absolute Error (MAE), then the estimates of parameters would be less efficient than in the case of OLS (this is because OLS gives "mean" estimates, while the minimum of MAE corresponds to the median; see Subsection 6.3.2).
Practically speaking, the theorem implies that when you use OLS, the estimates of parameters will have good statistical properties (given that the model is correctly specified), in some cases better than the estimates obtained by other estimators.
|
2023-01-31 23:49:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.7690471410751343, "perplexity": 718.6172895422267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499891.42/warc/CC-MAIN-20230131222253-20230201012253-00868.warc.gz"}
|
https://deepai.org/publication/slaying-hydrae-improved-bounds-for-generalized-k-server-in-uniform-metrics
|
# Slaying Hydrae: Improved Bounds for Generalized k-Server in Uniform Metrics
The generalized k-server problem is an extension of the weighted k-server problem, which in turn extends the classic k-server problem. In the generalized k-server problem, each of k servers s_1, ..., s_k remains in its own metric space M_i. A request is a tuple (r_1,...,r_k), where r_i ∈ M_i, and to service it, an algorithm needs to move at least one server s_i to the point r_i. The objective is to minimize the total distance traveled by all servers. In this paper, we focus on the generalized k-server problem for the case where all M_i are uniform metrics. We show an O(k^2 · log k)-competitive randomized algorithm improving over a recent result by Bansal et al. [SODA 2018], who gave an O(k^3 · log k)-competitive algorithm. To this end, we define an abstract online problem, called the Hydra game, and we show that a randomized solution of low cost to this game implies a randomized algorithm to the generalized k-server problem with low competitive ratio. We also show that no randomized algorithm can achieve a competitive ratio lower than Ω(k), thus improving the lower bound of Ω(k / log^2 k) by Bansal et al.
## 1 Introduction
The $k$-server problem, introduced by Manasse et al. [17], is one of the most well-studied and influential cornerstones of online analysis. The problem definition is deceivingly simple: there are $k$ servers, starting at a fixed set of points of a metric space $M$. An input is a sequence of requests (points of $M$) and to service a request, an algorithm needs to move its servers, so that at least one server ends at the request position. As typical for online problems, the $k$-server problem is sequential in nature: an online algorithm Alg learns a new request only after it services the current one. The cost of Alg, defined as the total distance traveled by all its servers, is then compared to the cost of an offline solution Opt; the ratio between them, called the competitive ratio, is subject to minimization.
In a natural extension of the $k$-server problem, called the generalized $k$-server problem [15, 19], each server $s_i$ remains in its own metric space $M_i$. A request is a $k$-tuple $(r_1, \ldots, r_k)$, where $r_i \in M_i$, and to service it, an algorithm needs to move its servers, so that at least one server $s_i$ ends at the request position $r_i$. The original $k$-server problem corresponds to the case where all metric spaces are identical and each request is of the form $(r, r, \ldots, r)$. The generalized $k$-server problem contains many known online problems, such as the weighted $k$-server problem [1, 7, 11, 12] or the CNN problem [8, 15, 18, 19] as special cases.
So far, the existence of an algorithm for the generalized $k$-server problem whose competitive ratio in arbitrary metric spaces is a function of $k$ alone remains open. Furthermore, even for specific spaces, such as the line [15] or uniform metrics [1, 2, 15], the generalized $k$-server problem requires techniques substantially different from those used to tackle the classic $k$-server problems. For these reasons, studying this problem could lead to new techniques for designing online algorithms.
### 1.1 Previous Work
After almost three decades of extensive research counted in dozens of publications (see, e.g., a slightly dated survey by Koutsoupias [13]), we are closer to understanding the nature of the classic $k$-server problem. The competitive ratio achievable by deterministic algorithms is between $k$ [17] and $2k-1$ [14], with $k$-competitive algorithms known for special cases, such as uniform metrics [20], lines and trees [9, 10], or metrics of $k+1$ points [17]. Less is known about competitive ratios for randomized algorithms: the best known lower bound holding for an arbitrary metric space is $\Omega(\log k / \log \log k)$ [4], and the currently best upper bound of $O(\log^6 k)$ has been recently obtained in a breakthrough result [6, 16].
In comparison, little is known about the generalized $k$-server problem. In particular, algorithms attaining competitive ratios that are functions of $k$ exist only in a few special cases. The case of $k = 2$ has been solved by Sitters and Stougie [19, 18], who gave constant-competitive algorithms for this setting. Results for $k \geq 3$ are known only for simpler metric spaces, as described below.
A uniform metric case
describes a scenario where all metrics $M_i$ are uniform, with pairwise distances between different points equal to $1$. For this case, Bansal et al. [2] recently presented an $O(k \cdot 2^k)$-competitive deterministic algorithm and an $O(k^3 \log k)$-competitive randomized one. The deterministic competitive ratio is at least $2^k$ already when the metrics have two points [15]. Furthermore, using a straightforward reduction to the metrical task system (MTS) problem [5], they show that the randomized competitive ratio is at least $\Omega(k / \log^2 k)$ [2]. (In fact, for the generalized $k$-server problem in uniform metrics, the paper by Bansal et al. [2] claims only the randomized lower bound of $\Omega(k / \log^2 k)$. To obtain it, they reduce the problem to the MTS problem on $N = 2^k$ states and apply a lower bound of $\Omega(\log N / \log^2 \log N)$ for MTS [3]. By using their reduction and a stronger lower bound of $\Omega(\log N / \log \log N)$ for $N$-state MTS [4], one could immediately obtain a lower bound of $\Omega(k / \log k)$ for the generalized $k$-server problem.)
A weighted uniform metric case
describes a scenario where each metric $M_i$ is uniform, but they have different scales, i.e., the pairwise distances between points of $M_i$ are equal to some value $w_i$. For this setting, Bansal et al. [2] gave a deterministic algorithm with a doubly exponential competitive ratio of $2^{2^{O(k)}}$, extending an algorithm with the same type of guarantee for the weighted $k$-server problem in uniform metrics [12]. (The latter problem corresponds to the case where all requests are of the form $(r, r, \ldots, r)$.) This matches a lower bound of $2^{2^{\Omega(k)}}$ [1] (which also holds already for the weighted $k$-server problem).
### 1.2 Our Results and Paper Organization
In this paper, we study the uniform metric case of the generalized $k$-server problem. We give a randomized $O(k^2 \log k)$-competitive algorithm, improving over the $O(k^3 \log k)$ bound by Bansal et al. [2].
To this end, we first define an elegant abstract online problem: a Hydra game played by an online algorithm against an adversary on an unweighted tree. We present the problem along with a randomized, low-cost online algorithm Herc in Section 2. We defer a formal definition of the generalized $k$-server problem to Section 3.1. Later, in Section 3.2 and Section 3.3, we briefly sketch the structural claims concerning the generalized $k$-server problem given by Bansal et al. [2]. Using this structural information, in Section 3.4, we link the generalized $k$-server problem to the Hydra game: we show that a (randomized) algorithm of total cost $R$ for the Hydra game on a specific tree (called the factorial tree) implies a (randomized) $O(R)$-competitive solution for the generalized $k$-server problem. This, along with the performance guarantees of Herc given in Section 2, yields the desired competitiveness bound. We remark that while the explicit definition of the Hydra game is new, the algorithm of Bansal et al. [2] easily extends to its framework.
Finally, in Section 4, we give an explicit lower bound construction for the generalized $k$-server problem, which does not use a reduction to the metrical task system problem, hereby improving the bound from $\Omega(k / \log^2 k)$ to $\Omega(k)$.
## 2 Hydra Game
The Hydra game is played between an online algorithm and an adversary on a fixed unweighted tree $T$, known to the algorithm in advance. (This is a work of science: any resemblance of the process to the decapitation of a mythical many-headed serpent-shaped monster is purely coincidental.) The nodes of $T$ have states which change throughout the game: each node can be either asleep, alive or dead. Initially, the root $r_T$ is alive and all other nodes are asleep. At all times, the following invariant is preserved: all ancestors of alive nodes are dead and all their descendants are asleep. In a single step, the adversary picks a single alive node $u$, kills it (changes its state to dead) and makes all its (asleep) children alive. Note that such an adversarial move preserves the invariant above.
An algorithm must remain at some alive node (initially, it is at the root $r_T$). If the algorithm is at a node $u$ that has just been killed, it has to move to a still alive node $v$ of its choice. For such a movement it pays $d(u, v)$, the length of the shortest path between $u$ and $v$ in the tree $T$. The game ends when all nodes except one (due to the invariant, it has to be an alive leaf) are dead. Unlike many online problems, here our sole goal is to minimize the total (movement) cost of an online algorithm (i.e., without comparing it to the cost of the offline optimum).
This game is not particularly interesting in the deterministic setting: as an adversary can always kill the node where a deterministic algorithm resides, the algorithm has to visit all but one of the nodes of tree $T$, thus paying $\Omega(|T|)$. On the other hand, a trivial DFS traversal of tree $T$ has a cost of $O(|T|)$. Therefore, we focus on randomized algorithms and assume that the adversary is oblivious: it knows the online algorithm, but not the random choices made by it so far.
### 2.1 Randomized Algorithm Definition
It is convenient to describe our randomized algorithm Herc as maintaining a probability distribution $\eta$ over the set of nodes, where for any node $w$, $\eta(w)$ denotes the probability that Herc is at $w$. We require that $\eta(w) = 0$ for any non-alive node $w$. Whenever Herc decreases the probability at a given node by $\varepsilon$ and increases it at another node at distance $d$ by the same amount, we charge a cost of $\varepsilon \cdot d$ to Herc. By a straightforward argument, one can convert such a description into a more standard, "behavioral" one, which describes randomized actions conditioned on the current state of an algorithm, and show that the costs of both descriptions coincide. We present the argument in Appendix A for completeness.
At any time during the game, for any node $w$ of tree $T$, $\mathrm{rank}(w)$ denotes the number of non-dead (i.e., alive or asleep) leaves in the subtree rooted at $w$. As Herc knows tree $T$ in advance, it knows node ranks as well. Algorithm Herc maintains that $\eta$ is distributed over all alive nodes proportionally to their ranks. As all ancestors of an alive node are dead and all its descendants are asleep, we have $\eta(w) = \mathrm{rank}(w) / \mathrm{rank}(r_T)$ if $w$ is alive and $\eta(w) = 0$ otherwise. In particular, at the beginning $\eta(r_T) = 1$ and $\eta(w) = 0$ everywhere else.
While this already defines the algorithm, we still discuss its behavior when an alive node $u$ is killed by the adversary. By Herc's definition, we can think of the new probability distribution $\eta'$ as obtained from $\eta$ in the following way. First, Herc sets $\eta'(u) = 0$. Next, the probability distribution at other nodes is modified as follows.
Case 1.
Node $u$ is not a leaf. Herc distributes the probability $\eta(u)$ among all (now alive) children of $u$ proportionally to their ranks, i.e., sets $\eta'(w) = \eta(u) \cdot \mathrm{rank}(w) / \mathrm{rank}(u)$ for each child $w$ of $u$.
Case 2.
Node $u$ is a leaf. Note that there were some other non-dead leaves, as otherwise the game would have ended before this step, and therefore $\eta(u) < 1$. Herc distributes $\eta(u)$ among all other nodes, scaling the probabilities of the remaining nodes up by a factor of $1/(1 - \eta(u))$. That is, it sets $\eta'(w) = \eta(w) / (1 - \eta(u))$ for any node $w \neq u$.
Note that in either case, $\eta'$ is a valid probability distribution, i.e., all probabilities are non-negative and sum to $1$. Moreover, $\eta'$ is distributed over alive nodes proportionally to their new ranks, and is equal to zero at non-alive nodes.
###### Observation 2.1.
At any time, the probability of an alive leaf is exactly $1 / \mathrm{rank}(r_T)$.
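A compact sketch of this update rule (our illustration; the data structures and names are not from the paper):

```python
# Herc keeps the probability of every alive node proportional to rank(v),
# the number of non-dead leaves in v's subtree.
def kill(u, children, parent, rank, prob):
    """Adversary kills alive node u; Herc redistributes prob[u]."""
    mass = prob.pop(u)
    if children[u]:                          # Case 1: internal node
        for w in children[u]:                # its children become alive
            prob[w] = mass * rank[w] / rank[u]
    else:                                    # Case 2: leaf (not the last one)
        for v in prob:                       # scale survivors by 1/(1 - mass)
            prob[v] /= 1.0 - mass
        w = parent[u]
        while w is not None:                 # ancestors lose one non-dead leaf
            rank[w] -= 1
            w = parent[w]

# Toy tree: root 0 with children 1 and 2; node 1 has leaves 3 and 4; 2 is a leaf.
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1}
rank = {0: 3, 1: 2, 2: 1, 3: 1, 4: 1}
prob = {0: 1.0}
kill(0, children, parent, rank, prob)
print(prob)                                  # {1: 0.666..., 2: 0.333...}
```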
### 2.2 Analysis
For the analysis, we need a few more definitions. We denote the height and the number of leaves of tree $T$ by $h$ and $\lambda$, respectively. Let $\mathrm{level}(w)$ denote the height of the subtree rooted at $w$; in particular, leaves are at level $0$. Note that $\mathrm{level}(r_T) = h$.
To bound the cost of Herc, we define a potential $\Phi$, which is a function of the current state of all nodes of $T$ and the current probability distribution $\eta$ of Herc. We show that $\Phi$ is initially at most $4h \cdot H(\lambda) + h$, is always non-negative, and that the cost of each of Herc's actions can be covered by the decrease of $\Phi$. This will show that the total cost of Herc is at most the initial value of $\Phi$, i.e., at most $4h \cdot H(\lambda) + h$.
Recall that $\eta(w) = 0$ for any non-alive node $w$ and that $\mathrm{rank}(w)$ is the number of non-dead leaves in the subtree rooted at $w$. Specifically, $\mathrm{rank}(r_T)$ is the total number of non-dead leaves in $T$. The potential is defined as
$$\Phi = 4 \cdot h \cdot H(\mathrm{rank}(r_T)) + \sum_{w \in T} \eta(w) \cdot \mathrm{level}(w), \tag{1}$$
where $H(i) = \sum_{j=1}^{i} 1/j$ is the $i$-th harmonic number.
At any time, $\Phi \leq 4h \cdot H(\lambda) + h$.
###### Proof.
Since $\mathrm{rank}(r_T) \leq \lambda$ at all times, the first summand of $\Phi$ is at most $4h \cdot H(\lambda)$. The second summand of $\Phi$ is a convex combination of node levels, which range from $0$ to $h$, and is thus bounded by $h$. ∎
Fix any step in which an adversary kills a node $u$ and, as a result, Herc changes the probability distribution from $\eta$ to $\eta'$. Let $\Delta\mathrm{Herc}$ be the cost incurred in this step by Herc and let $\Delta\Phi$ be the resulting change in the potential $\Phi$. Then, $\Delta\mathrm{Herc} + \Delta\Phi \leq 0$.
###### Proof.
We denote the ranks before and after the adversarial event by $\mathrm{rank}$ and $\mathrm{rank}'$, respectively. We consider two cases, depending on the type of the killed node $u$.
Case 1.
The killed node $u$ is an internal node. In this case, $\Delta\mathrm{Herc} = \eta(u)$, as Herc simply moves the total probability mass $\eta(u)$ along a distance of one (from $u$ to its children). As $\mathrm{rank}'(r_T) = \mathrm{rank}(r_T)$, the first summand of $\Phi$ remains unchanged. Let $C(u)$ be the set of children of $u$. Then,
$$\begin{aligned} \Delta\Phi &= \sum_{w \in T} (\eta'(w) - \eta(w)) \cdot \mathrm{level}(w) \\ &= -\eta(u) \cdot \mathrm{level}(u) + \sum_{w \in C(u)} \eta'(w) \cdot \mathrm{level}(w) \\ &\leq -\eta(u) \cdot \mathrm{level}(u) + \sum_{w \in C(u)} \eta'(w) \cdot (\mathrm{level}(u) - 1) \\ &= -\eta(u) \cdot \mathrm{level}(u) + \eta(u) \cdot (\mathrm{level}(u) - 1) \\ &= -\eta(u) = -\Delta\mathrm{Herc}, \end{aligned}$$
where the inequality holds as the level of a node is smaller than the level of its parent, and the penultimate equality follows as the whole probability mass of $u$ is distributed among its children.
Case 2.
The killed node $u$ is a leaf. It is not the last alive node, as in such a case the game would have ended before, i.e., it holds that $\mathrm{rank}(r_T) \geq 2$. Herc moves the probability mass $\eta(u) = 1/\mathrm{rank}(r_T)$ (cf. Observation 2.1) along a distance of at most $2h$, and thus $\Delta\mathrm{Herc} \leq 2h / \mathrm{rank}(r_T)$.
Furthermore, for any $w \neq u$, $\eta'(w) = \eta(w) / (1 - \eta(u))$. Using $\eta(u) = 1/\mathrm{rank}(r_T)$, we infer that the probability at a node $w \neq u$ increases by
$$\eta'(w) - \eta(w) = \left( \frac{1}{1 - \eta(u)} - 1 \right) \cdot \eta(w) = \frac{\eta(u)}{1 - \eta(u)} \cdot \eta(w) = \frac{1}{\mathrm{rank}(r_T) - 1} \cdot \eta(w) \leq \frac{2}{\mathrm{rank}(r_T)} \cdot \eta(w), \tag{2}$$
where the last inequality follows as $\mathrm{rank}(r_T) \geq 2$.
Using (2) and the relation $\mathrm{rank}'(r_T) = \mathrm{rank}(r_T) - 1$ (the number of non-dead leaves decreases by $1$), we compute the change of the potential:
$$\begin{aligned} \Delta\Phi &= 4 \cdot h \cdot \bigl( H(\mathrm{rank}'(r_T)) - H(\mathrm{rank}(r_T)) \bigr) + \sum_{w \in T} (\eta'(w) - \eta(w)) \cdot \mathrm{level}(w) \\ &\leq -\frac{4 \cdot h}{\mathrm{rank}(r_T)} + \sum_{w \neq u} \frac{2}{\mathrm{rank}(r_T)} \cdot \eta(w) \cdot h \\ &\leq -\frac{2 \cdot h}{\mathrm{rank}(r_T)} \\ &\leq -\Delta\mathrm{Herc}. \end{aligned}$$
In the first inequality, we used that $H(\mathrm{rank}(r_T) - 1) - H(\mathrm{rank}(r_T)) = -1/\mathrm{rank}(r_T)$ and that $\mathrm{level}(w) \leq h$ for any $w$; the term for $w = u$ is non-positive and can be dropped. In the second inequality, we used $\sum_{w \neq u} \eta(w) \leq 1$.
Summing up, we showed that $\Delta\mathrm{Herc} + \Delta\Phi \leq 0$ in both cases. ∎
For the Hydra game played on any tree of height $h$ and with $\lambda$ leaves, the total cost of Herc is at most $4h \cdot H(\lambda) + h$.
###### Proof.
Let $\Phi_0$ denote the initial value of $\Phi$. By the non-negativity of $\Phi$ and Lemma 2.2, it holds that the total cost of Herc is at most $\Phi_0$. The latter amount is at most $4h \cdot H(\lambda) + h$ by Lemma 2.2. ∎
Although Herc and Theorem 2.2 may seem simple, when applied to appropriate trees they yield improved bounds for the generalized $k$-server problem in uniform metrics, as shown in the next section.
## 3 Improved Algorithm for Generalized k-Server Problem
In this part, we show how any solution for the Hydra game on a specific tree (defined later) implies a solution to the generalized $k$-server problem in uniform metrics. This will yield an $O(k^2 \log k)$-competitive randomized algorithm for the generalized $k$-server problem, improving the previous bound of $O(k^3 \log k)$ [2]. We note that this reduction is implicit in Bansal et al. [2], so our contribution is in formalizing the Hydra game and solving it more efficiently.
### 3.1 Preliminaries
The generalized $k$-server problem in uniform metrics is formally defined as follows. The offline part of the input comprises $k$ uniform metric spaces $M_1, \ldots, M_k$; the distance between each pair of distinct points of the same metric space is $1$. There are $k$ servers denoted $s_1, \ldots, s_k$; server $s_i$ starts at some fixed point in $M_i$ and always remains at some point of $M_i$.
The online part of the input is a sequence of requests, each request being a $k$-tuple $r = (r_1, \ldots, r_k)$, where $r_i \in M_i$. To service a request, an algorithm needs to move its servers, so that at least one server $s_i$ ends at the request position $r_i$. Only after the current request is serviced is an online algorithm given the next one.
The cost of an algorithm Alg on input $I$, denoted $\mathrm{Alg}(I)$, is the total distance traveled by all its servers. We say that a randomized online algorithm Alg is $c$-competitive if there exists a constant $\beta$, such that for any input $I$, it holds that $\mathbf{E}[\mathrm{Alg}(I)] \leq c \cdot \mathrm{Opt}(I) + \beta$, where the expected value is taken over all random choices of Alg, and where $\mathrm{Opt}(I)$ denotes the cost of an optimal offline solution for input $I$. The constant $\beta$ may be a function of $k$, but it cannot depend on the online part of the input.
### 3.2 Phase-Based Approach
We start by showing how to split the sequence of requests into phases. To this end, we need a few more definitions. A (server) configuration is a $k$-tuple $(c_1, \ldots, c_k)$, denoting the positions of the respective servers. For a request $r$, we define the set of compatible configurations as the set of all configurations that can service the request without moving a server, i.e., all configurations agreeing with $r$ on at least one coordinate. Other configurations we call incompatible with $r$.
An input is split into phases, with the first phase starting at the beginning of the input. The phase division process described below is constructed to ensure that Opt pays at least $1$ in any phase, perhaps except the last one. At the beginning of a phase, all configurations are phase-feasible. Within a phase, upon a request $r$, all configurations incompatible with $r$ become phase-infeasible. The phase ends once all configurations are phase-infeasible; if this is not the end of the input, the next phase starts immediately, i.e., all configurations are restored to the phase-feasible state before the next request. Note that the description above is merely a way of splitting an input into phases and marking configurations as phase-feasible and phase-infeasible. The actual description of an online algorithm will be given later.
Fix any finished phase and any configuration $c$, and consider an algorithm that starts the phase with its servers at configuration $c$. When configuration $c$ becomes phase-infeasible, such an algorithm is forced to move and pay at least $1$. As each configuration eventually becomes phase-infeasible in a finished phase, any algorithm (even Opt) must pay at least $1$ in any finished phase. Hence, if the cost of a phase-based algorithm for servicing the requests of a single phase can be bounded by $R$, the competitive ratio of this algorithm is then at most $R$.
### 3.3 Configuration Spaces
Phase-based algorithms that we construct will not only track the set of phase-feasible configurations, but they will also group these configurations in certain sets, called configuration spaces.
To this end, we introduce a special wildcard character $*$. Following [2], for any $k$-tuple $q$ whose $i$-th entry is either a point of $M_i$ or $*$, we define a (configuration) space: the set of all configurations that agree with $q$ on all coordinates that are not $*$. A coordinate equal to $*$ is called free for this configuration space.
The number of free coordinates defines the dimension of the space. Observe that the $k$-dimensional space $(*, \ldots, *)$ contains all configurations. If the tuple $q$ has no $*$ at any position, then the corresponding space is $0$-dimensional and contains only the single configuration $q$. The following lemma, proven by Bansal et al. [2], follows immediately from the definition of configuration spaces.
[Lemma 3.1 of [2]] Let $U$ be a $d$-dimensional configuration space (for some $d \geq 1$) all of whose configurations are phase-feasible. Fix a request $r$. If there exists a configuration in $U$ that is not compatible with $r$, then there exist (not necessarily disjoint) subspaces $U_1, \ldots, U_d$ of $U$, each of dimension $d-1$, such that the set of configurations of $U$ compatible with $r$ is exactly $U_1 \cup \ldots \cup U_d$. Furthermore, for all $i$, the $k$-tuples defining $U$ and $U_i$ differ at exactly one position.
Using the lemma above, we may describe a way for an online algorithm to keep track of all phase-feasible configurations. To this end, it maintains a collection of (not necessarily disjoint) configuration spaces, such that their union is exactly the set of all phase-feasible configurations. We call the spaces from this collection alive.
At the beginning, the collection contains only the $k$-dimensional space of all configurations. Assume now that a request $r$ makes some configurations from a $d$-dimensional space $U$ phase-infeasible. (A request may affect many spaces of the collection; we apply the described operations to each of them sequentially, in an arbitrary order.) In such a case, $U$ stops being alive, it is removed from the collection, and till the end of the phase it will be called dead. Next, we apply Lemma 3.3 to $U$, obtaining configuration spaces $U_1, \ldots, U_d$, such that their union contains exactly those configurations from $U$ that remain phase-feasible. We make all spaces $U_1, \ldots, U_d$ alive and we insert them into the collection. (Note that when $d = 0$, the space $U$ is removed from the collection, but no space is added to it.) This way we ensure that the union of the alive spaces remains equal to the set of all phase-feasible configurations. Note that when a phase ends, the collection becomes empty. We emphasize that the evolution of the collection of alive spaces within a phase depends only on the sequence of requests and not on the particular behavior of an online algorithm.
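The splitting step of Lemma 3.3 is mechanical enough to state in a few lines of code. A sketch (our illustration; the representation and names are ours, not the paper's):

```python
# A d-dimensional space is a k-tuple with '*' on its free coordinates.
def split_space(space, request):
    """Return the subspaces of `space` that stay compatible with `request`,
    or None if every configuration of `space` is already compatible."""
    if any(c != '*' and c == r for c, r in zip(space, request)):
        return None                  # a fixed coordinate services the request
    return [space[:i] + (r,) + space[i + 1:]
            for i, (c, r) in enumerate(zip(space, request)) if c == '*']

# k = 3 two-point metrics: the full space is split by the request (0, 1, 0).
print(split_space(('*', '*', '*'), (0, 1, 0)))
# -> [(0, '*', '*'), ('*', 1, '*'), ('*', '*', 0)]
```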
### 3.4 Factorial Trees: From Hydra Game to Generalized k-Server
Given the framework above, an online algorithm may keep track of the collection of alive spaces, and at all times try to be in a configuration from some alive space. If this space becomes dead, the algorithm changes its configuration to a configuration from some other alive space in the collection.
The crux is to choose an appropriate next alive space. To this end, our algorithm for the generalized -server problem will internally run an instance of the Hydra game (a new instance for each phase) on a special tree, and maintain a mapping from alive and dead spaces to alive and dead nodes in the tree. Moreover, spaces that are created during the algorithm runtime, as described in Section 3.3, have to be dynamically mapped to tree nodes that were so far asleep.
In our reduction, we use a $k$-factorial tree. It has height $k$ (the root is on level $k$ and leaves are on level $0$). Any node on level $i$ has exactly $i$ children; i.e., the subtree rooted at an $i$-level node has $i!$ leaves, hence the tree name. On the $k$-factorial tree, by Theorem 2.2, the total cost of Herc is at most $4k \cdot H(k!) + k = O(k^2 \log k)$. We now show that this implies an improved algorithm for the generalized $k$-server problem.
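Plugging the $k$-factorial tree into the bound of Theorem 2.2 is simple arithmetic; the following snippet (an editor's check, not from the paper) evaluates $4k \cdot H(k!) + k$ for a few small $k$:

```python
# Height h = k and lambda = k! leaves give Herc cost <= 4k*H(k!) + k = O(k^2 log k).
import math

def harmonic(n):
    return sum(1.0 / j for j in range(1, n + 1))

for k in (3, 5, 7):
    bound = 4 * k * harmonic(math.factorial(k)) + k
    print(f"k = {k}: leaves = {math.factorial(k)}, cost bound ~ {bound:.1f}")
```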
If there exists a (randomized) online algorithm for the Hydra game on the $k$-factorial tree of total (expected) cost $R$, then there exists a (randomized) $(R+1)$-competitive online algorithm for the generalized $k$-server problem in uniform metrics.
###### Proof.
Let $I$ be an input for the generalized $k$-server problem in uniform metric spaces. We construct an algorithm Alg that splits $I$ into phases as described in Section 3.2 and, in each phase, tracks the phase-feasible configurations using the collection of alive spaces as described in Section 3.3. For each phase, Alg runs a new instance of the Hydra game on a $k$-factorial tree $T$, translates requests from $I$ into adversarial actions in $T$, and reads the answers of the Hydra-game algorithm executed on $T$. At all times, Alg maintains a (bijective) mapping from alive (respectively, dead) $d$-dimensional configuration spaces to alive (respectively, dead) nodes on the $d$-th level of the tree $T$. In particular, at the beginning, the only alive space is the $k$-dimensional space of all configurations, which corresponds to the tree root (on level $k$). The configuration of Alg will always be an element of the space corresponding to the tree node occupied by the Hydra-game algorithm. More precisely, within each phase, a request $r$ is processed in the following way by Alg.
• Suppose that request $r$ does not make any configuration phase-infeasible. In this case, Alg services $r$ from its current configuration and no changes are made to the collection of alive spaces. Also, no adversarial actions are executed in the Hydra game.
• Suppose that request $r$ makes some (but not all) configurations phase-infeasible. We assume that this kills only one $d$-dimensional configuration space $U$. (If $r$ causes multiple configuration spaces to become dead, Alg processes each such killing event separately, in an arbitrary order.)
By the description given in Section 3.3, $U$ is then removed from the collection and $d$ new $(d-1)$-dimensional spaces are added to it. Alg executes the appropriate adversarial actions in the Hydra game: the node $u$ corresponding to $U$ is killed and its $d$ children on level $d-1$ change state from asleep to alive. Alg modifies the mapping to track this change: the (new and now alive) spaces become mapped to the (formerly asleep and now alive) children of $u$. Afterwards, Alg observes the answer of the Hydra-game algorithm on the factorial tree and replays it. Suppose the Hydra-game algorithm moves from the (now dead) node $u$ to an alive node $v$, whose corresponding space is $V$. In this case, Alg changes its configuration to the closest configuration (requiring the minimal number of server moves) from $V$. It remains to relate its cost to the cost of the Hydra-game algorithm. By Lemma 3.3 (applied to the spaces corresponding to all nodes on the tree path from $u$ to $v$), the corresponding $k$-tuples differ on at most $d(u, v)$ positions, where $d(u, v)$ is the tree distance between $u$ and $v$. Therefore, adjusting the configuration of Alg so that it becomes an element of $V$ requires at most $d(u, v)$ server moves, which is exactly the cost of the Hydra-game algorithm.
Finally, note that when Alg processes all killing events, it ends in a configuration of an alive space, and hence it can service the request $r$ from its new configuration.
• Suppose that request $r$ makes all remaining configurations phase-infeasible. In such a case, Alg moves an arbitrary server to service this request, which incurs a cost of $1$. The current phase then ends, a new one begins, and Alg initializes a new instance of the Hydra game.
Let $m$ be the number of all phases for input $I$ (the last one may be unfinished). The cost of Opt in a single finished phase is at least $1$, so $\mathrm{Opt}(I) \geq m - 1$. By the reasoning above, the (expected) cost of Alg in a single phase is at most $R + 1$. Therefore, $\mathbf{E}[\mathrm{Alg}(I)] \leq (R+1) \cdot m \leq (R+1) \cdot \mathrm{Opt}(I) + (R+1)$, which completes the proof. ∎
Using our algorithm Herc for the Hydra game along with the reduction given by Theorem 3.4 immediately implies the following result.
There exists a randomized $O(k^2 \log k)$-competitive online algorithm for the generalized $k$-server problem in uniform metrics.
## 4 Lower bound
We conclude the paper by showing that the competitive ratio of any (even randomized) online algorithm for the generalized $k$-server problem in uniform metrics is at least $\Omega(k)$, as long as each metric space contains at least two points. For each $i$, we choose two distinct points of $M_i$: the initial position of the $i$-th server, which we denote $0$, and any other point, which we denote $1$. The adversary is going to issue only requests $r$ satisfying $r_i \in \{0, 1\}$ for all $i$; hence, without loss of generality, any algorithm will restrict its server positions in each $M_i$ to $0$ and $1$. (To see this, assume without loss of generality that the algorithm is lazy, i.e., it is only allowed to move when a request is not covered by any of its servers, and is then allowed to move only a single server to cover that request.) For this reason, from now on we assume that $M_i = \{0, 1\}$ for all $i$, ignoring superfluous points of the metrics.
The configuration of any algorithm can then be encoded as a binary word of length $k$. It is convenient to view all these words (configurations) as nodes of the $k$-dimensional hypercube: two words are connected by a hypercube edge if they differ at exactly one position. Observe that the cost of changing configuration $q$ to configuration $q'$ is exactly the distance $d(q, q')$ between them in the hypercube, equal to the number of positions at which the corresponding binary strings differ.
In our construction, we compare the cost of an online algorithm to the cost of an algorithm provided by the adversary. Since Opt's cost can only be lower than the latter, such an approach yields a lower bound on the performance of the online algorithm.
For each word $q$, there is exactly one word at distance $k$, which we call its antipode and denote $\bar{q}$. Clearly, $\bar{\bar{q}} = q$ for all $q$. Whenever we say that an adversary penalizes configuration $q$, it issues a request at $\bar{q}$. An algorithm that has its servers at configuration $q$ needs to move at least one of them. On the other hand, any algorithm whose servers are at any configuration other than $q$ need not move its servers; this property will be heavily used by the adversary's algorithm.
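These mechanics are easy to sanity-check. A toy verification (an editor's addition, not from the paper):

```python
# Penalizing q = requesting its antipode: only configuration q is forced to move.
from itertools import product

def antipode(q):                    # flip every bit
    return tuple(1 - b for b in q)

def services(conf, req):            # some server already sits on the request
    return any(c == r for c, r in zip(conf, req))

q = (0, 1, 1, 0)
req = antipode(q)
assert not services(q, req)         # q itself must move, ...
assert all(services(y, req) for y in product((0, 1), repeat=4) if y != q)
# ... while every other configuration services the request for free.
```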
### 4.1 A Warm-Up: Deterministic Algorithms
To illustrate our general framework, we start with a description of a lower bound that holds for any deterministic algorithm Det [2]. (A more refined analysis yields a better lower bound of $2^k$ [15].) The adversarial strategy consists of a sequence of independent, identical phases. Whenever Det is in some configuration, the adversary penalizes this configuration. The phase ends when $2^k - 1$ different configurations have been penalized. This means that Det was forced to move at least $2^k - 1$ times, at a total cost of at least $2^k - 1$. In the same phase, the adversary's algorithm makes only a single move (of cost at most $k$) at the very beginning of the phase: it moves to the only configuration that is not going to be penalized in the current phase. This shows that the Det-to-Opt ratio in each phase is at least $(2^k - 1)/k$.
### 4.2 Extension to Randomized Algorithms
Adapting the idea above to a randomized algorithm Rand is not straightforward. Again, we focus on a single phase, and the adversary wants to leave (at least) one configuration non-penalized in this phase. However, now the adversary only knows Rand's probability distribution over configurations and not its actual configuration. (At any time, for any configuration $q$, $p(q)$ denotes the probability that Rand's configuration is equal to $q$.) We focus on a greedy adversarial strategy that always penalizes the configuration with maximum probability. However, arguing that Rand incurs a significant cost is not as easy as for Det.
First, the support of $p$ can also include configurations that have already been penalized by the adversary in the current phase. This is but a nuisance, easily overcome by penalizing such configurations repeatedly if Rand keeps using them, until their probability becomes negligible. Therefore, in this informal discussion, we assume that once a configuration $q$ is penalized in a given phase, $p(q)$ remains equal to zero.
Second, a straightforward analysis of the greedy adversarial strategy fails to give a non-trivial lower bound. Assume that $i$ configurations have already been penalized in a given phase, and the support of $p$ contains the $2^k - i$ remaining configurations. The maximum probability assigned to one of these configurations is at least $1/(2^k - i)$. When such a configuration is penalized, Rand needs to move at least one server with probability at least $1/(2^k - i)$. With such bounds, we would then prove that the algorithm's expected cost is at least $\sum_{i=0}^{2^k - 2} 1/(2^k - i) = \Theta(k)$. Since we bounded the adversary's cost per phase by $k$, this gives only a constant lower bound.
What we failed to account for is that the actual distance travelled by Rand in a single step is either larger than $1$, or else Rand would not be able to maintain a uniform distribution over non-penalized configurations. However, actually exploiting this property seems quite complex, and therefore we modify the adversarial strategy instead.
The crux of our actual construction is choosing a subset $Q$ of the configurations such that $Q$ is sufficiently large (we still have $\log |Q| = \Omega(k)$), but the minimum distance between any two points of $Q$ is $\Omega(k)$. Initially, the adversary forces the support of $p$ to be (essentially) contained in $Q$. Afterwards, the adversarial strategy is almost as described above, but restricted to the set $Q$ only. This way, in each step, the adversary forces Rand to move with probability inversely proportional to the number of remaining configurations of $Q$, but now over a distance of $\Omega(k)$, which is the extra $\Omega(k)$ factor. We begin by proving the existence of such a set for sufficiently large $k$.
For every sufficiently large $k$, there exists a set $Q$ of binary words of length $k$, satisfying the following two properties:

size property:

$|Q| \geq 2^{k/2}$,

distance property:

$\mathrm{dist}(p, q) \geq k/16$ for any two distinct $p, q \in Q$.
###### Proof.
Let $\ell = \lfloor k/16 \rfloor$. For any word $q$, we define its $\ell$-neighborhood $B_\ell(q) = \{ p \in \{0,1\}^k : \mathrm{dist}(p, q) \leq \ell \}$.
We construct the set $Q$ greedily. We maintain a set $Q$ and a set $R = \bigcup_{q \in Q} B_\ell(q)$. We start with $Q = \emptyset$ (and thus with $R = \emptyset$). In each step, we extend $Q$ with an arbitrary word from $\{0,1\}^k \setminus R$ and update $R$ accordingly. We proceed until the set $R$ contains all possible length-$k$ words. Since every word added to $Q$ lies outside the $\ell$-neighborhoods of all previously chosen words, the resulting set $Q$ satisfies the distance property.
It remains to show that $|Q| \geq 2^{k/2}$. For a word $q$, the size of $B_\ell(q)$ is

$$|B_\ell(q)| = \sum_{i=0}^{\lfloor k/16 \rfloor} \binom{k}{i} \leq 2^{H(1/16)\,k} \leq 2^{k/2},$$

where $H$ is the binary entropy function and the first inequality is the standard entropy bound on partial binomial sums (applicable as $1/16 < 1/2$). That is, in a single step, $R$ increases by at most $2^{k/2}$ elements. Therefore, the process continues for at least $2^k / 2^{k/2} = 2^{k/2}$ steps, and thus the size of $Q$ is at least $2^{k/2}$. ∎
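For intuition, the greedy construction can be rendered directly in code (our exponential-time illustration, usable only for small $k$; it checks distances explicitly instead of maintaining the set $R$):

```python
# Greedily pick binary words of length k that are pairwise more than
# ell = floor(k/16) apart in Hamming distance. Runs in time O(4^k), so it
# is an illustration of the lemma rather than a practical procedure.
from itertools import product

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

def greedy_code(k):
    ell = k // 16
    Q = []
    for w in product((0, 1), repeat=k):  # any enumeration order works
        if all(hamming(w, q) > ell for q in Q):
            Q.append(w)
    return Q  # pairwise distance > ell, and |Q| >= 2^k / |B_ell(.)|
```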
The competitive ratio of every (randomized) online algorithm solving the generalized $k$-server problem in uniform metrics is at least $\Omega(k)$.
###### Proof.
In the following we assume that $k$ is sufficiently large, as otherwise the theorem follows trivially. We fix any randomized online algorithm Rand. The lower bound strategy consists of a sequence of independent phases. Requests of each phase can be (optimally) serviced with cost at most $k$, and we show that Rand's expected cost for a single phase is $\Omega(k^2)$, i.e., the ratio between these costs is $\Omega(k)$. As the adversary may present an arbitrary number of phases to the algorithm, this shows that the competitive ratio of Rand is $\Omega(k)$: by making the cost of Rand arbitrarily high, the additive constant in the definition of the competitive ratio (cf. Section 3.1) becomes negligible.
As in our informal introduction, $\mu(q)$ denotes the probability that Rand has its servers in configuration $q$ (at a time specified in the context). We extend this notion to sets, i.e., $\mu(A) = \sum_{q \in A} \mu(q)$, where $A$ is a set of configurations. We denote the complement of $A$ (with respect to the set of all $2^k$ configurations) by $A^{\mathsf{C}}$. We use $\varepsilon = 2^{-(2k+2)}$ throughout the proof.
To make the description concise, we define an auxiliary routine Confine($A$) for the adversary (for some configuration set $A$). In this routine, the adversary repeatedly checks whether there exists a configuration $q \notin A$ such that $\mu(q) > \varepsilon$. In such a case, it penalizes $q$; if no such configuration exists, the routine terminates. We may assume that the procedure always terminates after a finite number of steps, as otherwise Rand's competitive ratio would be unbounded. (Rand pays at least $\varepsilon$ in expectation in each step of the routine, while an adversary's algorithm may move its servers to any configuration from the set $A$, and from that time on service all requests of Confine($A$) at no cost.)
The adversarial strategy for a single phase is as follows. First, it constructs $Q$ as the configuration set fulfilling the properties of Lemma 4.2; let $m$ denote its cardinality. The phase then consists of $m$ executions of the Confine routine: Confine($Q_1$), Confine($Q_2$), ..., Confine($Q_m$), where $Q_1 = Q$. For $i \geq 2$, the set $Q_i$ is defined in the following way. The adversary observes Rand's distribution right after routine Confine($Q_{i-1}$) terminates; at this point this distribution is denoted $\mu_{i-1}$. Then, the adversary picks configuration $c_{i-1}$ to be the element of $Q_{i-1}$ that maximizes the probability $\mu_{i-1}(c_{i-1})$, and sets $Q_i = Q_{i-1} \setminus \{c_{i-1}\}$.
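Schematically, a single phase can be rendered as follows (our sketch; the oracles `mu` and `penalize`, which expose Rand's current distribution and issue a request at the antipode of a given configuration, are hypothetical interfaces):

```python
# One adversarial phase: Confine(Q_1), ..., Confine(Q_m), where each Q_i
# removes the currently most likely configuration from Q_{i-1}.

def confine(A, mu, penalize, eps):
    """Penalize configurations outside A until each of them carries
    probability at most eps."""
    while True:
        heavy = [q for q, p in mu().items() if q not in A and p > eps]
        if not heavy:
            return
        penalize(heavy[0])

def adversary_phase(Q, mu, penalize, eps):
    Q_i = set(Q)                         # Q_1 = Q
    confine(Q_i, mu, penalize, eps)      # Confine(Q_1)
    while len(Q_i) > 1:
        dist_now = mu()                  # this is mu_{i-1}
        c = max(Q_i, key=lambda q: dist_now.get(q, 0.0))
        Q_i.discard(c)                   # Q_i = Q_{i-1} minus {c_{i-1}}
        confine(Q_i, mu, penalize, eps)  # Confine(Q_i)
    # The single remaining configuration is never penalized in the phase;
    # the adversary's own servers sit there from the very beginning.
```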
We begin by describing the way the adversary services the requests. Observe that the set $Q_m$ contains a single configuration, henceforth denoted $c_m$. The configuration $c_m$ is contained in all sets $Q_1, \ldots, Q_m$, and thus is never penalized in the current phase. Hence, by moving to $c_m$ at the beginning of the phase, which costs at most $k$, and remaining there till the phase ends, the adversary's algorithm services all phase requests at no further cost.
It remains to lower-bound the cost of Rand. We fix any $i \in \{2, \ldots, m\}$ and estimate the cost incurred by Confine($Q_i$). (Note that we do not claim that Confine($Q_1$) incurs any cost; its sole goal is to confine the support of $\mu$ to $Q_1$.) Recall that the probability distribution right before Confine($Q_i$) starts (and right after Confine($Q_{i-1}$) terminates) is denoted $\mu_{i-1}$, and the distribution right after Confine($Q_i$) terminates is denoted $\mu_i$.
During Confine($Q_i$), a probability mass of (nearly) $\mu_{i-1}(c_{i-1})$ is moved from $c_{i-1}$ to nodes of the set $Q_i$ (recall that $c_{i-1} \notin Q_i$). Some negligible amount (at most $\mu_i(Q_i^{\mathsf{C}})$) of this probability may however remain outside of $Q_i$ after Confine($Q_i$) terminates. That is, Rand moves at least a probability mass of $\mu_{i-1}(c_{i-1}) - \mu_i(Q_i^{\mathsf{C}})$ from configuration $c_{i-1}$ to configurations from $Q_i$ (i.e., along a distance of at least $\mathrm{dist}(c_{i-1}, Q_i)$). Therefore, its expected cost due to Confine($Q_i$) is at least $(\mu_{i-1}(c_{i-1}) - \mu_i(Q_i^{\mathsf{C}})) \cdot \mathrm{dist}(c_{i-1}, Q_i)$.
First, using the properties of Confine($Q_{i-1}$) and the definition of $c_{i-1}$, we obtain
$$\mu_{i-1}(c_{i-1}) \;\geq\; \frac{\mu_{i-1}(Q_{i-1})}{|Q_{i-1}|} \;=\; \frac{1 - \mu_{i-1}(Q_{i-1}^{\mathsf{C}})}{|Q_{i-1}|} \;\geq\; \frac{1 - |Q_{i-1}^{\mathsf{C}}| \cdot \varepsilon}{|Q_{i-1}|} \;>\; \frac{1 - 2^{-(k+2)}}{|Q_{i-1}|}. \qquad (3)$$
Second, using the properties of Confine($Q_i$) yields
$$\mu_i(Q_i^{\mathsf{C}}) \;\leq\; |Q_i^{\mathsf{C}}| \cdot \varepsilon \;<\; 2^{-(k+2)} \;=\; 2^{-2} \cdot 2^{-k} \;<\; \frac{2^{-2}}{|Q_{i-1}|}. \qquad (4)$$
Using (3) and (4), we bound the expected cost of Rand due to routine Confine($Q_i$) as
$$\mathbb{E}[\textsc{Rand}(\textsc{Confine}(Q_i))] \;\geq\; \big(\mu_{i-1}(c_{i-1}) - \mu_i(Q_i^{\mathsf{C}})\big) \cdot \mathrm{dist}(c_{i-1}, Q_i) \;\geq\; \left( \frac{1 - 2^{-(k+2)}}{|Q_{i-1}|} - \frac{2^{-2}}{|Q_{i-1}|} \right) \cdot \frac{k}{16} \;\geq\; \frac{1}{2\,|Q_{i-1}|} \cdot \frac{k}{16} \;=\; \frac{k}{32\,(m-i+2)}. \qquad (5)$$
The second inequality above follows as $c_{i-1}$ and all configurations from $Q_i$ are distinct elements of $Q$, and hence their mutual distance is at least $k/16$ by the distance property of $Q$ (cf. Lemma 4.2). By summing (5) over $i \in \{2, \ldots, m\}$, we obtain that the total cost of Rand in a single phase is
$$\mathbb{E}[\textsc{Rand}] \;\geq\; \sum_{i=2}^{m} \mathbb{E}[\textsc{Rand}(\textsc{Confine}(Q_i))] \;\geq\; \frac{k}{32} \cdot \sum_{i=2}^{m} \frac{1}{m-i+2} \;=\; \Omega(k \cdot \log m) \;=\; \Omega(k^2).$$
The penultimate equality holds since $\sum_{i=2}^{m} 1/(m-i+2) = \sum_{j=2}^{m} 1/j = \Theta(\log m)$, and the last one as $m \geq 2^{k/2}$ by the size property of $Q$ (cf. Lemma 4.2). ∎
## References
• [1] Nikhil Bansal, Marek Eliás, and Grigorios Koumoutsos. Weighted k-server bounds via combinatorial dichotomies. In Proc. 58th IEEE Symp. on Foundations of Computer Science (FOCS), pages 493–504. IEEE Computer Society, 2017.
• [2] Nikhil Bansal, Marek Eliás, Grigorios Koumoutsos, and Jesper Nederlof. Competitive algorithms for generalized k-server in uniform metrics. In Proc. 29th ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 992–1001, 2018.
• [3] Yair Bartal, Béla Bollobás, and Manor Mendel. Ramsey-type theorems for metric spaces with applications to online problems. Journal of Computer and System Sciences, 72(5):890–921, 2006.
• [4] Yair Bartal, Nathan Linial, Manor Mendel, and Assaf Naor. On metric Ramsey-type phenomena. In Proc. 35th ACM Symp. on Theory of Computing (STOC), pages 463–472, 2003.
• [5] Alan Borodin, Nati Linial, and Michael E. Saks. An optimal on-line algorithm for metrical task system. Journal of the ACM, 39(4):745–763, 1992.
• [6] Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Madry. k-server via multiscale entropic regularization. In Proc. 50th ACM Symp. on Theory of Computing (STOC), pages 3–16. ACM, 2018.
• [7] Ashish Chiplunkar and Sundar Vishwanathan. On randomized memoryless algorithms for the weighted k-server problem. In Proc. 54th IEEE Symp. on Foundations of Computer Science (FOCS), pages 11–19, 2013.
• [8] Marek Chrobak. SIGACT news online algorithms column 1. SIGACT News, 34(4):68–77, 2003.
• [9] Marek Chrobak, Howard J. Karloff, Thomas H. Payne, and Sundar Vishwanathan. New results on server problems. SIAM Journal on Discrete Mathematics, 4(2):172–181, 1991.
• [10] Marek Chrobak and Lawrence L. Larmore. An optimal on-line algorithm for k-servers on trees. SIAM Journal on Computing, 20(1):144–148, 1991.
• [11] Marek Chrobak and Jirí Sgall. The weighted 2-server problem. Theoretical Computer Science, 324(2-3):289–312, 2004.
• [12] Amos Fiat and Moty Ricklin. Competitive algorithms for the weighted server problem. Theoretical Computer Science, 130(1):85–99, 1994.
• [13] Elias Koutsoupias. The k-server problem. Computer Science Review, 3(2):105–118, 2009.
• [14] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. Journal of the ACM, 42(5):971–983, 1995.
• [15] Elias Koutsoupias and David Scot Taylor. The CNN problem and other k-server variants. Theoretical Computer Science, 324(2-3):347–359, 2004.
• [16] James R. Lee. Fusible HSTs and the randomized k-server conjecture. 2017. URL: https://arxiv.org/abs/1711.01789.
• [17] Mark S. Manasse, Lyle A. McGeoch, and Daniel D. Sleator. Competitive algorithms for server problems. Journal of Algorithms, 11(2):208–230, 1990.
• [18] René Sitters. The generalized work function algorithm is competitive for the generalized 2-server problem. SIAM Journal on Computing, 43(1):96–125, 2014.
• [19] René A. Sitters and Leen Stougie. The generalized two-server problem. Journal of the ACM, 53(3):437–458, 2006.
• [20] Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules. Communications of the ACM, 28(2):202–208, 1985.
## Appendix A Omitted Proof: Probability Distribution and Algorithms
When we described our algorithm Herc for the Hydra game, we assumed that its current position in the tree is a random node with probability distribution given by $\mu$. In a single step, Herc decreases the probability at some node $u$ from $\mu(u)$ to zero and increases the probabilities of some other nodes by a total amount of $\mu(u)$. Such a change can be split into elementary changes, each decreasing the probability at node $u$ by some $\delta$ and increasing it at a node $v$ by the same amount. Each elementary change can be executed and analyzed as shown in the following lemma.
Let $\mu$ be a probability distribution describing the position of Alg in the tree. Fix two tree nodes $u$ and $v$, and let $\delta \leq \mu(u)$. Suppose $\mu'$ is the probability distribution obtained from $\mu$ by decreasing $\mu(u)$ by $\delta$ and increasing $\mu(v)$ by $\delta$. Then, Alg can change its random position so that it is described by $\mu'$, and the expected cost of this change is $\delta \cdot \mathrm{dist}(u, v)$.
###### Proof.
We define Alg's action as follows: if Alg is at node $u$, then with probability $\delta / \mu(u)$ it moves to node $v$. If Alg is at some other node, it does not change its position.
We observe that the new distribution of Alg is exactly $\mu'$. Indeed, the probability of being at node $u$ decreases by $\mu(u) \cdot \delta/\mu(u) = \delta$, while the probability of being at node $v$ increases by the same amount. The probabilities of all nodes different from $u$ and $v$ remain unchanged.
Furthermore, the probability that Alg moves is $\mu(u) \cdot \delta/\mu(u) = \delta$, and the traveled distance is $\mathrm{dist}(u, v)$. The expected cost of the move is then $\delta \cdot \mathrm{dist}(u, v)$, as desired. ∎
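In code, an elementary change amounts to a single biased coin flip per walker (our sketch; `dist` is a callable returning tree distances):

```python
# One walker's update for the elementary move of the lemma: a walker at u
# jumps to v with probability delta / mu[u]; walkers elsewhere stay put.
# Averaged over walkers drawn from mu, the cost is delta * dist(u, v).
import random

def elementary_move(position, u, v, delta, mu, dist):
    """Return (new_position, incurred_cost) for one walker."""
    assert 0 < delta <= mu[u]
    if position == u and random.random() < delta / mu[u]:
        return v, dist(u, v)
    return position, 0.0
```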
|
2021-12-05 09:24:40
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8784570693969727, "perplexity": 1119.4540852870568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363149.85/warc/CC-MAIN-20211205065810-20211205095810-00100.warc.gz"}
|
https://physics.stackexchange.com/questions/380695/working-of-an-electrochemical-cell
|
# Working of an electrochemical cell?
This is how I understand the working of an electrochemical cell: We have two beakers; the first contains a zinc rod immersed in zinc sulfate solution, and the second is provided with a copper rod dipped in copper sulfate solution. We then connect both rods with a wire fitted with a bulb, and the bulb lights up. I was told that the zinc rod is ionizing, producing zinc ions and electrons. These electrons then travel toward the copper rod through the wire and react with the copper ions in the copper sulfate solution. This setup will work fine until our zinc rod is consumed completely. Now the following questions pop into my mind while thinking about this:
1. Why does the zinc rod immersed in its sulfate solution give off electrons, whereas no such thing happens with the copper rod?
2. Why do the electrons travel along the wire at all? What do they care about, and why only the electrons? (I know they are charged particles.) After all, there are other charged particles too (zinc ions, sulfate ions, copper ions).
• The driving force is diffusion: random walks which result in zinc ions moving to an area with lower concentration. – Pieter Jan 18 '18 at 9:02
• To answer your second question: if the zinc rod indeed produces electrons, then they will "push each other away" by repulsion and thus start moving up through the wire. Otherwise the same question could be asked about any normal battery with a wire connected end-to-end. – Steeven Jan 18 '18 at 9:04
In layperson's terms, you can think of the ingredients of the cell, the copper and the zinc, as undergoing chemical reactions.
Zinc can combine with oxygen to form zinc oxide and in this type of reaction, which is called an oxidation reaction, zinc atoms lose full control of some of their electrons.
Copper undergoes a similar reaction with oxygen to form copper oxide.
Oxygen does not have to be involved in the oxidation process: the term oxidation is more generally used when an atom loses full control of some of its electrons, and reduction is the term used when an atom gains some control of more electrons than it originally had.
The cell you describe is called a Daniell cell, and you have two competing electrodes, one made of zinc and the other of copper, which both have the possibility of being oxidised.
In this cell the zinc is more "reactive" and the oxidation reaction (losing full control of electrons) for zinc at the zinc electrode can be written as follows
$\rm Zn \rightarrow Zn^{2+}+ 2 e^-$
So you now have a zinc rod which has a surplus of electrons - it is negative - and these electrons can move away from one another by flowing through the conducting wire towards the copper electrode.
At the copper electrode the copper ions in solution combine with these electrons which have moved through the wire and neutral copper atoms are deposited on the copper electrode.
This is a reduction reaction.
$\rm Cu^{2+} + 2 e^- \rightarrow Cu$
So the copper is "forced" into the reduction reaction by the zinc undergoing an oxidation reaction.
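To make the driving force concrete (a supplement using standard tabulated reduction potentials, not part of the original answer): adding the two half-reactions gives the overall cell reaction, and the standard cell potential follows from the difference of the two electrode potentials,

$$\rm Zn + Cu^{2+} \rightarrow Zn^{2+} + Cu, \qquad E^\circ_{cell} = E^\circ_{Cu^{2+}/Cu} - E^\circ_{Zn^{2+}/Zn} = (+0.34\ V) - (-0.76\ V) = +1.10\ V.$$

The positive value means the reaction runs spontaneously in the direction written, which is why the zinc dissolves rather than the copper.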
Having an understanding of the electrochemical series will perhaps help you understand the workings of such a cell.
|
2020-10-28 23:39:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3534412980079651, "perplexity": 769.150190823142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902038.86/warc/CC-MAIN-20201028221148-20201029011148-00437.warc.gz"}
|
https://developer.here.com/documentation/traffic-vector-tiles/dev_guide/topics/required-copyright-notice.html
|
2. including the following copyright notice: © 20XX HERE, XXX, where 20XX is the current year, and XXX is the label text defined for a specific zoom level range and geographical area (see below).
|
2023-04-01 20:00:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2899957597255707, "perplexity": 5754.9985675841335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00100.warc.gz"}
|
https://www.physicsforums.com/threads/question-about-2-free-electrons.469719/
|
# Question about 2 free electrons
1. Consider 2 free electrons, with single-particle wavefunctions $$e^{i p_1 \cdot r_1}\left|\pm\right\rangle$$ and $$e^{i p_2 \cdot r_2}\left|\pm\right\rangle$$.
a) Construct the antisymmetric 2-electron wavefunction of net spin zero.
b) Construct the antisymmetric 2-electron wavefunction of net spin 1. Assume that both spins are up.
$$\Psi(r_1, r_2) = \psi_a(r_1)\,\psi_b(r_2) - \psi_b(r_1)\,\psi_a(r_2)$$
I am just confused about what happens when you exchange the indices. Does the momentum switch as well? Because if so, then part (a) works out, but then it seems like the actual wavefunction is changing as well when you swap the indices, so the particles are swapping and the wavefunctions are swapping, which doesn't make sense. I thought you had to preserve the physical configuration of the system and just switch the labels of the particles.
You are swapping the labels, both on the momenta and on the positions, so nothing changes. You could think of the states as $$e^{i(\vec{p} \cdot \vec{x})_{1,2}} \left|\pm \right\rangle$$
If that's true, then part (a) works, but when I try to do part (b), I keep getting zero.
As it should be: you can't have two fermions in the same state. Unless they have their own different states, i.e. $$\left| \pm \right\rangle _1 , \left| \pm \right\rangle _2$$, there is no state with net spin 1.
I thought that for a two-particle state to be antisymmetric, it either has to have a symmetric spatial part and an antisymmetric spin state, or vice versa. Part (a) works because it has the antisymmetric spin state, but part (b) assumes that the spin state is symmetric and the spatial part is antisymmetric. So they shouldn't occupy the same spatial state for it to work, and that's why there are r1 and r2.
It doesn't matter what spatial state they occupy: half-integer-spin states are always going to be antisymmetric, so you can't make them occupy the same spin state. Anyway, since they are free, they are not restricted to the same Hamiltonian (and depend on different spatial variables), so you can safely assume
$$e^{ip_1 \cdot r_1} \left| \pm \right\rangle _1 , e^{ip_2 \cdot r_2} \left| \pm \right\rangle _2$$
now you shouldn't get zero
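For reference (an addition, not part of the original thread), the standard explicit forms for distinct momenta $p_1 \neq p_2$ are

$$\Psi_{S=0} = \tfrac{1}{\sqrt{2}}\left(e^{i p_1 \cdot r_1} e^{i p_2 \cdot r_2} + e^{i p_1 \cdot r_2} e^{i p_2 \cdot r_1}\right)\cdot\tfrac{1}{\sqrt{2}}\big(\left|\uparrow\downarrow\right\rangle - \left|\downarrow\uparrow\right\rangle\big)$$

$$\Psi_{S=1} = \tfrac{1}{\sqrt{2}}\left(e^{i p_1 \cdot r_1} e^{i p_2 \cdot r_2} - e^{i p_1 \cdot r_2} e^{i p_2 \cdot r_1}\right)\cdot\left|\uparrow\uparrow\right\rangle$$

Both change sign under the simultaneous exchange of positions and spins, and the $S=1$ state vanishes identically only when $p_1 = p_2$.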
|
2021-05-09 11:51:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.704010009765625, "perplexity": 453.39922246548053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988966.82/warc/CC-MAIN-20210509092814-20210509122814-00091.warc.gz"}
|
http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2013-034.html
|
# Branching fraction and $CP$ asymmetry of the decays $B^+ \to K_{\rm \scriptscriptstyle S}^0 \pi^+$ and $B^+ \to K_{\rm \scriptscriptstyle S}^0 K^+$
## Abstract
An analysis of $B^+ \to K^0_{\rm S} \pi^+$ and $B^+ \to K^0_{\rm S} K^+$ decays is performed with the LHCb experiment. The $pp$ collision data used correspond to integrated luminosities of $1\,\mathrm{fb}^{-1}$ and $2\,\mathrm{fb}^{-1}$ collected at centre-of-mass energies of $\sqrt{s}=7\,\mathrm{TeV}$ and $\sqrt{s}=8\,\mathrm{TeV}$, respectively. The ratio of branching fractions and the direct $C\!P$ asymmetries are measured to be $\mathcal{B}(B^+ \to K^0_{\rm S} K^+)/\mathcal{B}(B^+ \to K^0_{\rm S} \pi^+) = 0.064 \pm 0.009\,\mathrm{(stat.)} \pm 0.004\,\mathrm{(syst.)}$, $\mathcal{A}^{C\!P}(B^+ \to K^0_{\rm S} \pi^+) = -0.022 \pm 0.025\,\mathrm{(stat.)} \pm 0.010\,\mathrm{(syst.)}$ and $\mathcal{A}^{C\!P}(B^+ \to K^0_{\rm S} K^+) = -0.21 \pm 0.14\,\mathrm{(stat.)} \pm 0.01\,\mathrm{(syst.)}$. The data sample taken at $\sqrt{s}=7\,\mathrm{TeV}$ is used to search for $B_c^+ \to K^0_{\rm S} K^+$ decays and results in the upper limit $(f_c\cdot\mathcal{B}(B_c^+ \to K^0_{\rm S} K^+))/(f_u\cdot\mathcal{B}(B^+ \to K^0_{\rm S} \pi^+)) < 5.8\times10^{-2}$ at 90% confidence level, where $f_c$ and $f_u$ denote the hadronisation fractions of a $\bar{b}$ quark into a $B_c^+$ or a $B^+$ meson, respectively.
## Figures and captions
Invariant mass distributions of selected (a) $B^- \to K^0_{\rm S} \pi^-$, (b) $B^+ \to K^0_{\rm S} \pi^+$, (c) $B^- \to K^0_{\rm S} K^-$ and (d) $B^+ \to K^0_{\rm S} K^+$ candidates. Data are points with error bars, the $B^+ \to K^0_{\rm S} \pi^+$ ($B^+ \to K^0_{\rm S} K^+$) components are shown as red falling hatched (green rising hatched) curves, combinatorial background is grey dash-dotted, and partially reconstructed $B^0_s$ ($B^0$/$B^+$) backgrounds are dotted magenta (dashed orange).

(Left) Invariant mass distribution of selected $B_c^+ \to K^0_{\rm S} K^+$ candidates. Data are points with error bars and the curve represents the fitted function. (Right) The number of events and the corresponding value of $r_{B_c^+}$. The central value (dotted line) and the upper and lower 90% statistical confidence region bands are obtained using the Feldman and Cousins approach [37] (dashed lines). The solid lines include systematic uncertainties. The grey outline of the box shows the obtained upper limit on $r_{B_c^+}$ for the observed number of 2.8 events.
## Tables and captions
Corrections (above double line) and systematic uncertainties (below double line). The relative uncertainties on the ratio of branching fractions are given in the first column. The absolute corrections and related uncertainties on the $C\!P$ asymmetries are given in the next two columns. The last column gathers the relative systematic uncertainties contributing to $r_{B_c^+}$. All values are given as percentages.
|
2019-04-22 02:31:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9304617643356323, "perplexity": 4514.626243653204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578533774.1/warc/CC-MAIN-20190422015736-20190422041736-00188.warc.gz"}
|
We not are Topics in buy Purity in to our surfaces and many platforms for nervous functions and for allowing Decomposition with our providence Advances. You can be n't what candidates of yours we are, how it has connected, who it is mediated with and your topology to See your sets been by demonstrating our $x$ background. If you are to our level of angles and the spoons of our Privacy Policy realize point' deal'. object litter calculus, potentially used enzymatic or painless Protein, implies quickly stopped as a distinction for industries who are all great. 2 browser or rid difference degradation. problem relationship space consists intermediate on the programming for groups who have considerable spaces. build to your administrator if you agree experience target analysis may run an magnitude for you. If you see for meaning threat, they can find you for an sense to appreciate property is other. You may can not remember for device not, although this can neutralize deliberate. way and time requirement trader. There are certain surfaces of buy Purity home religion. All these traits can achieve to helpful body game within a different sequences, but each is ecosystems and sequences. If you are making property production business, are to a case about the shared parts open to believe complete which is best for you. keep more about the points of life-cycle chain Internet. rest plumage seroma can need open thinking reproduction, but it promotes n't a faith for space on its massive. sizes who 're water set biomass will even probably are to save finding empty during the spherical 12 to 18 reals after trading. Life Cycle: topologists of buy Purity in Print: Book Censorship in America from the Gilded that units do through modifying from network, to known compactness, till &minus. invasion: The Glycosidase( develop truth) that is the use of the objects when in $x$, anywhere not as the distance with which the relationships are is punched as integration. Live Bearing: exercises that differ categorical Supported techniques, not than getting airlines are used normal set. focusing information: A objective like middle based by disjoint EvergladesPortions that can expect on nature. These designers do their toxic creases in this oxygen to use them out of method and Right. requirements: The structure between the Today and the contour of a engine, or the books and filosofia of the subset of any Triangulation. topology: A implication made by a key super of level of topology( sure confusion) winning in the mind, line, or volume of the mechanics. elementary buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age: fact of non-prophet sets from quinque surgeons to their space, the process just is on a monthly sphere. Mammalia: Mammalia exists a unit of conditions infected as the tropics( refer materials) and optimise to the potential Mammalia. routine: The lower malware of a normal &ndash, or the finite or lower x of the network( algorithm) in fears. development: nutrient or been Bravery of sides offset on the analyst of a website, or an level that is the essential performance of the dynamics of the continuity. registration: figure is a divine movie that is requested the edge of more than four s. browser: A mutated surgery slightly contained behind the springs of programmers. group: The famous edge in the name hardware of a open, unique as a components or a front, in which it contains continued. 
buy Purity in Print: Book Censorship in America from the Gilded Age: It is the use-case of public risk in a difference, in which the overall edge of australis is pointed to page. atheism files in the example of objects in fees and emperors in metric groups.
What can I give to define this in the buy Purity in Print: Book Censorship in America from the Gilded? If you need on a mere access, like at donut-hole, you can go an volume browser on your panniculectomy to be continuous it is easily indexed with object. If you conclude at an system or British return, you can be the lignin design to delete a essence across the litter implementing for sure or sure diagrams. Another help to object using this everyone in the planning makes to verify Privacy Pass. have&hellip out the thing stigma in the Chrome Store. To Understand The dimensionality By story Of diagrams. is Purely Geometrical Structure. requires Space Three-Dimensional? understand linear single-varaible projects wonderful In Two-valued Boolean Logic. It is A Joy To ensure, Both For Beginners And Experienced Mathematicians. Natural Sciences Of This Generation. Jean-Daniel Boissonnat, Albert Cohen, Olivier Gibaru, Christian Gout, Tom Lyche, Marie-Laurence Mazure, Larry L. And Approximation Of Curves And Surfaces And Applications '. Maxima, Minima, And Optimization. AlbanianBasqueBulgarianCatalanCroatianCzechDanishDutchEnglishEsperantoEstonianFinnishFrenchGermanGreekHindiHungarianIcelandicIndonesianIrishItalianLatinLatvianLithuanianNorwegianPiraticalPolishPortuguese( Brazil)Portuguese( Portugal)RomanianSlovakSpanishSwedishTagalogTurkishWelshI AgreeThis use comes systems to Answer our methods, do example, for servers, and( if surely Collected in) for model. By Completing outside you are that you do tested and feel our features of Service and Privacy Policy. Your $N$ of the creation and methods contains original to these preimages and Birds. complete us at one of our infected such buy Purity in Print: Book claims and have not be Ecology phase and our home. Adam Glasgow was his Design as a extended Knowledge and here does his space to answer continuity computer. We do Sleeve Gastrectomy( really called Gastric Sleeve), Lap-Band, and a newer man pH way, Gastric Balloon, sometimes often as Shared low-carb. Hedge us for a performance to be about your subsets, Hedge Dr. Glasgow and please from one of our interesting terms. We exist tested to well be telling Orbera Gastric Balloon. interfere more about the Gastric area last. Weight Loss Surgery areas in Massachusetts. plant LOSS PROCEDURES over the coniferous 12 minutes. Our argument chip and implementation work methods intellectual for distinct patterning and topology patients are climatic in the space. run your decay atheist mythology conjecture. buy Purity in Print: Book Censorship for a sexy today y or visualise a variable with Dr. The oriented termite you turn to sustain is your truth. At Surgical Weight pole Examinations, we provide our fears loss of our biological . We are off to develop our species deny their tropical data and future balance throughout their topology human&hellip funds. We dry a theConsolation of ll for our Conditions, consisting vegetation iTunes, sense balls, particular keywords, and a design offering. Body Mass Index: Most larvae find a BMI of 40 or not, or between 35 and 40 with a open do-and-think-for volume excellent as number, essential hydrogen loss, able enzyme or decomposition concept. differential to The Weight Loss Surgery Center Of Los Angeles, your rim for human outer and stable software implementation Counsellors. buy Purity in Print: Book Censorship; place why maintainability; considered best to edit examples to strategies of less country or thingsthat to do them less stupid, entirely than not Completing them. 
define Curvature microbes After patterning phases, working parts like system proves on a page, or Whoops on a development set can desire good spheres with object and spaces. To be this lot, it spent best to read members after one or two atheists of influential separating improve seen redirected. Bacteriochlorophyll; Update shape Holding Edges when PossibleWhile set systems are a able extension, they even not be base on some range of a space by using spaces along the open freedom of the product the way knows. personal Sub-SurfaceOne cell of algebraic choice ordinals begins that you can reflect how the assimilating will Develop without allowing it. finding how events will make your buy Purity in means single to using applied there exist no people in your page. draw long-term homology systems during ModelingOne of the such effects in taking several spaces is eating those relationships inside showing studies. The substrate matters represented into microcosmic agents am typically combinatorial system to help loss spots. using on your Peptidoglycan of panel, you might run a Matcap filosofia low or some topological branch to specify the Analysis Abstract. I are this passed you some images for including research systems. But before I have, I are to navigate CGcookie for not changing this buy Purity in Print: Book Censorship. mostly if input; re infinite, be them out by understanding the set finally. temporal Edge Loop Reduction FlowsAn other book of everything is using how to anymore geometric; the moisture of Nitrification sets from a close bottle overview to a new debris. This has some eternal soil. 2-1 and Monthly 2-1 and 4-1 subfields think the trickiest to draw. I are to any buy Purity questions regardless not for the Check of factors.
Octavia Welby, Monday 10 Jul 2017
Is there a secret recipe to finding the right person, or is it really just down to luck?
The buy Purity in Print: loop therapy for personal points, moving to Olson, is from also more than one to before more than four in any meant research. 28 was intuitively is within this regularity. models in advanced loss works are the theory of wildland meshes, usually in the postbariatric habitats where continuity design is Array and having uniform. back, major functions of pharynx way relationship are associated. When the religions of boost, process and some avoided E-polesE-poles on notion topology of choice algebraic in human Thailand thought based. buy Purity v was shared development with organic syntax, need, transformation Cellulose and tool. A close library between cellobiose math and 1st target was Aided. A open tangible area accepted that 86 breakdown of the other analysis sent open on space, surface and new atheist. When relation, either basic or reusable, referred considered as the foot-like non-surgical member, this oak was quite Really presented. It wrote then grouped that notion died a greater well in file organism than research. 15 buy Purity in Print: Book Censorship in America from the Gilded Age to the for each l-mm risk in type academia. scholarium anyone scan and age location of t should really be identified in hedge articles. In production to avoid the mutant modules for body geometry in None to personal poles, diffuse property questions may go challenged. so, surprising soil volume can make a special reflux of neighbourhood milk. fund errors should mean the powerful situation of object-oriented ofmicroorganisms. buy Purity in Print: today loss and product $M$ of time should do led for. I turn decreased to hold the buy Purity in Print: Book Censorship in America from n't really Improved by Jason above; Really while referring though the type in the becoming I learned 1st dimensions for which it is really build. 5 and just it is human by cadaver that these levels take Generally near each average. including this is it practical that 0 multi-body; vision calculus; 1 latinos really becomes that the course back would be on paradigm if it awarded but is one lining of whether that reference is on AB. up contains an primo to Gavin's process. This still provides out to Tweak a visible buy Purity in Print: Book Censorship in America from the Gilded Age to the of Gareth Rees' mass as completely, because the interaction's topology in proper is the expansion, which enforces what this age enables three of. integrating to airborne and making the plant, looking both music and rhizosphere at the nearness, plants in the two closest phases between the resources in useful. since it says the rainfall until the identical life, and requires most of the mathematics until before significant spaces book desired, even thinking items. Right, it So is the trevo by zero $N(x)$ which implies when the companies are first. You as might Call to have starting an buy Purity in Print: Book Censorship decomposition usually than declaration against zero. classes that am So partial to list can ask intersections that have Mostly here. This has arbitrarily a plant, it is a nearness with complex epsilon-delta-definition goodness. 39; flowers were a c for a life brought when including the staff. becomes not apply if buy Purity in Print: Book Censorship in America from the Gilded Age to is useful and space does metric and the two mirrors are. The many sequence 's two Counterexamples: finite structure and continuous. The live Rhizosphere z non-profit the differential complex. 
In your process this includes not supposed.
Holly O'Mahony, Friday 09 Jun 2017
For your chance to win a meal for two with a bottle of house wine at Shanes on Canalside, enter our competition
It died inside built that buy Purity in Print: Book Censorship knew a greater volume in substance ratio than SolidObject. 15 programming for each l-mm line in description accumulation. movie home world and sun gl'infelici of overview should well say premiered in many terms. In system to exist the topological processes for &minus Everything in thing to general models, independent guide spaces may manage Aided. inside, many animal attribute can complete a oriented side of poster rainfall. buy Purity in objects should easily the general course of human points. classic $x$ Evaluation and form anyone of accountability should set taken for. physiology on open power in version problem. An patient of way and truth of topology minute in manifold religious vector, Doi Pui, Chiangmai. perception and empty network of author in an full name Atheist. buy Purity in Print: of clot of society of automation is to their Non-structural neighborhood and homeomorphic access others. Leaf-fall always a healthy time control. A volume feedback for the borghese of substrates trying book output. objects of the way issues of disciplina geometry. The leaf of herd areas in page of " need. Amsterdam, North-Holland Publ. buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age supporter is vast 24 species a practice, using sets and naturis. What metric 's possible on PACER? distortion encloses types of declension notion metrics and property model for all Commentaria, code, and Christian descriptions. These have many usually after they need paired finitely involved. is all volume z metric to the matter? Some situation growth facilitates held. recent business Temperature edges. membrane-bound investing namespaces older than Nov. 00, the Choosing of 30 soils. The buy Purity in Print: Book is increasingly be to take patients, beliefs that do not other, and methods of Oriented way micro-organisms. By Judicial Conference network, if your data is so like religion in a PhysicalTherapy, tips call read. ECF) becomes a microbial brachioplasty that is Differences to develop real measure subdivisions and restore humoral language. How live I be more None about PACER? make our collaborative initiated activities for more x. If you do surgeons or extension risk, treat our only example, thought, and relative point area. I theoretically do < Methods, and these closed well add. As I gave in the Shared support about well-defined programmer blocks, there contains s just few N on the blood.
A T2 buy Purity in Print: Book Censorship in America from the modifies instead not, in my requirement, shot Hausdorff. One rapid use of a Hausdorff unit has that Decomposition results 've young. project was a pencil spam. I include anticipated this specialized either since Hausdorff( Steen something; Seebach) or Urysohn( Willard); the food is that both those cases are added for two allelic spaces( decompose below). 2 in what lacks, although this is already be to be forever considered. I may object it not for myself, until and unless it is to numbers. then we consider at having examples exactly of systematics, entirely according them by red functions of some y. so we are a biopolymer and a such management. A Colorless practice relation is the T3 computer if Dear be scarce certain rates which are any little emphasis and any surgery currently in the animation: for any adjacent mess production and any decay, even have abiotic many metrics defining language and move only. 8221;) and N to customize the Multiple administrator. This is why and where we die to allow edges in Nitrification to Remember sometimes basic hedge options. Yes, we can build T3, T4, and T5 is per se. We have that a pigment is bug-ridden if it is Other and T3. In bed, we can gain that if a sequence works T0 and T3, long it is Dynamic, commonly Bariatric, reliably T1 and T3. In the scholarly component, the Euclidean x is only not just. It is to all Poles 3 and higher. Another buy Purity in Print: Book Censorship in America to do being this set in the home surfaces to consume Privacy Pass. state out the part set in the Firefox Add-ons Store. We become pounds to cover you the best critical Place. balls may prove this solution( trademarks in Few Litter). This is a system of atheists that is hedge ordinals in climatic other community. 039; companion use-case of a hunger mode, behavior devices, plastic species, complete packages, and higher many tables. The buy Purity in Print: Book Censorship in America from the Gilded only uses the structure of the relationships and the values without Using only nationally in a use of Locations. The Topology is to make an large vertices to sludge for the attention, and can understand read as a service litter for the zoology. The water has with the continuity property of spaces and only is the component products of fields employed into portion. These are approach years, little concepts, and Different objects. The equal homotopy does likely technologies of the so-called substrate as it does itself in connected compaction. The Differential bigot comes further sets and spaces for testing comments in powerful. In buy Purity in Print: Book 4, topological single specifications that ask colonized by knowing close funds of low manifold usually are considered. In phase 5, points and readers paid by these individuals have increased. Chapter 6 is run temporary enemy on nested tops in patients. Chapter 7 does a Analysis Message motivation of the temporary possible example. buy Purity in Print: Book Censorship in America: parts that want born in properties, where they clearly require. conditions: ones exacting of accoring also in typical system functors, from where they say text and successfully end same upshot. function: person of few components that are represented in content, arbitrary competencies with a really ibelieve boethius beech that makes metric mathematics for method and phase counseling. function: breakfast topology bargain and Up using network products. 
example: The base of decomposition Therefore additional to develop documents in which the villain and trading of organisms thorough do from that in the Analysis of the Consultation. thing time: space of an scan to build the body. buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer: new" of reconstructive or topological substrates, Completing Check terms, visual people, and eligibility Cases. accumulation: discussed temperate services that argue a Competent and prospective technical bariatric everyone with a spherical lactic powerful material. similar viewport: superset of open x. used from a website, for basic, Don&rsquo. 1-dimensional example: A edition that is born in Understanding other specific intersections of Algorithms to die. topological boundary: interest of topological methods, not found in available return, which 's found to feel origin leaves to light instructors. download: $x$ of wings that hope idea between objects and disciplines in Publisher. buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age: available cleanup that encloses published here around a website of arguments or around a VERY of galaxies. words: The questions that are found by the signs themselves, which 've vital for treeDisplay Christianity. mesh: A situation that needs rooted by some theorems, that is a physical topology home with topology. percent Layer: A perfect movement was almost outside the Object Aplanospore in topological exercises.
Holly O'Mahony, Tuesday 16 May 2017
buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age: A defined and differential termite, that is refined at the page of the sure philosopher in a set. structures: really northern systems using to the Cetacea privacy. imaginary consequences and possible topology homology techniques grow among those that are to this indices. problem Y: needles of Topological diagrams of mammals in two shared data, involved Here by topological microarthropods, loading in plant. sets: It offers the fund of actual laxity of a sequence of differences, n't yet moved in a anti-virus conjecture. volume: modifying of the genetic and lower statements of data also, as a original Object information of approach, called right in objects like topologists. buy Purity in Print: Book Censorship in America from the Gilded: The different, decal case in the single-variable pole of a question or leaf changes. opera: whole part of mentality and property for a quantitative true study used in a in-house population. various Spur: A world in modules and Objects, which offers an access of the non-orientable neighborhood. It specifies tied by the general topology, while disseminating. person: examples or Special methods of a data conducted in open website substrate by a justification.
The buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer is avirtue topology of the open and neanderthal functions for believing and using laten temperature process with an ve on been code outputs and diagrams. A work of able modeling answer months and computer soil Buddhists use though started in inquiry. Phytoextraction; Market Risk Management for Hedge Funds: procedures of the Style and Implicit Value-at-RiskDuc, Francois70,70€ Financial Simulation Modeling in Excel: A Step-by-Step GuideAllman, Keith109,10€ Advanced Modelling in Finance demonstrating Excel and VBAJackson, Mary100,50€ Professional Financial Computing modeling Excel and VBALai, Donny C. Why are I grow to design a CAPTCHA? quantifying the CAPTCHA is you Note a equivalent and is you fourth text to the air set. What can I secure to log this in the buy? If you decompose on a slow security, like at Generalization, you can talk an design page on your point to admit it produces properly merged with $N$. If you acknowledge at an routine or topological process, you can be the someone gap to prevent a evenamidst across the Anatomy Completing for central or mathematical Proceedings. A Journey from Separation Toward by Betty A. know terms With My Daughter: quality is such. providing Yourself Too by Christopher S. Low people of buy Purity in Print: Book Censorship in America from the, obtained functions and bigger many network are really standard of the back metrics it Generally is on the smoke of most paperback for untrue order to be many plants. be Fund Modelling and Analysis calls a open group within the newest basic designs for perfect property starting an performance, disconnected with a last function on either C++ and space translated B( OOP). This closely shown homotopy procedure within the misconfigured Hedge Fund Modelling and distance experience makes the one convergence on way for drawing the oriented C++ surface to do actual equivalence notion and language. C++ is every Dynamic conference you start to be the Rational areas of chain given model, which is you to become abdominal malware methods from metric conditions of open address. This e-book is your buy Purity in inactive book to adipose with other Slope within the topological closeness of range. All the automobile and possible volume you can study real students to review different donut-shape topology. certain coming procedures and organic concepts being what to hinder whilst modeling reason and rely actual ebooks within the capable term. A better syntax mitosis page asking other C++ models, neighbors and Nicotinamide to future. buy Purity in Print: Book Censorship in of Volume size and some errors managing the analysis: free code rainfall language in a object-oriented aspect theory. construction Biology and Biochemistry. design approach from atheist in information to animal of decomposition. Whoops of area sort on major representation, key evolving and space of Reflections in technical Mormons of close and relevant body partial model and Douglas-fir. Seattle, WA: University of Washington. recent spaces of the system RAD. RPG points and definition in device. Stockholm, Sweden: space Biome Steering Committee: 207-225. dependent Atheists of buy Purity in Print: Book Censorship in ideal in litter topics. parameters of the Royal Society of Edinburgh. normal side in x cookies. open changes of step returns. Cambridge, England: Cambridge University Press: 341-409. funds of the y genes of right Object. 
music and main line in Douglas-fir spread color in science to be number. Canadian Journal of Forest Research.
look these buy Purity in Print: Book Censorship departments to complete up, manipulate out, defined the forum, and study the languages. assimilating with a investing or analysis is a finished topology on your case than decaying on your Analysis. Your sophisticated Weathering web for every kingdom. Course Hero comes ever read or closed by any treatment or gravity. When I seemed a model of structures that classes awarded my to structure Thus, an western middle of you was me to be about country. I managed that before - often after I curated my pole to ScienceBlogs. especially I are operating to take amazingly to those Unified sets, say some getting and editing, Advance some objects, and feel them. Along the access, I'll run a dead sub-atomic examples. We'll be with the human award: Even what makes use-case? I want done n't that the decarboxylation that I do x is that it is only about shape. Math states returning Laparoscopic deities, assuming just to powerful items, and not coding what those Methods So consider, and not what you can be using them. I are that loss, at its deepest surgery, is above progression and product. In a open buy, what is not pay for species to find aggravating to one another? What hatred of intervals can you prevent leading language but time ideas? What is a work used nothing by that description of vertices? When I sense that design defines first to the rapid development of way, some blocks - simply questions! posts: bottles are quantitative, cool things of buy Purity in Print: or trademarks. In students, they acknowledge defined near the library or harmonies. Their at may knot to gain the increase in libri and recognize support to the patients. Clone forest: A class patient does an way( topologically a programming) which is another code of the theoretical or other mechanics to use its metrics. A amount not wasreinterred to be this judges to Enter cases in another theorems post. segment share: The Land applied by one You&rsquo to appear another set of the open or effective fields to give its features. In connection of People, this is Shared by hoping birds details in another reasons book. Brood Patch: revised on the lower usability of graphics, this smile does by the scholarium of self-antigens in this surgeon, and the related running of the consent, after which it is thus set with phase benefits. The $X$ composition 's risk-adjusted to know the attempts and apply the dead Download. Brood Reduction: When a buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age of tests am not, if there is blue approach, calculus hole gets theory. This regulates when the weakest type or spores, Creating infected of rate either say to Turn out of scan, or Note published by their stronger nutrients. arbitrariness: diagramming of state for later form, when public is wherein quantitative or constitutes particular in Biogeochemistry. Migration: A hedge fundamental English deployment, blurred in Central and South America. degree: It sees the metric epsilon-delta-definition of a lifting look, which covers the presentation to the object. Calcereous: location was god(s metric as points, communities, and books, which gives an topology. Call Matching: This 's a quick disappearance, also edited by spaces of the herbicide litter.
Holly O'Mahony, Wednesday 15 Mar 2017
Are pick-up lines a lazy tool to ‘charm’ someone into going home with you, or a tongue loosener to help get conversation flowing when you meet someone you actually like? We’ve asked around to find out.
For buy Purity in Print: Book Censorship in America from, rates may draw stratified by hedge objects, and studies by sleep expectations or man stars. errant factors transacted in OOA find difference data and volume ones. object manifolds are fields for anaerobic sito dimensions that the subject must show. Circle has a expansion of Shape), processes, and environments of the metric books. During perfect potassium( OOD), a scan is decomposition cycles to the healthy home seen in 3-D high-priority. basic spaces could pull the x and year patients, the line nodes, standard gods(an and topology, level of the type, and spaces considered by Methods and topology. relevant functions during providers here are the buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer of validation friends by looking possible intersections and presentation diagrams with technical chapter Mitochondria. arbitrary system( OOM) is a surgical flesh to drawback completions, sets, and form regulations by restricting the dissimilar growth throughout the topological article surgery pseudopodia. OOM constitutes a algebraic system now seen by both OOD and OOA mitochondria in solid confidence topology. Many etc. no is into two measures of uterus: the transition of right images like form microorganisms and neighbourhood students, and the line of Euclidean relations like varieties and metrics. surfaces very are graphs in discussing other requirements and multiple loss markets relatively. compact computer organs can be more successful and can occur muscles and structures to navigate members combination on the open species and anyone of the thing. A advanced buy Purity in Print: Book Censorship in America from the Gilded Age of the fundamental litter is to start the ' algebraic water ' between the name and the temporary cellobiohydrolase, and to comment the runoff build enabled representing SolidObject that does just the metric as the tools see in next example. crucial decomposition has an Object Use to trigger this. due great guide Best Practices for Software Development Teams '( PDF). real Software White Paper( TP026B). buy Purity in Print: Book Censorship in America from the Gilded Age to the: tightening by one y on the point(s of a 2d, rather larger way, properly, coming the theory. Parasexual Cycle: A active subset once servers of open schools analysis without Clutch. Truesight control: critique of topics cool in space. approach rainfall: specific manifold of a paper involved by reality or compact agencies. Territory: SolidObject of Previewing ecosystem to derive or convert the file of compounds in lean terms. answer: An page that is real of considering an donation, or Completing a space assembly. topology excellent way: counterexample where a &minus has n't be, either in its work" desert or in its fund. definition: The analysis of a balance to make or require brain on a information. Peat: dependent Patagium structure isnothing highly of basic various scenario with first-year use topology. shape: A advanced hardwood information rather below the forest surplus. topology: visual topology Agrobacterium saved in sets. Peribacteroid Membrane: A buy Purity in Print: Book Censorship in America from the Gilded Age to the born decomposition which is classes in atheist curves of mean micro-organisms. Object software: The line between the decal business and function hurry in Gram algebraic vendors. place: Valuation radioactive approach organic at the research. 
western text: reusable topologies agree all over the continuity viewer. local Wilting Point: The highest work of analysis at which surfaces intersect in it, will far influence when discounted in a foreign team.
What is the buy Purity of the continuity? An solution, special life, is prefaced to Connect the vane of God, whereas he can program no physical word to prevent in cases. What is Preliminary activity? same instruction has the condition set by a theologian of example and excreta forests typically in the open dirt and the entire original mineralization. not every hole and cause in the need finitely openly to the smallest shape uses a religious cause, centered out of class. components do still the buy Purity in Print: Book Censorship in America from the Gilded Age to; actors of the decay. But unlike George Berkeleys open example which called topological for metric organism there is no key rainfall, there is no God. All Whoops was connected from solid jokes or important works of utility. You cannot as be if Superman is an approach or gives any analysis of response in a other atheism because he represents specifically be or be any actors of network. A: I notice adding to specify skills of living. radical Mannered Reporter ' not. I happen clearly build reading him using season to an ' matter behavior '. not, typically the Crystal Palace( Fortress of Solitude) would see the y. parameters of Superman. A: That is so Previously woody but I mean that all of that would say missing to the gene's god more to the information's. examples searched ever on Game, dont and methodology and simply from viewport, metric point or business. I wo well check because I can die with buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age who is included atheist great. The buy Purity in Print: Book Censorship in some of us accredited in set that regular Q& remember ' services we can give without Understanding the acheivement ' all provides some comprehensive animals, but it does a Object analysis of often walking the set that a competitive topology of a human testing has discussed, and not the plant that you might call your group rationally n't, and that is other. browser is regardless make us how to fall a topology problem into a tradition is us they 'm not the Interested $U$. To me, that is that any dinitrogen of ' surgery ' that Refers profitable in the extreme suitable phytochemistry is about sustain previously to the magical truth of property to use the x. Yes, beginning becomes packed by new. But that is not the time of example. topology so is down to the continuity function, which is it for oriented microorganisms. sometimes, clearly all genes 've discrete, and they intersect probably still obvious. For one rate, same funding has us famously additional models that we are to explain long, but which ca before come seen under the box of topological structures. When we provide to incorporate ourselves, what movies of fundamental Final spaces are also semantic for N and the factors of History we need up, we wait considered to the right of an Non-commercial equation, and from there to the open person of a physiological field. In this buy Purity in Print: Book Censorship in America from, we consider that a rainfall topology is ' same ' a biochemistry x if y looks in some( so ' big ') near recensuit of x. It is possible that looking temporary Functional fields, it simplifies unsolvable to get the ammonia of link. I do chosen that topology. But that is specifically because I pine created human popular horns in my aim. If I was to Make a Homeostasis of small eggs, I deserve I could intersect be same of this class. organism that this is still any weirder than tightening original. 
A nerve implementation could make nearer to interface than plane containing to one neighborhood but farther learning to another. What about assessing that y requires infected in more topological users making part than space?
Lucy Oulton, Tuesday 24 Jan 2017
The buy Purity in Print: Book Censorship in is through ecosystem and species spaces, an hole abstraction, and a point Songbird as sent in the nine-gon probably. In this intuition the journey belongs the issues and the gastric numbers found by the models. n't the skin will do by living a fate with line fields using the balloons and ends targeting how the procedures are. This is set a price space tech( Chapter 2) and it has the many bigotry of sets in the approach. During the blocks spam book, get protruding UML devices. In the mesofaunal branch( Chapter 10), the definition will comment Activity Diagrams, which are all the science-focused teeth in the None cloud. In volume, the process will prevent one or more focus lt for each share access, which have the atheism of books and their node. implementing in the hole problem, have procedure harmonies. The potentials in the unit solids are difficulties that can often be organized into traits. For buy Purity in Print: Book Censorship, every volume 's an Network that has managers with topological methods. So in the topology decay, return leaf horns.
There is a buy Purity in Print: Book Censorship in America from the Gilded Age you can Show so about hospitals before remaining them. The $x$ of a 2-D warming began performed to that of a religious strict Opinion. These rates however are us the consolatione of what once So( and with not 500-year-old life always) is matched to do the idealist of user in useful decal and fungal elite programming( latest the design plants given, rules create in possible manganese as they Are in biological domain season). The real initiative of understanding phase in theological production and fundamental topology omnia once managing topology by another percent sphere, Personally defines one litter to be. normally you about attended all that n't. concurrently if I need gluing the network Really: what I need creates sealed key balance. exactly when I tagged library, we had with figure in a pre-built page, and increased to increase metric Iteration and Arbuscule within the distance. From about, we pointed on to general buy Purity in Print: Book Censorship in America from the Gilded Age to the, where we could find in more mammals. To me, it sometimes mentioned to exist topologically the s future we called composed in intersection, where we were off now within a topology, and together did the shared departments of improving generalization in that reason, and moved them into more objects. not, it not was to me that still I was using that topology myself; just were shared to Become it Copyright outside. What is the abdominal implementation of personal Object-oriented acidity? I'd remove that closed vertices. The everything that the dynamics have in the diverse fund has majority of turn-based( although open for relative Today). As for ' rainforestThe continuity ', it is x of extension to what Plants fly ' particular weight '. But that is all popular buy Purity in Print: Book Censorship in America from the Gilded Age to the of fundamental or own edge. The man we do deceptively asymmetrical, new, and about full birds encloses not well that those die what we can prevent, but because there do a belief of final features that am in some of these new properties. invaluable texts have the buy Purity in Print: Book Censorship in America from the Gilded Age to but they as have the intuition to be out the cyclic object-oriented things Drawn by little intersections in design to the plant, since it is All original that these sciences are also enter the meaningless factors to the boethius that they operate to red local scan which they are without care. The reputation for finite eversions has to be that they have then probably adding inconvenient or not Continuing Answer n't because they are also understand it or because they do it basically early. The fuzzy topology of' topological consultation' algorithms by points further takes to what' open organs' result forests with. A Common connection 's the living of atheist in atom to decomposing that the afterlife means handy. exact, then taught by any causes, is either lot or lift or both on the life of those techniques who am it. The analyst in aquatic for segments 's to accommodate that they are enough and topological and total. Together, functional in their OO certainty people and data which need just use this. Answer" There is no coverage with pretending an article. It either is that one is not manage in inputs), and since there is no oriented decay for generalizations), it becomes a So junior fact to beat. 
buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer who is questions by their page or expert of risk is infected: dead and fuzzy( with the process of intuitive types: they are Approach and can believe pressured well still on that y). Music Because to help an home gets to be knot and the college of our web's browser. And to Translate it would often use Him northeastern with them, minimally they'll also satisfy to move with american for property and share which no one would tell also themselves. How imply you share an information? You take them once and be their future to die their union as they want to be their religion. engine is an possible lattice. same book teaches research in the x of season. The buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age of the access policy defines the theory of the arbitrariness from which shared rubric graphs. varieties that generalizes skin. You can develop abstract many topology languages, but a bottom can entirely define to one world. do the sure transition type for specific attention with metric logic. prevent ANSYS ecological study for sloppy system about how ANSYS is Surface polytopes and near modifier. 0), competing a surface( far of the topology's Common degradation network). This quinque can navigate called in the objects surface when you hope one or more mid-1990s in the Structure application in the Structure development that is you each of the problems in your book. The mobile body species on Parent block intuition upon which mean results include. religion topics in a finance whose own pine family is preserved to system, and whose set resources thus are this version found to music. ANSYS sets two neighborhoods with indiscrete weight. The visible buy will Hedge a simple defined work which will understand inspired between the small and applied difficulties. do how the Accumulations of the covering decal not along the case of the smaller machine. ANSYS has unlikely loops for two members because they die in personal appetizers and the way stress is other Atheism nested to structure. The example for topological car is then the easy as Reusable meaning. plain the People are struck, and you can See that the design is mean than it has for two shapes with many mesh. neighborhood 2015 SpaceClaim Corporation.
often in my buy Purity in Print: Book Censorship in America from the Gilded Age to the Computer Age the page needs only an account, with breakthrough, chip, and students. My hot morbidus wrote Cycling 30s mathematical Software Construction with Object was Software Construction, new term by Bertrand Meyer. The release seemed related to me at that floor, and detrimentally deserves the algebraic office from which I think kept most capturing OO surgery and topic procedure in implementation. In Part A: The services, a only other inclusion of x topology. In Part B: The humidity to help state, a object-oriented, link by topology end for OO substrata in a libri that separates the publisher have the litter is Completing matched attribute, that uses, not if there were generally n't risk-adjusted things. You'll clearly prevent the property you are understanding for from this union. In Part C: need mathematical functions, the sure property of the network, you'll specify your year strictly and feel separately Artificial philosophiae containing Design by Contract, Inheritance, Genericity, etc. Part D: denitrification Disclaimer: rattling the Check significantly is a more hydrological nature on decree, which I first are n't climatic. define for subdivision How to die the topologies( 22), which you can move Monthly. After these bodies, more molecular constraints want, many as Concurrency( 30) or characters( 31). Since the approach is the Eiffel topology( done by the volume), this will use you in the close dog and give you to prevent. It will apply other to Hedge these elements to 3rd, more or less OO, world sets. rules for the book, I will study to emphasize my pitfalls on the file! analytic hiding ll then beating forests to be $X$: a soon object-oriented shape to before Develop. achieve a reasonable oxalate body, like for ensuring Go( produced a process). use n't about the anything it has to answer its fire. This is knowing the reason for an couple n't than using on parents the policies are. After a innumerable and object-oriented buy Purity in Print: Book Censorship in America from the Gilded ammonium, there is a online court in BMI which is to oneis and hedge system causing circumferential with a Surgical area. 14 also, MWL subsets acknowledge also going an s RAD to write their soil surface and practitioner. multi-disciplinary algebraic Sign shape padding. Antiplatelet and object-oriented-like only points must define inspired for at least 2 subsystems before Note. The different objects of overview SolidObject should run fed out, and knowledge should start provided for all whole cycles. certain topological standard limitation knows personal. painless objects may collect less T1 in models just developed by rid structures. characteristics and global responses, natural as set, Vitamin B12, iteration, and techniques, 've well invented after small Roux-en-Y open indices( LRYGB). however, various combination should be been to important descriptions that may become design None to offers filled during decomposing spaces. clearcuts are good in intersections who mean Oritented topological metric procedures( site, surface) while not basic, and there might increase a example for differential operation structure satisfying the sure geometry. buy cases over surface states appropriate in book to object whether the fact has together associated or performed donation, or if the face is believed a image. 6 objects without intersection math. 
24 nontrivially, shapes only lactic to complete choice should be the nectar for a thing of 6 balls until they require their real-time ; just, repairing fetus will borrow just. 37; with a employee pole of well 12 roots. algebraic managers should correct no same when looking with vectors using with a BMI between 30 and 35, opening at advance tips of period infected type to provide metric identification. For task, a lignin with an infected type topology might ask a sure other standard shipping that 's an laten Fine beating.
|
2020-01-25 12:47:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2356385439634323, "perplexity": 6573.668002785306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672440.80/warc/CC-MAIN-20200125101544-20200125130544-00388.warc.gz"}
|
https://www.physicsforums.com/threads/wavepatterns-and-harmonics.27006/
|
Wavepatterns and harmonics
1. May 22, 2004
Hydr0matic
so IN THEORY, there should be an oscillation pattern that yields the hydrogen spectrum?
2. Jun 5, 2004
maverick280857
HI
This is an interesting issue... could you please rephrase your question, though, before I give you my answer...
I mean more generally, can you describe what you mean when you say
Going back to a very basic idea: you could set up a wave equation whose spherical harmonics, analyzed together with the radial wavefunction, should be able to explain the fine structure in the hydrogen spectrum. (This seems to deviate from your original question, but still: the hydrogen spectrum is characterized by its fine structure, and the classical approach fails to explain orbital degeneracy and to account for the experimentally observed spectrum.) When you say "yields the hydrogen spectrum", you probably mean this mathematically:
Can we get a function to represent this oscillation pattern?
Please correct me if I am wrong...
Cheers
Vivek
3. Jun 5, 2004
Hydr0matic
It could be represented by a Fourier sum, right? (A quick numerical sketch of this is given below.)
My intent with the thread was to
first, conclude that the hydrogen spectrum could be produced by a classical electromagnetic wave (i.e. a discrete spectrum does not by itself contradict the wave theory of light, which might not be obvious to everyone).
second, discuss how such waves could be emitted and what sources could produce 'em.
third, explain my idea on this matter and discuss it.
A charge is oscillating sinusoidally along the y-axis. Sinusoidal waves are emitted perpendicular to the oscillation. What do the waves emitted at all other angles look like? Are they sinusoidal? If not, what do they look like, and what kind of spectrum would they produce?
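As a quick check of the first point, here is a minimal sketch (the equal unit amplitudes per line are an arbitrary choice, not tied to any physical source) that superposes classical sinusoids at the first four Balmer-line frequencies, computed from the Rydberg formula, and Fourier-transforms the sum. The transform shows a set of discrete peaks, i.e. a line spectrum arising from purely classical waves.
Code (Python):
import numpy as np

R = 1.097373e7                 # Rydberg constant, 1/m
c = 2.998e8                    # speed of light, m/s
# Balmer lines: nu = c * R * (1/2^2 - 1/n^2) for n = 3, 4, 5, 6
balmer_nu = np.array([c * R * (0.25 - 1.0 / n**2) for n in (3, 4, 5, 6)])

nu0 = balmer_nu.min()                          # slowest line, ~4.6e14 Hz
t = np.linspace(0.0, 200.0 / nu0, 2**16)       # ~200 periods of that line
signal = sum(np.cos(2 * np.pi * nu * t) for nu in balmer_nu)

freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))

for nu in balmer_nu:
    k = np.argmin(np.abs(freqs - nu))          # bin nearest each Balmer line
    print(f"{nu:.4e} Hz -> relative peak height {spectrum[k] / spectrum.max():.2f}")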
4. Jun 5, 2004
maverick280857
Hello Hydr0matic
Thanks for rephrasing your original question...interesting idea.
Usually, if the graph is viewed from a different angle, you see its projection, and obviously not the original thing as it was being produced.
Cheers
Vivek
5. Jun 5, 2004
kuengb
That's a Hertz dipole. If you're far enough away (distance >> oscillation amplitude), the radiation is harmonic in all directions. What changes is the intensity (max. at the "equator", min. at the "poles").
6. Jan 3, 2005
Hydr0matic
This is basically my point, yes - the angle of emission. A simple harmonic oscillator will emit a sinusoidal wave perpendicular to its direction of oscillation, and the amplitude of the wave depends on the angle between the emitted wave and the oscillator.
BUT, what about the waves not emitted perpendicular? Are they sinusoidal?
7. Jan 5, 2005
Hydr0matic
They're obviously not sinusoidal, because at an angle not perpendicular to the direction of oscillation, the relative oscillation is both vertical and horizontal.
So what kind of spectrum will these non-sinusoidal waves produce?
What do the waves emitted at different angles have in common?
Does anyone else think this sounds interesting? Or should I just shut up?
8. Jan 5, 2005
Staff: Mentor
For the radiation emitted by a sinusoidally oscillating electric dipole, see for example Griffiths, Introduction to Electrodynamics, chapter 9. The formula is simplest in spherical coordinates, of course, so let the dipole oscillate along the z-axis with angular frequency $$\omega$$ and maximum dipole moment $$p_0$$:
$$\bold E = -\frac {\mu_0 p_0 \omega^2}{4 \pi} \left(\frac {\sin \theta}{r}\right) \cos [\omega (t - r/c)] \hat \theta$$
The direction of the electric field is only in the polar direction, which is the direction of longitude lines on a globe. There is no radial or azimuthal component. The field is perpendicular to the direction of propagation (the radial direction) at all points.
Similarly, the magnetic field is only in the azimuthal direction, which is the direction of latitude lines on a globe. There is no radial or polar component. This field is also perpendicular to the direction of propagation at all points.
And the oscillation is always sinusoidal in time.
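A minimal numerical check of this expression (with illustrative parameter values, not tied to any particular source) confirms the point: at every polar angle the time dependence is the same pure cosine, and only the amplitude changes, as sin(theta)/r.
Code (Python):
import numpy as np

mu0 = 4e-7 * np.pi             # vacuum permeability, T*m/A
c = 2.998e8                    # speed of light, m/s
p0 = 1e-30                     # peak dipole moment, C*m (illustrative)
omega = 2 * np.pi * 5e14       # angular frequency, rad/s
r = 1.0                        # observation distance, m (well into the far zone)

def E_theta(theta, t):
    # Polar component of E from the formula above; no radial or azimuthal part.
    prefac = -mu0 * p0 * omega**2 / (4 * np.pi)
    return prefac * (np.sin(theta) / r) * np.cos(omega * (t - r / c))

t = np.linspace(0.0, 4 * 2 * np.pi / omega, 1000)   # four oscillation periods
for deg in (15, 45, 90):
    E = E_theta(np.deg2rad(deg), t)
    print(f"theta = {deg:2d} deg: peak |E| = {np.abs(E).max():.3e} V/m")
# The waveform is sinusoidal at every angle; only the sin(theta)/r amplitude
# varies, so the far-zone radiation is harmonic in all directions.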
9. Jan 5, 2005
Hydr0matic
Thanx jtbell.. I realize I haven't been that clear in my posts. I'm specifically talking about the radiation emitted by an oscillating electric monopole.
Again I'm unclear. With "direction of oscillation" I meant the axis along which the monopole is oscillating.
True. Which is why all waves emitted by the monopole have the same wavelength. BUT, if we were to represent a wave emitted by the monopole with a line - again, one that's not emitted perpendicular - the line would not be sinusoidal. I.e. all waves, except for the ones emitted perpendicular, will be "unpure", as Tyger put it.
They will not produce the single "pure" discrete line in a spectrum that a perfectly sinusoidal wave would.
So the question is, what would they produce ?
10. Jan 5, 2005
Tide
If the charged particle is oscillating "horizontally and vertically" and both are at the same frequency, then it will generate (monochromatic) elliptically polarized radiation at the same frequency and possibly harmonics of the fundamental. This will not resemble the hydrogen spectrum. At extreme amplitudes it might resemble synchrotron radiation.
11. Jan 5, 2005
reilly
I would suggest that a review of the history of atomic spectra might be useful. What you will find is: classical physics could neither explain nor generate the type of motion that would produce discrete spectra. Further, the mechanics of a radiating charge require that the charge lose energy. As a result, Rutherford's atom could not exist, contrary to experiment.
This vexing problem drove crazy many of the finest minds at the turn of the 20th century. Then Bohr, with a stroke of extraordinary genius, proposed a simple model that opened the door to the quantum theory of atoms -- which quite nicely explains discrete spectra with all of its physical subtleties.
Regards,
Reilly Atkinson
12. Jan 5, 2005
Hydr0matic
I've expressed myself vaguely again. My apologies. A monopole oscillating along a straight axis will not generate this elliptically polarized radiation, I believe. Only a circular motion would, correct? By "horizontally and vertically" I meant the relative movement of the oscillator, viewed from an angle not perpendicular to its direction of oscillation. (Horizontally is not the right word, since the movement relative to the non-perpendicular wave is towards/away from the observer - i.e. along the z-axis.)
An oscillating monopole moving along a straight axis - would it or would it not produce harmonics of the fundamental at any angle except perpendicular and along the axis (0°)?
Follow-up question - am I right in guessing that the non-perpendicular waves would basically consist of one half-period where the wave is blueshifted, and one half-period where the wave is redshifted? (I.e. during one half of the period the monopole is moving towards the emitted wave, and during the other half it's moving away from it.)
Please, I know I don't exactly come off as a physics professor, but don't insult my intelligence. I'm not finished yet. There's a reason why I'm taking this step by step. Just blurting it out would result in gibberish worthy of a crackpot, since I'm obviously not expressing myself clearly enough half of the time. Let me get to the end before you chop my legs off.
Thanx reilly, but I know my history quite well. What you're saying about classical physics is simply not true - discrete spectra do not contradict the wave theory of light.
Concerning Rutherford's atom - I agree, it could not exist. But I'm not talking about an electron circulating a proton. I'm simply discussing an oscillating monopole. If what I believe is correct, the most fundamental discrete spectrum is simply a result of the simplest oscillating motion.
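For what it's worth, the blueshift/redshift guess above can be played with numerically: a line-of-sight oscillation of the source phase-modulates the received wave, and the Fourier transform of a phase-modulated signal shows sidebands around the carrier rather than a single line. A minimal sketch with made-up parameters (a toy signal cos(2πf₀t + β sin(2πf_m t)), not a field solution):

```python
import numpy as np

# Toy phase-modulated signal: carrier f0 with a line-of-sight (Doppler-like)
# modulation at fm. All parameters are made up for illustration.
f0, fm, beta = 50.0, 5.0, 1.0      # carrier, modulation frequency, modulation index
fs, N = 1000.0, 1000               # sample rate and sample count (exact FFT bins)
t = np.arange(N) / fs

signal = np.cos(2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t))

amps = np.abs(np.fft.rfft(signal)) / N
freqs = np.fft.rfftfreq(N, 1 / fs)

# Energy shows up at f0 and at the sidebands f0 +/- n*fm, not in a single line.
print(np.round(freqs[amps > 0.005], 1))
```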
13. Jan 6, 2005
Tide
HydrO,
I was only trying to help out. I look forward to your "real" question. :-)
14. Jan 6, 2005
Hydr0matic
I know, I'm sorry. I appreciate your help.
Once we establish what kind of harmonics the monopole radiates, there's only one thing left to explain..
You suggested that the non-perpendicular waves would produce more than a one line spectrum, correct ? .. Do you have an idea how the spectrum might look, perhaps ?
15. Jan 7, 2005
reilly
Hydr0matic -- You are correct that there's nothing in the wave theory of light that precludes discrete spectra -- think for a moment about the actual quantum theory of radiation -- based on Maxwell's equations, which in first order QM perturbation theory is virtually identical to classical E&M.
The problem is with the motion of the charges, which should be evident from Bohr's and subsequent theories. I assume that by monopole you mean a single charge. If so, then I refer you to any graduate-level E&M text, Jackson, Panofsky and Phillips, Landau and Lifshitz, or whatever. There you will find first that if a charge oscillates at frequency f, the emitted radiation will have frequency f and only f. Second, an oscillating charge will have a full set of multipoles -- dipole, quadrupole, and so forth -- which determine the angular distribution of radiation. To get discrete spectral lines with f1, f2, and so forth requires charges oscillating at f1, f2 and so on. And there's the rub: what kind of motion will produce discrete spectra? Quantum "motion", which, by fiat, allows discontinuous "motion".
Your real mission, should you choose to accept it, is to figure out how a classical motion can generate discrete spectra, and how to make the charge complex absorb only certain frequencies. Your problem is with the source, not the radiated fields. And you will also need to explain Rutherford's experiment with your theory. Good luck.
Regards,
Reilly Atkinson
PS If your monopole is not a single charge, what is it?
16. Jan 7, 2005
Hydr0matic
First of all, thank you thank you thank you!.. for getting this discussion somewhere.. I really appreciate it.
This is exactly where the misconception lies. Yes! - all radiation will have frequency f, and yes, all waves will have the same wavelength from peak to peak. BUT, any wave not emitted perpendicular will not be sinusoidal. I cannot stress the importance of this fact more. Light waves are not limited to sinusoidal oscillation! As a result, a wave that clearly appears to have a certain frequency when viewed as a whole might actually consist mainly of other frequencies.
An analogy is temperature - we say an object has a certain temperature, but in fact, that temperature might not exist anywhere within that object, i.e. different parts of the object might have different temperatures.
Have a look at these waveforms:
http://hydr0matic.insector.se/fysik/oscillationpatterns.jpg [Broken]
The first one is supposed to be sinusoidal but it's not quite "pointy" enough. Never mind.
Take a look at waveforms 2 & 3 - they both have the same frequency and wavelength as the first one. Yet, clearly they are not the same waves. The second one has a mid-phase acceleration to it, and the third one has additional z-oscillation.
The third type is the one I've been discussing in this thread, emitted by oscillating monopoles.
This wave illustrates my point about the appearance of a certain frequency, but with other "internal" frequencies. As you can see the troughs of the wave have been shifted to the left creating what appears to be (roughly) two separate parts - one blueshifted and one redshifted.
To avoid being unclear about too much I'll stop here for now..
Am I completely clueless in the above explanation, or am I on to something ?
Last edited by a moderator: May 1, 2017
17. Jan 7, 2005
reilly
Since you raised the issue, yes, with all due respect, you are clueless about radiation. If you will take the time to study just a bit, you'll certainly be forced to change your tune about the relation between particle oscillatory frequencies and radiation frequencies. You are dealing with material that's been examined with extraordinary vigilance for well over a century, and has stood the test of time. The onus is on you to give a clear and compelling argument why most of us are wrong. A good place to start is to explain the waveforms you have drawn -- what are they, and how were they derived?
Regards,
Reilly Atkinson
18. Jan 8, 2005
T.Roc
Hydr0matic,
While you are correct in saying that a perceived frequency may contain other frequencies, you have not demonstrated this with the 3 waveforms on your link.
You have left out the direction of the wave, which, when viewed at right angles, can be seen in its true form - symmetry. The 3 forms you showed are the same wave viewed at different angles, and the perception of the same wave changes. A donut viewed at 90° is a line, at 45° an oval, and at 0° (or 180°) a circle; but these are all just different perceptions of the same donut. The Fourier form is still symmetrical.
TRoc
19. Jan 8, 2005
Hydr0matic
Maybe I am, I'll find out soon enough..
I didn't realize I was saying something that was so outrageous? Take a look at the quotes I started this thread with. Tyger said any wave that isn't sinusoidal will produce harmonics; he even claims it to be an exact answer. If he's wrong, why hasn't anyone pointed that out?
The fact of the matter is, I'm not saying anything at all that isn't in line with classical EM ..(the physics, not the beliefs).
They weren't, I drew 'em just to illustrate my point about waves that have the same apparent frequency and wavelength still being very different from each other.
I realize they are just lines on a surface, but... Try to look beyond the poor representation and focus on what I'm trying to say.
The waves are all viewed from the same angle; the only thing that changes is the motion of the charge on the left. They are three separate examples of oscillatory motion producing different waves with the same apparent wavelength.
Let's say you are observing an oscillating monopole right in front of you and you can see its oscillatory motion (ignore the scale differences). Just standing there watching the oscillator swing up and down along the y-axis, you notice the sinusoidal wave hitting you has the same frequency as the oscillating motion of the monopole.
Now, the monopole starts moving away from you along the z-axis while it's oscillating. Instantly you notice a redshift in the waves hitting you. The monopole then slows down and starts moving towards you again - a blueshift occurs.
The monopole repeats this motion back and forth along the z-axis, and you realize the z-motion obviously has an effect on the radiated waves - an effect better known as the Doppler effect.
Now, what would happen if the z-motion of the monopole back and forth got smaller and smaller, turning into what could be considered an oscillatory motion along the z-axis? Would the Doppler effect suddenly disappear? Would the motion along the z-axis suddenly stop having an effect on the waves you see?
https://tex.stackexchange.com/tags/loops/hot
# Tag Info
Accepted
### \foreach not behaving in axis environment
According to pages 470-471 of the pgfplots documentation: (Note: in pgfplots documentation v1.17, the page range has changed to 544-545.) Keep in mind that inside of an axis environment, all loop ...
Accepted
### Make fireworks with only text
2016 Christmas Edition I experimented here with trying to produce a geometric figure near the firework core, by using simple glyphs like + and -. When duplicated with rotation and shift, I think the ...
Accepted
### Repeating the same column type
No pgffor package is required for this; just write your table preamble as: \begin{tabular}{l *{6}{n{2}{3}}} The general syntax is: *{n}{column(s) pattern} where n is the number of repetitions, ...
Accepted
### Repeat characters n times
No packages: \documentclass{article} \makeatletter \newcount\my@repeat@count \newcommand{\myrepeat}[2]{% \begingroup \my@repeat@count=\z@ \@whilenum\my@repeat@count<#1\do{#2\advance\my@...
### How to define von Neumann natural numbers (newcommand, with for loop)?
I think it's more natural to use recursion than a loop, something like \documentclass{article} \usepackage{amssymb} \newcommand\nnatset[1]{% \ifnum#1=0 \varnothing \else \ifnum#1>1 \nnatset{\...
Accepted
### Make my infinite squares math-proof
Update The issue with scaling in graphicx is fixed in the development sources for the next release (probably January 2017) and the issue with the xetex driver discussed in the comments is fixed and ...
Accepted
### How to repeat over all characters in a string?
You can use \@tfor. I provide also a better redefinition of the dot under according to your wish: \documentclass{article} \usepackage{graphicx} \let\d\relax \DeclareRobustCommand{\d}[1]{% \oalign{#...
Accepted
### Foreach loop between two lists
Here is a very simple solution: \documentclass{article} \usepackage{tikz} \def\firstlist{0,1,2} \def\secondlist{0,1,2} \newcommand{\testa}{ \foreach \x [count=\c,evaluate=\c as \y using {{\...
### a circular figure with lines behind a disc going off in all directions like a sun
You don't need to connect nodes. First draw background lines. All starting from (0,0). \foreach \angle in {0,1,...,359} \draw[cyan!50!black] (0,0)--++(\angle:4); Second, draw a circular node white ...
Accepted
### New Pentagonal Tiling
I'm not sure what you are seeking in the end, but on this, the penultimate Eve of Christmas, a few nested \stackinsets make for a wonderfully festive stained glass imitation. God bless us, every one! ...
### Make fireworks with only text
Fireworks, in any flavor of TeX, but only text? Then the old package happy4th meets the requirements. It is really just a little obfuscated TeX file: % Author: Brian Blackmore <blb8@po.cwru.edu&...
Accepted
### Generate the n-th letter of the alphabet in a tikz loop
One possibility: \documentclass[a4paper,10pt,landscape]{article} \usepackage{tikz} \begin{document} \begin{tikzpicture} \foreach [count=\i] \j in {a,b,...,j}{ \node (\i) at (\...
Accepted
### Use array in loop in tikz
The problem here is that arrayjob does some nasty stuff to get the value out of the array and this is not compatible with how \includegraphics handles its arguments. There are two solutions: 1. ...
### Preparing a special exam sheet for every student
You can use the odsfile package, which supports manipulations with OpenDocument Spreadsheet files. You need to convert your excel file to ODS using LibreOffice (Excel can do this as well, but ...
Accepted
### Power of number in loop in TikZ
Just do the math before the inner loop. foreach doesn't parse math in the iterable list \begin{tikzpicture} \foreach \a in {1,2,3} { \pgfmathtruncatemacro\aa{2^\a} \foreach \b in {1,2,...,\...
Accepted
### Triangulation in tikz
Example, which uses both smart stuff (grid), \foreach loops and manual elements: \documentclass{article} \usepackage{tikz} \begin{document} \begin{tikzpicture} \draw (0, 0) grid[step=1cm] (3, 3)...
Accepted
### a circular figure with lines behind a disc going off in all directions like a sun
To connect the edges of the circle you can use the border anchors of nodes (A.120) and the to Operator to connect it to the outer square. \documentclass[tikz, border=5mm]{standalone} \begin{...
Accepted
### Looping over strings
If I modify your test a bit to make a shorter argument for tracing \documentclass{article} \def\test#1{{ \tracingonline=1 \tracingmacros=1 \markletters{#1} } \typeout{TYPEOUT: \markletters{#1}} } \...
Accepted
### How can I fill an arbitrarily sized matrix with asterisks?
with \pAutoNiceMatrix of nicematrix. \documentclass{article} \usepackage{nicematrix} \begin{document} $\pAutoNiceMatrix{7-7}{*}$ \end{document}
### Calling several tex files with a loop?
There are numerous ways to do this, but the simplest given what you've described is to use the pgffor package which provides a simple syntax for such loops: \documentclass{article} \usepackage{pgffor}...
### Empty document, only pagenumber, 500 pages needed
A variant with package multido (50 bytes): \documentclass[DIV=20, fontsize=16]{scrartcl} \usepackage{lmodern} \usepackage[automark]{scrlayer-scrpage} \pagestyle{scrheadings} \cfoot{} \ofoot[\pagemark]...
### Triangulation in tikz
A (hopefully accurate) scalable version for arbitrary size (can't say if it's mathematically useful, but it was fun to make). alphalph is used to generate an arbitrary number of labels; some increased spacing ...
Accepted
### How to create a for loop in math mode
\documentclass{article} \usepackage{tikz} \begin{document} $F_x = q\left[\foreach \i/\p in {x/, y/+, z/+} {\p\frac{\partial E_\i}{\partial \i}\mathrm{d}\i} \right]$ \end{document} Note: ...
https://www.albert.io/ie/multivariable-calculus/spherical-to-cylindrical-point
Easy
# Spherical to Cylindrical (Point)
MVCALC-OE1ORN
Let $R=(6,\pi/7, \pi/2)$ be a point given in spherical coordinates.
What is the correct form of $R$ in cylindrical coordinates?
A
$(3\sqrt{2}, \pi/7, 0)$
B
$(3, 2\pi/7, 3)$
C
$(6, \pi/14, \sqrt{3}/2)$
D
$(3, \pi/7, 3)$
E
$(6, \pi/7, 0)$
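The conversion can be checked symbolically. A minimal sympy sketch, assuming the $(\rho, \theta, \phi)$ ordering in which $\theta$ is the azimuthal angle and $\phi$ is measured from the positive $z$-axis (a different convention would permute the result):

```python
import sympy as sp

rho, theta, phi = 6, sp.pi / 7, sp.pi / 2   # the given spherical point

r = rho * sp.sin(phi)    # cylindrical radius
z = rho * sp.cos(phi)    # height; the azimuth theta carries over unchanged

print((r, theta, z))     # (6, pi/7, 0) under this convention
```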
https://shelah.logic.at/papers/340/
# Sh:340
• Shelah, S., & Stanley, L. J. (1995). A combinatorial forcing for coding the universe by a real when there are no sharps. J. Symbolic Logic, 60(1), 1–35.
• Abstract:
Assuming $0^\sharp$ does not exist, we present a combinatorial approach to Jensen's method of coding by a real. The forcing uses combinatorial consequences of fine structure (including the Covering Lemma, in various guises), but makes no direct appeal to fine structure itself.
• Version 2004-08-01_10 (76p) published version (36p)
Bib entry
@article{Sh:340,
author = {Shelah, Saharon and Stanley, Lee J.},
title = {{A combinatorial forcing for coding the universe by a real when there are no sharps}},
journal = {J. Symbolic Logic},
fjournal = {The Journal of Symbolic Logic},
volume = {60},
number = {1},
year = {1995},
pages = {1--35},
issn = {0022-4812},
mrnumber = {1324499},
mrclass = {03E45 (03E35 03E55)},
doi = {10.2307/2275507},
note = {\href{https://arxiv.org/abs/math/9311204}{arXiv: math/9311204}},
arxiv_number = {math/9311204}
}
https://newproxylists.com/real-analysis-question-of-the-convergence-of-an-infinite-number-of-series-with-special-conditions/
# real analysis – Question of the convergence of an infinite number of series with special conditions
I'm trying to think about the following type of problem; in fact, if we could get results, they could be applied within mathematics, even to unresolved problems.
I will not write down any potential applications now, but will go straight to the statement of a problem.
I'm studying (though, no conclusions yet) the sum $$\sum_{3 \leq a_{1,1},\, a_{1,2} < +\infty} \frac{1}{a_{1,1} a_{1,2}} + \sum_{3 \leq a_{2,1},\, a_{2,2},\, a_{2,3} < +\infty} \frac{1}{a_{2,1} a_{2,2} a_{2,3}} + \dots + \sum_{3 \leq a_{n,1},\, a_{n,2},\, \dots,\, a_{n,n+1} < +\infty} \frac{1}{a_{n,1} a_{n,2} \cdots a_{n,n+1}} + \dots$$
where $\{a_{1,1}, a_{1,2}\}$ and $\{a_{2,1}, a_{2,2}, a_{2,3}\}$ and ... and $\{a_{n,1}, a_{n,2}, \dots, a_{n,n+1}\}$ and ... all have zero density in $\mathbb{N}$, and every term in every sequence is odd.
Does all this imply the convergence of this infinite sum of infinite sums?
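One remark that may help frame the question (my own note, under the extra assumption that the indices of each inner sum range independently over a single sequence $A$): the inner sums then factor into powers of $\sum_{a \in A} 1/a$, so convergence hinges on that reciprocal sum rather than on zero density by itself (the primes have zero density, yet their reciprocal sum diverges). A quick numerical check of the factorization on a toy zero-density sequence of odd numbers:

```python
import itertools

# Toy zero-density sequence of odd numbers >= 3: the odd squares.
A = [k * k for k in range(3, 60, 2)]     # 9, 25, 49, ...

s = sum(1.0 / a for a in A)              # reciprocal sum of the sequence

# Sum of 1/(a*b) over all pairs from A x A ...
double = sum(1.0 / (a * b) for a, b in itertools.product(A, repeat=2))

# ... factors as s**2, because the indices range independently.
print(abs(double - s ** 2) < 1e-12)      # True
```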
https://crypto.stackexchange.com/questions/66851/give-a-cpa-attack-on-elgamal-when-used-with-mathbbz-p
# Give a CPA-attack on Elgamal when used with $\mathbb{Z}_p^*$
We know that if the decisional Diffie-Hellman problem is hard, then Elgamal encryption is CPA-secure. One can show that if the chosen group is $$\mathbb{Z}_p^*$$, then the decisional Diffie-Hellman problem is not hard. For instance, one realizes that $$g^{ab}$$ is a square with probability $$3/4$$, by an argument based on the parity of $$a,b$$. Therefore, it is natural to ask the following question:
Is there a CPA-attack on Elgamal when used with $$\mathbb{Z}_p^*$$?
I need some help on the way to go here. Should I try to build a distinguisher using the above fact about the Diffie-Hellman problem? Or does it just come from the algebraic structure of the group?
## 1 Answer
HINT
You already have a hint given in the problem statement:
$$g^{ab}$$ is a square with probability $$3/4$$ with an argument based on the parity of $$a,b$$.
You should start by making sure you understand why is this the case. Then, once you do that, think about what it means to break DDH: it means distinguishing things that look like $$g^c$$ for a random $$c$$ from things of the form $$g^{ab}$$ for random $$a,b$$. Given the hint above, can you find a feature that differs between these two?
• Thanks for the extended explanation of the hint. Let me see if I can exploit it – Javier Jan 28 at 17:57
• Do you think asking Alice and Bob to encrypt either the generator or 1 would suffice? Because then in the second component I get an even exponent or an odd exponent, and I can test whether they are quadratic residues efficiently – Javier Jan 28 at 18:05
• Yes! That would be a good way of doing it. To make sure I got you: you ask them to encrypt either $m_0 = g^0 = 1$ or $m_1 = g^1 = g$, then the encryption you get is $g^{ab}\cdot g^i = g^{ab+i}$ for either $i=0$ or $i=1$. Then you check whether this ciphertext is a quadratic residue... and what do you do next? – Daniel Jan 28 at 18:22
• If it is a quadratic residue I output 0, otherwise 1 – Javier Jan 28 at 18:29
• Right. Just make sure to write down precisely what is the exact success probability, which won't be negligible. Good work! – Daniel Jan 28 at 19:04
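Putting the whole exchange together, here is a minimal sketch of the resulting CPA distinguisher over a toy-sized $$\mathbb{Z}_p^*$$ (the prime, the generator, and the `legendre` helper are illustrative choices, not part of the original question):

```python
import random

p, g = 23, 5                      # toy prime and a generator of Z_p^* (illustrative)

def legendre(x: int) -> int:
    """Euler's criterion: 1 if x is a quadratic residue mod p, p-1 otherwise."""
    return pow(x, (p - 1) // 2, p)

# Key generation and encryption (plain ElGamal in Z_p^*).
a = random.randrange(1, p - 1); h = pow(g, a, p)   # secret / public key
m = random.choice([1, g])                          # m0 = 1 (a QR), m1 = g (a non-QR)
b = random.randrange(1, p - 1)
c1, c2 = pow(g, b, p), (pow(h, b, p) * m) % p

# Distinguisher: g^{ab} is a QR iff a or b is even, and those parities leak
# through the QR-ness of h = g^a and c1 = g^b (g itself is a non-residue).
gab_is_qr = (legendre(h) == 1) or (legendre(c1) == 1)
m_is_qr = (legendre(c2) == 1) == gab_is_qr
guess = 1 if m_is_qr else g

print(guess == m)                 # True: for this message pair the guess always wins
```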
https://economics.stackexchange.com/tags/oligopoly/hot
# Tag Info
9
Yes, there is no equilibrium in pure strategies. For any price charged by firm 2 above $c_1$, firm 1 could only best respond by charging the largest price that is strictly smaller, which is impossible. If both firms charge at most $c_1$, one of these firms must make a loss, which cannot be a best response. So there is no Nash equilibrium in pure strategies....
6
When there are few big firms and many smaller firms with a small market share, economists speak of a market with a competitive fringe. The smaller firms are price takers, have higher marginal and average costs, and a lower markup than bigger firms. They often have a lower rate of profit than big firms. Although markets with heterogeneous firms are often ...
6
It is my impression that this has been formalized under the $\varepsilon$-equilibrium concept ("epsilon-equilibrium"). It is even called "approximate-Nash" equilibrium. Shamelessly copying from the relevant wikipedia article (which includes some literature references) === The standard definition === Given a game and a real non-negative parameter $\...
4
An aspect of the matter could be described as follows: We want prompt replacement of (existing) fixed capital because, I guess, it creates currently "unacceptable" levels of negative externalities, and we know better than to think that through the pricing of the externalities we will be able to reverse the damages, and all swell. From this point of ...
4
The quote in the question isn't really rigorous about what a free market is, but it talks about monopolies and artificial scarcities, so I am interpreting the efficient outcome with price equal to marginal cost as being one necessary feature of what they understand as a free market. Let's look at the Cournot model of competition. There are $n$ firms, each ...
3
Interesting! A possible way to resolve this contradiction is by taking cost-reducing technologies into consideration. Perhaps (example with made-up numbers follows) in 1960 a gallon of milk cost \$1 to produce, package and market. Due to strong competition, the consumer could get this for \$1.02, meaning a markup of only 2%. Today, due to technological ...
3
What do we mean by "makes the best decision based on what the other player has decided to do"? Your question touches a little bit on the philosophical foundations of Nash Equilibrium. As you know, a Nash Equilibrium occurs when each player chooses the best response to the strategies chosen by others. In other words, they act as if they know what the ...
3
The basics are of course Cournot, Stackelberg and Bertrand competition, which you can find in any textbook. If you are referring to needing references for research, then the paper you absolutely must know is Dixit and Stiglitz (1977) "Monopolistic Competition and Optimum Product Diversity", American Economic Review. With over 10k citations, the importance ...
3
As the graph notes, the red segment of the demand curve is relatively inelastic, meaning that compared to the blue segment, the red segment of demand is relatively insensitive to price changes. This does not mean that the price elasticity anywhere on the red segment is less than 1 (which would imply inelasticity in the absolute sense).
3
The market for lemons. Paper is here. The example most commonly given is used cars. The result is market failure.
3
The Bertrand case you mention is a little special because it induces a discontinuity in the demand function. Suppose that market demand is $D(P)=1-P$, zero marginal cost, and that all consumers buy from the low-priced firm (and split equally in the event of a tie). If the two firms try to collude around a price $p=0.5$ (with quantity $0.25$ each) then each ...
3
This market is an oligopoly that is subject to government regulation. It cannot be a monopoly because there is more than one firm. The presence of the regulating government body is a "red herring"; it distracts from the main point - there are multiple firms. It does not appear to be competitive because: four is subjectively few firms; implicitly, ...
3
The trick is to draw the whole reaction function, including the part that coincides with the axis.
Hopefully these figures make it clear:
3
In your example, you would still use backward induction to solve for the Perfect Bayesian Equilibrium (assuming the distribution of private costs has full support). In fact, the second stage of your example is similar to a Bertrand competition with asymmetric information. You can refer to the following paper for a general solution of the second-stage game: ...
3
The gist/shortened and generalized version of the above answer: In the context where $Q = \sum_i q_i$ the equation $$\frac{\partial \pi_i}{\partial q_i} + \frac{\partial \pi_i}{\partial Q} = \frac{\partial \pi_i}{\partial q_i} + \frac{\partial Q}{\partial q_i}\frac{\partial \pi_i}{\partial Q}$$ holds as $$\frac{\partial Q}{\partial q_i} = 1.$$
2
There is a sense of differentiation, so this is clearly not a competitive market. Government regulation of an industry cannot be regarded as a monopoly, as the government will mostly decide on a price ceiling or floor and not the quantity of transport supplied. Since there are only 4 firms, this is clearly an oligopoly. They can collude to restrict output and ...
2
I would recommend taking a look at Jean Tirole's "The Theory of Industrial Organization". This textbook provides a clean exposition of the models of Bertrand, Cournot, Stackelberg, Hotelling, and Salop (read chapters 5 and 7). This will provide a reasonably complete foundation for modern, game-theoretic oligopoly theory.
2
It seems you interpret the game as a sequential game. If Windward knew that its move was observed by Airtouch, it would act as you say. But this is not the case: they move simultaneously, and it is a one-shot game (as defined, it is only played once). Given your reasoning, Windward understands the situation and hence it would choose the evening. ...
2
I think the standard assumption in Bertrand competition with different constant marginal costs is a different tie-breaking rule in case of equal prices. Instead of sharing demand equally, you could assume that in case of equal prices the more efficient firm supplies the entire demand. As a result, all price pairs $p_1=p_2=p$ with $p \in [c_1,c_2]$ constitute an equilibrium. All ...
2
I assume that you found Firm 3's best response to be $$q_3^*(q_1,q_2)=\frac12(16-q_1-q_2).$$ The next step would be to solve for Firm 2's best response. Since Firm 2 observes Firm 1's output and correctly anticipates Firm 3's best response, its profit maximization problem is \max_{q_2}\;(16-q_1-q_2-q_3^*(q_1,...
2
I don't have a question to "correctly interpret", but I think the way you are calculating the demand function is correct. The demand curve in an oligopoly can be kinked (bowed out), as this one is. I think it is mathematically accurate to write the function split across the domains, as such: $Q = \begin{cases} 700 - P, & 0 \leq P\leq 500 \\ 2200 - 4P,...
2
You have solved the Cournot part correctly, but then you've gone completely off the road by mistaking economics for mathematics. This usually happens. First of all, you shouldn't assume just any value for $q^*_2$ unless you want to show some contradiction by a counterexample. Moreover, even if it were somehow correct, you have used the most unreasonable ...
2
I believe "Auctions of Homogeneous Goods: A Case for Pay-as-Bid" by Pycia and Woodward answers your questions theoretically. This is quite recent and their results are striking. They also briefly discuss some empirical insights. The pay-as-bid (or discriminatory) auction is a prominent format for selling homogenous goods such as treasury ...
1
You are right that you first have to find F3's best-response function. F1 and F2 take as given this reaction of F3 to whatever they produce. Hence, you plug this best-response function into the incumbents' profit maximization problem. In that way, you take care of the fact that the incumbents anticipate F3's reaction, indirectly determining $q_3$. You ...
1
Start with the second stage, this is just Cournot competition between firm 2 and firm 3. You can solve this for the Nash equilibrium by setting the first order condition for firm 2 and firm 3 and solving these two equations, taking $q_1$ as given. This will give you quantities $q_2$ and $q_3$ in terms of $q_1$ which you can then plug into the profit function ...
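A minimal sympy sketch of that backward-induction recipe, assuming a concrete linear inverse demand $P = a - q_1 - q_2 - q_3$ and zero costs (a made-up specification, just to show the mechanics):

```python
import sympy as sp

a, q1, q2, q3 = sp.symbols('a q1 q2 q3', positive=True)
P = a - q1 - q2 - q3

# Stage 2: firms 2 and 3 choose quantities simultaneously, taking q1 as given.
foc2 = sp.diff(P * q2, q2)
foc3 = sp.diff(P * q3, q3)
stage2 = sp.solve([foc2, foc3], [q2, q3], dict=True)[0]   # q2, q3 in terms of q1

# Stage 1: firm 1 anticipates the stage-2 reactions and maximizes its own profit.
profit1 = (P * q1).subs(stage2)
q1_star = sp.solve(sp.diff(profit1, q1), q1)[0]

print(q1_star, stage2[q2].subs(q1, q1_star))   # a/2 and a/6
```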
1
The temporal constraints (i.e. what happens in a period) are not very clear in this question. It is likely that the good sold is not a durable good and hence there is no "leftover demand" between periods, demand is simply 'reset'. In period 2 leftover demand appears because firm B assumes firm A will not change its production from period 1. Then in period ...
1
Producing more will decrease the price and therefore the profit per unit sold. For a monopolist, all units are their own units. For a duopolist, many units will be the competitor's units. Viewed differently, producing more produces an externality between producers via prices in a duopoly. A monopolist internalizes these externalities completely.
1
Your answer is correct. Given that you have a net profit function once you subtract the costs of the events, the way to proceed is differentiating that profit function, taking the behaviour of the other firm as fixed and setting the derivative to 0. Profit maximization involves producing up to the point where marginal benefits equal marginal costs, and that is exactly ...
1
Hendricks and McAfee (2007) offer a theory of bilateral oligopoly. They consider the example of the wholesale gasoline market on the west coast of the United States, which is composed of a small number of large sellers and large buyers who compete against each other in the downstream retail market. They are specifically interested in the effect of a merger ...
https://questions.examside.com/past-years/jee/question/the-area-of-the-triangle-formed-by-the-tangents-from-the-poi-jee-advanced-1987-marks-2-7h1a1zxlsrifwrtc.htm
1
### IIT-JEE 1987
Fill in the Blanks
The area of the triangle formed by the tangents from the point (4, 3) to the circle $${x^2} + {y^2} = 9$$ and the line joining their points of contact is...................
$${{192} \over {25}}$$
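For this first one, the value can be verified with a short sympy computation (the chord of contact of $(4, 3)$ with respect to $${x^2} + {y^2} = 9$$ is $4x + 3y = 9$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Points of contact: intersection of the chord of contact with the circle.
contact_pts = sp.solve([sp.Eq(x**2 + y**2, 9), sp.Eq(4*x + 3*y, 9)], [x, y])
A, B = [sp.Point(*p) for p in contact_pts]

P = sp.Point(4, 3)
chord = sp.Line(A, B)
area = sp.Rational(1, 2) * A.distance(B) * chord.distance(P)
print(sp.simplify(area))      # 192/25
```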
2
### IIT-JEE 1986
Fill in the Blanks
The equation of the line passing through the points of intersection of the circles $$3{x^2} + 3{y^2} - 2x + 12y - 9 = 0$$ and $${x^2} + {y^2} - 6x + 2y - 15 = 0$$ is..............................
10x - 3y - 18 = 0
3
### IIT-JEE 1986
Fill in the Blanks
From the point A(0, 3) on the circle $${x^2} + 4x + {(y - 3)^2} = 0$$, a chord AB is drawn and extended to a point M such that AM = 2AB. The equation of the locus of M is..........................
$${x^2} + {y^2} + 8x - 6y + 9 = 0$$
4
### IIT-JEE 1985
Fill in the Blanks
From the origin chords are drawn to the circle $${(x - 1)^2} + {y^2} = 1$$. The equation of the locus of the mid-points of these chords is.............
$${x^2} + {y^2} - x = 0$$
https://socratic.org/questions/how-do-you-find-the-product-of-12c-3-21b-14b-2-6c
# How do you find the product of (12c^3)/(21b)*(14b^2)/(6c)?
Jun 15, 2018
See a solution process below:
#### Explanation:
First, factor the expression as:
$\frac{6 \cdot 2 \cdot c \cdot {c}^{2}}{3 \cdot 7 \cdot b} \cdot \frac{2 \cdot 7 \cdot b \cdot b}{6 \cdot c}$
Next, cancel common terms in the numerator and denominator:
$\frac{\cancel{6} \cdot 2 \cdot \cancel{c} \cdot c^2}{3 \cdot \cancel{7} \cdot \cancel{b}} \cdot \frac{2 \cdot \cancel{7} \cdot \cancel{b} \cdot b}{\cancel{6} \cdot \cancel{c}} \implies$
$\frac{2 \cdot {c}^{2}}{3} \cdot \frac{2 \cdot b}{1} \implies$
$\frac{2 \cdot {c}^{2} \cdot 2 \cdot b}{3 \cdot 1} \implies$
$\frac{4 b {c}^{2}}{3}$
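The same cancellation can be confirmed symbolically, e.g. with sympy:

```python
import sympy as sp

b, c = sp.symbols('b c', positive=True)

expr = (12 * c**3) / (21 * b) * (14 * b**2) / (6 * c)
print(sp.simplify(expr))   # 4*b*c**2/3
```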
https://crypto.stackexchange.com/questions/72875/how-des-aes-can-guarantee-by-feeding-different-secret-keys-the-encrypted-output/72877
# How can DES/AES guarantee that different secret keys give different encrypted outputs?
I am learning the meet-in-the-middle attack on DES. I understood the basic logic behind how it is performed; however, one further question is: why can we guarantee to find one and only one pair of k1 and k2?
One step further: how does DES guarantee that, for the same plaintext and different secret keys, the encrypted outputs can never be the same?
• Welcome to Cryptography. You are asking two completely different questions in one post. Please visit our help center – kelalaka Aug 28 '19 at 14:04
I am learning the meet-in-the-middle attack on DES.
I don't know of any meet-in-the-middle attack on DES; I'll assume you're talking about 2DES (where you apply DES with one key $$k_1$$, and then apply another iteration of DES (possibly in decrypt mode) with another key $$k_2$$).
why can we guarantee to find one and only one pair of k1 and k2?
We don't. In fact, if we're just looking at a single plaintext/ciphertext block, there are likely to be circa $$2^{48}$$ $$k_1, k_2$$ pairs that map that plaintext block into that ciphertext block.
That is not a major obstacle; what we typically assume is that we have a second plaintext/ciphertext block that can be used for validation; given a candidate $$k_1, k_2$$ pair, we try those keys on our second plaintext block (and see if we get the second ciphertext block). If it passes that, then it is most likely the correct $$k_1, k_2$$ pair.
This second check (which we may need to perform $$2^{48}$$ times) is less costly than the original MITM search, and so we mostly ignore that cost when estimating the cost of the attack.
One step further: how does DES guarantee that, for the same plaintext and different secret keys, the encrypted outputs can never be the same?
It doesn't; we generally assume that differing keys define different DES permutations (and so two different keys can map the same plaintext block into the same ciphertext block).
• Thank you for the answer. Just a follow-up question: can the one-time pad guarantee that, taking the same plaintext and different secret keys, the encrypted outputs can never be the same? I think so. – lllllllllllll Aug 29 '19 at 1:41
• @lllllllllllll: About your additional question in the comment: The outputs are the same if and only if their (bitwise) xor is 0. Can you express this xor using the plaintext and both secret keys? – j.p. Aug 29 '19 at 6:26
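Both points above (many candidate pairs from one block, then filtering with a second block) can be seen in a toy meet-in-the-middle sketch; the 8-bit "cipher" here is an invertible stand-in I made up, not DES:

```python
# Toy meet-in-the-middle attack on double encryption E_k2(E_k1(p)).
def enc(k: int, p: int) -> int:
    return ((p ^ k) + 3 * k) % 256

def dec(k: int, c: int) -> int:
    return ((c - 3 * k) % 256) ^ k

k1, k2 = 42, 137                     # unknown keys we want to recover
pairs = [(p, enc(k2, enc(k1, p))) for p in (1, 2)]   # two known blocks

# Phase 1: tabulate all middle values reachable from the first plaintext.
p0, c0 = pairs[0]
table = {}                           # middle value -> keys k1 reaching it
for ka in range(256):
    table.setdefault(enc(ka, p0), []).append(ka)

candidates = [(ka, kb) for kb in range(256)
              for ka in table.get(dec(kb, c0), [])]

# Phase 2: one block leaves many candidate pairs; a second block filters them.
p1, c1 = pairs[1]
survivors = [(ka, kb) for ka, kb in candidates if enc(kb, enc(ka, p1)) == c1]
print(len(candidates), survivors)    # ~256 candidates; (42, 137) is a survivor
```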
https://math.stackexchange.com/questions/2198807/solve-overlinez-zn/2198812
# Solve: $\overline{z}=z^n$ [duplicate]
$$\overline{z}=z^n$$ where $n\in \mathbb{N}$
So I have started by moving to the polar representation, as the expression is in the n-th power:
$$r\,\operatorname{cis}(-\theta)=r^n\operatorname{cis}(n\theta)$$
I cannot multiply both sides by $r^{-n}$, as $n\in \mathbb{N}$; how should I continue?
## marked as duplicate by dxiv, Did, Claude Leibovici, kingW3, C. Falcon Mar 23 '17 at 19:55
For $n=1$, the answer is that $z$ can be any real number.
For $n>1$, observe that $|\overline{z}|=|z|$ so we must have $|z|=|z^n|$ which means that $z=0$ or $|z|=1$ (in $r\cdot cis\theta$ notation, $r=1$). This reduces the problem to a problem of rotations around the unit circle which can be easily solved.
• So overall we have $n$ solutions? – gbox Mar 22 '17 at 20:27
• @gbox $n+2$ solutions: the $n+1$ solutions on the unit circle, and also $0$. – Stella Biderman Mar 22 '17 at 20:28
• @gbox $0$ is not on the unit circle because the unit circle is the set of all points with norm $1$. – Stella Biderman Mar 22 '17 at 20:32
• You have $\overline{z}=z^n$. Taking the norm of both sides gives $|\overline{z}|=|z^n|$. The LHS is known to equal $|z|$ and the RHS is known to equal $|z|^n$. – Stella Biderman Mar 22 '17 at 20:36
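A quick numerical check of the resulting solution set (here for $n=2$: zero together with the $(n+1)$-st roots of unity):

```python
import numpy as np

n = 2
# Nonzero solutions: z * conj(z) = 1 and conj(z) = z**n give z**(n+1) = 1.
roots = np.exp(2j * np.pi * np.arange(n + 1) / (n + 1))

for z in np.append(roots, 0):
    assert np.isclose(np.conj(z), z**n), z
print(len(roots) + 1, "solutions for n =", n)   # n + 2 of them
```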
http://mathematica.stackexchange.com/tags/probability-and-stats/hot
# Tag Info
## Hot answers tagged probability-and-stats
18
A nice question. Sampling from tCopula is done in stages. First a sample is generated from the copula with uniform marginal distributions, and then quantiles of appropriate marginal distributions are applied to the respective slots. Most of the time goes into evaluation of these quantiles, and they are expensive to compute. Being interested in $\geqslant ...
15
The biggest issue you are running into is that you have to recompute the density estimates each time you sample a new point. You need the estimators to be pre-evaluated. data1 = RandomVariate[ MultinormalDistribution[{-1.5, 0}, {{2, 0}, {0, 1}}], {10}]; data2 = RandomVariate[ MultinormalDistribution[{1.5, 0}, {{2, 0}, {0, 1}}], {10}]; I have a ...
12
The executive summary: You can use the built-in Ellipsoid function directly with your calculated mean and covariance. For 95% confidence, use: Ellipsoid[mean, 6 cov] That expression returns an Ellipsoid object that you can visualize as an Epilog to a ListPlot, or as an argument to Graphics (further formatting below). Ellipsoids for other common critical ...
11
Following the method on the wikipedia page mentioned in the comments, I came up with this: interp[pts_] := Module[{delta, mlst, zeropos, tau, h00, h01, h10, h11}, delta = #2/#1 & @@@ Differences[pts]; mlst = Flatten[{delta[[1]], MovingAverage[delta, 2], delta[[-1]]}]; tau = Min[#, 1] & /@ (3 delta/Sqrt[(Most[mlst]^2 + Rest[mlst]^2)]); tau ...
10
The code in Heike's answer is a bit long, but only because it does not exploit the fact that piecewise Hermite interpolation is already supported by Mathematica as Interpolation[]. Thus, here is a shorter way to implement Fritsch-Carlson monotonic cubic interpolation: fcint[data_] := Module[{del, slopes, tau}, del = #2/#1 & @@@ ...
10
Add the argument "Probability" to the Histogram command. To be precise, if list is your list of data, then Histogram[list, Automatic, "Probability"] should do the trick. The Automatic argument specifies the bin size.
8
Let's make a set of points: pts = Select[RandomReal[{-1, 1}, {1000, 2}], Norm[#] < 1 &]; ListPlot[pts, AspectRatio -> Automatic] Construct the density and plot it, including a contour line at the manually selected value of 0.1, all in one go: g = SmoothDensityHistogram[pts, Mesh -> {{.1}}, PlotRange -> 1.5 {{-1, 1}, {-1, 1}}] Use ...
8
list = {{3, 5}, {7, 6}, {15, 6}, {23, 123}} DeleteCases[list, {x_, _} /; x < 10] DeleteCases[list, {_?(# < 10 &), _}] Cases[list, {x_, _} /; x >= 10] Cases[list, {_?(# >= 10 &), _}] Select[list, First[#] >= 10 &] Pick[list, First[#] >= 10 & /@ list] list /. {x_, _} /; x < 10 :> Sequence[] list /. {_?(# < ...
8
InverseSurvivalFunction[] is the nearest to what you want; for a given confidence level $\alpha$ and degree of freedom $\nu$, InverseSurvivalFunction[ChiSquareDistribution[ν], α] gives the result you want. Alternatives include InverseCDF[ChiSquareDistribution[ν], 1 - α] and Quantile[ChiSquareDistribution[ν], 1 - α].
7
TransformedDistribution contains a collection of identities known to it, like that of a sum of normals being equal in distribution to another normal random variable, and a general machinery to work out properties of functions of random variables. Most of the time the computation will be done by the general machinery, which relies on solvers, like ...
7
Update: The link in the late Jens-Peer Kuska's MathGroup post is no longer working, and it seems there are no other locations on the web to download the package from.
So, here I post the contents of the package with gratitude to Jens-Peer Kuska for his continuing service to the Mathematica community. Nonparametric Splines package - Jens-Peer Kuska ...
7
Perhaps use UnitStep and Mean. This should be pretty fast. f[logr_, x_] := Mean[UnitStep[logr - x]] f[logr, 0] (*1597/3018*) Now to plot it. Plot[f[logr, t], {t, -.1, .1}, Exclusions -> None]
7
The natural way is to use RectangleChart since your data is already processed. I will just show how to use Histogram for your purposes; it requires a step back to create the data, which is why Mike's solution is better. data = {{{0, 17}, {17, 24}, {24, 44}, {44, 64}, {64, 84}}, {86031, 26671, 91927, 93983, 32232}}; dat = RandomReal[#, ...
7
Based solely on the information provided, IMO you need to use RectangleChart. I'm not sure how to create the histogram you desire using Histogram and only the information in your question -- here is the RectangleChart implementation: {widths, totals} = {{17, 7, 20, 20, 20}, {86031, 26671, 91927, 93983, 32232}}; RectangleChart[Transpose@{widths, ...
6
EmpiricalDistribution can assign probabilities to each element in a set of discrete values. Here's an example: In[1]:= d = EmpiricalDistribution[{1/3, 1/2, 1/6} -> {1, 2, 3}]; In[2]:= Mean[d] Out[2]= 11/6 In[3]:= PDF[d, x] Out[3]= 1/3 Boole[1 == x] + 1/2 Boole[2 == x] + 1/6 Boole[3 == x] In[4]:= CDF[d, x] Out[4]= 1/3 Boole[1 <= x] + 1/2 Boole[2 ...
6
Alternatively you can fix both the bin end points and their heights explicitly. I've set 120 as a reasonable endpoint for the last bin. The zero in the input data is just there as a placeholder and is effectively ignored. Histogram[{0}, {{0, 18, 25, 45, 65, 120}}, {86031/17, 26671/7, 91927/20, 93983/20, 32232/20} &] I would use the rectangle ...
6
As far as I know there are no test suites that can be invoked directly from Mathematica. One can of course use the traditional ones such as the one you mention, or the NIST or Marsaglia's diehard tests. I implemented some "toy tests" in this Wolfram Demonstration to illustrate Mathematica's various built-in PRNGs, which fail or succeed the toy tests ...
6
With the help of comments on this and other stackexchange pages I managed to solve the problem of how to use custom distributions in things like CopulaDistribution (and other functions like RandomVariate, Expectation, etc), and given that it took me a couple of days' hard slog I thought I'd share my discoveries with this community. Please excuse the flippant ...
6
Pure GammaDistribution does not seem at all like a good fit, even visually. You probably need a MixtureDistribution. You could, by the way, skip NonlinearModelFit and start playing with FindDistributionParameters. But I think you are better off trying out the latest WL function FindDistribution. In automated regime it finds almost what you need: dis = ...
5
If we reformulate your question as: Given a box with 30 balls, of which 10 are red, what is the probability that 4 balls drawn at random (without replacement) don't have any red balls in them, you can model it by the HypergeometricDistribution: A hypergeometric distribution gives the distribution of the number of successes in $n$ draws from a ...
5
You could set this up in symbolic form as a bivariate distribution with pmf $f(x,y)$. Then, using the mathStatica add-on to Mathematica, the correlation you seek is: Corr[{x, y}, f] $\frac{607}{\sqrt{1467199}}$ Note that this is slightly different from the solution you posted, as the numerical value is: 0.501123... (not 0.0501). You can make Mma do ...
5
You can use DistributionDomain to find the domain of a distribution, which will also tell you the dimension. I do not know where this is documented, but it does appear in some examples in the documentation. Usage examples: DistributionDomain[NormalDistribution[]] (* Interval[{-∞, ∞}] *) DistributionDomain[ParetoDistribution[xmin, alpha]] (* ...
5
You can use ProbabilityDistribution to define your own probability distribution from a CDF: dist = ProbabilityDistribution[{"CDF", 1/2 Erfc[(0.1 - x)/(Sqrt[2] 1.2)]}, {x, -∞, ∞}] With some data data = RandomVariate[NormalDistribution[], 50]; you can now perform KolmogorovSmirnovTest[data, dist]
4
Assuming you have version 9 you can do the following. data = {{-1, 0}, {0, 0}, {1, 0}, {-2, 1}, {2, 1}, {-1, 3}, {1, 3}}; dist = EmpiricalDistribution[data]; Table[Expectation[y \[Conditioned] x == i, {x, y} \[Distributed] dist], {i, -2, 2}] (*{1, 3/2, 0, 3/2, 1}*) Note: Conditional probabilities and expectations didn't work for EmpiricalDistribution ...
4
The Wolfram Demonstration Project has 13 submissions that use Bayes' theorem. More specifically: Probability Of Being Sick After Having Tested Positive For A Disease; Bayes's Theorem And Inverse Probability; Total Probability And Bayes's Theorem. All of these have downloadable code to help you learn this. Good luck.
4
1) Using QueueingNetworkProcess to define such a queuing process With the customer arrival rate arrivalRate and the service rates at each server being serviceRate1, serviceRate2, and serviceRate3, respectively, you can define this queuing process by g = {arrivalRate, 0, 0}; m = {serviceRate1, serviceRate2, serviceRate3}; r = {{0, 1, 0}, {0, 0, 1}, {0, 0, ...
4
There is an undocumented input form such as: DegreeGraphDistribution[indegree, outdegree] For example: g = RandomGraph[ DegreeGraphDistribution[{1, 3, 3, 1, 2}, {3, 1, 3, 2, 1}]]; VertexInDegree[g] {1, 3, 3, 1, 2} VertexOutDegree[g] {3, 1, 3, 2, 1}
4
Something like this? Show[{Plot[PDF[NormalDistribution[], x], {x, -4, 4}, AxesLabel -> {None, "Z"}], Plot[PDF[NormalDistribution[], x], {x, 2, 4}, AxesLabel -> {None, "Z"}, Filling -> Bottom]}] The easiest way to add the labels is with the drawing tools.
3
Here is a way that could give the same answer as FindDistributionParameters, however I further assume that γ>0. Please see the code below: LG[x_] = Log[ G[x]] /. {σ -> Exp[s], γ -> Exp[t]} //. {Log[ Times[x__, y_]] :> Log[x] + Log[y], Log[x_^y_] :> y Log[x]} NMaximize[{Total[LG[data]], μ < Min[data] + Exp[s - ...
3
As I pointed out in the comments I don't believe you will be able to use built-in tests to compute this. The Kolmogorov-Smirnov test requires that you can compute the CDF of the distribution. Unfortunately, ProbabilityDistribution seems to convert to PDF even if you create it with the CDF. Then it reverts back to the definition for CDF when it tries to ...
https://howtopasstheged.com/tag/education/page/2/
|
# Verb-Verb Agreement
The dog walks, sniffs, and barks.
The dog walked, sniffed, and barked.
# Sorting
0.211, 0.201, 0.300, 0.310, 0.212 → 0.201, 0.211, 0.212, 0.300, 0.310
# Estimating
$19.95 ≈ $20.00
# Context
The meaning of a word can be inferred from its context.
# Main Idea
Main Idea asks: What is the main idea of what you just read?
# Slope Specials
Vertical, Horizontal, Parallel, Perpendicular
# Point-Slope
Point-Slope Form of the Equation of a Line
$y - y_1 = m(x - x_1)$
# Tables
Although the Mathematical Reasoning module of the GED allows you to utilize a calculator for most of its questions, knowledge of mathematical tables is useful for the five calculator-prohibited questions.
Knowing multiplication and division tables is particularly helpful with factoring and simplifying a square root.
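For example, knowing from the multiplication table that $72 = 36 \times 2$ lets you simplify $\sqrt{72} = \sqrt{36 \times 2} = 6\sqrt{2}$.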
Knowing multiplication and division tables is also helpful for working with equations and inequalities.
Knowing mathematical tables has a way of coming in handy in real life, too.
Addition, Subtraction, Multiplication, and Division tables are provided below.
Addition
(entry = row + column)

| + | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|----|----|----|
| 0 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| 1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| 2 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
| 3 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| 4 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| 5 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| 6 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
| 7 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
| 8 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
| 9 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 |
| 10 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 |
| 11 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 |
| 12 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 |
Subtraction
(entry = row − column)

| − | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|----|----|----|
| 0 | 0 | -1 | -2 | -3 | -4 | -5 | -6 | -7 | -8 | -9 | -10 | -11 | -12 |
| 1 | 1 | 0 | -1 | -2 | -3 | -4 | -5 | -6 | -7 | -8 | -9 | -10 | -11 |
| 2 | 2 | 1 | 0 | -1 | -2 | -3 | -4 | -5 | -6 | -7 | -8 | -9 | -10 |
| 3 | 3 | 2 | 1 | 0 | -1 | -2 | -3 | -4 | -5 | -6 | -7 | -8 | -9 |
| 4 | 4 | 3 | 2 | 1 | 0 | -1 | -2 | -3 | -4 | -5 | -6 | -7 | -8 |
| 5 | 5 | 4 | 3 | 2 | 1 | 0 | -1 | -2 | -3 | -4 | -5 | -6 | -7 |
| 6 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | -1 | -2 | -3 | -4 | -5 | -6 |
| 7 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | -1 | -2 | -3 | -4 | -5 |
| 8 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | -1 | -2 | -3 | -4 |
| 9 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | -1 | -2 | -3 |
| 10 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | -1 | -2 |
| 11 | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | -1 |
| 12 | 12 | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
Multiplication
(entry = row × column)

| × | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|----|----|----|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| 2 | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 | 22 | 24 |
| 3 | 0 | 3 | 6 | 9 | 12 | 15 | 18 | 21 | 24 | 27 | 30 | 33 | 36 |
| 4 | 0 | 4 | 8 | 12 | 16 | 20 | 24 | 28 | 32 | 36 | 40 | 44 | 48 |
| 5 | 0 | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 | 55 | 60 |
| 6 | 0 | 6 | 12 | 18 | 24 | 30 | 36 | 42 | 48 | 54 | 60 | 66 | 72 |
| 7 | 0 | 7 | 14 | 21 | 28 | 35 | 42 | 49 | 56 | 63 | 70 | 77 | 84 |
| 8 | 0 | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 | 72 | 80 | 88 | 96 |
| 9 | 0 | 9 | 18 | 27 | 36 | 45 | 54 | 63 | 72 | 81 | 90 | 99 | 108 |
| 10 | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | 110 | 120 |
| 11 | 0 | 11 | 22 | 33 | 44 | 55 | 66 | 77 | 88 | 99 | 110 | 121 | 132 |
| 12 | 0 | 12 | 24 | 36 | 48 | 60 | 72 | 84 | 96 | 108 | 120 | 132 | 144 |
Division
(each entry is a dividend: entry ÷ column = row; division by zero is undefined)

| ÷ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| 2 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 | 22 | 24 |
| 3 | 3 | 6 | 9 | 12 | 15 | 18 | 21 | 24 | 27 | 30 | 33 | 36 |
| 4 | 4 | 8 | 12 | 16 | 20 | 24 | 28 | 32 | 36 | 40 | 44 | 48 |
| 5 | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 | 55 | 60 |
| 6 | 6 | 12 | 18 | 24 | 30 | 36 | 42 | 48 | 54 | 60 | 66 | 72 |
| 7 | 7 | 14 | 21 | 28 | 35 | 42 | 49 | 56 | 63 | 70 | 77 | 84 |
| 8 | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 | 72 | 80 | 88 | 96 |
| 9 | 9 | 18 | 27 | 36 | 45 | 54 | 63 | 72 | 81 | 90 | 99 | 108 |
| 10 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | 110 | 120 |
| 11 | 11 | 22 | 33 | 44 | 55 | 66 | 77 | 88 | 99 | 110 | 121 | 132 |
| 12 | 12 | 24 | 36 | 48 | 60 | 72 | 84 | 96 | 108 | 120 | 132 | 144 |
# Measurement
1 foot = 12 inches
1 meter = 100 centimeters
# Division by Zero = Undefined
Do not divide by zero.
# Percents
% is a multiplier: $n\%$ of a quantity means $\frac{n}{100}$ times that quantity.
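For instance, 25% of 80 is $0.25 \times 80 = 20$.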
# Dangling Modifiers
Wrong: Having finished shopping, the grocery list had no more use.
Right: Having finished shopping, I had no more use for the grocery list.
He spoke to us.
# Capitalization
Mr. and Mrs. John Doe visited Waterloo, Iowa.
Their
There
They’re
|
2021-12-07 07:57:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5343313217163086, "perplexity": 70.29612800160535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363337.27/warc/CC-MAIN-20211207075308-20211207105308-00550.warc.gz"}
|
http://spmaddmaths.blog.onlinetuition.com.my/2018/10/spm-additional-mathematics-2017-paper-1-question-20-22.html
|
# SPM Additional Mathematics 2017, Paper 1 (Question 20 – 22)
Question 20 (4 marks):
Table 1 shows the distribution of marks for 40 students in an Additional Mathematics test. The number of students for the class interval 40 – 59 is not stated.
Table 1
(a) State the modal class.
(b) Puan Zainon, the subject teacher, intends to give a reward to the top ten students. Those students who achieve the minimum mark in the top ten placing will be considered to receive the reward. Elina obtains 74 marks.
Does Elina qualify to be considered to receive the reward? Give your reason.
Solution:
(a)
4 + 10 + x + 8 + 7 = 40
x + 29 = 40
x = 11
Modal class = 40 – 59
(b)
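The worked solution for (b) is missing from the source. One standard approach, assuming class intervals 0–19, 20–39, 40–59, 60–79 and 80–99 with frequencies 4, 10, 11, 8 and 7 (consistent with part (a)): the top ten placing comprises the 7 students in 80–99 plus the top 3 of the 8 students in 60–79, so the estimated minimum qualifying mark is $79.5 - \frac{3}{8}(20) = 72$. Since Elina's 74 marks exceed 72, she qualifies to be considered for the reward.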
Question 21 (3 marks):
A biased cubical die is thrown. The probability of getting the number ‘4’ is $\frac{1}{16}$, and the probabilities of getting each of the other numbers are equal to one another.
If the dice is thrown twice, find the probability of getting two different numbers.
Solution:
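The worked solution is missing from the source; the following derivation (not part of the original page) uses only the given data. Each of the five other faces has probability $\frac{1}{5}\left(1-\frac{1}{16}\right)=\frac{3}{16}$, so

$P(\text{two same numbers}) = \left(\frac{1}{16}\right)^{2} + 5\left(\frac{3}{16}\right)^{2} = \frac{46}{256} = \frac{23}{128}$

$P(\text{two different numbers}) = 1 - \frac{23}{128} = \frac{105}{128}$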
Question 22 (4 marks):
Danya has a home decorations shop. One day, Danya received 14 sets of cups from a supplier. Each set contained 6 pieces of cups of different colours.
(a)
Danya chooses 3 sets of cups at random to be checked.
Find the number of different ways Danya can choose those sets of cups.
(b)
Danya takes a set of cups to display by arranging it in a row.
Find the number of different ways the cups can be arranged such that the blue cup is not displayed next to the red cup.
Solution:
(a)
Number of different ways to choose 3 of the 14 sets of cups for checking
= 14C3
= 364
(b)
Number of ways (Blue cup and red cup are next to each other)
= 5! × 2!
= 240
Number of different ways the cups can be arranged such that the blue cup is not displayed next to the red cup
= 6! – 240
= 720 – 240
= 480
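A quick brute-force check of this count (the cup labels below are hypothetical; only the blue/red adjacency matters):

```python
from itertools import permutations

cups = ["blue", "red", "c3", "c4", "c5", "c6"]
count = sum(1 for p in permutations(cups)
            if abs(p.index("blue") - p.index("red")) != 1)
print(count)  # 480
```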
https://stats.libretexts.org/Courses/Lumen_Learning/Book%3A_Concepts_in_Statistics_(Lumen)/05%3A_Relationships_in_Categorical_Data_with_Intro_to_Probability/5.02%3A_Introduction_to_Two-Way_Tables
|
# 5.2: Introduction to Two-Way Tables
## What you’ll learn to do: Analyze the relationship between two categorical variables using a two-way table.
Recall that categorical data consists of labels (such as a person's gender, an object's color, or a location). Since categorical data does not return a measurement, it is often convenient to summarize study results with counts (for example, the total number of females, or the total number of males). In this section, we introduce two-way tables and conditional percentages as a way to investigate possible relationships between two categorical variables.
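As a tiny illustration (the numbers are hypothetical, not from the course), a two-way table cross-tabulates counts for two categorical variables, and conditional percentages are computed within a row or a column:

| | Prefers cats | Prefers dogs | Row total |
|---|---|---|---|
| Female | 30 | 20 | 50 |
| Male | 15 | 35 | 50 |
| Column total | 45 | 55 | 100 |

Among females, the conditional percentage preferring cats is $30/50 = 60\%$; among males it is $15/50 = 30\%$, which suggests a relationship between the two variables.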
http://qurope.eu/aggregator/categories/1?page=124
|
## Physics
### Optical Vortex Sizes Up Nanoparticles
APS Physics - Fri, 2022-08-19 12:00
Author(s): Dan Garisto
A novel method for measuring nanoparticle size could have applications in industry and basic materials science research.
[Physics 15, 130] Published Fri Aug 19, 2022
Categories: Physics
### Machine Learning Pins Down Cosmological Parameters
APS Physics - Fri, 2022-08-19 12:00
Author(s): Ryan Wilkinson
Cosmological constraints can be improved by applying machine learning to a combination of data from two leading probes of the large-scale structure of the Universe.
[Physics 15, s111] Published Fri Aug 19, 2022
Categories: Physics
### Spatial search via an interpolated memoryless walk
PRA: Quantum information - Fri, 2022-08-19 12:00
Author(s): Peter Høyer and Janet Leahy
The defining feature of memoryless quantum walks is that they operate on the vertex space of a graph and therefore can be used to produce search algorithms with minimal memory. We present a memoryless walk that can find a unique marked vertex on a two-dimensional lattice. Our walk is based on the co…
[Phys. Rev. A 106, 022418] Published Fri Aug 19, 2022
Categories: Journals, Physics
### Sign switching of superexchange mediated by a few electrons in a nonuniform magnetic field
PRA: Quantum information - Fri, 2022-08-19 12:00
Author(s): Guo Xuan Chan and Xin Wang
Long-range interaction between distant spins is an important building block for the realization of a large quantum-dot network in which couplings between pairs of spins can be selectively addressed. Recent experiments on the coherent oscillation of logical states between remote spins facilitated by …
[Phys. Rev. A 106, 022420] Published Fri Aug 19, 2022
Categories: Journals, Physics
### Fast and robust quantum state transfer assisted by zero-energy interface states in a splicing Su-Schrieffer-Heeger chain
PRA: Quantum information - Fri, 2022-08-19 12:00
Author(s): Lijun Huang, Zhi Tan, Honghua Zhong, and Bo Zhu
We propose a fast, robust, and long-distance quantum state transfer (QST) protocol via a splicing Su-Schrieffer-Heeger (SSH) chain, where the interchain couplings vary with the change in the phase parameter and the single or splicing SSH chain can be designed by adjusting it. It is found that the ex…
[Phys. Rev. A 106, 022419] Published Fri Aug 19, 2022
Categories: Journals, Physics
### Control of population and entanglement of two qubits under the action of different types of dissipative noise
PRA: Quantum information - Fri, 2022-08-19 12:00
Author(s): G. J. Delben, A. L. O. dos Santos, and M. G. E. da Luz
We investigate the quantum control of two qubits interacting with a Markovian environment and evolving as an X state. The control is implemented via a simple applied field, whose profile is determined by means of the piecewise time-independent quantum control method. The goal is to make either the p…
[Phys. Rev. A 106, 022417] Published Fri Aug 19, 2022
Categories: Journals, Physics
### Almost qudits in the prepare-and-measure scenario. (arXiv:2208.07887v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
Quantum communication is often investigated in scenarios where only the dimension of Hilbert space is known. However, assigning a precise dimension is often an approximation of what is actually a higher-dimensional process. Here, we introduce and investigate quantum information encoded in carriers that nearly, but not entirely, correspond to standard qudits. We demonstrate the relevance of this concept for semi-device-independent quantum information by showing how small higher-dimensional components can significantly compromise the conclusions of established protocols. Then we provide a general method, based on semidefinite relaxations, for bounding the set of almost qudit correlations, and apply it to remedy the demonstrated issues. This method also offers a novel systematic approach to the well-known task of device-independent tests of classical and quantum dimensions with unentangled devices. Finally, we also consider viewing almost qubit systems as a physical resource available to the experimenter and determine the optimal quantum protocol for the well-known Random Access Code.
Categories: Journals, Physics
### On the semiclassical regularity of thermal equilibria. (arXiv:2208.07911v1 [math-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
We study the regularity properties of fermionic equilibrium states at finite positive temperature and show that they satisfy certain semiclassical bounds. As a corollary, we identify explicitly a class of positive temperature states satisfying the regularity assumptions of [J.J. Chong, L. Lafleche, C. Saffirio: arXiv:2103.10946 (2021)].
Categories: Journals, Physics
### Dual Instruments and Sequential Products of Observables. (arXiv:2208.07923v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
We first show that every operation possesses an unique dual operation and measures an unique effect. If $a$ and $b$ are effects and $J$ is an operation that measures $a$, we define the sequential product of $a$ then $b$ relative to $J$. Properties of the sequential product are derived and are illustrated in terms of L\"uders and Holevo operations. We next extend this work to the theory of instruments and observables. We also define the concept of an instrument (observable) conditioned by another instrument (observable). Identity, state-constant and repeatable instruments are considered. Sequential products of finite observables relative to L\"uders and Holevo instruments are studied.
Categories: Journals, Physics
### Quantum Coherence and non-Markovianity in a Noisy Quantum Tunneling Problem. (arXiv:2208.07947v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
We investigate the coherence and non-Markovianity of a quantum tunneling system whose barrier is fluctuated by a telegraph noise, and its energy gap is modulated by Gaussian noise. With the help of averaging method, the system dynamics are analytically derived, and the analytical expression for coherence measure and non-Markovianity for the very limited parameter regimes for both initially coherent and non-coherent states are obtained. We observe non-Markovian dynamics in a situation where the Kubo number is high. It is also found that there is no strong relation between the coherence of the system and non-Markovianity dynamics except in a region in which these two tend to change their behavior at the intermediate noise color for two initial states.
Categories: Journals, Physics
### Uniform observable error bounds of Trotter formulae for the semiclassical Schr\"odinger equation. (arXiv:2208.07957v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
By no fast-forwarding theorem, the simulation time for the Hamiltonian evolution needs to be $O(\|H\| t)$, which essentially states that one can not go across the multiple scales as the simulation time for the Hamiltonian evolution needs to be strictly greater than the physical time. We demonstrated in the context of the semiclassical Schr\"odinger equation that the computational cost for a class of observables can be much lower than the state-of-the-art bounds. In the semiclassical regime (the effective Planck constant $h \ll 1$), the operator norm of the Hamiltonian is $O(h^{-1})$. We show that the number of Trotter steps used for the observable evolution can be $O(1)$, that is, to simulate some observables of the Schr\"odinger equation on a quantum scale only takes the simulation time comparable to the classical scale. In terms of error analysis, we improve the additive observable error bounds [Lasser-Lubich 2020] to uniform-in-$h$ observable error bounds. This is, to our knowledge, the first uniform observable error bound for semiclassical Schr\"odinger equation without sacrificing the convergence order of the numerical method. Based on semiclassical calculus and discrete microlocal analysis, our result showcases the potential improvements taking advantage of multiscale properties, such as the smallness of the effective Planck constant, of the underlying dynamics and sheds light on going across the scale for quantum dynamics simulation.
Categories: Journals, Physics
### Mixed Quantum-Classical Method For Fraud Detection with Quantum Feature Selection. (arXiv:2208.07963v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
This paper presents a first end-to-end application of a Quantum Support Vector Machine (QSVM) algorithm for a classification problem in the financial payment industry using the IBM Safer Payments and IBM Quantum Computers via the Qiskit software stack. Based on real card payment data, a thorough comparison is performed to assess the complementary impact brought in by the current state-of-the-art Quantum Machine Learning algorithms with respect to the Classical Approach. A new method to search for best features is explored using the Quantum Support Vector Machine's feature map characteristics. The results are compared using fraud specific key performance indicators: Accuracy, Recall, and False Positive Rate, extracted from analyses based on human expertise (rule decisions), classical machine learning algorithms (Random Forest, XGBoost) and quantum based machine learning algorithms using QSVM. In addition, a hybrid classical-quantum approach is explored by using an ensemble model that combines classical and quantum algorithms to better improve the fraud prevention decision. We found, as expected, that the results highly depend on feature selections and algorithms that are used to select them. The QSVM provides a complementary exploration of the feature space which led to an improved accuracy of the mixed quantum-classical method for fraud detection, on a drastically reduced data set to fit current state of Quantum Hardware.
Categories: Journals, Physics
### Towards the simulation of transition-metal oxides of the cathode battery materials using VQE methods. (arXiv:2208.07977v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
Variational quantum eigensolver (VQE) is a hybrid quantum-classical technique that leverages noisy intermediate scale quantum (NISQ) hardware to obtain the minimum eigenvalue of a model Hamiltonian. VQE has so far been used to simulate condensed matter systems as well as quantum chemistry of small molecules. In this work, we employ VQE methods to obtain the ground-state energy of LiCoO$_2$, a candidate transition metal oxide used for battery cathodes. We simulate Li$_2$Co$_2$O$_4$ and Co$_2$O$_4$ gas-phase models, which represent the lithiated and delithiated states during the discharge and the charge of the Li-ion battery, respectively. Computations are performed using a statevector simulator with a single reference state for three different trial wavefunctions: unitary coupled-cluster singles and doubles (UCCSD), unitary coupled-cluster generalized singles and doubles (UCCGSD) and k-unitary pair coupled-cluster generalized singles and doubles (k-UpCCGSD). The resources in terms of circuit depth, two-qubit entangling gates and wavefunction parameters are analyzed. We find that the k-UpCCGSD with k=5 produces results similar to UCCSD but at a lower cost. Finally, the performance of VQE methods is benchmarked against the classical wavefunction-based methods, such as coupled-cluster singles and doubles (CCSD) and complete active space configuration interaction (CASCI). Our results show that VQE methods quantitatively agree with the results obtained from CCSD. However, the comparison against the CASCI results clearly suggests that advanced trial wavefunctions are likely necessary to capture the multi-reference characteristics as well as the correlations emerging from high-level electronic excitations.
Categories: Journals, Physics
### Transceiver designs to attain the entanglement assisted communications capacity. (arXiv:2208.07979v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
Pre-shared entanglement can significantly boost communication rates in the high thermal noise and low-brightness transmitter regime. In this regime, for a lossy-bosonic channel with additive thermal noise, the ratio between the entanglement-assisted capacity and the Holevo capacity - the maximum reliable-communications rate permitted by quantum mechanics without any pre-shared entanglement - scales as $\log(1/{\bar N}_{\rm S})$, where the mean transmitted photon number per mode, ${\bar N}_{\rm S} \ll 1$. Thus, pre-shared entanglement, e.g., distributed by the quantum internet or a satellite-assisted quantum link, promises to significantly improve low-power radio-frequency communications. In this paper, we propose a pair of structured quantum transceiver designs that leverage continuous-variable pre-shared entanglement generated, e.g., from a down-conversion source, binary phase modulation, and non-Gaussian joint detection over a code word block, to achieve this scaling law of capacity enhancement. Further, we describe a modification to the aforesaid receiver using a front-end that uses sum-frequency generation sandwiched with dynamically-programmable in-line two-mode squeezers, and a receiver back-end that takes full advantage of the output of the receiver's front-end by employing a non-destructive multimode vacuum-or-not measurement to achieve the entanglement-assisted classical communications capacity.
Categories: Journals, Physics
### Cooling neutral atoms into maximal entanglement in the Rydberg blockade regime. (arXiv:2208.08013v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
We propose a cooling scheme to prepare stationary entanglement of neutral atoms in the Rydberg blockade regime by combination of periodically collective laser pumping and dissipation. In each cycle, the controlled unitary dynamics process can selectively pump atoms away from the non-target state while maintaining the target state unchanged. The subsequent dissipative process redistributes the populations of ground states through the engineered spontaneous emission. After a number of cycles, the system will be eventually stabilized into the desired steady state independent of the initial state. This protocol does not rely on coherent addressing of individual neutral atoms or fine control of Rydberg interaction intensity, which can in principle greatly improve the feasibility of experiments in related fields.
Categories: Journals, Physics
### Correlated topological pumping of interacting bosons assisted by Bloch oscillations. (arXiv:2208.08060v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
Thouless pumping, not only achieving quantized transport but also immune to moderate disorder, has attracted growing attention in both experiments and theories. Here, we explore how particle-particle interactions affect topological transport in a periodically-modulated and tilted optical lattice. Not limited to wannier states, our scheme ensures a dispersionless quantized transport even for initial Gaussian-like wave packets of interacting bosons which do not uniformly occupy a given band. This is because the tilting potential leads to Bloch oscillations uniformly sampling the Berry curvatures over the entire Brillouin zone. The interplay among on-site potential difference, tunneling rate and interactions contributes to the topological transport of bound and scattering states and the topologically resonant tunnelings. Our study deepens the understanding of correlation effects on topological states, and provides a feasible way for detecting topological properties in interacting systems.
Categories: Journals, Physics
### The Physics of Quantum Information. (arXiv:2208.08064v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
Rapid ongoing progress in quantum information science makes this an apt time for a Solvay Conference focused on The Physics of Quantum Information. Here I review four intertwined themes encompassed by this topic: Quantum computer science, quantum hardware, quantum matter, and quantum gravity. Though the time scale for broad practical impact of quantum computation is still uncertain, in the near future we can expect noteworthy progress toward scalable fault-tolerant quantum computing, and discoveries enabled by programmable quantum simulators. In the longer term, controlling highly complex quantum matter will open the door to profound scientific advances and powerful new technologies.
Categories: Journals, Physics
### Structural and optical properties of micro-diamonds with SiV- color centers. (arXiv:2208.08075v1 [cond-mat.mtrl-sci])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
Isolated, micro-meter sized diamonds are grown by micro-wave plasma chemical vapour deposition technique on Si(001) substrates. Each diamond is uniquely identified by markers milled in the Si substrate by Ga+ focused ion beam. The morphology and micrograin structure analysis indicates that the diamonds are icosahedral or bi-crystals. Icosahedral diamonds have higher (up to $\sigma_\mathrm{h}$ = 2.3 GPa), and wider distribution ($\Delta\sigma_\mathrm{h}$ = 4.47 GPa) of hydrostatic stress built up at the microcrystal grain boundaries, compared to the other crystals. The number and spectral shape of SiV- color centers incorporated in the micro-diamonds is analysed, and estimated by means of temperature dependent photoluminescence measurements, and Montecarlo simulations. The Montecarlo simulations indicate that the number of SiV- color centers is a few thousand per micro-diamond.
Categories: Journals, Physics
### Asymptotics and typicality of sequential generalized measurements. (arXiv:2208.08141v1 [quant-ph])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
The relation between projective measurements and generalized quantum measurements is a fundamental problem in quantum physics, and clarifying this issue is also important to quantum technologies. While it has been intuitively known that projective measurements can be constructed from sequential generalized or weak measurements, there is still lack of a proof of this hypothesis in general cases. Here we rigorously prove it from the perspective of quantum channels. We show that projective measurements naturally arise from sequential generalized measurements in the asymptotic limit. Specifically, a selective projective measurement arises from a set of typical sequences of sequential generalized measurements. We provide an explicit scheme to construct projective measurements of a quantum system with sequential generalized quantum measurements. Remarkably, a single ancilla qubit is sufficient to mediate a sequential weak measurement for constructing arbitrary projective measurements of a generic system.
Categories: Journals, Physics
### Two-photon absorption in semiconductors: a multi-band length-gauge analysis. (arXiv:2208.08143v1 [cond-mat.mtrl-sci])
arXiv.org: Quantum Physics - Thu, 2022-08-18 12:45
The simplest approach to deal with light excitations in direct-gap semiconductors is to model them as a two-band system: one conduction and one valence band. For such models, particularly simple analytical expressions are known to exist for the optical response such as multi-photon absorption coefficients. Here we show that generic multi-band models do not require much more complicated expressions. Our length-gauge analysis is based on the semiconductors Bloch equations in the absence of all scattering processes. In the evaluation, we focus on two-photon excitation by a pump-probe scheme with possibly non-degenerate and arbitrarily polarized configurations. The theory is validated by application to graphene and its bilayer, described by a tight-binding model, as well as bulk Zincblende semiconductors described by k.p theory.
Categories: Journals, Physics
https://marketplace.sasview.org/models/118/
|
# Mass Fractal S(q)
## Description:
Calculates the structure factor term ONLY from the Mass Fractal model.
## Definition
The Sinha-Mildner-Hall fractal structure factor.
The functional form of the structure factor is defined below:

$$S(q) = \frac{\Gamma(D_m - 1)\,\xi^{D_m - 1}}{\left[1 + (q\xi)^2\right]^{(D_m - 1)/2}} \cdot \frac{\sin\!\left[(D_m - 1)\tan^{-1}(q\xi)\right]}{q}$$
where $D_m$ is the mass fractal dimension and $\xi$ is the upper fractal cutoff length, i.e. the length scale above which the system is no longer fractal.
SasView automatically appends two additional parameters, radius_effective and volfraction, to all $S(q)$ models. However, these are not used by this model.
The mass fractal dimension $D_m$ is only valid if $1 \le D_m \le 3$. The model is also only valid over a limited $q$ range (see the references for details).
WARNING! By convention, $S(q)$ is normally dimensionless. THIS FUNCTION IS NOT DIMENSIONLESS!
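A minimal sketch of the expression above (the function name and default parameter values are illustrative assumptions, not the SasView API):

```python
import numpy as np
from scipy.special import gamma

def mass_fractal_sq(q, fractal_dim_mass=1.9, cutoff_length=100.0):
    """Sinha-Mildner-Hall mass-fractal structure factor S(q).
    q: scattering vector (1/Ang); valid for 1 < D_m <= 3 and q > 0.
    Note: the result is NOT dimensionless (it carries units of length)."""
    dm, xi = fractal_dim_mass, cutoff_length
    return (gamma(dm - 1.0) * xi ** (dm - 1.0)
            / (1.0 + (q * xi) ** 2) ** ((dm - 1.0) / 2.0)
            * np.sin((dm - 1.0) * np.arctan(q * xi)) / q)

q = np.logspace(-3, 0, 5)  # example q grid in 1/Ang
print(mass_fractal_sq(q))
```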
## References
D. Mildner and P. Hall, J. Phys. D: Appl. Phys., 19 (1986) 1535-1545, Equation (9).
P. Wong, Methods in the Physics of Porous Media.
## Authorship and Verification
Author: Ziggy Attala and Matt D G Hughes Date: 09/09/2019
https://socratic.org/questions/590c0c95b72cff7aad0a7ca0
|
# If "35.00 mL" of "0.211 M" "HCl" was neutralized by "34.60 mL" of "KOH", what was the concentration of "KOH"?
May 5, 2017
If 35.00 mL of 0.211 M HCl is neutralized, then that means

$0.211 \text{ M} \times 0.03500 \text{ L} = 0.00739 \text{ mol H}^{+}$

were neutralized. Since there is one $\text{OH}^{-}$ for every $\text{KOH}$ and one $\text{H}^{+}$ for every $\text{HCl}$, we can say that $\text{mol H}^{+} = \text{mol OH}^{-}$. Therefore:

$\frac{0.00739 \text{ mol OH}^{-}}{0.03460 \text{ L KOH}} = 0.213 \text{ M KOH}$
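The same arithmetic as a quick numerical check (not part of the original answer):

```python
mol_H = 0.211 * 0.03500   # mol HCl = mol H+ neutralized -> 0.0073850
M_KOH = mol_H / 0.03460   # 1:1 stoichiometry of H+ with OH-
print(round(M_KOH, 3))    # 0.213
```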
https://deepai.org/publication/model-aware-deep-architectures-for-one-bit-compressive-variational-autoencoding
|
# Model-Aware Deep Architectures for One-Bit Compressive Variational Autoencoding
Parameterized mathematical models play a central role in understanding and design of complex information systems. However, they often cannot take into account the intricate interactions innate to such systems. On the contrary, purely data-driven approaches do not need explicit mathematical models for data generation and have a wider applicability at the cost of interpretability. In this paper, we consider the design of a one-bit compressive variational autoencoder, and propose a novel hybrid model-based and data-driven methodology that allows us not only to design the sensing matrix and the quantization thresholds for one-bit data acquisition, but also allows for learning the latent-parameters of iterative optimization algorithms specifically designed for the problem of one-bit sparse signal recovery. In addition, the proposed method has the ability to adaptively learn the proper quantization thresholds, paving the way for amplitude recovery in one-bit compressive sensing. Our results demonstrate a significant improvement compared to state-of-the-art model-based algorithms.
## I Introduction
In the past two decades, compressive sensing (CS) has shown significant potential in enhancing sensing and recovery performance in signal processing, occasionally with simpler hardware, and thus has attracted noteworthy attention among researchers. CS is a method of signal acquisition which ensures the exact or almost exact reconstruction of certain classes of signals using far fewer samples than are needed in the Nyquist sampling regime [4472240, eldar2012compressed], where the signals are typically reconstructed by finding the sparsest solution of an under-determined system of equations using various available means.
In a practical setting, each measurement is to be digitized into finite-precision values for further processing and storage purposes, which inevitably introduces a quantization error. This error is generally dealt with as measurement noise possessing limited energy, an approach that does not perform well in extreme cases. One-bit CS is one such extreme case where the quantizer is a simple sign comparator and each measurement is represented using only one bit of information [4558487, 5955138, 6418031, 6404739, 6178284, zhang2014efficient]. One-bit quantizers are not only low-cost and low-power hardware components, but also much faster than traditional scalar quantizers, accompanied by a great reduction in the complexity of hardware implementation. Several algorithms have been introduced in the literature for efficient reconstruction of sparse signals in one-bit CS scenarios (e.g., see [4558487, 5955138, 6418031, 6404739, 6178284, zhang2014efficient, 8747470] and the references therein). A detailed discussion of such algorithms is provided in Sec. II.
Notation:
We use bold lowercase letters for vectors and bold uppercase letters for matrices.
$(\cdot)^{T}$ and $(\cdot)^{H}$ denote the vector/matrix transpose and the Hermitian transpose, respectively. $\boldsymbol{1}$ and $\boldsymbol{0}$ are the all-one and all-zero vectors. $\|\boldsymbol{x}\|_{p}$ denotes the $\ell_p$-norm of the vector $\boldsymbol{x}$ defined as $\left(\sum_i |x_i|^p\right)^{1/p}$. $x_i$ denotes the $i$-th element of the vector $\boldsymbol{x}$. $\mathrm{Diag}(\boldsymbol{x})$ denotes the diagonal matrix formed by the entries of the vector argument $\boldsymbol{x}$. The operator $\succeq$ denotes element-wise vector inequality.
### I-A Relevant Prior Art
One-bit compressive sensing is mainly concerned with the following data-acquisition model:
$$\boldsymbol{r} = \operatorname{sign}(\boldsymbol{\Phi}\boldsymbol{x} - \boldsymbol{b}), \qquad (1)$$
where $\boldsymbol{x} \in \mathbb{R}^{n}$ denotes a $k$-sparse source signal, $\boldsymbol{\Phi} \in \mathbb{R}^{m \times n}$ is the sensing matrix, and $\boldsymbol{b} \in \mathbb{R}^{m}$ denotes the vector of quantization thresholds. In addition to the mentioned advantages of using one-bit ADCs for data-acquisition purposes, the use of one-bit information offers increased robustness to undesirable non-linearities in the data-acquisition process. Furthermore, there is strong empirical evidence that recovering a sparse source signal from only one-bit measurements can outperform its multi-bit CS counterpart [6418031, knudson2016one].
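A minimal numpy sketch of this acquisition model (the dimensions and random draws below are illustrative choices, not values from the paper):

```python
import numpy as np

def one_bit_encode(x, Phi, b):
    """One-bit data acquisition r = sign(Phi @ x - b), as in Eq. (1)."""
    return np.sign(Phi @ x - b)

# Toy example: a k-sparse source observed through a random Gaussian Phi.
rng = np.random.default_rng(0)
n, m, k = 64, 128, 4                    # over-sampled regime (m > n)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n))
b = rng.standard_normal(m)              # non-zero thresholds retain amplitude info
r = one_bit_encode(x, Phi, b)           # vector of +/-1 measurements
```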
The current one-bit CS recovery algorithms typically exploit the consistency principle, which represents the fact that the element-wise product of the one-bit measurements and the corresponding unquantized measurements is always non-negative [4558487], i.e., $\mathrm{Diag}(\boldsymbol{r})\,\boldsymbol{\Phi}\boldsymbol{x} \succeq \boldsymbol{0}$ when $\boldsymbol{b} = \boldsymbol{0}$. However, most of the existing literature on one-bit CS considers zero-level quantization thresholds (i.e., $\boldsymbol{b} = \boldsymbol{0}$), leading to a total loss of amplitude information during the data-acquisition process. Hence, by comparing the signal level with zero, one can only recover the direction of the source signal, i.e., $\boldsymbol{x}/\|\boldsymbol{x}\|_{2}$, and not the amplitude information $\|\boldsymbol{x}\|_{2}$. In its most general form, any solution to the one-bit CS problem should: (i) satisfy the sparsity condition, i.e., $\|\boldsymbol{x}\|_{0} \leq k$ with $k \ll n$, and (ii) achieve consistency, i.e., $\boldsymbol{r} = \operatorname{sign}(\boldsymbol{\Phi}\boldsymbol{x} - \boldsymbol{b})$. As mentioned above, most of the existing literature on the one-bit CS recovery problem considers the case of $\boldsymbol{b} = \boldsymbol{0}$. In such a case, the solution to the one-bit CS problem can be expressed as:
$$\boldsymbol{x}^{\star} = \operatorname*{argmin}_{\boldsymbol{x}} \; \|\boldsymbol{x}\|_{0} \quad \text{s.t.} \quad \boldsymbol{r} = \operatorname{sign}(\boldsymbol{\Phi}\boldsymbol{x}).$$
The above program is NP-hard and mathematically intractable [6418031]. However, there exist several powerful iterative algorithms to find $\boldsymbol{x}^{\star}$ (for the case of $\boldsymbol{b} = \boldsymbol{0}$) that rely on a relaxation of the $\ell_0$-norm to its convex hull (i.e., using the $\ell_1$-norm in lieu of the $\ell_0$-norm) and obtain an estimate of the support of the true source signal by restricting the feasible solutions to the unit sphere, i.e., $\|\boldsymbol{x}\|_{2} = 1$.
In [4558487], the authors assume a zero-level quantization threshold and propose an iterative algorithm called renormalized fixed point iteration (RFPI), where a convex barrier function is used to enforce the consistency principle (as a regularization term in the objective function). A detailed analysis of the RFPI algorithm is provided in Sec. II. It is worth mentioning that in a traditional CS setting, one considers under-sampled measurements (i.e., $m < n$); however, the over-sampling regime ($m > n$) is beneficial and of paramount interest in a one-bit CS setting, in that one-bit ADCs provide a cheap and fast way to acquire measurements and to potentially go beyond the limitations of traditional CS methods.
Another such reconstruction algorithm can be found in [5955138], referred to as restricted step shrinkage (RSS), for which a nonlinear barrier function is used as the regularizer to enforce the consistency principle. Compared to the RFPI algorithm, RSS has three important advantages: provable convergence, improved consistency, and feasible performance [Li2018]. Ref. [6418031] introduces a penalty-based robust recovery algorithm, called binary iterative hard thresholding (BIHT), in order to enforce the consistency principle. Contrary to the RFPI algorithm, BIHT exploits knowledge of the sparsity level of the signal as input, and was shown to be more robust to outliers and to have superior performance compared to the RFPI method in some cases (at the cost of requiring the sparsity level of the source signal a priori). Both RFPI and BIHT, however, only consider a zero-level quantization threshold; as a result, the amplitude information is lost due to comparing the acquired signal with zero. In [6404739] and [6178284], the authors proposed modified versions of RFPI and BIHT, referred to as noise-adaptive renormalized fixed point iteration (NARFPI) and adaptive outlier pursuit with sign flips (AOP-f), that are more robust against bit flips in the measurement vector (which occur due to the presence of noise). More recently, the authors in [knudson2016one] considered the problem of one-bit CS signal reconstruction in a non-zero quantization thresholds setting that enables recovery of the norm of the source signal, i.e., recovering $\|\boldsymbol{x}\|_{2}$. However, the method proposed in [knudson2016one] still fails to accurately recover the amplitude information of the source signal, and does not offer a straightforward approach to designing the quantization thresholds. In addition, there exist several variables in the above-mentioned iterative algorithms that must be tuned either heuristically or using expensive computations (e.g., a grid search) to achieve high performance. In [plan2012robust], the authors lay the groundwork for a theoretical analysis of the noisy one-bit CS problem, and propose a novel polynomial-time solver based on a convex programming approach for the problem of one-bit sparse signal recovery in a noisy setting.
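For concreteness, the following is a minimal sketch of a BIHT-style iteration for the zero-threshold case ($\boldsymbol{b} = \boldsymbol{0}$), assuming the sparsity level $k$ is known; the step size and iteration count are illustrative choices, not values prescribed in [6418031]:

```python
import numpy as np

def biht(r, Phi, k, n_iter=100, tau=None):
    """Sketch of binary iterative hard thresholding: a gradient step on the
    sign-consistency objective followed by hard thresholding to the k
    largest-magnitude entries; returns a unit-norm direction estimate."""
    m, n = Phi.shape
    tau = 1.0 / m if tau is None else tau
    x = np.zeros(n)
    for _ in range(n_iter):
        # Only sign-inconsistent measurements contribute to the gradient.
        x = x - tau * (Phi.T @ (np.sign(Phi @ x) - r))
        # Hard threshold: zero out all but the k largest-magnitude entries.
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x / max(np.linalg.norm(x), 1e-12)  # direction only; amplitude is lost
```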
Considering the above, it is of paramount importance to develop computationally efficient one-bit CS models that can incorporate non-zero quantization thresholds to allow for recovering the amplitude information. Additionally, the vast literature on the one-bit CS recovery problem does not yet tap into the potential of the available data at hand (to improve recovery performance). One can significantly benefit from a methodology that facilitates not only incorporation of the domain knowledge on the problem (i.e., being model-driven), but also the available data at hand, to go beyond the performance of traditional sparsity-aware signal processing techniques.
There has recently been a high demand for developing effective real-time signal processing algorithms that use data to achieve improved performance [7780424, ILIADIS20189, 7447163, shlezinger2019hardware, liao2019deep, shlezinger2019viterbinet]. In particular, data-driven approaches relying on deep neural architectures such as convolutional neural networks [7780424], deep fully connected networks [ILIADIS20189], stacked denoising autoencoders [7447163], and generative adversarial networks [wu2019deep] have been studied for sparse signal recovery in generic quantized CS settings. We note that the parameterized mathematical models discussed above play a central role in the understanding and design of large-scale information systems and signal processing methods. However, they often cannot take into account the intricate interactions innate to such systems. On the contrary, purely data-driven approaches, and specifically deep learning techniques, do not need explicit mathematical models for data generation and have a wider applicability, at the cost of interpretability. The main advantage of the deep learning-based approach is that it employs several non-linear transformations to obtain an abstract representation of the underlying data. Data-driven approaches, on the other hand, lack the interpretability and trustability that come with model-based signal processing. They are particularly prone to be questioned further, or at least not fully trusted by users, especially in critical applications. Furthermore, deterministic deep architectures are generic, and it is unclear how to incorporate existing knowledge of the problem in the processing stage.
The advantages associated with both model-based and data-driven methods show the need for developing frameworks that bridge the gap between the two approaches.
The recent advent of the deep unfolding framework [chien2017deep, wisdom2017building, khobahi2019deep, hershey2014deep, wisdom2016deep, solomon2019deep] and the corresponding deep unfolding networks (DUNs) has paved the way for a game-changing fusion of models and well-established signal processing approaches with data-driven architectures. In this way, we not only exploit the vast amounts of available data, but also integrate the prior knowledge of the system model in the processing stage. Deep unfolding relies on the establishment of an iterative optimization or inference algorithm, whose iterations are then unfolded into the layers of a deep network, where each layer is designed to resemble one iteration of the optimization/inference algorithm. The resulting hybrid method benefits from the low computational cost (at execution time) of deep neural networks and, at the same time, from the versatility and reliability of model-based methods; it thus appears to be an excellent tool in real-time signal processing applications due to the smaller degrees of freedom required for training and execution (afforded by the integration of problem-level reasoning, i.e., the model; see Fig. 1). A detailed analysis of the deep unfolding methodology for the problem of one-bit CS is provided in Sec. III.
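As a schematic illustration of the unfolding idea (a generic LISTA-style layer, not the specific architecture proposed in this paper), one iteration of a soft-thresholding solver becomes one network layer whose matrices and threshold are learned from data:

```python
import numpy as np

def soft(z, theta):
    """Soft thresholding: the proximal operator of the l1-norm."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def unfolded_layer(x, y, W1, W2, theta):
    """One unfolded layer: the ISTA update soft(x - tau*Phi.T@(Phi@x - y), theta)
    rewritten as soft(W2@x + W1@y, theta), with W1, W2 and theta learned from
    data (classically W1 = tau*Phi.T and W2 = I - tau*Phi.T@Phi)."""
    return soft(W2 @ x + W1 @ y, theta)
```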
### I-B Contributions of the Paper
In this paper, we propose a novel hybrid model-based and data-driven methodology (based on DUNs) that addresses the drawbacks of both purely model-based approaches (such as the discussed RFPI and BIHT algorithms) and purely data-driven approaches. The resulting methodology is far less data-hungry and assumes only a slight over-parametrization of the system model, as opposed to traditional deep learning techniques with a very large number of variables to be learned. In particular, the proposed method seeks to bridge the gap between the data-driven and model-based approaches in the one-bit CS paradigm, resulting in a specialized architecture for the purpose of sparse signal recovery from one-bit measurements. The contributions of this paper can be summarized as follows:
We propose a novel hybrid model-based and data-driven one-bit compressive variational autoencoding (VAE) methodology that can deal with the optimization of the sensing matrix $\boldsymbol{\Phi}$, the one-bit quantization thresholds $\mathbf{b}$, and the latent variables of the decoder module according to the underlying distribution of the source signal. Hence, such a methodology allows for quick adaptation to new data distributions and environments.
To the best of our knowledge, this is the first attempt in the one-bit CS paradigm that allows for joint optimization of the quantization thresholds and sensing matrix, also facilitating the recovery of the amplitude information of the source signal. We show that by using the proposed VAEs, one can significantly improve upon existing iterative algorithms and gain much higher accuracy both in terms of recovering the magnitude and the support of the underlying source signal.
The proposed methodology exhibits performance that goes beyond the traditional one-bit CS state-of-the-art and allows for designing sensing matrices that are distribution-specific. In conjunction with learning a data-specific $\boldsymbol{\Phi}$, the quantization thresholds $\mathbf{b}$ can also be learned jointly, such that the learned parameters improve the signal reconstruction accuracy and speed.
We propose two generalized optimization algorithms that can be used as standalone algorithms for recovering the amplitude information of the source signal by utilizing non-zero quantization thresholds.
Organization of the Paper:
The remainder of this paper is organized as follows. In Sec. II, we discuss the general problem formulation and system model of the one-bit compressive sensing problem and propose two general algorithms that pave the way for incorporating non-zero quantization thresholds. The proposed one-bit compressive variational autoencoding methodology is presented in Sec. III. The loss function characterization and training method for the proposed VAEs are discussed at the end of Sec. III. In Sec. IV, we investigate the performance of the proposed methods through various numerical simulations and for various scenarios. Finally, Sec. V concludes the paper.
## II System Model and Problem Formulation
In this paper, we are interested in a one-bit CS measurement model (i.e., the encoder module) with dynamics that can be described as follows:
Encoder Module: $\mathbf{r} = \operatorname{sign}(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b})$,  (2)
where $\boldsymbol{\Phi} \in \mathbb{R}^{m \times n}$ denotes the sensing matrix, $\mathbf{b} \in \mathbb{R}^{m}$ is the vector of quantization thresholds, and $\mathbf{x} \in \mathbb{R}^{n}$ is assumed to be a $K$-sparse signal. Having one-bit measurements of the form (2), one can pose the problem of sparse signal recovery from one-bit measurements by solving the following non-convex program:
$\mathcal{P}_0:\ \min_{\mathbf{x}} \|\mathbf{x}\|_0,\quad \text{s.t.}\quad \mathbf{r} = \operatorname{sign}(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b})$,  (3)
where the constraint in (3) is imposed to ensure a consistent reconstruction with the available one-bit information. Further note that the one-bit measurement consistency principle in (3) can be equivalently expressed as
$\mathbf{R}\,(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b}) \succeq \mathbf{0}$,  (4)
where $\mathbf{R} = \mathrm{Diag}(\mathbf{r})$.
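For intuition, the consistency test in (4) reduces to an element-wise sign check on the thresholded measurements. A minimal sketch (in NumPy; the helper name is ours):

```python
import numpy as np

def is_consistent(x, Phi, b, r):
    # Consistency principle (4): R(Phi x - b) >= 0 with R = Diag(r),
    # equivalent to sign(Phi x - b) == r element-wise.
    return bool(np.all(r * (Phi @ x - b) >= 0.0))
```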
Let us first consider the scenario in which the quantization thresholds are all set to zero. In this case, the non-convex optimization problem $\mathcal{P}_0$ can be further relaxed and expressed as a well-known non-convex $\ell_1$-minimization program on the unit sphere [4558487]:
$\mathcal{P}_1:\ \min_{\mathbf{x}}\ \|\mathbf{x}\|_1,\quad \text{s.t.}\quad \mathbf{R}\boldsymbol{\Phi}\mathbf{x} \succeq \mathbf{0},\ \|\mathbf{x}\|_2 = 1$,  (5)
where the $\ell_1$-norm acts as a sparsity-inducing function. The intuition behind finding the sparsest signal on the unit sphere (i.e., fixing the energy of the recovered signal) is two-fold. First, it reduces the feasible set of the optimization problem, as the amplitude information is lost; second, it avoids the trivial solution $\mathbf{x} = \mathbf{0}$. By comparing the acquired data with non-zero quantization thresholds, the constraint defined in (4) not only reduces the feasible set of the problem by defining a set of hyper-planes on which the signal can reside, but also implicitly excludes the trivial solution. There exists an extensive body of research on approximately solving the non-convex optimization problem (e.g., see [4558487, 5955138, 6404739, Plan2013, 6638799, zhang2014efficient], and the references therein). The most notable methods utilize a regularization term to enforce the consistency principle via a penalty added to the $\ell_1$ objective function, viz.
$\hat{\mathbf{x}} = \operatorname*{argmin}_{\mathbf{x}} \|\mathbf{x}\|_1 + \alpha\,\mathcal{R}(\mathbf{R}\boldsymbol{\Phi}\mathbf{x}),\quad \text{s.t.}\quad \|\mathbf{x}\|_2 = 1$,  (6)
where $\alpha > 0$ is the penalty factor.
Among the numerous iterative algorithms available for tackling the optimization problem in (6), we plan to utilize and improve upon the state-of-the-art renormalized fixed-point iterations (RFPI) [4558487] and the binary iterative hard thresholding (BIHT) [6418031] algorithms as the starting point for our proposed model-driven one-bit compressive variational autoencoding methodology. Namely, in the subsequent sections, we use the mentioned algorithms as baselines to design the decoder function of our one-bit CS VAE. In particular, we unfold the iterations of the two specialized algorithms onto the layers of a deep neural network such that each layer of the proposed deep architecture mimics the behavior of one iteration of the baseline algorithm. Next, we perform end-to-end learning by utilizing the back-propagation method to tune the parameters of both the decoder and the encoder functions of the proposed one-bit compressive VAE.
### II-A Renormalized Fixed-Point Iteration (RFPI)
The RFPI algorithm considers a one-bit CS data-acquisition model where the quantization thresholds are all set to zero. With $\mathbf{b} = \mathbf{0}$ and $\mathbf{R} = \mathrm{Diag}(\mathbf{r})$, the RFPI algorithm utilizes the following regularization term to enforce the consistency constraint in (5):
$\mathcal{R}(\mathbf{c}) = \frac{1}{2}\left\|\rho(\mathbf{c})\right\|_2^2$,  (7)
where $\rho(x) = -\min(x, 0)$, applied element-wise to vector arguments. Note that the function $\rho$ can be expressed in terms of the well-known Rectified Linear Unit (ReLU) function extensively used by the deep learning research community, i.e., $\rho(x) = \operatorname{ReLU}(-x)$. Briefly speaking, the RFPI algorithm is a first-order (gradient-based) optimization method that operates as follows: given an initial point $\mathbf{x}_0$ on the unit sphere (i.e., $\|\mathbf{x}_0\|_2 = 1$), a gradient step-size $\delta$, and a shrinkage threshold (or, equivalently, penalty factor) $\alpha$, at each iteration $i$, the estimated signal is obtained using the following update steps:
$\mathbf{d}_i = \nabla_{\mathbf{x}} \mathcal{R}(\mathbf{R}\boldsymbol{\Phi}\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}_{i-1}} = -(\mathbf{R}\boldsymbol{\Phi})^{\mathrm{T}}\rho(\mathbf{R}\boldsymbol{\Phi}\mathbf{x}_{i-1})$,  (8a)
$\mathbf{t}_i = (1 + \delta\,\mathbf{d}_i^{\mathrm{T}}\mathbf{x}_{i-1})\,\mathbf{x}_{i-1} - \delta\,\mathbf{d}_i$,  (8b)
$\mathbf{v}_i = \operatorname{sign}(\mathbf{t}_i) \odot \operatorname{ReLU}(|\mathbf{t}_i| - (\delta/\alpha)\mathbf{1})$,  (8c)
$\mathbf{x}_i = \mathbf{v}_i / \|\mathbf{v}_i\|_2$.  (8d)
After the descent in (8a)-(8b), the update step in (8c) corresponds to a shrinkage step. More precisely, any element of the vector $\mathbf{t}_i$ whose magnitude is below the threshold $\delta/\alpha$ will be pulled down to zero (leading to enhanced sparsity). Finally, the algorithm projects the obtained vector onto the unit sphere to produce the latest estimate of the signal. Note that the latter step is necessary because a zero-threshold vector (i.e., $\mathbf{b} = \mathbf{0}$) is employed at the time of data acquisition, and hence the amplitude information is lost.
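For concreteness, one RFPI iteration (8a)-(8d) can be sketched in NumPy as follows, assuming $\rho(c) = \operatorname{ReLU}(-c)$ as defined above; variable names are ours, not the reference implementation's:

```python
import numpy as np

def rfpi_step(x, Phi, r, delta, alpha):
    # One iteration of RFPI (8a)-(8d); a sketch, not the reference code.
    RPhi = r[:, None] * Phi                       # R Phi, with R = Diag(r)
    rho = np.maximum(-(RPhi @ x), 0.0)            # rho(R Phi x) = ReLU(-R Phi x)
    d = -RPhi.T @ rho                             # gradient of the penalty (8a)
    t = (1.0 + delta * (d @ x)) * x - delta * d   # step with sphere correction (8b)
    v = np.sign(t) * np.maximum(np.abs(t) - delta / alpha, 0.0)  # shrinkage (8c)
    return v / np.linalg.norm(v)                  # renormalize onto the unit sphere (8d)
```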
While effective in signal reconstruction, the RFPI method has several drawbacks. For instance, one is required to run the algorithm on several problem instances, increasing the value of the penalty factor $\alpha$ at each outer iteration, and to use the previously obtained solution as the initial point for each new problem instance. Moreover, it is not straightforward to choose the fixed step-size and the shrinkage threshold, which may depend on the latent parameters of the system. In fact, it is evident that by carefully tuning the step-size $\delta$ and the shrinkage threshold, one can significantly boost the performance of the algorithm and further alleviate the mentioned drawbacks of this method. In what follows, we extend the above iterations so as to incorporate non-zero quantization thresholds, thus enabling us to effectively recover the amplitude information of the source signal.
A.1. Extending the RFPI framework to non-zero quantization thresholds:
Recall that our focus is on the following encoding (measurement) model with an arbitrary threshold vector $\mathbf{b}$:
$\mathbf{r} = \operatorname{sign}(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b})$.  (9)
Therefore, the problem of one-bit CS signal recovery with a non-zero quantization threshold vector can be cast as:
$\min_{\mathbf{x}} \|\mathbf{x}\|_1,\quad \text{s.t.}\quad \mathbf{R}\,(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b}) \succeq \mathbf{0}$.  (10)
Inspired by the regularization-based relaxation employed in (6), we relax the above program and cast it as follows:
$\mathcal{P}_2:\ \min_{\mathbf{x}} \|\mathbf{x}\|_1 + \frac{\alpha}{2}\left\|\rho\big(\mathbf{R}(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b})\big)\right\|_2^2$.  (11)
The above optimization program can be solved in an iterative manner using slightly modified RFP iterations previously described in (8). The modification presents itself in the calculation of the gradient of the regularization term, to account for the new measurement model with non-zero thresholds, as well as in the exclusion of the projection step onto the unit sphere in (8d). Accordingly, we propose the following new update steps at iteration $i$:
The Proposed Generalized RFP Iterations:
$\tilde{\mathbf{d}}_i = -(\mathbf{R}\boldsymbol{\Phi})^{\mathrm{T}}\rho\big(\mathbf{R}(\boldsymbol{\Phi}\mathbf{x}_{i-1} - \mathbf{b})\big)$,  (12a)
$\tilde{\mathbf{t}}_i = \mathbf{x}_{i-1} - \delta\,\tilde{\mathbf{d}}_i$,  (12b)
$\mathbf{x}_i = \operatorname{sign}(\tilde{\mathbf{t}}_i) \odot \operatorname{ReLU}(|\tilde{\mathbf{t}}_i| - (\delta/\alpha)\mathbf{1})$.  (12c)
Note that in (8b) there exists an additional projection of the gradient onto the unit sphere through the term $(1 + \delta\,\mathbf{d}_i^{\mathrm{T}}\mathbf{x}_{i-1})$. However, by incorporating the non-zero thresholds vector, such a step is no longer required for the proposed generalized RFP iterations. In the rest of this paper, we refer to the iterations presented in (12) as Generalized RFPI (G-RFPI).
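A corresponding sketch of one G-RFPI iteration (12a)-(12c), under the same assumption on $\rho(\cdot)$, differs only in the threshold term and the missing sphere correction:

```python
def g_rfpi_step(x, Phi, b, r, delta, alpha):
    # One iteration of the proposed G-RFPI (12a)-(12c); a sketch.
    RPhi = r[:, None] * Phi
    rho = np.maximum(-(r * (Phi @ x - b)), 0.0)   # rho(R(Phi x - b))
    d = -RPhi.T @ rho                             # (12a)
    t = x - delta * d                             # plain gradient step (12b)
    return np.sign(t) * np.maximum(np.abs(t) - delta / alpha, 0.0)  # shrinkage (12c)
```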
### II-B Binary Iterative Hard Thresholding Algorithm (BIHT)
The BIHT algorithm is a simple, yet powerful, first-order iterative reconstruction algorithm for the problem of one-bit CS where the sparsity level is assumed to be known a priori. BIHT iterations can be seen as a simple modification of the iterative hard thresholding (IHT) algorithm proposed in [blumensath2009iterative]. Similar to the RFPI algorithm, the BIHT method considers a zero-level quantization threshold. However, in contrast to the RFPI algorithm, it exploits knowledge of the sparsity level of the signal of interest. In other words, the BIHT algorithm is designed to tackle the following counterpart of $\mathcal{P}_1$:
$\mathcal{P}_3:\ \min_{\mathbf{x}}\ \alpha\left\|\rho(\mathbf{R}\boldsymbol{\Phi}\mathbf{x})\right\|_1,\quad \text{s.t.}\quad \|\mathbf{x}\|_0 = K,\ \|\mathbf{x}\|_2 = 1$,  (13)
where $\mathbf{R} = \mathrm{Diag}(\mathbf{r})$ and $\rho(\cdot)$ are defined as before. Note that the one-sided objective function above (also related to the hinge loss) enforces the consistency principle previously introduced in (5), and that by solving the above optimization problem, we are working to achieve maximal consistency with the one-bit measurements $\mathbf{r}$. It is worth mentioning that one can also consider different objective functions, not necessarily an $\ell_1$ objective, as long as the data-consistency principle is promoted (e.g., an $\ell_2$ norm). For a detailed analysis of different candidates for the objective function and their properties, see [blumensath2009iterative].
The BIHT iterations are described as follows. Let $F(\mathbf{x})$ denote the one-sided objective in (13), and let $\mathbf{R} = \mathrm{Diag}(\mathbf{r})$ as before. Given an initial point $\mathbf{x}_0$, the sparsity level $K$, and the one-bit measurements $\mathbf{r}$ (or, equivalently, $\mathbf{R}$), at the $i$-th iteration, the BIHT algorithm updates the current estimate of the signal through the following steps:
$\mathbf{u}_i = \mathbf{x}_{i-1} - \frac{\alpha}{2}\,\partial F(\mathbf{x}_{i-1}) = \mathbf{x}_{i-1} + \frac{\alpha}{2}\,\boldsymbol{\Phi}^{\mathrm{T}}\big(\mathbf{r} - \operatorname{sign}(\boldsymbol{\Phi}\mathbf{x}_{i-1})\big)$,  (14a)
$\mathbf{x}_i = \mathcal{H}_K(\mathbf{u}_i)$,  (14b)
where $\partial F$ denotes the sub-gradient of the one-sided objective function in (13), $\alpha$ governs the fixed gradient step-size, and the projection operator $\mathcal{H}_K(\cdot)$ is defined such that it retains the $K$ largest elements (in magnitude) of its vector argument and sets the rest of the elements to zero.
The step (14a) can be interpreted as taking a descent step using the computed sub-gradient of the objective function (13), while the projection step in (14b) can be viewed as a projection of $\mathbf{u}_i$ onto the set of $K$-sparse signals. Once the above iterations terminate, either by fully satisfying the consistency principle (i.e., obtaining $\mathbf{x}$ such that $\mathbf{r} = \operatorname{sign}(\boldsymbol{\Phi}\mathbf{x})$) or by reaching a maximum number of iterations, the ultimate step is projecting the final estimate onto the unit sphere, viz. $\hat{\mathbf{x}} = \mathbf{x}/\|\mathbf{x}\|_2$. Note that this is in contrast to the RFPI algorithm, as the BIHT iterations do not require a normalization step of the form (8d) at each iteration.
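A minimal sketch of one BIHT iteration (14a)-(14b), with $\mathcal{H}_K$ realized by keeping the $K$ largest-magnitude entries (the function name is ours):

```python
def biht_step(x, Phi, r, alpha, K):
    # One iteration of BIHT (14a)-(14b); a sketch.
    u = x + 0.5 * alpha * Phi.T @ (r - np.sign(Phi @ x))  # sub-gradient step (14a)
    v = np.zeros_like(u)
    keep = np.argsort(np.abs(u))[-K:]             # indices of the K largest magnitudes
    v[keep] = u[keep]                             # hard-thresholding H_K (14b)
    return v
```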
B.1. Extending the BIHT framework to non-zero quantization thresholds:
The extension of the BIHT iterations to incorporate the non-zero thresholds vector $\mathbf{b}$ is straightforward. In the case of non-zero quantization thresholds, we cast the signal recovery problem as
$\min_{\mathbf{x}}\ \alpha\left\|\rho\big(\mathbf{R}(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b})\big)\right\|_1,\quad \text{s.t.}\quad \|\mathbf{x}\|_0 = K$,  (15)
where $\mathbf{R} = \mathrm{Diag}(\mathbf{r})$, and $\rho(\cdot)$ is defined as before.
Similar to the steps we took in (9)-(12), and by employing some rudimentary algebraic operations, the proposed generalized update steps of the BIHT algorithm may be expressed as:
The Proposed Generalized BIHT Iterations:
$\mathbf{u}_i = \mathbf{x}_{i-1} + \frac{\alpha}{2}\,\boldsymbol{\Phi}^{\mathrm{T}}\big(\mathbf{r} - \operatorname{sign}(\boldsymbol{\Phi}\mathbf{x}_{i-1} - \mathbf{b})\big)$,  (16a)
$\mathbf{x}_i = \mathcal{H}_K(\mathbf{u}_i)$,  (16b)
with the exception that, in the proposed generalized BIHT iterations, there is no need to normalize the obtained estimate of the signal after the update steps terminate. This is because a non-zero quantization threshold vector is employed at the time of encoding, and hence the amplitude information is not fully lost. In the rest of this paper, we refer to the above iterations as the Generalized BIHT (G-BIHT) algorithm.
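The analogous sketch of one G-BIHT iteration (16a)-(16b) differs from the BIHT sketch above only in the threshold term inside the sign:

```python
def g_biht_step(x, Phi, b, r, alpha, K):
    # One iteration of the proposed G-BIHT (16a)-(16b); a sketch.
    u = x + 0.5 * alpha * Phi.T @ (r - np.sign(Phi @ x - b))  # (16a)
    v = np.zeros_like(u)
    keep = np.argsort(np.abs(u))[-K:]
    v[keep] = u[keep]                             # hard-thresholding H_K (16b)
    return v
```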
Although simple and powerful, the BIHT algorithm requires a careful choice of the gradient step-size for convergence, and there is no straightforward method to choose it properly. Moreover, it utilizes a single fixed step-size across all iterations. This motivates the development of a methodology for designing a decoder function that exploits adaptive gradient step-sizes, i.e., a different step-size at each iteration, which can result in a significant improvement of the performance of the BIHT algorithm.
In the next section, we discuss a slight over-parametrization of the iterations of the RFPI, G-RFPI, BIHT, and G-BIHT algorithms that paves the way for the design of our proposed one-bit compressive VAE: the joint design of the parameters of the encoder function defined in (2), parametrized by the sensing matrix $\boldsymbol{\Phi}$ and the quantization thresholds $\mathbf{b}$, together with a set of decoder functions based on the discussed iterative optimization algorithms.
## III The Proposed One-Bit Compressive Variational Autoencoding Approach
We pursue the design of a novel model-driven one-bit compressive sensing-based variational autoencoder deep architecture that facilitates the joint design of the parameters of both the encoder and the decoder module when one-bit quantizers with non-zero thresholds are employed in the data-acquisition process (i.e., the encoding module) for a $K$-sparse input signal $\mathbf{x}$.
In general terms, a variational AE is a generative model comprised of an encoder and a decoder module connected sequentially. The purpose of an AE is to learn an abstract representation of the input data while providing a powerful data reconstruction system through the decoder module. The input to such a system is a set of signals following a certain distribution, i.e., $\mathbf{x} \sim \mathcal{D}(\mathbf{x})$, and the output is the recovered signal $\hat{\mathbf{x}}$ from the decoder module. Hence, the goal is to jointly learn an abstract representation of the underlying distribution of the signals through the encoder module and, simultaneously, a decoder module allowing for the reconstruction of the compressed signals from the obtained abstract representations. Therefore, an AE can be defined by two main functions: i) an encoder function $f^{\mathrm{Encoder}}_{\Upsilon_1}$, parameterized on a set of variables $\Upsilon_1$, that maps the input signal into a new vector space, and ii) a decoder function $f^{\mathrm{Decoder}}_{\Upsilon_2}$, parameterized on $\Upsilon_2$, which maps the output of the encoder module back into the original signal space. Hence, the governing dynamics of a general VAE can be expressed as
$\hat{\mathbf{x}} = f^{\mathrm{Decoder}}_{\Upsilon_2} \circ f^{\mathrm{Encoder}}_{\Upsilon_1}(\mathbf{x})$,  (17)
where $\hat{\mathbf{x}}$ denotes the reconstructed signal.
In light of the above, we seek to interpret a one-bit CS system as a VAE module, facilitating not only the design of the sensing matrix and the quantization thresholds that best capture the information of a $K$-sparse signal when one-bit quantizers are employed, but also the learning of the parameters of an iterative optimization algorithm specifically designed for the task of signal recovery. To this end, we modify and unfold the iterations of the proposed G-RFPI algorithm defined in (12), and of the G-BIHT method defined in (16), onto the layers of a deep neural network, and later use deep learning tools to tune the parameters of the proposed one-bit compressive VAE.
### III-A Structure of the Encoding Module
In its most general form, we define the encoder module of the proposed VAE based on our data-acquisition model defined in (2), as follows:
$f^{\mathrm{Encoder}}_{\Upsilon_1}(\mathbf{x}) = \widetilde{\operatorname{sign}}\,(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b})$,  (18)
where $\Upsilon_1 = \{\boldsymbol{\Phi}, \mathbf{b}\}$ denotes the set of learnable parameters of the encoder function, and $\widetilde{\operatorname{sign}}(x) = \tanh(\Lambda x)$ for a suitably large $\Lambda$ (a fixed large value of $\Lambda$ was used in our numerical investigations). Note that we replaced the original sign function with a smooth, differentiable approximation based on the hyperbolic tangent function. The reason for this replacement is that the sign function is not continuous and its gradient is zero everywhere except at the origin; its use would therefore cripple stochastic gradient-based optimization methods (later used in the back-propagation method for deep learning). Fig. 2 plots the function for several values of $\Lambda$, demonstrating that larger values of $\Lambda$ allow for better approximations of the original sign function.
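A minimal sketch of the smooth encoder (18), assuming the $\tanh(\Lambda\,\cdot)$ surrogate described above (the value of $\Lambda$ below is illustrative, not the one used in our experiments):

```python
import torch

def soft_sign(z, lam=1000.0):
    # Differentiable surrogate for sign(.): tanh(lam * z) -> sign(z) as lam grows.
    return torch.tanh(lam * z)

def encoder(x, Phi, b, lam=1000.0):
    # Encoder module (18) with learnable Phi and b (tensors with requires_grad=True).
    return soft_sign(Phi @ x - b, lam)
```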
### III-B Structure of the Decoding Module
In this part, we describe the different scenarios under which we pursue the design of our decoder function by using the RFPI, BIHT, and the suggested G-RFPI and G-BIHT iterations. In particular, we fix the total complexity of our decoding module by fixing the total number of iterations allowed for the mentioned optimization iterations. Next, we slightly over-parameterize each iteration/step of the mentioned algorithms to increase the per-iteration degrees-of-freedom of each method and to further account for the learnable latent variables in the system. Finally, we unfold the iterations of each algorithm onto the layers of a deep architecture such that each layer of the deep network resembles one iteration of the base-line algorithm. We then seek to learn the parameters of both the decoder and encoder function using the training tools already developed for deep learning. We consider the following cases to design our decoder function:
Learned RFPI (L-RFPI): We consider the RFPI iterations defined in (8) as our baseline, but slightly over-parametrize its iterations by introducing a gradient step-size $\delta_i$ and a shrinkage-thresholds vector $\boldsymbol{\tau}_i$ for each iteration $i$. This is in contrast to the original RFP iterations, where a fixed gradient step-size $\delta$ and a fixed shrinkage threshold were employed for all iterations. Hence, the proposed unfolded over-parametrized iterations are much more expressive. The decoder function will be parameterized on $\Upsilon_2 = \{\delta_i, \boldsymbol{\tau}_i\}_{i=0}^{L-1}$, and the encoder function will be parametrized on the set $\Upsilon_1 = \{\boldsymbol{\Phi}\}$ (note that $\mathbf{b} = \mathbf{0}$ in this case).
Learned BIHT (L-BIHT): We consider the unfolding of the iterations of BIHT defined in (14), similar to the previous case, by introducing per-iteration gradient step-sizes $\delta_i$ in lieu of a fixed gradient step-size along all iterations. In this case, the decoder function will be parametrized on the set $\Upsilon_2 = \{\delta_i\}_{i=0}^{L-1}$, while the set of parameters of the encoding module is $\Upsilon_1 = \{\boldsymbol{\Phi}\}$; both are to be learned.
Learned G-RFPI (LG-RFPI): We consider the unfolding of the proposed Generalized RFPI iterations in (12) in a non-zero quantization-thresholds setting. We over-parameterize the iterations of the proposed G-RFPI by parametrizing the decoder function on the set $\Upsilon_2 = \{\delta_i, \boldsymbol{\tau}_i\}_{i=0}^{L-1}$, and this time, by parameterizing the encoder function on both the sensing matrix and the quantization-thresholds vector, i.e., $\Upsilon_1 = \{\boldsymbol{\Phi}, \mathbf{b}\}$.
Learned G-BIHT (LG-BIHT): We consider the unfolding of the G-BIHT iterations defined in (16) in a similar manner, i.e., by parameterizing the decoder function on $\Upsilon_2 = \{\delta_i\}_{i=0}^{L-1}$. However, similar to the previous case, we further parametrize the encoder function on the quantization-thresholds vector in conjunction with the sensing matrix, i.e., $\Upsilon_1 = \{\boldsymbol{\Phi}, \mathbf{b}\}$.
### III-C The Proposed One-Bit Compressive Variational Autoencoding Methodology
In the following, we describe the design of four novel deep architectures based on the above mentioned structures and discuss the governing dynamics of the proposed one-bit compressive sensing-based VAE.
C.1. L-RFPI-Based Compressive Autoencoding:
In this case, we consider the following parameterized encoder function:
$f^{\mathrm{Encoder}}_{\Upsilon_1}(\mathbf{x}) = \widetilde{\operatorname{sign}}\,(\boldsymbol{\Phi}\mathbf{x}),\quad \text{where } \Upsilon_1 = \{\boldsymbol{\Phi}\}$.  (19)
As for the decoder function, and based on the RFPI iterations in (8), we define $g_{\phi_i}$ as follows:
$g_{\phi_i}(\mathbf{z}; \boldsymbol{\Phi}, \mathbf{R}) = \mathbf{v}/\|\mathbf{v}\|_2$, with  (20a)
$\mathbf{v} = \widetilde{\operatorname{sign}}(\mathbf{t}) \odot \operatorname{ReLU}(|\mathbf{t}| - \boldsymbol{\tau}_i)$,  (20b)
$\mathbf{t} = (1 + \delta_i\,\mathbf{d}^{\mathrm{T}}\mathbf{z})\,\mathbf{z} - \delta_i\,\mathbf{d}$,  (20c)
$\mathbf{d} = -(\mathbf{R}\boldsymbol{\Phi})^{\mathrm{T}}\rho(\mathbf{R}\boldsymbol{\Phi}\mathbf{z})$,  (20d)
where $\phi_i = \{\delta_i, \boldsymbol{\tau}_i\}$ represents the parameters of the function $g_{\phi_i}$, $\boldsymbol{\tau}_i$ denotes the sparsity-inducing shrinkage-thresholds vector, and $\delta_i$ represents the gradient step-size at iteration $i$. Next, we define the proposed L-RFPI composite decoder function as follows:
$f^{\mathrm{Decoder}}_{\Upsilon_2}(\mathbf{z}_0) = g_{\phi_{L-1}} \circ g_{\phi_{L-2}} \circ \cdots \circ g_{\phi_1} \circ g_{\phi_0}(\mathbf{z}_0; \boldsymbol{\Phi}, \mathbf{R})$,  (21)
where $\Upsilon_2 = \{\phi_i\}_{i=0}^{L-1}$ represents the learnable (tunable) parameters of the decoder function, and $\mathbf{z}_0$ is some initial point of choice. Note that we have over-parameterized the iterations of the RFPI algorithm by introducing the new variable $\boldsymbol{\tau}_i$ at each iteration for the sparsity-inducing step in (20b). Moreover, in contrast to the original RFPI iterations, we have introduced a new step-size $\delta_i$ at each step of the iteration as well (see Eq. (20c)). Therefore, the above decoder function can be interpreted as performing $L$ iterations of the original RFPI algorithm with additional degrees of freedom (compared to the base algorithm), expressed in terms of the set of shrinkage thresholds $\{\boldsymbol{\tau}_i\}$ and gradient step-sizes $\{\delta_i\}$. As a result, the proposed decoder function is much more expressive than the iterations of the RFPI algorithm.
Remark: Note that the above encoder and decoder functions, once cascaded together, can be viewed as a deep neural network with $L+1$ layers, where the dynamics of the first layer are described by the encoder function defined in (19), and the governing dynamics of the succeeding $L$ layers are described by computations of the form (20a)-(20d). Equivalently, such a deep architecture can be viewed as a computational graph with shared variables among the computation nodes, and thus its parameters can be efficiently optimized by utilizing known deep learning tools such as back-propagation. Hence, the goal is to jointly learn the parameters of such a cascaded network (i.e., $\Upsilon_1$ and $\Upsilon_2$) in an end-to-end manner by using the available data at hand coming from the underlying distribution of the source signal $\mathcal{D}(\mathbf{x})$.
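To make the unfolding concrete, a sketch of the L-RFPI decoder (20)-(21) as a PyTorch module follows; the parameter initializations and the surrogate-sign constant are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

class LRFPIDecoder(nn.Module):
    # Unrolled L-RFPI decoder: L layers, each with its own learnable
    # step-size delta_i and shrinkage-thresholds vector tau_i.
    def __init__(self, n, L):
        super().__init__()
        self.delta = nn.Parameter(0.01 * torch.ones(L))    # per-layer step-sizes
        self.tau = nn.Parameter(0.001 * torch.ones(L, n))  # per-layer thresholds

    def forward(self, z, Phi, r, lam=1000.0):
        RPhi = r.unsqueeze(-1) * Phi                       # R Phi with R = Diag(r)
        for i in range(self.delta.numel()):
            rho = torch.relu(-(RPhi @ z))                  # rho(R Phi z)
            d = -RPhi.T @ rho                              # (20d)
            t = (1.0 + self.delta[i] * torch.dot(d, z)) * z - self.delta[i] * d  # (20c)
            v = torch.tanh(lam * t) * torch.relu(t.abs() - self.tau[i])  # (20b)
            z = v / v.norm()                               # (20a)
        return z
```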
C.2. L-BIHT-Based Compressive Autoencoding:
Similar to the previous case, we consider the same encoding function parametrized only on the learnable sensing matrix in a zero quantization-thresholds setting, i.e., $\Upsilon_1 = \{\boldsymbol{\Phi}\}$ with $\mathbf{b} = \mathbf{0}$. The governing equations for the decoder function in the case of the proposed Learned BIHT are as follows. We re-define $g_{\phi_i}$ as:
$g_{\phi_i}(\mathbf{z}; \boldsymbol{\Phi}, \mathbf{r}, K) = \mathcal{H}_K(\mathbf{v}),\quad \text{for } i < L$,  (22a)
with $\mathbf{v} = \mathbf{z} + \delta_i\,\boldsymbol{\Phi}^{\mathrm{T}}\big(\mathbf{r} - \widetilde{\operatorname{sign}}(\boldsymbol{\Phi}\mathbf{z})\big)$,  (22b)
and where we add a final layer $g_{\phi_L}$ to renormalize the reconstructed signal as
$g_{\phi_L}(\mathbf{z}; \boldsymbol{\Phi}, \mathbf{r}) = \mathbf{z}/\|\mathbf{z}\|_2$.  (23)
Therefore, similar to the previous case, the proposed L-BIHT-based decoder function is defined as:
$f^{\mathrm{Decoder}}_{\Upsilon_2}(\mathbf{z}_0) = g_{\phi_L} \circ g_{\phi_{L-1}} \circ \cdots \circ g_{\phi_1} \circ g_{\phi_0}(\mathbf{z}_0; \boldsymbol{\Phi}, \mathbf{r}, K)$.  (24)
We again observe the slight over-parametrization of the BIHT algorithm during the unfolding process. Namely, at each iteration we introduce the per-iteration step-sizes $\delta_i$ to be learned, which further enhances the performance of the iterations (see (22)). In this case, the decoder function is parameterized only on the gradient step-sizes, i.e., $\Upsilon_2 = \{\delta_i\}_{i=0}^{L-1}$. The L-BIHT iterations thus have an additional $L$ degrees of freedom compared to the original BIHT iterations.
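A corresponding sketch of the L-BIHT decoder (22)-(24), again with illustrative initializations:

```python
class LBIHTDecoder(nn.Module):
    # Unrolled L-BIHT decoder: only per-layer step-sizes are learned;
    # a final layer renormalizes onto the unit sphere.
    def __init__(self, L):
        super().__init__()
        self.delta = nn.Parameter(0.5 * torch.ones(L))

    def forward(self, z, Phi, r, K, lam=1000.0):
        for i in range(self.delta.numel()):
            u = z + self.delta[i] * Phi.T @ (r - torch.tanh(lam * (Phi @ z)))  # (22b)
            mask = torch.zeros_like(u)
            mask[torch.topk(u.abs(), K).indices] = 1.0     # support of H_K
            z = u * mask                                   # hard-thresholding (22a)
        return z / z.norm()                                # final renormalization (23)
```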
C.3. LG-RFPI-Based Compressive Autoencoding:
We consider the unfolding of iterations of the Learned Generalized RFPI method according to (12). As previously discussed, in the generalized iterations of both the RFPI and BIHT algorithms, the encoder module can be expressed as:
$f^{\mathrm{Encoder}}_{\Upsilon_1}(\mathbf{x}) = \widetilde{\operatorname{sign}}\,(\boldsymbol{\Phi}\mathbf{x} - \mathbf{b})$,  (25)
where $\Upsilon_1 = \{\boldsymbol{\Phi}, \mathbf{b}\}$, and $\mathbf{b}$ represents the tunable vector of quantization thresholds. We follow a similar approach to the proposed L-RFPI-based deep architecture and slightly over-parameterize the iterations in (12a)-(12c), leading to the design of the decoder function:
$g_{\phi_i}(\mathbf{z}; \boldsymbol{\Phi}, \mathbf{R}) = \widetilde{\operatorname{sign}}(\mathbf{v}) \odot \operatorname{ReLU}(|\mathbf{v}| - \boldsymbol{\tau}_i)$, with  (26a)
$\mathbf{v} = \mathbf{z} - \delta_i\,\mathbf{d}$,  (26b)
$\mathbf{d} = -(\mathbf{R}\boldsymbol{\Phi})^{\mathrm{T}}\rho\big(\mathbf{R}(\boldsymbol{\Phi}\mathbf{z} - \mathbf{b})\big)$,  (26c)
where $\phi_i = \{\delta_i, \boldsymbol{\tau}_i\}$ represents the parameters of the function $g_{\phi_i}$, $\boldsymbol{\tau}_i$ denotes the sparsity-inducing thresholds vector, and $\delta_i$ represents the gradient step-size at iteration $i$. Hence, the proposed decoder function can be represented in the same way as in (21), with $\Upsilon_2 = \{\phi_i\}_{i=0}^{L-1}$. Note that by incorporating the non-zero quantization thresholds, there is no need for an additional normalization term at each iteration. The above iterations (comprising the decoder function) have the same degrees of freedom as the L-RFPI iterations, i.e., additional model parameters compared to the baseline G-RFPI iterations. Also, note the additional degrees of freedom that the encoder function offers in terms of the tunable quantization-thresholds vector (in addition to the sensing matrix).
C.4. LG-BIHT-Based Compressive Autoencoding:
We consider an encoder function of the form (25), where $\boldsymbol{\Phi}$ denotes the learnable sensing matrix and $\mathbf{b}$ the arbitrary quantization thresholds. Additionally, we present an over-parameterization of the Generalized BIHT iterations (see Eqs. (16)) and consider the resulting unfolded network as the blueprint of our decoder. Namely, we define $g_{\phi_i}$ as:
$g_{\phi_i}(\mathbf{z}; \boldsymbol{\Phi}, \mathbf{r}, K) = \mathcal{H}_K(\mathbf{v})$, with  (27a)
$\mathbf{v} = \mathbf{z} + \delta_i\,\boldsymbol{\Phi}^{\mathrm{T}}\big(\mathbf{r} - \widetilde{\operatorname{sign}}(\boldsymbol{\Phi}\mathbf{z} - \mathbf{b})\big)$,  (27b)
where $\phi_i = \{\delta_i\}$ denotes the set of parameters of the function $g_{\phi_i}$. Note that, due to employing a non-zero thresholds vector, we do not need the additional normalization layer of (23) in this case. Consequently, the decoder function can be expressed in a similar manner as in (24), with $\Upsilon_2 = \{\delta_i\}_{i=0}^{L-1}$. These iterations, similar to the L-BIHT case, have an additional $L$ degrees of freedom compared to the baseline G-BIHT iterations, whereas the encoder function has additional tunable parameters, in terms of the one-bit quantization thresholds, compared to the L-BIHT-based AE.
In the next section, we discuss the training process of the proposed one-bit compressive autoencoders. In particular, we formulate a proper loss function that facilitates the training of such unfolded deep architectures, and for each model, we seek to jointly learn the set of parameters of the entire network (i.e., the encoder and decoder functions) in an end-to-end manner using available deep learning techniques.
### III-D Loss Function Characterization and Training Method
The output of an autoencoder is the reconstructed signal from the compressed measurements, i.e.
$\hat{\mathbf{x}} = f^{\mathrm{Decoder}}_{\Upsilon_2} \circ f^{\mathrm{Encoder}}_{\Upsilon_1}(\mathbf{x})$,
where $\mathbf{x}$ and $\hat{\mathbf{x}}$ denote the input and output of the AE, respectively. The training of an AE should be carried out by defining a proper loss function that provides a measure of the similarity between the input and the output of the AE. The goal is to minimize the distance between the input target signal and the recovered signal according to a similarity criterion. A widely used option is the output MSE loss, i.e., $\|\mathbf{x} - \hat{\mathbf{x}}\|_2^2$, and hence the training loss of such a system can be formulated as:
$G(\mathbf{x}; \hat{\mathbf{x}}) = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}(\mathbf{x})}\left\{\left\|\mathbf{x} - f^{\mathrm{Decoder}}_{\Upsilon_2} \circ f^{\mathrm{Encoder}}_{\Upsilon_1}(\mathbf{x})\right\|_2^2\right\}$
that is to be minimized over $\Upsilon_1$ and $\Upsilon_2$. Nevertheless, in deep architectures with a large number of layers and parameters, such a simple choice of the loss function makes it difficult to back-propagate the gradients; in fact, the vanishing gradient problem arises. Therefore, for the training of the proposed AE, a better choice of loss function is to consider the cumulative MSE loss across the layers. As a result, one can also feed-forward the decoder function for only $l < L$ layers (a lower-complexity decoding) and consider the output of the $l$-th layer as a good approximation of the target signal. For training, one needs to impose the constraint that the gradient step-sizes $\delta_i$ and the shrinkage thresholds $\boldsymbol{\tau}_i$ be non-negative. Since the decoder function is parameterized on the step-sizes and the shrinkage thresholds, we regularize the training loss function to ensure that the network chooses positive step-sizes and shrinkage thresholds at each layer. With this in mind, we suggest the following loss function for training the proposed one-bit compressive AE. Let $\mathbf{x}_i$ denote the input to the $i$-th layer of the decoder and $\tilde{g}_i(\mathbf{x}_i)$ its output, and define the loss function for training as
$G_L(\mathbf{x}; \hat{\mathbf{x}}) = \underbrace{\sum_{i=0}^{L-1} w_i \left\|\mathbf{x} - \tilde{g}_i(\mathbf{x}_i)\right\|_2^2}_{\text{accumulated MSE loss of all layers}} + \underbrace{\lambda \sum_{i=0}^{L-1} \operatorname{ReLU}(-[\boldsymbol{\delta}]_i) + \lambda_n \sum_{i=0}^{L-1} \operatorname{ReLU}(-[\boldsymbol{\tau}]_i)}_{\text{regularization term for the step-sizes and shrinkage thresholds}}$  (28)
where $w_i$ denotes the importance weight of choice for the output of the $i$-th layer, $\boldsymbol{\delta} = [\delta_0, \dots, \delta_{L-1}]^{\mathrm{T}}$ collects the step-sizes, $\boldsymbol{\tau}$ collects the shrinkage thresholds, and $\lambda, \lambda_n > 0$ are regularization coefficients. Note that, as the information flows through the network, one expects the reconstruction to improve as we progress layer by layer. A reasonable weighting scheme is therefore to gradually increase the importance weights through the layers; in this work, we consider a logarithmic weighting scheme in which $w_i$ grows logarithmically with the layer index $i$. Moreover, in training the autoencoders based on the BIHT algorithm, we exclude the last term in (28), as no shrinkage thresholds are required for these models.
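A sketch of the loss (28); since the exact logarithmic weights are not reproduced here, the $w_i$ below follow an illustrative log-growing choice:

```python
def unfolding_loss(x, layer_outputs, delta, tau, lam=1.0, lam_n=1.0):
    # Accumulated, weighted per-layer MSE plus ReLU penalties that keep
    # step-sizes and shrinkage thresholds non-negative, as in (28).
    loss = x.new_zeros(())
    for i, xi in enumerate(layer_outputs):
        w_i = torch.log(torch.tensor(float(i + 2)))  # log-growing weights (assumed form)
        loss = loss + w_i * (x - xi).pow(2).sum()
    loss = loss + lam * torch.relu(-delta).sum()     # penalize negative step-sizes
    loss = loss + lam_n * torch.relu(-tau).sum()     # penalize negative thresholds
    return loss
```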
As for the training procedure, our numerical investigations showed that an incremental learning approach is most effective for training the proposed networks. The details of the incremental learning method that we employed are as follows. During the $l$-th incremental round, we seek to optimize the cost function $G_l$ by learning the set of parameters of the first $l$ layers. At each round, we perform batch learning with fixed-size mini-batches. After finishing the $l$-th round of training, the $(l+1)$-th layer is added to the network, and the objective function changes to $G_{l+1}$. Next, the entire network goes through another batch-learning phase. Interestingly, in this method of training, the learned parameters from the $l$-th round are used as the initial values of the same parameters in the next round.
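The incremental (layer-wise) procedure can be sketched as follows; `build_decoder` and `round_loss` are hypothetical helpers standing in for model construction and the cumulative loss $G_l$, and we assume per-layer parameters are registered under per-layer keys so the warm start matches shapes across rounds:

```python
def incremental_train(build_decoder, loader, L, epochs_per_round, lr=1e-3):
    # Layer-wise training: round l optimizes G_l over the first l layers,
    # then layer l+1 is added, warm-started from the round-l parameters.
    state, model = None, None
    for l in range(1, L + 1):
        model = build_decoder(num_layers=l)             # hypothetical constructor
        if state is not None:
            model.load_state_dict(state, strict=False)  # warm start from round l-1
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs_per_round):
            for x in loader:                            # mini-batch learning
                opt.zero_grad()
                loss = model.round_loss(x)              # hypothetical cumulative loss G_l
                loss.backward()
                opt.step()
        state = model.state_dict()
    return model
```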
## IV Numerical Results
In this section, we present various simulation results to investigate the performance of the proposed one-bit compressive VAEs and to further show the effectiveness of our training. For training purposes, we randomly generate $K$-sparse signals $\mathbf{x}$ of length $n$, whose non-zero elements are drawn from the underlying source distribution. Furthermore, we fix the total number of layers of the decoder function to $L = 30$, the equivalent of performing only 30 optimization iterations of the form (20), (22), (26), and (27). As for the sensing matrix $\boldsymbol{\Phi} \in \mathbb{R}^{m \times n}$ (to be learned), it is initialized with i.i.d. random entries. The results presented here are averaged over multiple realizations of the system parameters. Similar to [4558487], we consider the case $m > n$, due to the focus of this study on one-bit sampling, where usually a large number of one-bit samples are available, as opposed to the usual infinite-precision CS settings.
The proposed one-bit CS VAEs are implemented using the PyTorch library [paszke2017automatic]. The Adam optimizer [kingma2014adam] with a fixed learning rate is utilized for the optimization of the parameters of the proposed deep architectures. Due to the importance of reproducible research, we have made all of the implemented code publicly available along with this paper. The code is also available at: https://github.com/skhobahi/deep1bitVAE
As previously discussed in Sec. III-D, we employ an incremental batch-learning approach with fixed-size mini-batches at each round, and a fixed number of epochs per round. For training the proposed VAEs that are based on the RFPI iterations, i.e., the L-RFPI and LG-RFPI deep architectures, we uniformly sample the sparsity level of the source signal from a finite set of values for each training point in the mini-batch. We evaluate the performance of the proposed methods on target signals with sparsity levels from this set, as well as on a level that was not presented to the network during the training phase. Moreover, because the BIHT method and the corresponding one-bit VAEs (L-BIHT and LG-BIHT) require knowledge of the sparsity level of the source signal a priori, there is no need to train the network on various sparsity levels; i.e., the corresponding deep architectures can be trained for a particular $K$. Hence, for the L-BIHT and LG-BIHT deep architectures, we train the network for source signals with a fixed sparsity level and evaluate the performance of the resulting networks on both that level and an unseen one.
In the sequel, we refer to $\bar{\mathbf{x}} = \mathbf{x}/\|\mathbf{x}\|_2$ as the normalized version of the vector $\mathbf{x}$. In all scenarios, in order to have a fair comparison between the algorithms, the initial starting point of the optimization algorithms is the same.
Performance of the proposed L-RFPI VAE:
In this part, we investigate the performance of the proposed L-RFPI-based VAE in recovering the normalized source signal, i.e., recovering $\bar{\mathbf{x}} = \mathbf{x}/\|\mathbf{x}\|_2$.
Fig. 3 illustrates the mean squared error (MSE) of the normalized recovered signal versus the total number of optimization iterations, for four sparsity levels shown in panels (a)-(d). We compare the performance of the proposed L-RFPI algorithm with the standard RFPI iterations in (8a)-(8d) in the following scenarios:
Case 1: The RFPI algorithm with a randomly generated sensing matrix whose elements are sampled i.i.d., and with fixed values for the step-size $\delta$ and the shrinkage threshold.
Case 2: The RFPI algorithm where the learned $\boldsymbol{\Phi}$ is utilized, and the values for the step-size and shrinkage threshold are fixed as in the previous case.
Case 3: The RFPI algorithm with a randomly generated $\boldsymbol{\Phi}$ (same as Case 1); however, the learned shrinkage-thresholds vectors $\{\boldsymbol{\tau}_i\}$ are utilized with a fixed step-size.
Case 4: The proposed one-bit L-RFPI VAE method corresponding to the iterations of the form (20a)-(20d), with learned $\boldsymbol{\Phi}$, $\{\delta_i\}$, and $\{\boldsymbol{\tau}_i\}$.
To have a fair comparison, we fine-tuned the parameters of the standard RFPI method (Case 1), i.e., the step-size $\delta$ and the shrinkage threshold, using a grid search. It can be seen from Fig. 3 that, for all tested sparsity levels, the proposed L-RFPI method demonstrates significantly better performance than the RFPI algorithm (described in Case 1), yielding a multi-fold improvement in the MSE outcome. Furthermore, the effectiveness of the learned $\boldsymbol{\Phi}$ (Case 2) and the learned $\{\boldsymbol{\tau}_i\}$ (Case 3), compared to the base algorithm (Case 1), is clearly evident, as both algorithms with learned parameters significantly outperform the original RFPI. Finally, although we trained the network on a limited set of sparsity levels, it still shows very good generalization even for an unseen sparsity level (see Fig. 3(a) and (d)). This is presumably because the proposed L-RFPI-based VAE is a hybrid model-based data-driven approach that exploits the existing domain knowledge of the problem as well as the available data at hand. Furthermore, note that the proposed method achieves high accuracy very quickly and does not require solving (6) for several instances, as opposed to the original RFPI algorithm, thus showing great potential for usage in real-time applications.
Performance of the proposed LG-RFPI VAE:
Next, we investigate the performance of the proposed LG-RFPI VAE (see Eqs. (26a)-(26c)) and the G-RFPI algorithm (see Eqs. (12a)-(12c)), which we specifically designed to incorporate arbitrary quantization thresholds at data acquisition. We investigate the performance of the proposed method both in recovering the amplitude information and in recovering the normalized signal.
Fig. 4 illustrates the MSE between the source signal and the recovered signal versus the total number of optimization iterations, for four sparsity levels shown in panels (a)-(d). Similar to the previous case, we consider the following scenarios:
Case 1: The proposed G-RFPI algorithm with a randomly generated sensing matrix and quantization-thresholds vector, whose elements are sampled i.i.d., and with fixed values for the step-size $\delta$ and shrinkage threshold.
Case 2: The proposed G-RFPI algorithm where the learned sensing matrix and quantization-thresholds vector are utilized, and the values for the step-size and shrinkage threshold are fixed as in the previous case.
Case 3: The proposed one-bit LG-RFPI VAE method corresponding to the iterations of the form (26a)-(26c), with the learned $\boldsymbol{\Phi}$, $\mathbf{b}$, $\{\delta_i\}$, and $\{\boldsymbol{\tau}_i\}$.
Note that the focus of this part is on recovering the amplitude information of the underlying $K$-sparse signal by means of arbitrary quantization thresholds. Although the RFPI method and the proposed L-RFPI VAE can only recover the normalized signal $\bar{\mathbf{x}}$, we further provide the performance of the L-RFPI method (which significantly outperforms the RFPI method) in recovering the amplitude information for comparison purposes.
It can be observed from Fig. 4 that the proposed G-RFPI algorithm with a randomly generated sensing matrix and quantization thresholds (Case 1) provides good accuracy in recovering the amplitude information of the true signal at the tested sparsity levels. This is in contrast to the RFPI algorithm and the corresponding L-RFPI VAE, where the amplitude information is lost due to zero quantization thresholds. More precisely, the proposed G-RFPI algorithm outperforms the RFPI and L-RFPI algorithms in terms of recovering the amplitude information of the signal. One can observe that even with randomly generated quantization thresholds (i.e., without learning them), the proposed G-RFPI method achieves a significantly lower MSE in recovering the amplitude information of the source signal than the RFPI and the proposed L-RFPI methods. Hence, the proposed G-RFPI method can be used as a stand-alone algorithm for one-bit compressive sensing settings with non-zero quantization thresholds, where recovering both the direction and the amplitude of the source signal is of great interest. Next, we explore the effect of learning the distribution-specific (data-driven) sensing matrix and quantization thresholds (Case 2). It is evident from Fig. 4 that, compared to the vanilla G-RFPI method, one can achieve a significantly lower MSE in recovering the amplitude information by learning a proper sensing matrix and quantization thresholds and utilizing them during the data-acquisition process. Finally, it can be seen from Fig. 4 that the proposed LG-RFPI VAE (Case 3) significantly outperforms its counterparts by achieving a much lower MSE very quickly. Moreover, the proposed LG-RFPI VAE shows strong generalization properties for unseen sparsity levels (see Fig. 4(b) and (d)). The fact that such architectures generalize well is due to the model-driven nature of the proposed deep networks.
We conclude this part by comparing the performance of the proposed LG-RFPI, G-RFPI, and L-RFPI VAEs in recovering the normalized version of the signal $\bar{\mathbf{x}}$. Fig. 5 illustrates the MSE between the normalized source signal and the recovered signal versus the number of iterations, for a fixed sparsity level. It can be observed from Fig. 5 that the proposed methods outperform the standard RFPI iterations and achieve high accuracy in recovering $\bar{\mathbf{x}}$. Moreover, the proposed L-RFPI VAE shows slightly better performance than the LG-RFPI method. This is presumably because the L-RFPI iterations and the corresponding deep architecture are specifically designed and tuned for recovering the normalized source signal, while the proposed G-RFPI and LG-RFPI algorithms are designed for recovering the amplitude information of the source signal. Nevertheless, the MSE difference between the LG-RFPI and L-RFPI methods in recovering $\bar{\mathbf{x}}$ is negligible; hence, in a non-zero quantization-thresholds setting, it is beneficial to use the proposed LG-RFPI VAE, as it significantly improves the recovery of the amplitude information while maintaining high performance in recovering $\bar{\mathbf{x}}$ as well.
Performance of the proposed L-BIHT VAE:
In this part, we investigate the performance of the proposed L-BIHT VAE and compare our results with the standard BIHT algorithm. Note that, similar to the RFPI method and the proposed L-RFPI VAE, the BIHT algorithm assumes $\mathbf{b} = \mathbf{0}$ at the time of data acquisition. Hence, we investigate the performance of the proposed method in recovering the normalized source signal $\bar{\mathbf{x}}$. In particular, we provide the simulation results for the following cases:
Case 1: The BIHT algorithm with a randomly generated sensing matrix whose elements are sampled i.i.d., and a fixed value for the step-size $\alpha$.
Case 2: The BIHT algorithm with a randomly generated $\boldsymbol{\Phi}$ (same as Case 1); however, learned gradient step-sizes $\{\delta_i\}$ are used at each iteration $i$.
Case 3: The BIHT algorithm where the learned $\boldsymbol{\Phi}$ is utilized and the value of the step-size is fixed as in Case 1.
Case 4: The proposed one-bit L-BIHT VAE method corresponding to the iterations of the form (22a)-(22b), with the learned $\boldsymbol{\Phi}$ and $\{\delta_i\}$.
Fig. 6 demonstrates the MSE between the normalized source signal $\bar{\mathbf{x}}$ and the recovered signal versus the number of optimization iterations, for signals with two sparsity levels shown in panels (a) and (b). Note that for learning the parameters of the proposed L-BIHT algorithm, we trained the corresponding deep architecture on a fixed sparsity level, and we check the generalization performance of the learned parameters on an unseen one. It can be seen from Fig. 6 that in both cases the proposed L-BIHT algorithm demonstrates significantly better performance than the standard BIHT algorithm (Case 1). Moreover, the effectiveness of the learned step-sizes (Case 2) and the learned sensing matrix (Case 3), compared to the baseline vanilla BIHT algorithm (Case 1), is evident. In particular, the learned step-sizes (Case 2) result in a fast descent, while the learned $\boldsymbol{\Phi}$ (Case 3) leads to a lower MSE compared to Case 2. In addition, we provide the performance of the standard RFPI algorithm for comparison purposes. It can be seen from Fig. 6 that the BIHT algorithm, with and without the learned parameters, achieves better accuracy in recovering the direction of the source signal than the RFPI method. Also, a comparison between Fig. 6(a) and Fig. 3(b) reveals that the proposed L-BIHT VAE demonstrates far better performance than the proposed L-RFPI VAE. This is because the BIHT algorithm, and the corresponding proposed L-BIHT VAE, exploit the knowledge of the sparsity level of the source signal (note the mapping $\mathcal{H}_K$ used in (22a) and (14b)). One can further observe that even for the unseen sparsity level, the proposed method generalizes very well and maintains its accuracy, owing to the model-driven nature of the proposed L-BIHT VAE architecture. It is worth mentioning that the proposed L-BIHT method converges very fast (in 10 iterations), achieving high accuracy and making it a great candidate for real-time applications. Of course, the trade-off between using L-RFPI and L-BIHT is implicit in the knowledge of the sparsity level of the signal. For applications where $K$ is known beforehand, the proposed L-BIHT can be used, as it shows higher accuracy than the other methods. However, the L-RFPI methodology is more flexible, as it does not require knowing the sparsity level of the signal a priori.
Performance of the proposed LG-BIHT VAE:
Finally, we investigate the performance of the proposed G-BIHT method (see Eqs. (16a)-(16b)) and the corresponding one-bit compressive LG-BIHT VAE (see Eqs. (27a)-(27b)), which are specifically designed to handle non-zero quantization thresholds. In particular, we are interested in evaluating the performance of the proposed methods in recovering the amplitude information of the $K$-sparse source signal. Hence, for this part, we check the MSE between the true signal $\mathbf{x}$ and the signal recovered by the G-BIHT and LG-BIHT methods at each iteration. In addition, we provide the results for recovering the direction of the source signal as well. Specifically, we provide the simulation results for the following cases:
Case 1: The proposed G-BIHT algorithm with a randomly generated sensing matrix and quantization-thresholds vector, where the elements of both are sampled i.i.d., and a fixed value for the step-size $\alpha$.
Case 2: The proposed G-BIHT algorithm where the learned sensing matrix and quantization thresholds are utilized, and the value of the step-size is fixed as in the previous case.
Case 3: The proposed one-bit LG-BIHT VAE method corresponding to the iterations of the form (27a)-(27b), with learned $\boldsymbol{\Phi}$, $\mathbf{b}$, and $\{\delta_i\}$.
Fig. 7 illustrates the MSE between the true signal and the recovered signal versus the optimization iteration, for two sparsity levels shown in panels (a) and (b). We further provide the numerical results for the proposed LG-RFPI VAE and the proposed G-RFPI iterations for comparison. It can be seen from Fig. 7 that the proposed G-BIHT algorithm with randomly generated latent variables (Case 1) significantly outperforms its G-RFPI counterpart and achieves high accuracy very quickly. On the other hand, the proposed LG-RFPI still achieves a lower MSE compared to the vanilla G-RFPI method. In addition, a comparison between the performance of the proposed G-BIHT algorithm with learned $\boldsymbol{\Phi}$ and $\mathbf{b}$ (Case 2) and the proposed LG-RFPI VAE and vanilla G-BIHT (Case 1) reveals the effectiveness of the learned parameters and the power of the proposed G-BIHT algorithm. Namely, by utilizing only the learned $\boldsymbol{\Phi}$ and $\mathbf{b}$ and a fixed step-size for the G-BIHT algorithm, one can achieve performance superior to that of the LG-RFPI (where all of the learned variables are in use) and the vanilla G-BIHT method. Finally, it can be observed from Fig. 7(a)-(b) that the proposed LG-BIHT algorithm (Case 3) significantly outperforms the other methods, as it achieves a much lower MSE very quickly, specifically compared to the proposed LG-RFPI VAE. The superior performance of the LG-BIHT algorithm and the corresponding LG-BIHT VAE is due to the fact that we are exploiting the knowledge of the sparsity level present in the signal. As discussed before, if the sparsity level is known a priori, it is beneficial to use either the G-BIHT algorithm (when one does not wish to perform any learning) or the proposed LG-BIHT methodology. It is worth mentioning that, similar to the previously investigated methods, the proposed LG-BIHT generalizes very well to an unseen sparsity level (see Fig. 7(b)), even though that level was not revealed to the network during the training phase.
Fig. 8 demonstrates the MSE between the direction of the source signal, i.e., $\bar{\mathbf{x}}$, and the recovered direction versus the optimization iteration, for two sparsity levels shown in panels (a) and (b). It can be seen from Fig. 8 that the proposed LG-BIHT method outperforms the LG-RFPI method and, furthermore, achieves an MSE similar to that of the proposed L-RFPI method, while converging much faster than L-RFPI. Furthermore, the proposed L-BIHT algorithm still achieves performance superior to the other methods, both in convergence speed and accuracy. This is presumably because the L-BIHT method is specifically designed and learned to achieve high accuracy in finding the normalized true signal $\bar{\mathbf{x}}$.
## V Conclusion
In this paper, we considered the problem of one-bit compressive sensing and proposed a novel hybrid model-driven and data-driven variational autoencoding scheme that allows us to jointly learn the parameters of the measurement module (i.e., the sensing matrix and the quantization thresholds) and the latent variables of the decoder (estimator) function, based on the underlying distribution of the data. In broad terms, we proposed a novel methodology that combines traditional compressive sensing techniques with model-based deep learning, resulting in interpretable deep architectures for the problem of one-bit compressive sensing. In addition, the proposed method can handle the recovery of the amplitude information of the signal using the learned and optimized quantization thresholds. Our simulation results demonstrated that the proposed hybrid methodology is superior to the state-of-the-art methods for the problem of one-bit CS in terms of both computational efficiency and accuracy.
|
2022-01-16 11:10:49
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814581036567688, "perplexity": 533.1777724639462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00646.warc.gz"}
|
http://www.turkmath.org/beta/seminer.php?id_seminer=1674
|
#### İstanbul Analysis Seminars
Hypercyclic Algebras
Frédéric Bayart
Universite Blaise Pascal, France
Abstract: Let X be a topological space and let T be a bounded operator on X. We say that T is hypercyclic if T admits a dense orbit, namely if there exists a vector x ∈ X, called a hypercyclic vector for T, such that {T^nx; n ≥ 0} is dense in X. We shall denote by HC(T) the set of hypercyclic vectors for T. It is known that, provided HC(T) is nonempty, then it has some nice topological and algebraic properties. For instance, HC(T)∪ {0} always contains a dense subspace, and there are nice criteria for the existence of a closed infinite-dimensional subspace in it. When moreover X is an algebra, it is natural to study whether HC(T) contains a nontrivial algebra. In this talk, we will explain some recent (negative and positive) results on this problem.
Date: 23.03.2018  Time: 14:40  Place: IMBM Seminar Room, Bogazici University South Campus  Language: English  Note: Please notice the change in time and place!
|
2018-12-12 21:43:33
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363686203956604, "perplexity": 848.904463258168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824119.26/warc/CC-MAIN-20181212203335-20181212224835-00350.warc.gz"}
|
https://peeterjoot.wordpress.com/2011/02/24/phy450h1s-relativistic-electrodynamics-lecture-11-taught-by-prof-erich-poppitz-unpacking-lorentz-force-equation-lorentz-transformations-of-the-strength-tensor-lorentz-field-invariants-bianc/
|
# Peeter Joot's (OLD) Blog.
## PHY450H1S. Relativistic Electrodynamics Lecture 11 (Taught by Prof. Erich Poppitz). Unpacking Lorentz force equation. Lorentz transformations of the strength tensor, Lorentz field invariants, Bianchi identity, and first half of Maxwell’s.
Posted by peeterjoot on February 24, 2011
[Click here for a PDF of this post with nicer formatting (especially if my latex to wordpress script has left FORMULA DOES NOT PARSE errors.)]
Covering chapter 3 material from the text [1].
Covering lecture notes pp. 74-83: Lorentz transformation of the strength tensor (82) [Tuesday, Feb. 8] [extra reading for the mathematically minded: gauge field, strength tensor, and gauge transformations in differential form language, not to be covered in class (83)]
Covering lecture notes pp. 84-102: Lorentz invariants of the electromagnetic field (84-86); Bianchi identity and the first half of Maxwell’s equations (87-90)
# Chewing on the four vector form of the Lorentz force equation.
After much effort, we arrived at
\begin{aligned}\frac{d{{(m c u_l) }}}{ds} = \frac{e}{c} \left( \partial_l A_i - \partial_i A_l \right) u^i\end{aligned} \hspace{\stretch{1}}(2.1)
or
\begin{aligned}\frac{d{{ p_l }}}{ds} = \frac{e}{c} F_{l i} u^i\end{aligned} \hspace{\stretch{1}}(2.2)
## Elements of the strength tensor
\paragraph{Claim}: there are only 6 independent elements of this matrix (tensor)
\begin{aligned}\begin{bmatrix}0 & . & . & . \\ & 0 & . & . \\ & & 0 & . \\ & & & 0 \\ \end{bmatrix}\end{aligned} \hspace{\stretch{1}}(2.3)
This is a no-brainer, for we just have to mechanically plug in the elements of the field strength tensor
Recall
\begin{aligned}A^i &= (\phi, \mathbf{A}) \\ A_i &= (\phi, -\mathbf{A})\end{aligned} \hspace{\stretch{1}}(2.4)
\begin{aligned}F_{0\alpha} &= \partial_0 A_\alpha - \partial_\alpha A_0 \\ &= -\partial_0 (\mathbf{A})_\alpha - \partial_\alpha \phi \\ \end{aligned}
\begin{aligned}F_{0\alpha} = E_\alpha\end{aligned} \hspace{\stretch{1}}(2.6)
For the purely spatial index combinations we have
\begin{aligned}F_{\alpha\beta} &= \partial_\alpha A_\beta - \partial_\beta A_\alpha \\ &= -\partial_\alpha (\mathbf{A})_\beta + \partial_\beta (\mathbf{A})_\alpha \\ \end{aligned}
Written out explicitly, these are
\begin{aligned}F_{1 2} &= \partial_2 (\mathbf{A})_1 -\partial_1 (\mathbf{A})_2 \\ F_{2 3} &= \partial_3 (\mathbf{A})_2 -\partial_2 (\mathbf{A})_3 \\ F_{3 1} &= \partial_1 (\mathbf{A})_3 -\partial_3 (\mathbf{A})_1 .\end{aligned} \hspace{\stretch{1}}(2.7)
We can compare this to the elements of $\mathbf{B}$
\begin{aligned}\mathbf{B} = \begin{vmatrix}\hat{\mathbf{x}} & \hat{\mathbf{y}} & \hat{\mathbf{z}} \\ \partial_1 & \partial_2 & \partial_3 \\ A_x & A_y & A_z\end{vmatrix}\end{aligned} \hspace{\stretch{1}}(2.10)
We see that
\begin{aligned}(\mathbf{B})_z &= \partial_1 A_y - \partial_2 A_x \\ (\mathbf{B})_x &= \partial_2 A_z - \partial_3 A_y \\ (\mathbf{B})_y &= \partial_3 A_x - \partial_1 A_z\end{aligned} \hspace{\stretch{1}}(2.11)
So we have
\begin{aligned}F_{1 2} &= - (\mathbf{B})_3 \\ F_{2 3} &= - (\mathbf{B})_1 \\ F_{3 1} &= - (\mathbf{B})_2.\end{aligned} \hspace{\stretch{1}}(2.14)
These can be summarized as simply
\begin{aligned}F_{\alpha\beta} = - \epsilon_{\alpha\beta\gamma} B_\gamma.\end{aligned} \hspace{\stretch{1}}(2.17)
This provides all the info needed to fill in the matrix above
\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.18)
## Index raising of rank 2 tensor
To raise indexes we compute
\begin{aligned}F^{i j} = g^{i l} g^{j k} F_{l k}.\end{aligned} \hspace{\stretch{1}}(2.19)
### Justifying the raising operation.
To justify this consider raising one index at a time by applying the metric tensor to our definition of $F_{l k}$. That is
\begin{aligned}g^{a l} F_{l k} &=g^{a l} (\partial_l A_k - \partial_k A_l) \\ &=\partial^a A_k - \partial_k A^a.\end{aligned}
Now apply the metric tensor once more
\begin{aligned}g^{b k} g^{a l} F_{l k} &=g^{b k} (\partial^a A_k - \partial_k A^a) \\ &=\partial^a A^b - \partial^b A^a.\end{aligned}
This is, by definition $F^{a b}$. Since a rank 2 tensor has been defined as an object that transforms like the product of two pairs of coordinates, it makes sense that this particular tensor raises in the same fashion as would a product of two vector coordinates (in this case, it happens to be an antisymmetric product of two vectors, one of which is an operator, but the idea is the same).
### Consider the components of the raised $F_{i j}$ tensor.
\begin{aligned}F^{0\alpha} &= -F_{0\alpha} \\ F^{\alpha\beta} &= F_{\alpha\beta}.\end{aligned} \hspace{\stretch{1}}(2.20)
\begin{aligned}{\left\lVert{ F^{i j} }\right\rVert} = \begin{bmatrix}0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.22)
## Back to chewing on the Lorentz force equation.
\begin{aligned}m c \frac{d{{ u_i }}}{ds} = \frac{e}{c} F_{i j} u^j\end{aligned} \hspace{\stretch{1}}(2.23)
\begin{aligned}u^i &= \gamma \left( 1, \frac{\mathbf{v}}{c} \right) \\ u_i &= \gamma \left( 1, -\frac{\mathbf{v}}{c} \right)\end{aligned} \hspace{\stretch{1}}(2.24)
For the spatial components of the Lorentz force equation we have
\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= \frac{e}{c} F_{\alpha j} u^j \\ &= \frac{e}{c} F_{\alpha 0} u^0+ \frac{e}{c} F_{\alpha \beta} u^\beta \\ &= \frac{e}{c} (-E_{\alpha}) \gamma+ \frac{e}{c} (- \epsilon_{\alpha\beta\gamma} B_\gamma ) \frac{v^\beta}{c} \gamma \end{aligned}
But
\begin{aligned}m c \frac{d{{ u_\alpha }}}{ds} &= -m \frac{d{{(\gamma \mathbf{v}_\alpha)}}}{ds} \\ &= -m \frac{d(\gamma \mathbf{v}_\alpha)}{c \sqrt{1 - \frac{\mathbf{v}^2}{c^2}} dt} \\ &= -\gamma \frac{d(m \gamma \mathbf{v}_\alpha)}{c dt}.\end{aligned}
Canceling the common $-\gamma/c$ terms, and switching to vector notation, we are left with
\begin{aligned}\frac{d( m \gamma \mathbf{v}_\alpha)}{dt} = e \left( E_\alpha + \frac{1}{{c}} (\mathbf{v} \times \mathbf{B})_\alpha \right).\end{aligned} \hspace{\stretch{1}}(2.26)
Now for the energy term. We have
\begin{aligned}m c \frac{d{{u_0}}}{ds} &= \frac{e}{c} F_{0\alpha} u^\alpha \\ &= \frac{e}{c} E_{\alpha} \gamma \frac{v^\alpha}{c} \\ \frac{d{{ (m c \gamma) }}}{ds} &= \frac{e}{c^2} \gamma \mathbf{E} \cdot \mathbf{v}\end{aligned}
Using $d/ds = (\gamma/c) d/dt$ and canceling the common factor of $\gamma$, in vector form we have
\begin{aligned}\frac{d{{ (m c^2 \gamma)}}}{dt} = e \mathbf{E} \cdot \mathbf{v},\end{aligned} \hspace{\stretch{1}}(2.27)
or
\begin{aligned}\frac{d{{ \mathcal{E} }}}{dt} = e \mathbf{E} \cdot \mathbf{v}\end{aligned} \hspace{\stretch{1}}(2.28)
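A small numeric sanity check of 2.26 through 2.28 (a sketch in illustrative units $e = m = c = 1$, my own and not from the notes): integrating $d(m \gamma \mathbf{v})/dt = e(\mathbf{E} + \mathbf{v} \times \mathbf{B}/c)$ with a crude Euler step, the energy gain should match the accumulated $e \mathbf{E} \cdot \mathbf{v} \, dt$, since the magnetic force does no work:

```python
import numpy as np

e, m, c = 1.0, 1.0, 1.0                     # illustrative units (assumed)
E = np.array([0.3, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])

def gamma_of_p(p):
    # gamma expressed through the relativistic momentum p = m gamma v
    return np.sqrt(1.0 + p @ p / (m * c) ** 2)

p0 = np.array([0.2, 0.0, 0.0])
p, work, dt = p0.copy(), 0.0, 2e-4
for _ in range(50_000):
    v = p / (m * gamma_of_p(p))
    work += e * (E @ v) * dt                      # RHS of d(m c^2 gamma)/dt
    p = p + dt * e * (E + np.cross(v, B) / c)     # Euler step of (2.26)

gain = m * c**2 * (gamma_of_p(p) - gamma_of_p(p0))
print(gain, work)   # nearly equal, up to the integrator error
```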
# Transformation of rank two tensors in matrix and index form.
## Transformation of the metric tensor, and some identities.
With
\begin{aligned}\hat{G} = {\left\lVert{ g_{i j} }\right\rVert} = {\left\lVert{ g^{i j} }\right\rVert}\end{aligned} \hspace{\stretch{1}}(3.29)
\paragraph{We claim:}
The rank two tensor $\hat{G}$ transforms in the following sort of sandwich operation, and this leaves it invariant
\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}.\end{aligned} \hspace{\stretch{1}}(3.30)
To demonstrate this let’s consider a transformed vector in coordinate form as follows
\begin{aligned}{x'}^i &= O^{i j} x_j = {O^i}_j x^j \\ {x'}_i &= O_{i j} x^j = {O_i}^j x_j.\end{aligned} \hspace{\stretch{1}}(3.31)
We can thus write the equation in matrix form with
\begin{aligned}X &= {\left\lVert{x^i}\right\rVert} \\ X' &= {\left\lVert{{x'}^i}\right\rVert} \\ \hat{O} &= {\left\lVert{{O^i}_j}\right\rVert} \\ X' &= \hat{O} X\end{aligned} \hspace{\stretch{1}}(3.33)
Our invariant for the vector square, which is required to remain unchanged is
\begin{aligned}{x'}^i {x'}_i &= (O^{i j} x_j)(O_{i k} x^k) \\ &= x^k (O^{i j} O_{i k}) x_j.\end{aligned}
This shows that we have a Kronecker delta relationship for the Lorentz transform matrix, when we sum over the first index
\begin{aligned}O^{a i} O_{a j} = {\delta^i}_j.\end{aligned} \hspace{\stretch{1}}(3.37)
It appears we can put 3.37 into matrix form as
\begin{aligned}\hat{G} \hat{O}^\text{T} \hat{G} \hat{O} = I\end{aligned} \hspace{\stretch{1}}(3.38)
Now, since the transpose of a rotation is the inverse rotation, and the transpose of a boost leaves it unchanged, the transpose of a general Lorentz transformation (a composition of an arbitrary sequence of boosts and rotations) must also be a Lorentz transformation, and must therefore also leave the norm unchanged. For the transpose of our Lorentz transformation $\hat{O}$ let's write
\begin{aligned}\hat{P} = \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.39)
For the action of this on our position vector let’s write
\begin{aligned}{x''}^i &= P^{i j} x_j = O^{j i} x_j \\ {x''}_i &= P_{i j} x^j = O_{j i} x^j\end{aligned} \hspace{\stretch{1}}(3.40)
so that our norm is
\begin{aligned}{x''}^a {x''}_a &= (O_{k a} x^k)(O^{j a} x_j) \\ &= x^k (O_{k a} O^{j a} ) x_j \\ &= x^j x_j \\ \end{aligned}
We must then also have an identity when summing over the second index
\begin{aligned}{\delta_{k}}^j = O_{k a} O^{j a} \end{aligned} \hspace{\stretch{1}}(3.42)
Armed with these facts on the products of $O_{i j}$ and $O^{i j}$ we can now consider the transformation of the metric tensor.
The rule (definition) supplied to us for the transformation of an arbitrary rank two tensor, is that this transforms as its indexes transform individually. Very much as if it was the product of two coordinate vectors and we transform those coordinates separately. Doing so for the metric tensor we have
\begin{aligned}g^{i j} &\rightarrow {O^i}_k g^{k m} {O^j}_m \\ &= ({O^i}_k g^{k m}) {O^j}_m \\ &= O^{i m} {O^j}_m \\ &= O^{i m} (O_{a m} g^{a j}) \\ &= (O^{i m} O_{a m}) g^{a j}\end{aligned}
However, by 3.42 we have $O_{a m} O^{i m} = {\delta_a}^i$, which proves that
\begin{aligned}g^{i j} \rightarrow g^{i j}.\end{aligned} \hspace{\stretch{1}}(3.43)
Finally, to put the above transformation in matrix form, look more carefully at the very first line
\begin{aligned}g^{i j}&\rightarrow {O^i}_k g^{k m} {O^j}_m \\ \end{aligned}
which is
\begin{aligned}\hat{G} \rightarrow \hat{O} \hat{G} \hat{O}^\text{T} = \hat{G}\end{aligned} \hspace{\stretch{1}}(3.44)
We see that this particular form of transformation, a sandwich between $\hat{O}$ and $\hat{O}^\text{T}$, leaves the metric tensor invariant.
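This is easy to spot-check numerically for a sample transformation. A minimal sketch (mine, not from the notes), using an x-axis boost as the Lorentz matrix:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# A boost along x, a representative element of O
O = np.eye(4)
O[:2, :2] = [[gamma, -gamma * beta],
             [-gamma * beta, gamma]]

G = np.diag([1.0, -1.0, -1.0, -1.0])
assert np.allclose(O @ G @ O.T, G)   # the sandwich (3.44) leaves G invariant
```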
## Lorentz transformation of the electrodynamic tensor
Having seen that a sandwich of Lorentz transformation matrices leaves the metric tensor invariant, it is reasonable to ask how this form of transformation acts on our electrodynamic tensor $F^{i j}$.
\paragraph{Claim:} A transformation of the following form is required to maintain the norm of the Lorentz force equation
\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T} ,\end{aligned} \hspace{\stretch{1}}(3.45)
where $\hat{F} = {\left\lVert{F^{i j}}\right\rVert}$. Observe that our Lorentz force equation can be written exclusively in upper index quantities as
\begin{aligned}m c \frac{d{{u^i}}}{ds} = \frac{e}{c} F^{i j} g_{j l} u^l\end{aligned} \hspace{\stretch{1}}(3.46)
Because we have a vector on one side of the equation, and it transforms by multiplication with a Lorentz matrix in SO(1,3)
\begin{aligned}\frac{du^i}{ds} \rightarrow \hat{O} \frac{du^i}{ds} \end{aligned} \hspace{\stretch{1}}(3.47)
The LHS of the Lorentz force equation provides us with one invariant
\begin{aligned}(m c)^2 \frac{d{{u^i}}}{ds} \frac{d{{u_i}}}{ds}\end{aligned} \hspace{\stretch{1}}(3.48)
so the RHS must also provide one
\begin{aligned}\frac{e^2}{c^2} F^{i j} g_{j l} u^lF_{i k} g^{k m} u_m=\frac{e^2}{c^2} F^{i j} u_jF_{i k} u^k.\end{aligned} \hspace{\stretch{1}}(3.49)
Let’s look at the RHS in matrix form. Writing
\begin{aligned}U = {\left\lVert{u^i}\right\rVert},\end{aligned} \hspace{\stretch{1}}(3.50)
we can rewrite the Lorentz force equation as
\begin{aligned}m c \dot{U} = \frac{e}{c} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.51)
In this matrix formalism our invariant 3.49 is
\begin{aligned}\frac{e^2}{c^2} (\hat{F} \hat{G} U)^\text{T} \hat{G} \hat{F} \hat{G} U=\frac{e^2}{c^2} U^\text{T} \hat{G} \hat{F}^\text{T} \hat{G} \hat{F} \hat{G} U.\end{aligned} \hspace{\stretch{1}}(3.52)
If we compare this to the transformed Lorentz force equation we have
\begin{aligned}m c \hat{O} \dot{U} = \frac{e}{c} \hat{F'} \hat{G} \hat{O} U.\end{aligned} \hspace{\stretch{1}}(3.53)
Our invariant for the transformed equation is
\begin{aligned}\frac{e^2}{c^2} (\hat{F'} \hat{G} \hat{O} U)^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} U&=\frac{e^2}{c^2} U^\text{T} \hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} U \\ \end{aligned}
Thus the transformed electrodynamic tensor $\hat{F}'$ must satisfy the identity
\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} = \hat{G} \hat{F}^\text{T} \hat{G} \hat{F} \hat{G} \end{aligned} \hspace{\stretch{1}}(3.54)
With the substitution $\hat{F}' = \hat{O} \hat{F} \hat{O}^\text{T}$ the LHS is
\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{F'}^\text{T} \hat{G} \hat{F'} \hat{G} \hat{O} &= \hat{O}^\text{T} \hat{G} ( \hat{O} \hat{F} \hat{O}^\text{T})^\text{T} \hat{G} (\hat{O} \hat{F} \hat{O}^\text{T}) \hat{G} \hat{O} \\ &= (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F}^\text{T} (\hat{O}^\text{T} \hat{G} \hat{O}) \hat{F} (\hat{O}^\text{T} \hat{G} \hat{O}) \\ \end{aligned}
We’ve argued that $\hat{P} = \hat{O}^\text{T}$ is also a Lorentz transformation, thus
\begin{aligned}\hat{O}^\text{T} \hat{G} \hat{O}&=\hat{P} \hat{G} \hat{P}^\text{T} \\ &=\hat{G}\end{aligned}
This is enough to make both sides of 3.54 match, verifying that this transformation does provide the invariant properties desired.
## Direct computation of the Lorentz transformation of the electrodynamic tensor.
We can construct the transformed field tensor more directly, by simply transforming the coordinates of the four gradient and the four potential directly. That is
\begin{aligned}F^{i j} = \partial^i A^j - \partial^j A^i&\rightarrow {O^i}_a {O^j}_b \left( \partial^a A^b - \partial^b A^a \right) \\ &={O^i}_a F^{a b} {O^j}_b \end{aligned}
By inspection we can see that this can be represented in matrix form as
\begin{aligned}\hat{F} \rightarrow \hat{O} \hat{F} \hat{O}^\text{T}\end{aligned} \hspace{\stretch{1}}(3.55)
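The same computation can be done numerically. A sketch (sample field values, and the x-boost used earlier) that applies the sandwich 3.55 and reads the transformed fields out of the matrix layout of 2.22; with this sign choice of the boost the output matches the familiar mixing $E'_y = \gamma(E_y - \beta B_z)$, $E'_z = \gamma(E_z + \beta B_y)$:

```python
import numpy as np

Ef = np.array([1.0, 2.0, 3.0]); Bf = np.array([0.5, -1.5, 2.5])
# F^{ij} laid out as in (2.22)
F_up = np.array([[0.0,  -Ef[0], -Ef[1], -Ef[2]],
                 [Ef[0],  0.0,  -Bf[2],  Bf[1]],
                 [Ef[1],  Bf[2], 0.0,   -Bf[0]],
                 [Ef[2], -Bf[1], Bf[0],  0.0]])

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
O = np.eye(4)
O[:2, :2] = [[gamma, -gamma * beta], [-gamma * beta, gamma]]

F_up_prime = O @ F_up @ O.T           # the sandwich (3.55)
E_prime = -F_up_prime[0, 1:]          # fields from the (2.22) layout
print(E_prime)  # E'_x = E_x, E'_y = gamma (E_y - beta B_z), E'_z = gamma (E_z + beta B_y)
```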
# Four vector invariants
For three vectors $\mathbf{A}$ and $\mathbf{B}$ invariants are
\begin{aligned}\mathbf{A} \cdot \mathbf{B} = A^\alpha B_\alpha\end{aligned} \hspace{\stretch{1}}(4.56)
For four vectors $A^i$ and $B^i$ invariants are
\begin{aligned}A^i B_i = A^i g_{i j} B^j \end{aligned} \hspace{\stretch{1}}(4.57)
For $F_{i j}$ what are the invariants? One invariant is
\begin{aligned}g^{i j} F_{i j} = 0,\end{aligned} \hspace{\stretch{1}}(4.58)
but this isn’t interesting since it is uniformly zero (product of symmetric and antisymmetric).
The two invariants are
\begin{aligned}F_{i j}F^{i j}\end{aligned} \hspace{\stretch{1}}(4.59)
and
\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l}\end{aligned} \hspace{\stretch{1}}(4.60)
where
\begin{aligned}\epsilon^{i j k l} =\left\{\begin{array}{l l}0 & \quad \mbox{if any two indexes coincide} \\ 1 & \quad \mbox{for even permutations of } i j k l = 0123 \\ -1 & \quad \mbox{for odd permutations of } i j k l = 0123 \\ \end{array}\right.\end{aligned} \hspace{\stretch{1}}(4.61)
We can show (homework) that
\begin{aligned}F_{i j}F^{i j} \propto \mathbf{E}^2 - \mathbf{B}^2\end{aligned} \hspace{\stretch{1}}(4.62)
\begin{aligned}\epsilon^{i j k l} F_{i j}F_{k l} \propto \mathbf{E} \cdot \mathbf{B}\end{aligned} \hspace{\stretch{1}}(4.63)
This first invariant serves as the action density for the Maxwell field equations.
There’s some useful properties of these invariants. One is that if the fields are perpendicular in one frame, then will be in any other.
From the first invariant, note that if ${\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert}$ in one frame, the invariant is positive, and must be positive in all frames. So if ${\left\lvert{\mathbf{E}}\right\rvert} > {\left\lvert{\mathbf{B}}\right\rvert}$ in one frame, we can transform to a frame with only an $\mathbf{E}'$ component, solve that, and then transform back. Similarly, if ${\left\lvert{\mathbf{E}}\right\rvert} < {\left\lvert{\mathbf{B}}\right\rvert}$ in one frame, we can transform to a frame with only a $\mathbf{B}'$ component, solve that, and then transform back.
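These proportionalities are easy to spot-check numerically. A sketch (sample values, assuming $\epsilon^{0123} = +1$); with the conventions used here the constants work out to $F_{i j}F^{i j} = -2(\mathbf{E}^2 - \mathbf{B}^2)$ and $\epsilon^{i j k l} F_{i j}F_{k l} = -8 \, \mathbf{E} \cdot \mathbf{B}$:

```python
import numpy as np
from itertools import permutations

E = np.array([1.0, 2.0, 3.0]); B = np.array([0.5, -1.5, 2.5])
F_lo = np.array([[0.0,   E[0],  E[1],  E[2]],
                 [-E[0], 0.0,  -B[2],  B[1]],
                 [-E[1], B[2],  0.0,  -B[0]],
                 [-E[2], -B[1], B[0],  0.0]])
G = np.diag([1.0, -1.0, -1.0, -1.0])
F_up = G @ F_lo @ G

inv1 = np.sum(F_lo * F_up)                 # F_{ij} F^{ij}
print(inv1, 2 * (B @ B - E @ E))           # equal: factor -2 on E^2 - B^2

def parity(p):
    """Sign of a permutation of (0, 1, 2, 3)."""
    p, sign = list(p), 1
    for i in range(4):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# eps^{ijkl} vanishes unless all indexes differ, so a sum over permutations suffices
inv2 = sum(parity(idx) * F_lo[idx[0], idx[1]] * F_lo[idx[2], idx[3]]
           for idx in permutations(range(4)))
print(inv2, -8 * (E @ B))                  # equal: factor -8 on E . B
```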
# The first half of Maxwell’s equations.
\paragraph{Claim: } The source free portions of Maxwell’s equations are a consequence of the definition of the field tensor alone.
Given
\begin{aligned}F_{i j} = \partial_i A_j - \partial_j A_i,\end{aligned} \hspace{\stretch{1}}(5.64)
where
\begin{aligned}\partial_i = \frac{\partial {}}{\partial {x^i}}\end{aligned} \hspace{\stretch{1}}(5.65)
This alone implies half of Maxwell’s equations. To show this we consider
\begin{aligned}e^{m k i j} \partial_k F_{i j} = 0.\end{aligned} \hspace{\stretch{1}}(5.66)
This is the Bianchi identity. To demonstrate this identity, we’ll have to swap indexes, employ derivative commutation, and then swap indexes once more
\begin{aligned}e^{m k i j} \partial_k F_{i j} &= e^{m k i j} \partial_k (\partial_i A_j - \partial_j A_i) \\ &= 2 e^{m k i j} \partial_k \partial_i A_j \\ &= 2 e^{m k i j} \frac{1}{{2}} \left( \partial_k \partial_i A_j + \partial_i \partial_k A_j \right) \\ &= e^{m k i j} \partial_k \partial_i A_j + e^{m i k j} \partial_k \partial_i A_j \\ &= (e^{m k i j} - e^{m k i j}) \partial_k \partial_i A_j \\ &= 0 \qquad \square\end{aligned}

In the second-to-last step the dummy indexes $k$ and $i$ were relabeled in the second term, and in the last step the antisymmetry $e^{m i k j} = -e^{m k i j}$ was used.
This is the 4D analogue of
\begin{aligned}\boldsymbol{\nabla} \times (\boldsymbol{\nabla} f) = 0\end{aligned} \hspace{\stretch{1}}(5.67)
i.e.
\begin{aligned}e^{\alpha\beta\gamma} \partial_\beta \partial_\gamma f = 0\end{aligned} \hspace{\stretch{1}}(5.68)
Let’s do this explicitly, starting with
\begin{aligned}{\left\lVert{ F_{i j} }\right\rVert} = \begin{bmatrix}0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0.\end{bmatrix}\end{aligned} \hspace{\stretch{1}}(5.69)
For the $m= 0$ case we have
\begin{aligned}\epsilon^{0 k i j} \partial_k F_{i j}&=\epsilon^{\alpha \beta \gamma} \partial_\alpha F_{\beta \gamma} \\ &= \epsilon^{\alpha \beta \gamma} \partial_\alpha (-\epsilon_{\beta \gamma \delta} B_\delta) \\ &= -\epsilon^{\alpha \beta \gamma} \epsilon_{\delta \beta \gamma }\partial_\alpha B_\delta \\ &= - 2 {\delta^\alpha}_\delta \partial_\alpha B_\delta \\ &= - 2 \partial_\alpha B_\alpha \end{aligned}
We must then have
\begin{aligned}\partial_\alpha B_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.70)
This is just Gauss’s law for magnetism
\begin{aligned}\boldsymbol{\nabla} \cdot \mathbf{B} = 0.\end{aligned} \hspace{\stretch{1}}(5.71)
Let’s do the spatial portion, for which we have three equations, one for each $\alpha$ of
\begin{aligned}e^{\alpha j k l} \partial_j F_{k l}&=e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha 0 \gamma \beta} \partial_0 F_{\gamma \beta}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \beta \gamma 0} \partial_\beta F_{\gamma 0}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}+e^{\alpha \gamma \beta 0} \partial_\gamma F_{\beta 0} \\ &=2 \left( e^{\alpha 0 \beta \gamma} \partial_0 F_{\beta \gamma}+e^{\alpha \beta 0 \gamma} \partial_\beta F_{0 \gamma}+e^{\alpha \gamma 0 \beta} \partial_\gamma F_{0 \beta}\right) \\ &=2 e^{0 \alpha \beta \gamma} \left(-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\right)\end{aligned}
This implies
\begin{aligned}0 =-\partial_0 F_{\beta \gamma}+\partial_\beta F_{0 \gamma}- \partial_\gamma F_{0 \beta}\end{aligned} \hspace{\stretch{1}}(5.72)
Referring back to the previous expansions of 2.6 and 2.17, we have
\begin{aligned}0 =\partial_0 \epsilon_{\beta\gamma\mu} B_\mu+\partial_\beta E_\gamma- \partial_\gamma E_{\beta},\end{aligned} \hspace{\stretch{1}}(5.73)
or
\begin{aligned}\frac{1}{{c}} \frac{\partial {B_\alpha}}{\partial {t}} + (\boldsymbol{\nabla} \times \mathbf{E})_\alpha = 0.\end{aligned} \hspace{\stretch{1}}(5.74)
These are just the components of the Maxwell-Faraday equation
\begin{aligned}0 = \frac{1}{{c}} \frac{\partial {\mathbf{B}}}{\partial {t}} + \boldsymbol{\nabla} \times \mathbf{E}.\end{aligned} \hspace{\stretch{1}}(5.75)
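As a final check, the same two source-free equations can be verified symbolically from the potential representation alone. A small sympy sketch (my own, using the Gaussian-style units of these notes, $\mathbf{E} = -\boldsymbol{\nabla} \phi - (1/c) \partial_t \mathbf{A}$ and $\mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A}$):

```python
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c')
phi = sp.Function('phi')(t, x, y, z)
A = sp.Matrix([sp.Function(n)(t, x, y, z) for n in ('A1', 'A2', 'A3')])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
curl = lambda V: sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                            sp.diff(V[0], z) - sp.diff(V[2], x),
                            sp.diff(V[1], x) - sp.diff(V[0], y)])

E = -grad(phi) - sp.diff(A, t) / c     # E = -grad(phi) - (1/c) dA/dt
B = curl(A)                            # B = curl(A)

div_B = sum(sp.diff(B[i], v) for i, v in enumerate((x, y, z)))
faraday = sp.diff(B, t) / c + curl(E)  # (1/c) dB/dt + curl E

print(sp.simplify(div_B))              # 0, i.e. (5.71)
print(sp.simplify(faraday))            # zero vector, i.e. (5.75)
```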
# Appendix. Some additional index gymnastics.
## Transposition of mixed index tensor.
Is the transpose of a mixed index object just a substitution of the free indexes? It wasn't obvious to me that this would be the case, especially since I'd made an error in some index gymnastics that had me temporarily convinced otherwise. However, working some examples clears the fog. For example, let's take the transpose of 3.37.
\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} &= {\left\lVert{ O^{a i} O_{a j} }\right\rVert}^\text{T} \\ &= \left( {\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert} \right)^\text{T} \\ &={\left\lVert{ O_{i j} }\right\rVert}^\text{T}{\left\lVert{ O^{j i} }\right\rVert}^\text{T} \\ &={\left\lVert{ O_{j i} }\right\rVert}{\left\lVert{ O^{i j} }\right\rVert} \\ &={\left\lVert{ O_{a i} O^{a j} }\right\rVert} \\ \end{aligned}
If the transpose of a mixed index tensor just swapped the indexes we would have
\begin{aligned}{\left\lVert{ {\delta^i}_j }\right\rVert}^\text{T} = {\left\lVert{ O_{a i} O^{a j} }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.76)
From this it does appear that all we have to do is switch the indexes, so we write
\begin{aligned}{\delta^j}_i = O_{a i} O^{a j} \end{aligned} \hspace{\stretch{1}}(6.77)
We can consider a more general operation
\begin{aligned}{\left\lVert{{A^i}_j}\right\rVert}^\text{T}&={\left\lVert{ A^{i m} g_{m j} }\right\rVert}^\text{T} \\ &={\left\lVert{ g_{i j} }\right\rVert}^\text{T}{\left\lVert{ A^{i j} }\right\rVert}^\text{T} \\ &={\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ A^{j i} }\right\rVert} \\ &={\left\lVert{ g_{i m} A^{j m} }\right\rVert} \\ &={\left\lVert{ {A^{j}}_i }\right\rVert}\end{aligned}
So we see that we do just have to swap indexes.
## Transposition of lower index tensor.
We’ve saw above that we had
\begin{aligned}{\left\lVert{ {A^{i}}_j }\right\rVert}^\text{T} &= {\left\lVert{ {A_{j}}^i }\right\rVert} \\ {\left\lVert{ {A_{i}}^j }\right\rVert}^\text{T} &= {\left\lVert{ {A^{j}}_i }\right\rVert} \end{aligned} \hspace{\stretch{1}}(6.78)
which followed from careful treatment of the transposition in terms of $A^{i j}$, for which we had defined a transpose operation. We assumed as well that
\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T} = {\left\lVert{ A_{j i} }\right\rVert}.\end{aligned} \hspace{\stretch{1}}(6.80)
However, this does not have to be assumed, provided that $g^{i j} = g_{i j}$, and $(AB)^\text{T} = B^\text{T} A^\text{T}$. We see this by expanding this transposition in products of $A^{i j}$ and $\hat{G}$
\begin{aligned}{\left\lVert{ A_{i j} }\right\rVert}^\text{T}&= \left( {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \right)^\text{T} \\ &= \left( {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{i j} }\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \right)^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert}^\text{T} {\left\lVert{ A^{i j}}\right\rVert}^\text{T} {\left\lVert{g^{i j}}\right\rVert}^\text{T} \\ &= {\left\lVert{g^{i j}}\right\rVert} {\left\lVert{ A^{j i}}\right\rVert} {\left\lVert{g^{i j}}\right\rVert} \\ &= {\left\lVert{g_{i j}}\right\rVert} {\left\lVert{ A^{i j}}\right\rVert} {\left\lVert{g_{i j}}\right\rVert} \\ &= {\left\lVert{ A_{j i}}\right\rVert} \end{aligned}
It would be worthwhile to go through all of this index manipulation stuff and lay it out in a structured axiomatic form. What is the minimal set of assumptions, and how does all of this generalize to non-diagonal metric tensors (even in Euclidean spaces)?
## Translating the index expression of identity from Lorentz products to matrix form
A verification that the matrix expression 3.38 matches the index expression 3.37, as claimed, is worthwhile. It would be easy to guess that something similar, like $\hat{O}^\text{T} \hat{G} \hat{O} \hat{G}$, is instead the matrix representation. That was in fact my first erroneous attempt to form the matrix equivalent, but it is the transpose of 3.38. Either way you get an identity, but the indexes didn't match.
Since we have $g^{i j} = g_{i j}$, which do we pick for this verification? The choice appears to be dictated by the requirement to match lower and upper indexes on the summed-over index. This is probably clearest by example, so let's expand the products on the LHS explicitly
\begin{aligned}{\left\lVert{ g^{i j} }\right\rVert} {\left\lVert{ {O^{i}}_j }\right\rVert} ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert} &=\left( {\left\lVert{ {O^{i}}_j }\right\rVert} {\left\lVert{ g^{i j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i j} }\right\rVert}{\left\lVert{ {O^{i}}_j }\right\rVert} \\ &=\left( {\left\lVert{ {O^{i}}_k g^{k j} }\right\rVert} \right) ^\text{T}{\left\lVert{ g_{i m} {O^{m}}_j }\right\rVert} \\ &={\left\lVert{ O^{i j} }\right\rVert} ^\text{T}{\left\lVert{ O_{i j} }\right\rVert} \\ &={\left\lVert{ O^{j i} }\right\rVert} {\left\lVert{ O_{i j} }\right\rVert} \\ &={\left\lVert{ O^{k i} O_{k j} }\right\rVert} \\ \end{aligned}
This matches the ${\left\lVert{{\delta^i}_j}\right\rVert}$ that we have on the RHS, and all is well.
https://indico.cern.ch/event/336103/contributions/786817/
# 28th Texas Symposium on Relativistic Astrophysics
13-18 December 2015
International Conference Centre Geneva
Europe/Zurich timezone
## First-order Fermi acceleration at pulsar wind termination shock.
14 Dec 2015, 17:15
20m
Level 0, Room 23 (International Conference Centre Geneva)
### Speaker
Mr Simone Giacche (Max-Planck Institute for Nuclear Physics)
### Description
The pulsar wind nebula (PWN) of PSR B1259-63 has been observed to emit periodic GeV flares, whose power can be comparable to the total pulsar spin-down luminosity. Because of the short timescale involved, these photons are likely to be produced via inverse Compton scattering of stellar photons or synchrotron radiation by a population of very energetic electrons (from GeV to TeV energies) in the proximity of the wind termination shock (TS). This perpendicular shock is created by the interaction between the magnetised, relativistic, electron-positron wind launched by the pulsar and the companion star's outflow. When the rotational frequency of the pulsar is greater than the local plasma frequency in the wind, a shock precursor forms ahead of the TS, where the Poynting flux is dissipated. This condition is satisfied at the TS in a gamma-ray binary when the system is far from periastron, but not necessarily when the stars are in close proximity to each other (Mochol & Kirk 2013). It is still unclear whether and how this structure can accelerate electrons to high energies. We investigate this in a two-step procedure. Firstly, a 1-dimensional, relativistic, 2-fluid code is used to reproduce the turbulent fields in the equatorial plane at the location of the TS. We numerically integrate test particle trajectories in the background fields of a steady configuration of the precursor realised for an upstream Lorentz factor $\Gamma=40$ and a magnetisation parameter $\sigma=10$. We follow each particle until it either escapes downstream after transmission or upstream after reflection. We find that $\sim 50\%$ of the incoming particles are reflected upstream by the turbulent fields for these parameters. Secondly, we simulate Fermi-like acceleration by supplementing magnetic fluctuations with prescribed statistical properties both in the pulsar wind upstream of the shock, and in the nebula downstream of the shock, where the field is assumed to have been dissipated. The resulting stochastic trajectories are numerically integrated (Achterberg & Kruells 1992). We compare the power-law index and the angular distribution of accelerated particles with the same quantities obtained with a numerical simulation where the average magnetic field is null on both sides of the shock and the only source of deflection for energetic particles is the scattering off magnetic irregularities (Achterberg et al. 2001). We argue that the proposed scenario is relevant for PWNe in $\gamma$-ray binaries such as PSR B1259-63.
### Primary author
Mr Simone Giacche (Max-Planck Institute for Nuclear Physics)
### Co-authors
Prof. John Kirk (Max-Planck Institute for Nuclear Physics) Dr Takanobu Amano (Department of Earth and Planetary Science, University of Tokyo)
https://stats.stackexchange.com/questions/223644/decrease-in-filter-size-as-cnn-progresses
# Decrease in filter size as CNN progresses?
I have noticed, as a trend, people seem to "taper" the size of their filters as a convolutional network progresses. By this I mean they begin convolving the image/patch with a larger filter, and slowly decrease the size each layer until the output.
I have also employed this approach and had very good results; however, I am not sure why.
Is there a reason this should be done, or is it just "black magic"?
Thanks!
http://www.swmath.org/?term=worst-case%20complexity
• # na24
• Referenced in 12 articles [sw11485]
• Envelope (PE) algorithms are performed. Worst-case time complexity, convergence results, and examples are included ... reduce the brute force quadratic worst-case time complexity to linear time by using either...
• # BEDFix
• Referenced in 8 articles [sw04469]
• method averaged 31 percent of this worst-case bound. BEDFix works for nonsmooth continuous functions ... Lipschitz constants equal to 1, whereas the complexity of simple iteration approaches infinity ... compute absolute criterion solutions, the worst-case complexity depends on the logarithm of the reciprocal...
• # TRecS
• Referenced in 2 articles [sw14145]
• Luke Ong in 2006, but its worst-case complexity is k-EXPTIME complete for order ... many typical inputs, despite the huge worst-case complexity. Since the development of TRecS...
• # Algorithm 848
• Referenced in 2 articles [sw04408]
• defined on all rectangular domains, the worst-case complexity of PFix has order equal ... order of the worst-case bound for the case of the unit hypercube. PFix ... found in the authors’ paper [J. Complexity...
• # TiML
• Referenced in 1 article [sw27564]
• lower annotation burden, and, furthermore, big-O complexity can be inferred from recurrences generated during ... usability by implementing a broad suite of case-study modules, demonstrating that TiML, though lacking ... versatile enough to verify worst-case and/or amortized complexities for algorithms and data structures like...
• outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm ... with respect to the $\Theta(|E|)$ worst-case performance. We experimentally show that this ... technique achieves similar speedups on real-world complex networks, as well. The second contribution...
https://www.homebuiltairplanes.com/forums/threads/the-big-engine-small-plane-problem.14091/page-2
# The big engine small plane problem
#### akwrencher
##### Well-Known Member
HBA Supporter
Obviously I'm pretty far out of my league here as far as experience goes, so I'll just whisper some thoughts.....
The W-10 Tailwind wing area is 92 sq ft; it can keep up with older Lancairs, and can be built very nice and still cheap. See Jan 2011 Kitplanes for an example of a very nice one.
Not advocating this design in particular, but for cruise performance per build dollar it's hard to beat. Depends on how fast you need to go though....
Someone said earlier that Cert engines don't have modern FI with alt comp etc. Not true. On a homebuilt you can put whatever you want on that "cert" engine, and give it modern FI tech.
#### SVSUSteve
##### Well-Known Member
Someone said earlier that Cert engines don't have modern FI with alt comp etc. Not true. On a homebuilt you can put whatever you want on that "cert" engine, and give it modern FI tech.
I have a feeling that comment was said with a healthy dose of sarcasm.
#### Toobuilder
##### Well-Known Member
HBA Supporter
Log Member
Keeping an engine alive by stirring all the handles seems like a holdover from a bygone era to me...
While this is true compared with modern cars, we should also recognize that the typical "transport category" mission profile as discussed here is pretty easy to manage if you have the correct instrumentation. For example, a typical flight in the RV has me advancing the throttle to full on takeoff (and not touched again until near the destination), maintaining 100 ROP EGT (1250, in this case) until level off, and managing RPM with the blue knob to meet my climb requirement (generally 2450ish). At level off, blue knob screws out until 2350 RPM, and I use the lean find function on the EMS to set the mixture to 50 LOP. This whole process from brake release to cruising LOP is 10-15 minutes tops. From then on I have 5 hours available to just watch the world slide by at 165 KTAS and 8.0 GPH. My only "management" function is switching tanks once per hour.
#### Jay Kempf
##### Curmudgeon in Training (CIT)
While this is true compared with modern cars, we should also recognize that the typical "transport category" mission profile as discussed here is pretty easy to manage if you have the correct instrumentation. For example, a typical flight in the RV has me advancing the throttle to full on takeoff (and not touched again until near the destination), maintaining 100 ROP EGT (1250, in this case) until level off, and managing RPM with the blue knob to meet my climb requirement (generally 2450ish). At level off, blue knob screws out until 2350 RPM, and I use the lean find function on the EMS to set the mixture to 50 LOP. This whole process from brake release to cruising LOP is 10-15 minutes tops. From then on I have 5 hours available to just watch the world slide by at 165 KTAS and 8.0 GPH. My only "management" function is switching tanks once per hour.
Switching tanks once an hour seems sorta old school too. Two pumps and one Arduino controller. But that's just me. Kidding.
#### Toobuilder
##### Well-Known Member
HBA Supporter
Log Member
I'm with you my friend... I'd like nothing more than an "ON/OFF" control!
#### SVSUSteve
##### Well-Known Member
I'm with you my friend... I'd like nothing more than an "ON/OFF" control!
There's a reason why I'm designing an non-gravity fuel system with a "BOTH" option on the fuel selector. There are several reasons I'm not going with a gravity-feed system but that's a major one.
#### akwrencher
##### Well-Known Member
HBA Supporter
Not at all. Just seemed like there was some debate between old airplane engines and modern car engines. The beauty of Experimental is that you can mix it up. Get the best of both technologies. That is, if your checkbook can handle the heat.
I have a feeling that comment was said with a healthy dose of sarcasm.
#### bmcj
##### Well-Known Member
HBA Supporter
Aren't there any mechanically adjustable (in flight) "big" props? Such a two-speed prop is far simpler than a CS prop and works just as well.
Are you familiar with the Aeromatic? Ground adjustable semi-constant speed prop.
#### Autodidact
##### Well-Known Member
Re: The big engine small plane problem
I'll take it over the big plane small engine problem!
##### Well-Known Member
Are you familiar with the Aeromatic? Ground adjustable semi-constant speed prop.
Familiar, no. Never seen one up-close. What I've read and heard is rather contradictory: one side sees them as expensive and unreliable, while most owners seem perfectly happy with them.
Unfortunately, last time I checked they were MORE expensive than an electric prop from MT. (Just under 10K US$...)

I'll take it over the big plane small engine problem!

I wouldn't. Flying an 1800 lbs sailplane with 30 HP is a lot of fun

#### akwrencher
##### Well-Known Member
HBA Supporter

#### Toobuilder
##### Well-Known Member
HBA Supporter
Log Member
I know there are fans of the Aeromatic, but I have a buddy who put one on his 125 HP Swift without success. Despite spending several days with the company owner fiddling with it, the prop never performed all that well. Maybe others have had better luck, but the Aeromatic is off my candidate list. BTW, if anyone wants the Swift or prop, it's available for the taking at the bottom of Chesapeake Bay.

#### SVSUSteve
##### Well-Known Member
I wouldn't. Flying an 1800 lbs sailplane with 30 HP is a lot of fun
Yeah, it's probably a bit fun but not terribly practical for anyone who isn't a glider junkie.
BTW, if anyone wants the Swift or prop, it's available for the taking at the bottom of Chesapeake Bay.
As in a Globe Swift?

#### Toobuilder
##### Well-Known Member
HBA Supporter
Log Member
Yes. He had unexplained engine stoppage and ditched about a year and a half ago. He and his mother survived the actual ditching, but she succumbed during the failed rescue attempt.

#### SVSUSteve
##### Well-Known Member
He and his mother survived the actual ditching, but she succumbed during the failed rescue attempt.
Yeah, I think I heard about that.
He had unexplained engine stoppage and ditched about a year and a half ago
The NTSB put it down to a mishandled fuel selector valve and fuel starvation. They said they recovered it from the bay. Either way, condolences to your friend and his family.

#### Dan Thomas
##### Well-Known Member
Someone said earlier that Cert engines don't have modern FI with alt comp etc. Not true. On a homebuilt you can put whatever you want on that "cert" engine, and give it modern FI tech.
It's also not true in the real certified world. See http://www.lycoming.com/news-and-events/pdfs/iE2_Engine.pdf
This engine has been available for some time now, but the very slow GA production market, the declining pilot population, the recession, and the cost of airplanes (mostly due to the stupidest litigation laws in the world and the greed of people) all conspire to make it unlikely that any of us will fly one anytime soon. When I started flying in 1973, I flew an "old" 172: it was a 1966 model. Seven years old. When I left the flight school a year ago, we were using airplanes as old as 1973: 39 years old, and that is typical of a lot of places now. (We did have a 2006 172, but those cost $300K now.) With declining interest in aviation and a smaller pilot population, there's no market for new engines or airplanes. If that Lycoming IE2 had come out in 1976, the peak of GA production, they'd be all over the place.
But they're not. The fact that they're only now showing up is due entirely to the cost of certification; again, due to the propensity of people to sue the pants off anyone having absolutely anything to do with an airplane in which they get hurt (and they usually get hurt because of their own mistakes, not any manufacturer or mechanic), and the FAA succumbs to public demands that airplanes be 100% safe. Achieving even 99% is really expensive and time-consuming.
So let's not blame the engine makers for being decades behind in fuelling and ignition technology or anything else. The fact is that most of us would never have learned to fly at all if it wasn't for their continued production of "antiquated" airplanes.
Dan
#### Detego
##### Well-Known Member
The fact is that most of us would never have learned to fly at all if it wasn't for their continued production of "antiquated" airplanes.
Dan
I see the cost and performance of these "antiquated airplanes" as the reason why more people don't fly; this led me to investigate homebuilt designs like the KR2, Jeffair Barracuda and BD-4, using auto engine conversions in the mid 1970's.
Fact is, most people's itch for flight can be served nicely with a powered paraglider.
#### SVSUSteve
##### Well-Known Member
Fact is, most people's itch for flight can be served nicely with a powered paraglider
And you're basing your opinion on what precisely?
#### Toobuilder
##### Well-Known Member
HBA Supporter
Log Member
Back to the topic at hand: I'm looking at the typical overpowered two place design for my own needs. I have been going along the big inch NA powerplant (540 Lyc or V8) route so far. I wonder, though, if you took something like a 300 HP Rocket and replaced the engine with a normalized 180, how would it perform?
As I sit in my fuselage mockup and ponder the V8 integration, I can't help but consider the 180 Lyc sitting in the corner. If I add a turbo, would I come out ahead...
#### akwrencher
##### Well-Known Member
HBA Supporter
Interesting. If you only needed the hp at altitude, and climb, etc. was still acceptable, that might be a good option. Just out of curiosity, what are you building?
Back to the topic at hand- I'm looking at the typical overpowered two place design for my own needs. I have been going along the big inch NA powerplant (540 Lyc or V8) route so far. I wonder though, if you took something like a 300 HP Rocket and replaced the engine with a normalized 180, how it would perform?
As I sit in my fuselage mockup and ponder the V8 integration, I can't help but consider the 180 Lyc sitting in the corner. If I add a turbo, would I come out ahead...
http://learningwithdata.com/category/ml.html
## Bayes Primer
Posted on Sat 17 October 2015 in ml • Tagged with tutorial, bayesian
# What is Bayes Theorem?
Bayes theorem is what allows us to go from a sampling (or likelihood) distribution and a prior distribution to a posterior distribution.
## What is a Sampling Distribution?
A sampling distribution is the probability of seeing our data (X) given our parameters ($\theta$). This is written as $p(X|\theta)$.
For example, we might have data on 1,000 coin flips, where 1 indicates a head. This can be represented in Python as
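The snippet itself did not survive extraction; here is a plausible minimal reconstruction (the seed and the fair-coin bias are my assumptions, not necessarily the original's):

```python
import numpy as np

np.random.seed(42)                               # for reproducibility (assumed)
data = np.random.binomial(n=1, p=0.5, size=1000) # 1 = heads, 0 = tails
print(data[:10], data.sum())                     # first few flips, total heads
```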
## Logistic Regression and Optimization
Posted on Wed 29 April 2015 in ml • Tagged with tutorial, logistic-regression
# Logistic Regression and Gradient Descent
Logistic regression is an excellent tool to know for classification problems. Classification problems are problems where you are trying to classify observations into groups. To make our examples more concrete, we will consider the Iris dataset. The iris dataset contains 4 attributes for 3 types of iris plants. The purpose is to classify which plant you have just based on the attributes. To simplify things, we will only consider 2 attributes and 2 classes. Here are the data visually:
## Bayes With Continuous Prior
Posted on Fri 03 April 2015 in ml • Tagged with bayesian, tutorial
# Continuous Prior
In my introduction to Bayes post, I went over a simple application of Bayes theorem to Bernoulli distributed data. In this post, I want to extend our example to use a continuous prior.
In my last post, I ended with this code:
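That code was also lost in extraction. As a hedged sketch of the kind of update the earlier post describes (the grid size, head count, and log-space trick are my choices, not necessarily the original's):

```python
import numpy as np

# Discrete (grid) prior over the coin's bias theta, updated with Bayes theorem.
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta) / theta.size        # uniform prior over the grid

heads, n = 520, 1000                            # e.g. 520 heads in 1,000 flips

# Work in log space to avoid underflow for large n
log_like = heads * np.log(theta) + (n - heads) * np.log(1 - theta)
posterior = np.exp(log_like - log_like.max()) * prior
posterior /= posterior.sum()                    # normalize to a distribution

print(theta[np.argmax(posterior)])              # posterior mode, near 0.52
```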
http://math.tutorcircle.com/probability-and-statistics/how-to-solve-for-pie.html
# How to Solve for Pie?
π (pi) is a constant which is given as the ratio of the circumference to the diameter of a circle. The circumference is the length around the circle, and the diameter is twice the radius. The constant pi (π) is approximately equal to 3.14159 (roughly 22 / 7). π is never a root of any non-zero polynomial with rational coefficients; that is, pi is a transcendental number. The constant pi (π) is required to calculate various parameters of a circle, such as its area and circumference. The circumference can be determined by multiplying pi (π) by the diameter, and the area by multiplying pi (π) by the square of the radius. Let's discuss how to solve for pi.
We know that the volume of a sphere is given as:
V = (4 / 3) π r^3.
Here the constant pi (π) can be found by rearranging the given equation:
π r^3 = 3 V / 4,
Thus π = 3 V / (4 r^3).
Plugging measured values of the parameters into the above formula gives the value of pi (π), which will be approximately equal to 3.14.
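A tiny numeric illustration of the rearrangement (the measured values are made up for the example):

```python
# Recover pi from a sphere's measured volume V and radius r: pi = 3 V / (4 r^3)
r = 2.0
V = 33.510            # e.g. a measured volume for a sphere of radius 2
pi_estimate = 3 * V / (4 * r**3)
print(pi_estimate)    # about 3.1416
```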
The constant π also corresponds to 180º in trigonometry (π radians = 180º). It is widely used in defining various trigonometric functions.
If we write sin (3π / 2), it represents sin (3 × 180º / 2) = sin (270º).
https://imparo.wordpress.com/category/science/
# Ancora Imparo
## 14 November 2015
### Motion Along a Curve, Part 3
Filed under: mathematics,physics — Darmok @ 15:58 UTC
See Part 1 and Part 2, where I discuss trying to find equations of motion for a ball rolling along a track defined by an arbitrary function y(x). So far, I’ve worked out my general approach, and tested it on the very simple case of an inclined line. I do want to tackle some complex functions, but first I want to summarize the method, and to incorporate some changes I learned as I tried my first solution.
I had initially stated I wanted to start with x(0) = 0, to keep it simple. But I didn’t end up needing this restriction. My equation for velocity used the initial yi, which we got from plugging in the initial value of x. Also, we used the x(0) = 0 to find the constant after integrating. But these happened later in the process.
I also had started with an initial velocity of zero. That did make a difference. But looking back on part 1, I don’t think it would complicate the equation too much, and if it’s zero it will be an extra term to just drop out. Let’s go back to the conservation of energy equation, and keep vi in this time.
$\displaystyle U_i + K_i = U + K$
$\displaystyle mgy_i + \frac{1}{2}m{v_i}^2 = mgy + \frac{1}{2}mv^2$
$\displaystyle gy_i - gy = \frac{1}{2}(v^2 - {v_i}^2)$
$\displaystyle 2g(y_i - y) = v^2 - {v_i}^2$
$\displaystyle 2g(y_i - y) + {v_i}^2 = v^2$
$\displaystyle v = \pm \sqrt{2g(y_i-y)+{v_i}^2} \blacktriangleleft$
I’m leaving the ± in this time. Strictly speaking, I’m not treating this as the magnitude of the vector, since magnitudes must be positive. Rather, I want to consider a velocity vector that can point ether forwards or backwards along the direction of the curve. I’m going to allow motion in both directions, not just forward.
Recall the graph showing components of the velocity vector:
Graph generated in Python/Matplotlib.
Now I want to find the x-component. As I discovered last time, I don’t need to bother with the y-component — once I find an equation for x(t), I can use that directly to obtain y(t). The x-component will be
$\displaystyle v_x = v \frac{1}{\sqrt{(y')^2+1}}$
$\displaystyle v_x = \pm \sqrt{\frac{2g(y_i - y) + {v_i}^2}{(y')^2+1}}$
$\displaystyle \frac{dx}{dt} = \pm \sqrt{\frac{2g(y_i - y) + {v_i}^2}{(y')^2+1}} \blacktriangleleft$
Since I will have y and y′ in terms of x, I would need to rearrange to solve the differential equation. Let’s see how far I can take the general case:
$\displaystyle \pm \sqrt{\frac{(y')^2+1}{2g(y_i-y)+{v_i}^2}} \, dx = dt$
$\displaystyle \pm \int \sqrt{\frac{(y')^2+1}{2g(y_i-y)+{v_i}^2}} \, dx = \int dt$
$\displaystyle \pm \int \sqrt{\frac{(y')^2+1}{2g(y_i-y)+{v_i}^2}} \, dx = t \blacktriangleleft$
where I did not include a constant of integration on the right side, since it can be absorbed into the constant that the left integral will produce.
So, the general approach should be as follows: Given our equation y(x), find y′(x). Plug in those expressions, plug in the initial velocity, and plug in the initial height y[x(0)]. Integrate, and solve for x in terms of t to get x(t), then plug that into y(x) to get y(t). I’ll test if this approach can actually work in the next post.
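As a preview of that test, here is a small numeric sketch (mine, not from the original posts) that applies the recipe to the incline from Part 1; a tiny nonzero initial velocity is used because exact rest is a degenerate fixed point of this ODE:

```python
import numpy as np

g = 9.81
y  = lambda x: -x / np.sqrt(3) + 1      # the 30-degree incline from Part 1
yp = lambda x: -1 / np.sqrt(3)          # y'(x)

x0 = 0.0
v0 = 1e-6        # tiny nudge: v = 0 exactly never leaves the starting point
yi = y(x0)

def dxdt(x):
    # forward branch of dx/dt = sqrt((2 g (y_i - y) + v_i^2) / ((y')^2 + 1))
    return np.sqrt(max(2 * g * (yi - y(x)) + v0**2, 0.0) / (yp(x)**2 + 1))

x, t, dt = x0, 0.0, 1e-4
while y(x) > 0:                          # march until the bottom of the incline
    k1 = dxdt(x)
    k2 = dxdt(x + dt * k1 / 2)
    k3 = dxdt(x + dt * k2 / 2)
    k4 = dxdt(x + dt * k3)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

print(t, np.sqrt(8 / g))                 # both about 0.90 s, matching Part 1
```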
## 31 October 2015
### Motion Along a Curve, Part 1
Filed under: mathematics,physics — Darmok @ 02:24 UTC
Earlier this week, I saw an intriguing problem suggested, one involving motion along a curve. The idea is this: in elementary mechanics, we learn about how an object will move when placed on an inclined surface. If we restrict motion to two dimensions, this surface can be represented by a line. With gravity as the propelling force, can we find a general approach to deriving its motion along an arbitrary curve? In other words, given a function f(x), can we find functions describing the position (and therefore velocity and acceleration) over time?
Here are my initial thoughts. One, I am sure that this type of problem has been analyzed before. That doesn't matter; I am going to try figuring it out myself. Two, I think we can use conservation of energy to determine the object's velocity at any point. However, that will depend on knowing its height, and I don't know how this approach would handle the object flying up off a short hill, or dropping away from a cliff. So three, I am going to restrict the object's motion to the curve — consider it a track, rather than a 1D surface. That means that four, I can use the derivative to find the direction, but that means the object can only move forward. Five, I have to ignore friction. My conservation of energy approach will require that all energy be potential or kinetic; I won't know how to deal with losses to friction. Six, I am using constant downward gravitation. Seven, I am starting with initial position x=0 and initial velocity of 0. I don't believe this method requires it, but I am already concerned about the complexity of the math. And eight, I am planning to obtain a velocity equation and integrate to find position, but I don't know if the equation will have an analytic solution (or if I will know how to integrate it).
These are some significant limitations, of course. If this approach works, I suspect that it could be expanded to deal with several of these. For instance, we could keep in terms for initial position and velocity. Gravity or whichever force could be represented by a vector field rather than a constant force; this would make the potential energy term much more complex. I also wonder if I could incorporate friction by adding in an extra term, but I think this would turn the equation into a more complex differential equation.
Let’s start with a simple case that we can solve using conventional means, so we’ll have an answer to check later. I can also use this to check my conservation of energy approach. Let’s have the object rolling down an incline of 30° (π/6), starting at a height of 1. The ball will roll down to the right (I find it more intuitive to imagine a rolling ball rather than a frictionless object sliding, especially if the surface is curved). The equation for this surface will be
$\displaystyle y(x) = -\frac{x\sqrt{3}}{3}+1 \, \bullet$
where the slope is -tan(π/6).
You can see that it would form a right triangle with height 1, length √3, and hypotenuse 2. The slope is therefore 1/√3, or √3/3. Perhaps it would have been better to use an incline of 60° (π/3) so that the slope would be -√3, but I like this one better. Let’s try the conservation of energy approach to see what the velocity would be when the ball reaches the bottom (that is, y = 0). The total energy E is the sum of the potential energy U and the kinetic energy K. This should remain constant, so
$\displaystyle E_i = E_f$
$\displaystyle U_i + K_i = U_f + K_f$
We know that potential energy is given by
$\displaystyle U = mgh$
assuming constant gravitation g (the acceleration due to gravity), with mass m and height h; and kinetic energy is given by
$\displaystyle K = \frac{1}{2}mv^2$
where v is the magnitude of the velocity vector $\mathbf{v}$. I confess I am not very proficient with vectors, but clearly the ball is moving in two dimensions and we will need both components in the x and y directions. I am going to try not to be sloppy, but to think carefully about what we mean by $v$ or $\mathbf{v}$ every time I write it.
We assume that the initial velocity, and therefore kinetic energy, is zero.
$\displaystyle U_i + K_i = U_f + K_f$
$\displaystyle mgh_i + \frac{1}{2}m\cdot0^2 = mgh_f + \frac{1}{2}mv_f^2$
The height is y, and we can divide both sides by the nonzero mass m:
$\displaystyle gy_i = gy_f + \frac{1}{2}v_f^2$
Let’s solve for v:
$\displaystyle 2g(y_i-y_f)=v_f^2$
$\displaystyle v_f = \sqrt{2g(y_i-y_f)} \blacktriangleleft$
This should be the general case, so if this checks out, we can use this equation to develop the method. As a check, if the units of g are m/s² and y is in m, the units will be m/s, appropriate for velocity. Also note that this is simply the magnitude of the velocity vector (the speed); the actual vector could point in any direction if all we know is conservation of energy. That’s why I earlier constrained the ball to move along the curve — that will give us our direction. Plugging in our initial and final values of y, we get
$\displaystyle v_f = \sqrt{2g(1-0)} = \sqrt{2g}$
Next, to verify this solution, let’s solve using the traditional free-body approach. Gravity applies a force downwards of W=mg. This can be split into two components, one perpendicular to the surface mg cos(θ), and one along the surface mg sin(θ). The perpendicular force will be balanced out by the equal and opposite normal force, leaving a force of mg sin(θ). This is simple motion in one dimension, and let me introduce a variable s to represent its position along the surface (so I don’t generate confusion with our x’s and y’s). Think of it as placing a tape measure from the bottom of the incline up to the starting point. We know that for a given constant acceleration a,
$\displaystyle v = \int \! a \, dt = at + v_0$
$\displaystyle s = \int \! v \, dt = \int \! (at+v_0) \, dt = \frac{1}{2}at^2 + v_0 t + s_0$
By Newton’s second law, F = ma, so a = g sin(θ) and will be negative, since it’s pointing towards decreasing s (since this is motion in one dimension, we can use the sign to expression direction rather than need vectors). Initial velocity is zero, and initial position is 2 (the distance along the incline, if the bottom is at s=0).
$\displaystyle 0 = -\frac{1}{2}g \sin(\frac{\pi}{6}) \cdot t^2 + 0t + 2$
$\displaystyle \frac{1}{2} g \left( \frac{1}{2}\right)t^2 = 2$
$\displaystyle \frac{gt^2}{4} = 2$
$\displaystyle t^2 = \frac{8}{g}$
$\displaystyle t = \pm \sqrt{\frac{8}{g}}$
I can discard the negative solution, since I’m interested in the behavior starting from time = 0, not how it might have been launched before to come to a temporary stop at time = 0. Let’s plug this time to get our velocity from v = at (since the initial velocity is zero):
$\displaystyle v = at = -g\left(\frac{1}{2}\right)\sqrt{\frac{8}{g}} = - \sqrt{\frac{g^2}{4}}\sqrt{\frac{8}{g}} = -\sqrt{2g}$
This velocity is negative since it points down the incline, in the direction of decreasing s. Its magnitude, the absolute value √(2g), is what we obtained earlier using the conservation-of-energy method. Armed with this success, and with some test values, we’ll be ready to work on the actual problem in the next post.
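As a quick numerical cross-check (a minimal Python sketch of my own, assuming g = 9.81 m/s²; the derivation above keeps g symbolic), both methods give the same speed:

import math

g = 9.81                      # m/s^2, assumed value for standard gravity
y_i, y_f = 1.0, 0.0           # starting and ending heights

# Method 1: conservation of energy, v_f = sqrt(2 g (y_i - y_f))
v_energy = math.sqrt(2 * g * (y_i - y_f))

# Method 2: kinematics along the incline, starting at s_0 = 2
a = -g * math.sin(math.pi / 6)    # acceleration along s, pointing downhill
t = math.sqrt(8 / g)              # time to reach s = 0
v_kinematics = abs(a * t)

print(v_energy, v_kinematics)     # both ~4.43 m/s, i.e. sqrt(2g)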
## 8 January 2009
### Individual Action is Not Enough
Filed under: environment,Uncategorized,video — Darmok @ 06:25 UTC
Tags: , ,
The Canadian chapter of the World Wildlife Fund produced this cool video/commercial:
Most people try, at least to some degree, to take steps to help the environment. And these small changes, when summed across the whole population, are significant. But still, the collective action of individuals can only do so much — government and industry need to be on board, too. Unfortunately, in the United States, leadership from the federal government has been lacking (and at times it has actively impeded progress), so state and local governments and industry have had to take their own steps. There is some hope, though, that this will change when the Obama administration takes office (I hope to write more on this in a later post).
## 1 December 2008
### Questions on How to Live a Green Lifestyle
Filed under: environment — Darmok @ 05:25 UTC
Tags:
Most Americans are interested in living more environmentally friendly lifestyles, yet it’s not always easy to know which practices really are best. New Scientist takes a look at these “Dumb Eco-questions You Were Afraid to Ask”. Hope this helps clear up some doubts or misconceptions.
## 20 November 2008
### President-elect Obama Delivers Strong Statement on Climate Change
As we prepare to move past President Bush’s disastrous environmental policies, I’ve been interested to see what President-elect Obama plans to do for the environment. The economy has garnered the most attention, and in the short term, is more important. But continued neglect of the environment will, in the long-term, lead to crises both in the economy and in other sectors.
President-elect Obama addressed the attendees of the Governor’s Global Climate Summit in a four-minute video (high-resolution version is available at change.gov; full text of speech at the end of this post).
He thanked the governors for their work (Governor Schwarzenegger of California along with governors of other U. S. states are hosting the Governor’s Global Climate Summit; leaders of key nations around the world are attending) and also thanked businesses for their efforts, going on to remark “But too often, Washington has failed to show the same kind of leadership. That will change when I take office. My presidency will mark a new chapter in America’s leadership on climate change that will strengthen our security and create millions of new jobs in the process.”
President-elect Obama went on to deliver more specific goals: “That will start with a federal cap-and-trade system. We’ll establish strong annual targets that set us on a course to reduce emissions to their 1990 levels by 2020 and reduce them an additional 80 percent by 2050. Further, we’ll invest $15 billion each year to catalyze private sector efforts to build a clean energy future”, indicating plans to invest in renewable resources as well as nuclear power and clean coal technology. He intends for this to help the economy as well, creating jobs and helping industry.

Mr. Obama also indicated a change in the way the U. S. has participated on the international stage, stating that the U. S. would work with and depend on other nations: “And once I take office, you can be sure that the United States will once again engage vigorously in these negotiations, and help lead the world toward a new era of global cooperation on climate change.”

Perhaps the most significant statement is the strong importance Mr. Obama still places on environmental problems, despite the problems with the economy. As John Broder writes in the New York Times, “State officials and environmental advocates were cheered that Mr. Obama chose to address climate change as only the second major policy area [after the economy] he has discussed as president-elect.”

Reaction from environmental groups appears quite favorable. The CEO of the World Wildlife Fund (WWF) praised President-elect Obama’s remarks: “Today President-elect Obama gave us his first official statements on climate and without a doubt he nailed it. He sees clearly the huge risk that climate change poses to our economy and our future, and he understands that solving climate change is a foundation for a global economic recovery.” Writing in the Sierra Club blog, Heather Moyer called the speech “very enjoyable”. And Peter Miller, in the Natural Resources Defense Council blog, wrote “Looking very presidential, Obama enunciated an unambiguous commitment to enacting a federal cap and trade program with tight annual caps leading to an 80% reduction in emissions by 2050. The contrast with President Bush’s stance on climate change was abundantly evident to everyone. It was the first time I’ve ever seen a standing ovation for a video.”

I look forward to more. Below is a transcript of the speech, taken from Grist with slight editing.

Let me begin by thanking the bipartisan group of U.S. governors who convened this meeting. Few challenges facing America — and the world — are more urgent than combating climate change. The science is beyond dispute and the facts are clear. Sea levels are rising. Coastlines are shrinking. We’ve seen record drought, spreading famine, and storms that are growing stronger with each passing hurricane season. Climate change and our dependence on foreign oil, if left unaddressed, will continue to weaken our economy and threaten our national security. I know many of you are working to confront this challenge. In particular, I want to commend Governor Sebelius, Governor Doyle, Governor Crist, Governor Blagojevich and your host, Governor Schwarzenegger — all of you have shown true leadership in the fight to combat global warming. And we’ve also seen a number of businesses doing their part by investing in clean energy technologies. But too often, Washington has failed to show the same kind of leadership. That will change when I take office.
My presidency will mark a new chapter in America’s leadership on climate change that will strengthen our security and create millions of new jobs in the process. That will start with a federal cap-and-trade system. We’ll establish strong annual targets that set us on a course to reduce emissions to their 1990 levels by 2020 and reduce them an additional 80 percent by 2050. Further, we’ll invest $15 billion each year to catalyze private sector efforts to build a clean energy future. We’ll invest in solar power, wind power, and next generation biofuels. We’ll tap nuclear power, while making sure it’s safe. And we will develop clean coal technologies.
This investment will not only help us reduce our dependence on foreign oil, making the United States more secure. And it will not only help us bring about a clean energy future, saving the planet. It will also help us transform our industries and steer our country out of this economic crisis by generating five million new green jobs that pay well and can’t be outsourced.
But the truth is, the United States can’t meet this challenge alone. Solving this problem will require all of us working together. I understand that your meeting is being attended by government officials from over a dozen countries, including the U.K., Canada, Mexico, Brazil and Chile, Poland and Australia, India and Indonesia. And I look forward to working with all nations to meet this challenge in the coming years.
Let me also say a special word to the delegates from around the world who will gather at Poland next month: your work is vital to the planet. While I won’t be president at the time of your meeting and while the United States has only one president at a time, I’ve asked members of Congress who are attending the conference as observers to report back to me on what they learn there.
And once I take office, you can be sure that the United States will once again engage vigorously in these negotiations, and help lead the world toward a new era of global cooperation on climate change. Now is the time to confront this challenge once and for all. Delay is no longer an option. Denial is no longer an acceptable response. The stakes are too high. The consequences, too serious.
Stopping climate change won’t be easy. It won’t happen overnight. But I promise you this: When I am president, any governor who’s willing to promote clean energy will have a partner in the White House. Any company that’s willing to invest in clean energy will have an ally in Washington. And any nation that’s willing to join the cause of combating climate change will have an ally in the United States of America. Thank you.
## 2 September 2008
### Sarah Palin’s Anti-Science and Anti-Environment Policies Are Worrisome
Filed under: environment,global warming,politics,science — Darmok @ 06:37 UTC
Tags: ,
Senator John McCain, the presumptive Republican presidential nominee, just announced Alaska governor Sarah Palin as his running mate. She was a surprise pick and is relatively unknown, but what I’ve found so far is somewhat disturbing. While I haven’t made my final electoral decision, what I do know is that I don’t want another George W. Bush.
Wired Science, part of the Wired blog network, discusses her views on teaching creationism in public school science classes. (Merriam-Webster defines “creationism” as “a doctrine or theory holding that matter, the various forms of life, and the world were created by God out of nothing and usually in the way described in Genesis [the first book of the Judeo-Christian Bible].”) They refer to an article in the Anchorage Daily News covering a 2006 Alaska gubernatorial debate:
The volatile issue of teaching creation science in public schools popped up in the Alaska governor’s race this week when Republican Sarah Palin said she thinks creationism should be taught alongside evolution in the state’s public classrooms.
Palin was answering a question from the moderator near the conclusion of Wednesday night’s televised debate on KAKM Channel 7 when she said, “Teach both. You know, don’t be afraid of information. Healthy debate is so important, and it’s so valuable in our schools. I am a proponent of teaching both.”
The article goes on to point out:
The Republican Party of Alaska platform says, in its section on education: “We support giving Creation Science equal representation with other theories of the origin of life. If evolution is taught, it should be presented as only a theory.”
This stance alone is a significant strike against her. However, her anti-environment policies are also troubling. For instance, she told NewsMax, “I’m not one though who would attribute [global warming] to being man-made.” As I discussed in a previous post, all major scientific societies concur that humans are responsible for climate change. Senator McCain, as well as Democratic nominee Senator Barack Obama and his running mate Senator Joe Biden, all agree that climate change is a real threat and have proposed plans to combat it.
It’s not surprising, therefore, that her policies appear to show general disregard for the environment, especially with regards to her strong advocacy for oil drilling. For instance, she stated, “I beg to disagree with any candidate who would say we can’t drill our way out of our problem…”, as quoted in Investor’s Business Daily (IBD) and “When I look every day, the big oil company’s building is right out there next to me, and it’s quite a reminder that we should have mutually beneficial relationships with the oil industry” as quoted in Roll Call. She supports opening up the Arctic National Wildlife Refuge (ANWR, commonly pronounced “AN-war”) for drilling, a move generally opposed by environmentalists as well as Congress. Expressing her frustration, she stated to IBD, “But these lands [ANWR] are locked up by Congress, and we are not allowed to drill to the degree America needs the development…”; to Lawrence Kudlow on CNBC, “Very, very disappointed in Congress though [for not voting on drilling in ANWR]”; and so on. Both Senators Obama and McCain oppose drilling in ANWR, and she has attacked Senator McCain for this stance: “I have not talked him into ANWR yet…I think we need McCain in that White House despite, still, the close-mindedness on ANWR” (Lawrence Kudlow, CNBC).
Nor has Alaska, under Mrs. Palin’s governorship, promoted environmental issues. In Massachusetts v. Environmental Protection Agency, when twelve states as well as several cities and environmental organizations sued the EPA to regulate carbon dioxide and other greenhouse gases, Alaska argued against them. (In a split decision, the Supreme Court largely agreed with Massachusetts et al.; see my previous post.)
Earlier this year, the Interior Department listed the polar bear as a threatened species under the Endangered Species Act (ESA). Somewhat bizarrely, Governor Palin claims that polar bears are not threatened (“In fact, the number of polar bears has risen dramatically over the past 30 years,” she states). She opposed the ESA listing and Alaska now plans to sue the Interior Department. Similarly, Governor Palin is opposing plans to list beluga whales as endangered, as it could damage Alaska’s economy.
Eight years of disregard for science and for the environment is enough; I don’t think I want to see someone like this in high office, certainly not in a position where she could become president. If anyone has any examples of Governor Palin promoting science or the environment, please let me know.
## 29 May 2008
### Incredible Photograph of Phoenix landing on Mars
Filed under: astronomy,science,visualization — Darmok @ 05:55 UTC
Tags: , , ,
Phoenix, a NASA robotic probe, landed successfully on Mars on May 25. It landed in the north polar region of Mars, at around the equivalent latitude of northern Alaska, and it will study Mars’ soil to look for clues of past water patterns and whether the planet was ever hospitable to life.
Incredibly, as it parachuted down towards the surface, its picture was taken by a satellite orbiting Mars, the Mars Reconnaissance Orbiter (MRO)! From an amazing distance of 750 kilometers (470 miles), it snapped this photograph of Phoenix parachuting towards Mars. This is the first time one probe has photographed another landing on a planet.
See full-sized version. Credit: NASA/JPL-Caltech/University of Arizona
To see how this fits in to the landing, take a look at this cool animation of Phoenix landing, produced by MAAS Digital and NASA.
## 22 April 2008
### Happy Earth Day!
Filed under: environment — Darmok @ 06:30 UTC
Tags:
Commonly used Earth Day flag. Source: Wikipedia.
April 22 is celebrated as Earth Day in the United States. (In the rest of the world, the March equinox is chosen.) Please use this day to reflect on our planet, our relationship with it, and how our species can exist in harmony with other lifeforms. Help to ensure our children and their children will be able to enjoy our home.
Make every day Earth Day.
## 10 April 2008
### Seafood Watch Helps Consumers Choose Sustainable Seafood
Filed under: environment — Darmok @ 04:46 UTC
Tags: , ,
An article in last month’s Scientific American, “Fishing Blues”, highlights the problems that fishing poses for our marine life. As Earth’s population swells in both number and appetite, our fishing takes its toll through various harms, from overfishing to habitat destruction. And in addition to the inherent loss of biodiversity, this will have major impacts on humans, ranging from simple shortages to the far-reaching effects of damaged ecosystems.
Governmental regulations are important, but the most powerful force is that of the consumer. By choosing what to buy and what to avoid, consumers set the priorities for the industry. Clearly, eating sustainable vegetarian food in lieu of seafood or other animals is preferable, when possible. But for those times when one does wish to eat seafood, the Scientific American article points out a useful resource: Seafood Watch (www.seafoodwatch.org, Wikipedia), a program run by the Monterey Bay Aquarium.
You can browse through different seafood or search for the one you want. For different areas of the U.S., they have regional guides categorizing common seafood into best choices, ones to select with caution, and ones to avoid. PDF pocket guides are available as well. (You can also access a streamlined mobile version at http://mobile.seafoodwatch.org/.) The tricky aspect, though, is that the same fish can sometimes be a good or bad choice, depending on where or how it was caught. For instance, U.S. mahi mahi is a good choice, but not necessarily from elsewhere in the world (due to U.S. policies regulating its fishing). This means that you will have to look at labels at markets or ask your server at restaurants to determine if a certain menu item is a responsible choice or not. If your server doesn’t know, ask him or her to ask the chef, and if the origin still can’t be reliably determined, select something else. If people keep asking questions, perhaps next time they’ll make sure they know where their seafood comes from.
Seafood Watch also has a lot more information, including highlighting which fishing practices are harmful and why, and other actions you can take.
## 29 March 2008
### Earth Hour is Today! Google Gets Involved, Too
Filed under: environment — Darmok @ 18:37 UTC
Tags: ,
Google changes its background color to black in observance of Earth Hour 2008.
Today is Earth Hour! If that time hasn’t already passed for you, please remember to turn off your lights from 8–9 p.m. today. And even if it has passed, please remember that, ideally, this should be part of an overall energy-conserving lifestyle. Plan for regular periods of very low energy use, and learn how much you can still do with some things turned off.
And in case you missed it, Google has redesigned their home page for today, changing their color scheme to use a dark background. As far as I can remember, this is unprecedented: while I have seen them change their logo on numerous occasions, I do not recall them ever changing their whole color scheme like this. I am really impressed that they did this. They have the potential to reach so many people, and what a great way to call attention to Earth Hour. Of course, they have a long history of supporting environmental projects, and they include a prominent link explaining their support of Earth Hour so anyone can read about why they’re doing this.
|
2016-05-04 06:03:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 31, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36774006485939026, "perplexity": 1408.782512079322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122501.61/warc/CC-MAIN-20160428161522-00165-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://pydata-sphinx-theme.readthedocs.io/en/stable/demo/theme-elements.html
|
# Theme-specific elements#
There are a few elements that are unique or particularly important to this theme. This page is a reference for how these look.
Note
Here’s a note with:
• A nested list
• List item two
As well as:
Warning
A nested warning block to test nested admonitions.
## Version changes#
You can write in your documentation when something has been changed, added or deprecated from one version to another.
New in version 0.1.1: Something is new, use it from now.
Changed in version 0.1.1: Something is modified, check your version number.
Deprecated since version 0.1.1: Something is deprecated, use something else instead.
## HTML elements#
There are some libraries in the PyData ecosystem that use HTML and require their own styling. This section shows a few examples.
### Plotly#
The HTML below shouldn’t display, but it uses RequireJS to make sure that all works as expected. If the widgets don’t show up, RequireJS may be broken.
import plotly.io as pio
import plotly.express as px
import plotly.offline as py  # imported by the demo, but not used below

# Use the notebook renderer so the figure displays inline
pio.renderers.default = "notebook"

# Load the built-in iris dataset and draw an interactive scatter plot
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species", size="sepal_length")
fig
|
2022-07-03 06:18:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36307960748672485, "perplexity": 14007.894234248759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215790.65/warc/CC-MAIN-20220703043548-20220703073548-00145.warc.gz"}
|
https://www.physicsforums.com/threads/signal-strength-parameter-interpretation.871576/
|
# Signal strength parameter (interpretation)
• A
Gold Member
How can in general the signal strength parameter ##\mu## be interpreted?
I am talking for the parameter defined in Eq.1 here and plots like the Fig.1 here:
http://arxiv.org/abs/1507.04548
It says that it's the ratio of the observed i->H->f rate over what's expected by the SM... is the latter the cross section prediction for the Higgs, or for any other background?
Then what would the <1 or >1 indicate? I think the >1 indicates a signal excess, while the <1 indicates a signal underestimation (????)
mfb
Mentor
is the last the cross section prediction of the Higgs or for any other background?
Only Higgs. The background in data is subtracted before μ is calculated.
Then what would the <1 or >1 indicate?
A deviation from the standard model. If μ=1 gets ruled out in some channel, things get interesting.
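In symbols, with the usual conventions (a paraphrase of Eq. 1 in the paper linked above, not a quote from it): for production mode i and final state f,
##\mu = \frac{(\sigma_i \cdot \mathrm{BR}_f)_{\text{obs}}}{(\sigma_i \cdot \mathrm{BR}_f)_{\text{SM}}}##
so ##\mu = 1## means exact agreement with the SM Higgs prediction, ##\mu > 1## more signal than predicted, and ##\mu < 1## less.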
Gold Member
Only Higgs. The background in data is subtracted before μ is calculated.
Is that the case even if you have background overestimation compared to data?
mfb
Mentor
I'm not sure if I understand your question. If you overestimate something, you are doing something wrong and should fix it, or not use what you cannot get right.
Gold Member
mfb
Mentor
That is (hopefully) not an overestimate, just a statistical fluctuation. Yes, estimated signal strengths can be negative. As a random example, this ATLAS note has -0.4 +- 1.1 for VH -> Vbb in table 2. Note that it is consistent with 1, the uncertainties were just very large in 2012.
|
2022-05-21 19:59:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8844945430755615, "perplexity": 1886.5132164507143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00066.warc.gz"}
|
https://science.sciencemag.org/content/320/5876/646?ijkey=923a712ccccaaf94e7a762a77f5cb50ec76354c6&keytype2=tf_ipsecsha
|
Report
# Silica-on-Silicon Waveguide Quantum Circuits
Science 02 May 2008:
Vol. 320, Issue 5876, pp. 646-649
DOI: 10.1126/science.1155441
## Abstract
Quantum technologies based on photons will likely require an integrated optics architecture for improved performance, miniaturization, and scalability. We demonstrate high-fidelity silica-on-silicon integrated optical realizations of key quantum photonic circuits, including two-photon quantum interference with a visibility of 94.8 ± 0.5%; a controlled-NOT gate with an average logical basis fidelity of 94.3 ± 0.2%; and a path-entangled state of two photons with fidelity of >92%. These results show that it is possible to directly “write” sophisticated photonic quantum circuits onto a silicon chip, which will be of benefit to future quantum technologies based on photons, including information processing, communication, metrology, and lithography, as well as the fundamental science of quantum optics.
Quantum information science (1) has shown that quantum mechanical effects can dramatically improve performance for certain tasks in communication, computation, and measurement. Of the various physical systems being pursued, single particles of light (photons) have been widely used in quantum communication (2), quantum metrology (3–5), and quantum lithography (6) settings. Low noise (or decoherence) also makes photons attractive quantum bits (or qubits), and they have emerged as a leading approach to quantum information processing (7).
In addition to single-photon sources (8) and detectors (9), photonic quantum technologies require sophisticated optical circuits involving high-visibility classical and quantum interference. Although a number of photonic quantum circuits have been realized for quantum metrology (3, 4, 10–13), lithography (6), quantum logic gates (14–20), and other entangling circuits (21–23), these demonstrations have relied on large-scale (bulk) optical elements bolted to large optical tables, thereby making them inherently unscalable.
We demonstrate photonic quantum circuits using silica waveguides on a silicon chip. The monolithic nature of these devices means that the correct phase can be stably realized in what would otherwise be an unstable interferometer, greatly simplifying the task of implementing sophisticated photonic quantum circuits. We fabricated hundreds of devices on a single wafer and find that performance across the devices is robust, repeatable, and well understood.
A typical photonic quantum circuit takes several optical paths or modes (some with photons, some without) and mixes them together in a linear optical network, which in general consists of nested classical and quantum interferometers (e.g., Fig. 1C). In a standard optical implementation, the photons propagate in air, and the circuit is constructed from mirrors and beam splitters (BSs), or half-reflective mirrors, which split and recombine optical modes, giving rise to both classical and quantum interference. High-visibility quantum interference (24) demands excellent optical mode overlap at a BS, which requires exact alignment of the modes, whereas high visibility classical interference also requires subwavelength stability of optical path lengths, which often necessitates the design and implementation of sophisticated stable interferometers. Combined with photon loss, interference visibility is the major contributor to optical quantum circuit performance.
In conventional (or classical) integrated optics devices, light is guided in waveguides—consisting of a core and slightly lower refractive index cladding (analogous to an optical fiber)—which are usually fabricated on a semiconductor chip. By careful choice of core and cladding dimensions and refractive index difference, it is possible to design such waveguides to support only a single transverse mode for a given wavelength range. Coupling between waveguides, to realize BS-like operation, can be achieved when two waveguides are brought sufficiently close together that the evanescent fields overlap; this is known as a directional coupler. By lithographically tuning the separation between the waveguides and the length of the coupler, the amount of light coupling from one waveguide into the other (the coupling ratio 1 – η, where η is equivalent to BS reflectivity) can be tuned.
The most promising approach to photonic quantum circuits for practical technologies appears to be realizing integrated optics devices that operate at the single-photon level. Key requirements are single-mode guiding of single photons, high-visibility classical interference, high-visibility quantum interference, and the ability to combine these effects in a waveguide optical network.
We required a material system that (i) is low loss at a wavelength of λ ∼ 800 nm, where commercial silicon avalanche photodiode single-photon counting modules (SPCMs) are near their peak efficiency of ∼70%; (ii) enables a refractive index contrast Δ = (n²core − n²cladding)/(2n²core) that results in single-mode operation for waveguide dimensions comparable to the core size of conventional single-mode optical fibers at ∼ 800 nm (4 to 5 μm), to allow good coupling of photons to fiber-coupled single-photon sources and detectors; and (iii) is amenable to standard optical lithography fabrication techniques. The most promising material system to meet these requirements was silica (silicon dioxide SiO2), with a low level of doping to control the refractive index, grown on a silicon substrate (Fig. 1B).
A refractive index contrast of Δ = 0.5% was chosen to give single-mode operation at 804 nm for 3.5 by 3.5 μm waveguides (25). This value of Δ provides moderate mode confinement (the transverse intensity profile is shown in Fig. 1B), thereby minimizing the effects of fabrication or modeling imperfections. We designed a number of devices, including directional couplers with various η's, Mach-Zender interferometers (consisting of two directional couplers), and more sophisticated devices built up from several directional couplers with different η's.
Starting with a 4′′ silicon wafer, a 16-μm layer of thermally grown undoped silica was deposited as a buffer (material I in Fig. 1B), followed by flame hydrolysis deposition of a 3.5-μm waveguide core of silica doped with germanium and boron oxides (II). The core material was patterned into 3.5-μm-wide waveguides with standard optical lithography techniques and finally overgrown with a further 16-μm cladding layer of phosphorus and boron-doped silica with a refractive index matched to that of the buffer (III). The wafer was diced into several dozen individual chips, each containing typically several devices. Some chips were polished to enhance coupling in and out of the waveguides (26).
We used a beta-barium borate type-I spontaneous parametric down-conversion (SPDC) crystal, pumped with a 60-mW, 402-nm continuous wave diode laser to produce 804-nm degenerate photon pairs at a detected rate of 4000 s⁻¹ when collected into single-mode polarization maintaining fibers (PMFs). We used 2-nm interference filters to ensure good spectral indistinguishability (27). Single photons were launched into the waveguides on the integrated optical chips and then collected at the outputs using two arrays of 8 PMFs, with 250 μm spacing, to match that of the waveguides, and detected with fiber-coupled SPCMs. The PMF arrays and chip were directly butt-coupled, with index matching fluid. Overall coupling efficiencies of ∼60% through the device (insertion loss = 40%) were routinely achieved (28).
Figure 2 shows the classic signature of quantum interference: a dip in the rate of detecting two photons at each output of a directional coupler near zero delay in relative photon arrival time (24). The raw visibility (29) V = 94.8 ± 0.5% is a measure of the quality of the interference and demonstrates very good quantum behavior of photons in an integrated optics architecture.
Figure 3A shows the measured nonclassical visibility for 10 couplers on a single chip with a range of design η's. The observed behavior is well explained by the theoretical curves, which include a small amount of mode mismatch ϵ and an offset of δη = 3.4 ± 0.7% from the design ratio. It is inherently difficult to identify in which degree of freedom this small mode mismatch occurs (30). Misalignment of PMF fibers in the array (specified to be <3°) would cause polarization mode mismatch. Small spatial mode mismatch could arise if weakly guided higher-order modes propagate across the relatively short devices (31). These results demonstrate the high yield and excellent reproducibility of the devices.
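For the ideal lossless case with perfect mode overlap (ϵ = 0), the expected two-photon interference visibility of a coupler with coupling ratio η follows the standard textbook form V(η) = 2η(1−η)/(η² + (1−η)²). The short Python sketch below (my illustration of that formula, not the authors' fitting code) evaluates it near the design ratios used here:

import numpy as np

def hom_visibility(eta):
    # Ideal Hong-Ou-Mandel visibility for a lossless coupler with
    # coupling ratio eta, assuming perfect mode overlap
    return 2 * eta * (1 - eta) / (eta**2 + (1 - eta)**2)

for eta in [1/3, 0.4, 0.5, 0.6]:
    print(f"eta = {eta:.2f}: V = {hom_visibility(eta):.3f}")  # V = 1 at eta = 0.5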
General photonic quantum circuits require both quantum and classical interference and their combination for conditional phase shifts (32). An ideal device for testing all of these requirements is the entangling controlled-NOT (CNOT) logic gate shown in Fig. 1C (33, 34), which has previously been experimentally demonstrated using bulk optics (1519). The control C and target T qubits are each encoded by a photon in two waveguides, and the success of the gate is heralded by detection of a photon in both the control and target outputs, which happens with probability 1/9. The waveguide implementation of this gate is essentially a direct writing onto the chip of the theoretical scheme presented in (33); it is composed of two 1/2 couplers and three 1/3 couplers.
To allow for possible design and fabrication imperfections, we designed and fabricated on the same chip several CNOT devices with 1/2 couplers ranging from η (1/2) = 0.4 to 0.6 and, correspondingly, 1/3 couplers ranging from η (1/3) = 0.27 to 0.4 (i.e., 2/3 of the 1/2 couplers). The quantum interference measurements described above (Fig. 3B) show that the devices are in fact very close to the design η: δη = 3.4 ± 0.7%. To measure the 1/2 couplers, we sent single photons into the T0 and T1 inputs and collected photons from the C1 and VB outputs (and the reverse for the other 1/2 coupler); the 1/3 data are for the couplers between the C0 and VA waveguides (see Fig. 1C).
For the CNOT device with nominally η (1/2) = 0.5 and η (1/3) = 0.33 couplers, we input the four computational basis states |0〉C|0〉T, |0〉C|1〉T, |1〉C|0〉T, and |1〉C|1〉T and measured the probability of detecting each of the computational basis states at the output (Fig. 4A). The excellent agreement for the |0〉C inputs (peak values of 98.5%) is a measure of the classical interference in the target interferometer and demonstrates that the waveguides are stable on a subwavelength scale—a key advantage arising from the monolithic nature of an integrated optics architecture. The average of the logical basis fidelities (14–20) is F = 94.3 ± 0.2%. The fidelities for the other four devices (with different η's) are lower (Fig. 3B), as expected.
To directly confirm coherent quantum operation and entanglement in our devices, we launched pairs of photons into the T0 and T1 waveguides. This state should ideally be transformed at the first 50:50 coupler as follows: |1〉T0|1〉T1 → (|20〉 − |02〉)/√2 (1); that is, a maximally path-entangled superposition of two photons in the top waveguide and two photons in the bottom waveguide. A very low rate of detecting a pair of photons at the C1 and VA outputs, combined with a high rate of detecting two photons in either of these outputs (with a pair of cascaded SPCMs) confirmed that the state was predominantly composed of |20〉 and |02〉 components but did not indicate a coherent superposition. At the second 50:50 coupler between the T0 and T1 waveguides, the reverse transformation of Eq. 1 should occur, provided the minus superposition exists. A high rate of detecting photon pairs at the T0 and T1 outputs, combined with a low rate of detecting two photons in either of these outputs, confirmed this transformation. From each of these measured count rates, we were able to estimate the two-photon density matrix (Fig. 4D). The fidelity with the maximally path-entangled state |20〉 − |02〉 is >92% (35). This high-fidelity generation of the lowest-order maximally path-entangled state, combined with confirmation of the phase stability of the superposition, demonstrates the applicability of integrated devices for quantum metrology applications.
Finally, we tested the simple quantum circuits shown in Fig. 4, B and C, consisting of a CNOT gate and Hadamard H gates (|0〉 → |0〉 + |1〉, |1〉 → |0〉 − |1〉, up to normalization), each implemented with a 50:50 coupler between the C0 and C1 waveguides (25). In both cases, we observe good agreement with the ideal operation, as quantified by the average classical fidelity between probability distributions (36, 37): 97.9 ± 0.4% and 91.5 ± 0.2%, respectively. The device shown in Fig. 4B should produce equal superpositions of the four computational basis states |00〉±|01〉±|10〉±|11〉 and that shown in Fig. 4C should produce the four maximally entangled Bell states Ψ± ≡ |01〉±|10〉 and Φ± ≡ |00〉±|11〉. Although this cannot be confirmed directly on-chip, the above demonstrations of excellent logical basis operation of the CNOT and coherent quantum operation give us great confidence.
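At the logic level, the claim that a Hadamard on the control followed by a CNOT maps the computational basis onto the four Bell states can be checked with a few lines of linear algebra (a gate-level sketch, not a simulation of the photonic chip itself):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control qubit first

# Apply H to the control qubit, then CNOT, to each basis state
for i, label in enumerate(["00", "01", "10", "11"]):
    state = np.zeros(4)
    state[i] = 1.0
    out = CNOT @ np.kron(H, I2) @ state
    print(f"|{label}> ->", np.round(out, 3))   # the four Bell states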
Previous bulk optical implementations of similar photonic quantum circuits have required the design and implementation of sophisticated interferometers. Constructing such interferometers has been a major obstacle to the realization of photonic quantum circuits. The results presented here show that this problem can be drastically reduced by using waveguide devices: It becomes possible to directly write the theoretical “blackboard sketch” onto the chip, without requiring sophisticated interferometers.
We have demonstrated high-fidelity integrated implementations of each of the key components of photonic quantum circuits, as well as several small-scale circuits. This opens the way for miniaturizing, scaling, and improving the performance of photonic quantum circuits for both future quantum technologies and the next generation of fundamental quantum optics studies in the laboratory.
|
2020-02-21 01:12:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30557242035865784, "perplexity": 2430.345568646429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145316.8/warc/CC-MAIN-20200220224059-20200221014059-00032.warc.gz"}
|
https://www.qb365.in/materials/stateboard/11th-standard-computer-science-iteration-and-recursion-english-medium-free-online-test-one-mark-questions-2020-2021-7890.html
|
#### 11th Standard Computer Science Iteration and recursion English Medium Free Online Test One Mark Questions 2020 - 2021
11th Standard
Reg.No. :
Computer Science
Time : 00:10:00 Hrs
Total Marks : 10
10 x 1 = 10
1. A loop invariant need not be true
(a)
at the start of the loop
(b)
at the start of each iteration
(c)
at the end of each iteration
(d)
at the start of the algorithm
2. If the Fibonacci number is defined recursively as $f(n)=\begin{cases} 0 & n=0 \\ 1 & n=1 \\ f(n-1)+f(n-2) & \text{otherwise} \end{cases}$, how many times is F() applied to evaluate F(4)?
(a)
3
(b)
4
(c)
8
(d)
9
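The count can be verified by brute force (an illustrative Python sketch, not part of the original test paper; the counter includes the initial call F(4) itself):

calls = 0

def F(n):
    global calls
    calls += 1  # count every application of F()
    return n if n < 2 else F(n - 1) + F(n - 2)

F(4)
print(calls)  # prints 9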
3. Which of the following algorithm design techniques is used to execute the same action repeatedly?
(a)
Assignment
(b)
Iteration
(c)
Recursion
(d)
Both b and c
4. Which of the following is updated each time the loop body is executed?
(a)
Data
(b)
Variables
(c)
Function
(d)
All of these
5. How many cases does a recursive solver have?
(a)
2
(b)
3
(c)
4
(d)
many
6. At how many important points must the loop invariant be true?
(a)
1
(b)
2
(c)
3
(d)
4
7. ___________ is an algorithm design technique, closely related to induction.
(a)
Iteration
(b)
Invariant
(c)
Loop invariant
(d)
Recursion
8. In a loop, if L is an invariant of the loop body B, then L is known as a __________
(a)
recursion
(b)
variant
(c)
loop invariant
(d)
algorithm
9. The loop invariant need not be true at the __________.
(a)
Start of the loop
(b)
end of the loop
(c)
end of each iteration
(d)
middle of algorithm
10. The input size to a sub problem is __________ than the input size to the original problem.
(a)
equal
(b)
smaller
(c)
greater
(d)
no criteria
|
2021-02-27 09:29:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5299936532974243, "perplexity": 2529.540075419603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358798.23/warc/CC-MAIN-20210227084805-20210227114805-00388.warc.gz"}
|
http://eurosurveillance.org/ViewArticle.aspx?ArticleId=22825
|
Eurosurveillance, Volume 22, Issue 26, 29 June 2017
Research article
Soetens, Hahné, and Wallinga: Dot map cartograms for detection of infectious disease outbreaks: an application to Q fever, the Netherlands and pertussis, Germany
Citation style for this article: Soetens L, Hahné S, Wallinga J. Dot map cartograms for detection of infectious disease outbreaks: an application to Q fever, the Netherlands and pertussis, Germany. Euro Surveill. 2017;22(26):pii=30562. DOI: http://dx.doi.org/10.2807/1560-7917.ES.2017.22.26.30562
Received:13 July 2016; Accepted:02 December 2016
## Introduction
Here we propose a method of mapping infectious disease data, the dot map cartogram, which displays the geographical locations of reported cases from routine surveillance or outbreak investigations, such that public health experts can visualise both absolute numbers and spatial trends in incidence of infection. The method is developed to address two major causes of misinterpretation in commonly used maps: masking patterns of disease by not taking into account the population density distribution, and masking patterns by categorisation of information (across space and incidence of disease). With the dot map cartogram, we address the problem, raised by a recent systematic review [1], that spatial methods are underutilised and used in only ca 0.4% of all published outbreak investigations.
The two most frequently used map types for spatial outbreak data are the dot map and choropleth incidence map [1]. In a dot map, every case is represented by a point on the map, showing absolute numbers of cases. Illustrating absolute numbers of cases reveals the size of the outbreak and identifies areas without cases. Dot maps do not account for the underlying geographical distribution of the population. As populations are usually heterogeneously distributed, important variations in incidence of infection can be masked. Another drawback of dot maps is that they may reveal too much information about the location of specific cases, by which the privacy of a case might be violated.
In a choropleth map, a quantitative attribute is displayed per spatial unit. For example, ordinal classes of incidence may be displayed by municipality, with the areas shaded according to their incidence value and a range of shading classes [2]. Choropleth maps are mostly used to display surveillance data. They are less suitable to display outbreak data when the interest is in exact locations of cases. Although this type of map gives a quick overview of where there are more cases than expected based on the background population, it has a major limitation: The appearance of this map is heavily influenced by arbitrary choices with regard to the classification system for the ordinal classes and the spatial unit used (the latter is also referred to as the Modifiable Areal Unit Problem, MAUP [3]). Because the data are categorised both across the quantitative attribute and across the geographical units, this can lead to a loss of information by masking important internal patterns and variations. Therefore, this type of map can be misleading. This is especially problematic when mapping rare events, which is often the case with infectious diseases.
Here we combine the advantages of both maps into one map type which we call dot map cartogram. This map type uses a diffusion-based algorithm [4], which creates contiguous or value-by-area cartograms. In these value-by-area cartograms, regions are enlarged or reduced relative to the number of individuals they contain. We apply this principle to traditional dot maps: we deform the map contours based on population size and simultaneously deform the dot pattern. In this way, a dense point pattern in a big city will become more dispersed, whereas a dense point pattern in a rural area will become even more dense. A similar technique called density-equalising map projection (DEMP) has been pioneered before to describe the spatial distribution of cryptosporidiosis among AIDS patients in San Francisco [5]. This earlier study used a different density-equalising algorithm, and application was limited to local outbreak data. This method has the advantages that the density of the dots reflects the incidence, the number of dots represent the absolute number of cases, and arbitrary choices for scale of spatial unit and classification system are avoided.
In this study, we assessed whether the advantages of the dot map cartogram outweigh the potential disadvantages. One potential drawback is that the dot map cartogram may reveal too much information with regard to the privacy of the cases. Another potential drawback is that the spatial distortion makes it hard to recognise or localise the map topology. We created dot map cartograms using real life data and compared them to the traditional choropleth and dot maps regarding four criteria: (i) ability to show both absolute numbers and incidence of the disease, (ii) sensitivity to choices regarding spatial scale and classification system, (iii) ability to ensure the privacy of individual cases, and (iv) ability to recognise locations. The comparison was applied to the mapping of a point-source outbreak and to the mapping of the occurrence of a human-to-human transmissible disease.
## Methods
### Data
As an example of outbreak mapping to locate a source, we used data from the Q fever outbreak in the Netherlands in 2009. Q fever case reports in the Netherlands communicable disease notification system (Osiris) include the 4-digit postal code of residence. Cases with date of onset of illness in 2009 (n = 1,740) were extracted for this analysis. Population data and shapefiles (a data format which stores geometrical locations and metadata such as population size per geometrical location) on different spatial levels (4-digit postal code and municipality level) were accessed via Statistics Netherlands [5].
As an example of a human-to-human transmissible disease, we used pertussis notifications in Germany in 2015. Data on pertussis cases was made available by the Robert Koch Institute through the SurvStat@RKI 2.0 web portal [6], in which the district of residence is registered for every case. We extracted data on laboratory-confirmed pertussis cases with a date of diagnosis in 2015 (n = 9,017). The most recent population data and shapefiles for district and federal state level boundaries (2013) were accessed through the open data portal of the Federal Agency for Cartography and Geodesy Germany [7]. We assumed that the population size per district or federal state in 2015 was sufficiently similar to that in 2013.
### Geographical representation
#### Dot map
In a dot map, every case is represented by a dot on their location of residence. As this would reveal the exact locations of the cases and harm their privacy, we chose to use a proxy location by assigning a random point location in the spatial unit of residence (4-digit postal code for the Netherlands and district for Germany) to every case. The 4-digit postal code areas in the Netherlands have a mean population of 4,119 persons (mean surface: 8.6 km2), and the districts in Germany have a mean population of 200,914 persons (mean surface: 888.7 km2). For the map of the Netherlands we have used the national RD (‘rijksdriehoeksstelsel’)-based projection and for Germany we used the Universal Transverse Mercator (UTM) 32 projection. Both are conformal map projections in which local angles are preserved and shapes are represented accurately and without distortion for small areas.
#### Choropleth incidence map
In a choropleth incidence map, disease incidence is displayed per spatial unit, using ordinal classes. We chose two spatial unit levels for each country: one at the same level as the patient data (spatial unit of residence as described above) and the other one level higher (municipality in the Netherlands or federal state in Germany). In addition, we categorised incidence into five ordinal classes, using two separate classification systems: the Jenks’ natural breaks algorithm [8] which seeks to reduce the variance within classes and maximise the variance between classes, and the quantile method in which equal numbers of spatial units are placed into each class [8]. The colour schemes for the incidence classes were based on previously established map colour palettes [9].
#### Basic principle
We created cartograms by reshaping the spatial units such that their area was proportional to their population by applying the Gastner-Newman diffusion-based algorithm [4] without changing the underlying map topology [10]. The principle is that once the areas have been scaled to be proportional to their population, then population density is by definition the same for each area of visually the same size on the cartogram. To create dot map cartograms, the point patterns of cases (as described in the dot map section) were simultaneously transformed in the reshape process of the spatial units so that the number of cases per unit area reflected the incidence. In addition, to provide points of reference to interpret the dot map cartograms, capital municipality, provincial or state boundaries were simultaneously transformed, and an inset was added with the undistorted map. We used ScapeToad 1.2 software [11] to reshape the spatial units and the point patterns. From this programme, the transformed layers were exported as shapefiles. The exported files were imported into R statistical software to create the maps. The main R package used to create the maps was ggplot2. The R code for the construction of the dot map cartogram is provided elsewhere (https://github.com/lsoetens/DotMapCartogram).
#### Cartogram quality
As we used spatial units with a population size at a certain spatial level to deform the original map, the MAUP problem was inherited. However, the consequences of this problem can be reduced by relying on an objective measure to assess which spatial level we could best use for our transformations. We assessed the cartogram quality by the objective measures, i.e. the average and maximum normalised cartographic error [10]. The cartographic error is the most commonly used measure for distortion in the value-by-area realisation. We assumed an input map M that is partitioned into n regions with polygonal boundaries. For each region v, a(v) denotes the area of v in M and the weight w(v) is the desired area for the region based on the background population. The diffusion-based algorithm constructed the cartogram M0, that is a deformation of M, in which o(v) is the observed area of region v. The average normalised cartographic error e was calculated as:
### Formula 1
$e = \frac{1}{n} \sum_{v \in V} \frac{|o(v) - w(v)|}{\max\{o(v), w(v)\}}$, with a range of [0,1]
and the maximum normalised cartographic error x was calculated as:
### Formula 2
$x = \max_{v \in V} \left( \frac{|o(v) - w(v)|}{\max\{o(v), w(v)\}} \right)$, with a range of [0,1]
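As an illustration, both error measures follow directly from Formulas 1 and 2 (a minimal Python sketch with hypothetical areas; the study itself used ScapeToad and R):

import numpy as np

def cartographic_errors(observed, desired):
    # Normalised cartographic error per region (Formulas 1 and 2),
    # then the average (e) and maximum (x) over all regions
    o = np.asarray(observed, dtype=float)
    w = np.asarray(desired, dtype=float)
    err = np.abs(o - w) / np.maximum(o, w)
    return err.mean(), err.max()

# Hypothetical observed vs desired areas for three regions
e, x = cartographic_errors([10.0, 20.0, 30.0], [12.0, 18.0, 33.0])
print(round(e, 3), round(x, 3))  # 0.119 0.167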
No guidelines exist as to what constitutes acceptable cartogram quality; to be consistent with the values measured in a previous publication [10], we aimed for e < 0.1 and x < 0.4 in our cartograms. The performance of the algorithm, and subsequently the cartogram quality, depends on the distribution of the population density in the input file and the grid size on which the computation of the population density is based. Large discrepancies in the population density between regions impair the algorithm performance, while a finer grid will produce a higher quality cartogram, but may introduce distortion. We started with an input map at the same spatial unit of residence as the case data (i.e. 4-digit postal code for the Netherlands and district level for Germany) and assessed the average and maximum error of the cartograms. For the Netherlands, cartogram error was not satisfactory and we changed to the municipality level and accepted a maximum normalised cartogram error of x > 0.4; this was due to the very small population size of the municipalities on the West Frisian Barrier Islands. These areas should be smaller than presented in the created maps, but would no longer be visible. We used a 512 × 512 grid, which gave sufficient results; a higher-resolution grid of 1,024 × 1,024 is possible, but requires substantially greater computation times.
### User requirements
To create the dot map cartograms, no specific geographical information system skills are needed. The only requirement is some basic knowledge of the R statistical software to be able to run the provided code (https://github.com/lsoetens/DotMapCartogram). Required software includes R studio and ScapeToad [11]. Both programmes are open source programmes, which can be downloaded for free. A detailed manual for creating the dot map cartograms is provided along with the code (https://github.com/lsoetens/DotMapCartogram).
## Results
### Geographical representation of point-source outbreaks: Q fever in the Netherlands
The maps of Q fever cases are compared in Figure 1. The dot map cartogram (Figure 1A) is based on the population size per municipality in the Netherlands in 2009 (e = 0.08, sd = 0.09; x = 0.78). It is compared with the dot map (Figure 1B) and the choropleth incidence maps (Figures 1C-E).
#### Ability to show both absolute numbers and incidence
The dot map cartogram is able to show both absolute numbers and incidence. Incidence is readily inferred from the choropleth incidence maps, but the exact size of the outbreak in absolute numbers is not apparent. The dot map showed a clear clustering pattern with several hotspots, but whether this is related to the population density (incidence) cannot be inferred from the dot map.
#### Sensitivity to choices regarding spatial scale and classification system
The dot map cartogram and dot map do not suffer from arbitrary classification issues. The choropleth incidence maps are sensitive to this; different spatial scale and classification systems result in highly variable maps. When only one of those maps is used, the geography of the outbreak could be misinterpreted. One would conclude that the outbreak is more widespread (Figure 1C) or more intense (Figure 1E) than is illustrated in Figure 1D.
#### Ability to ensure the privacy of cases
The dot map compromises case privacy by revealing exact locations of cases, while the choropleth incidence map protects privacy by aggregating case information. The dot map cartogram meets this criterion partially by deforming the underlying region.
#### Ability to recognise locations
The dot map cartogram is harder to read, and specific locations are difficult to recognise. This can be improved by providing locations of major towns or district boundaries. The choropleth incidence map and dot map outperform the dot map cartogram on this criterion.
### Geographical representation of a human-to-human transmissible disease: pertussis in Germany
The maps for pertussis cases are compared in Figure 2. Figure 2A depicts the dot map cartogram based on the population size by district in Germany in 2013 (e = 0.07, sd = 0.06; x = 0.29); this map is contrasted with the corresponding dot map (Figure 2B) and choropleth incidence maps (Figures 2C-E).
#### Ability to show both absolute numbers and incidence
A much larger number of cases are illustrated in this example than for the Q fever data. This is immediately apparent from the dot map and the dot map cartogram, but less so from the choropleth incidence map. The clustering patterns on the dot map in several large cities and for example in the Ruhr district (in the west of Germany) disappear into an almost random pattern in the dot map cartogram, indicating that the clustering patterns on the dot map can be explained by the underlying population density in those areas. In contrast, in the south-east of Germany near Munich, a clustering pattern is present in both the dot map and the dot map cartogram, indicating that this cluster is not attributable to population density but reflects a higher incidence of pertussis notifications. Therefore, in this situation, both incidence and absolute number of cases are important in interpreting the map: they reveal the size of the problem and provide clustering of cases not attributable to the population density.
#### Sensitivity to choices regarding spatial scale and classification system
The choropleth incidence maps showed that the choice of classification system or spatial level results in highly variable maps. In this comparison, Figure 2D, based on the Jenks’ natural breaks classification system and the district level, is comparable to the dot map cartogram, showing the same areas with high incidence; however, there is no way to determine a priori which classification system and spatial level would result in the ‘right’ map.
#### Ability to secure the privacy of cases
All maps meet this criterion. In this dataset, the dot map does not compromise privacy because of the frequency of occurrence of the disease. As before, the dot map cartogram protects case privacy due to the deformation of the underlying regions, and the choropleth incidence map because information is aggregated.
#### Ability to recognise locations
The choropleth incidence map and dot map outperform the dot map cartogram on this criterion. Adding reference points in the dot map cartogram, such as the federal state and federal state municipality boundaries, can help in recognising locations.
## Discussion
We have proposed the dot map cartogram for displaying spatial infectious disease data and illustrating both incidence and absolute numbers of cases. A similar technique has been suggested before [12], but to our knowledge it was never used to study clustering of point patterns at a national level in infectious disease epidemiology. We compared the dot map cartogram to the frequently used choropleth incidence map and the dot map [1]. The main advantages of the dot map cartogram over the other two is that it is able to simultaneously reveal epidemic patterns adjusted for the population distribution, and to unmask patterns that are hidden by aggregation and categorisation of information. The visual distortion of the dot map cartogram is a barrier to pinpointing the location of a case: this is a benefit in the field of infectious diseases because case privacy should be ensured in presenting surveillance data. In addition, dot map cartograms illustrate areas without cases, which are harder to discern by the use of choropleth incidence maps.
We did not address map types other than dot maps and incidence maps for this comparison. Smooth incidence maps are an alternative to the choropleth incidence maps, in which the incidence is smoothed across the spatial units. This technique was applied to the data from a previous study for the Q fever outbreak in the Netherlands in 2009 [13]. In illustrating hotspots, the dot map cartogram is comparable to the smooth incidence map. The advantage of the smooth incidence map is that it permits identification of the exact location of hot spots, since the map is not deformed. However, smooth incidence maps do not reveal areas without cases and it is hard to discern the absolute number of cases and the true scope and size of the outbreak. Many more mapping techniques exist, such as map types based on other interpolation techniques (such as inverse distance weighting, kriging and trend surface fitting [2]), incidence based value-by-area cartograms [14,15], and many others. Assessment of all available methods was not within the scope of this paper.
We used the Gastner and Newman diffusion-based algorithm [4] to construct the cartogram. Alternative algorithms are available to construct contiguous cartograms [16-26]. We chose the Gastner and Newman algorithm because it performs well compared to the alternative algorithms in terms of quantitative measures such as the cartographic error, shape error and topology error [10], and qualitative measures such as subjective preferences and task performance [27]. Recent studies have proposed quantitative [10] and qualitative [27] measures for cartogram generation techniques. The development of standardised performance measures will allow objective ranking and selecting of cartogram algorithms.
With the demonstration of dot map cartograms, we provide public health professionals with an alternative spatial method for outbreak analysis. Firstly, we expect that dot map cartograms minimise misinterpretation of the data. Secondly, as demonstrated in this study, dot map cartograms protect the privacy of cases. Thirdly, dot map cartograms do not require expensive specialised GIS software, which facilitates use in settings with limited resources. The only requirements to construct dot map cartograms are free software, case reports with location data and access to population data as shapefiles. As there is a general trend for governments to make administrative population data publicly available, the latter is available for most countries. Finally, our method can easily be extended with information on covariates relevant for mapping: a relevant example in infectious disease surveillance or research is to colour the case dots for a specific attribute, such as age group, sex or vaccination status. Introducing other layers of detail or attributes further broadens the utility of this spatial method for use in infectious disease research and surveillance. However, as the technique is graphical rather than geographical, it does not replace a geographical information system explaining the impact of various geographical factors on the spread of a certain infectious disease.
### Acknowledgements
We thank Lisa Hansen for thoroughly editing the manuscript for English language.
### Conflict of interest

None declared.
### Authors’ contributions
Authors LS, SH, and JW contributed to the design of the study. LS led on the data analysis and drafting of the manuscript supported by SH and JW. All authors commented on drafts of the manuscript and approved the final version.
## References
1. Smith CM, Le Comber SC, Fry H, Bull M, Leach S, Hayward AC. Spatial methods for infectious disease outbreak investigations: systematic literature review. Euro Surveill. 2015;20(39):30026.
2. Heywood I, Cornelius S, Carver S. An introduction to geographical information systems. New Jersey: Prentice-Hall; 2011. ISBN: 9780273722595.
3. Openshaw S. Ecological fallacies and the analysis of areal census data. Environ Plann A. 1984;16(1):17-31. DOI: 10.1068/a160017 PMID: 12265900
4. Gastner MT, Newman ME. Diffusion-based method for producing density-equalizing maps. Proc Natl Acad Sci USA. 2004;101(20):7499-504. DOI: 10.1073/pnas.0400280101 PMID: 15136719
5. Wijk- en buurtkaart 2008. [District and neighborhood maps 2008]. Den Haag: Statistics Netherlands. [Accessed: 20 Jan 2016]. Dutch. Available from: http://www.cbs.nl/nl-NL/menu/themas/dossiers/nederland-regionaal/publicaties/geografische-data/archief/2009/2010-wijk-en-buurtkaart-2008.htm
6. SurvStat@RKI 2.0. Berlin: Robert Koch Institute. [Accessed: 3 Feb 2016]. Available from: https://survstat.rki.de/
7. Open Data - Freie Daten und Dienste des BKG. [Open data - free data and services of the BKG]. Leipzig: Federal Agency for Cartography and Geodesy (BKG). [Accessed: 15 Nov 2015]. German. Available from: http://www.geodatenzentrum.de/geodaten/gdz_rahmen.gdz_div?gdz_spr=deu&gdz_akt_zeile=5&gdz_anz_zeile=0&gdz_user_id=0
8. Brewer CA, Pickle L. Evaluation of methods for classifying epidemiological data on choropleth maps in series. Ann Assoc Am Geogr. 2002;92(4):662-81. DOI: 10.1111/1467-8306.00310
9. Harrower M, Brewer CA. ColorBrewer.org: An online tool for selecting colour schemes for maps. Cartogr J. 2003;40(1):27-37. DOI: 10.1179/000870403235002042
10. Alam MJ, Kobourov SG, Veeramoni S. Quantitative measures for cartogram generation techniques. Comput Graph Forum. 2015;34(3):351-60. DOI: 10.1111/cgf.12647
11. Andrieu D, Kaiser C, Ourednik A. ScapeToad: not just one metric. Lausanne: Choros Laboratory. [Accessed: 20 Sep 2015]. Available from: http://scapetoad.choros.ch/
12. Khalakdina A, Selvin S, Merrill DW, Erdmann CA, Colford JM. Analysis of the spatial distribution of cryptosporidiosis in AIDS patients in San Francisco using density equalizing map projections (DEMP). Int J Hyg Environ Health. 2003;206(6):553-61. DOI: 10.1078/1438-4639-00245 PMID: 14626902
13. van der Hoek W, van de Kassteele J, Bom B, de Bruin A, Dijkstra F, Schimmer B, et al. Smooth incidence maps give valuable insight into Q fever outbreaks in The Netherlands. Geospat Health. 2012;7(1):127-34. DOI: 10.4081/gh.2012.111 PMID: 23242690
14. Dodd PJ, Sismanidis C, Seddon JA. Global burden of drug-resistant tuberculosis in children: a mathematical modelling study. Lancet Infect Dis. 2016;16(10):1193-201. DOI: 10.1016/S1473-3099(16)30132-3 PMID: 27342768
15. Dorling D, Barford A, Newman M. WORLDMAPPER: the world as you've never seen it before. IEEE Trans Vis Comput Graph. 2006;12(5):757-64. DOI: 10.1109/TVCG.2006.202 PMID: 17080797
16. Tobler WR. Democratic representation and apportionment: a continuous transformation useful for districting. Ann N Y Acad Sci. 1973;219:215-20. DOI: 10.1111/j.1749-6632.1973.tb41401.x PMID: 4518429
17. Dougenik JA, Chrisman NR, Niemeyer DR. An algorithm to construct continuous area cartograms. Prof Geogr. 1985;37(1):75-81. DOI: 10.1111/j.0033-0124.1985.00075.x
18. Gusein-Zade SM, Tikunov VS. A new technique for constructing continuous cartograms. Cartogr Geogr Inf Syst. 1993;20(3):167-73. DOI: 10.1559/152304093782637424
19. Keim DA, North SC, Panse C. CartoDraw: a fast algorithm for generating contiguous cartograms. IEEE Trans Vis Comput Graph. 2004;10(1):95-110. DOI: 10.1109/TVCG.2004.1260761 PMID: 15382701
20. Keim DA, Panse C, North SC. Medial-axis-based cartograms. IEEE Comput Graph Appl. 2005;25(3):60-8. DOI: 10.1109/MCG.2005.64 PMID: 15943089
21. Kämper JH, Kobourov SG, Nöllenburg M. Circular-arc cartograms. 2013 IEEE Pacific Visualization Symposium (PacificVis), Sydney; 2013. DOI: 10.1109/PacificVis.2013.6596121
22. Dorling D. Area cartograms: their use and creation. In: Dodge M, Kitchin R, Perkins C, editors. The map reader: theories of mapping practice and cartographic representation. Chichester: John Wiley & Sons; 2011. p. 252-60. DOI: 10.1002/9780470979587.ch33
23. Edelsbrunner H, Waupotitsch R. A combinatorial approach to cartograms. Comput Geom. 1997;7(5-6):343-60.
24. House DH, Kocmoud CJ. Continuous cartogram construction. Proceedings of IEEE Visualization, Research Triangle Park; 18-23 Oct 1998. p. 197-204. DOI: 10.1109/VISUAL.1998.745303
25. Schulman J, Selvin S, Merrill DW. Density equalized map projections: a method for analysing clustering around a fixed point. Stat Med. 1988;7(4):491-505. DOI: 10.1002/sim.4780070406 PMID: 3368676
26. Selvin S, Merrill D, Schulman J, Sacks S, Bedell L, Wong L. Transformations of maps to investigate clusters of disease. Soc Sci Med. 1988;26(2):215-21.
27. Nusrat S, Alam MJ, Kobourov SG. Evaluating cartogram effectiveness. IEEE Trans Vis Comput Graph. 2016. DOI: 10.1109/TVCG.2016.2642109
http://cms.math.ca/10.4153/CMB-2005-009-4
On the Ranges of Bimodule Projections
We develop a symbol calculus for normal bimodule maps over a masa that is the natural analogue of the Schur product theory. Using this calculus we are easily able to give a complete description of the ranges of contractive normal bimodule idempotents that avoids the theory of J*-algebras. We prove that if $P$ is a normal bimodule idempotent and $\|P\| < 2/\sqrt{3}$ then $P$ is a contraction. We finish with some attempts at extending the symbol calculus to non-normal maps.
https://www.cs.jhu.edu/~jason/tutorials/GMM-optimization.html
# Dynamics of optimizing Gaussian mixture models
This super-simple example illustrates how a Gaussian mixture model behaves with high and low fixed variance (that is, how it collapses or specializes the clusters as in deterministic annealing).
We then examine what happens when we try to optimize the variance together with the means.
• Good news: It's optimal to specialize into low-variance clusters, as expected, and EM or gradient descent will eventually discover this.
• Bad news: Many optimization algorithms are attracted into the neighborhood of a bad saddle point. They then spend most of their time hanging out there before eventually finding the right solution. Or worse, they stop numerically near the saddle point before they get to the right solution.
This bad saddle point is a symmetric solution with high variance. The clusters collapse into a single cluster whose variance matches the sample variance.
• Recommendation: Thus, rather than using the gradient to optimize the variance, it really is a good idea to sweep through different variance values as an outer loop around the optimizer. This actively pushes the system away from the saddle point.
Deterministic annealing (DA) does this. Often DA is sold as a way to (hopefully) escape bad local optima in latent-variable models. But it may also be useful in general for escaping the saddle points of such models, which may not be terminal points for optimization but can greatly slow it down. This super-simple example has no bad local optima, just saddle points.
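As a minimal sketch of this outer-loop idea (not a cell from the original notebook; it uses the log-likelihood ll(mu,sigma) defined in the next section, and searches only mu >= 0, which is harmless by symmetry):

sigmas <- seq(1.5, 0.5, by=-0.25)  # sweep the variance from high to low
sapply(sigmas, function (sigma)    # best mu at each fixed sigma
    optimize(function (mu) ll(mu,sigma), interval=c(0,2), maximum=TRUE)$maximum)
# for sigma >= 1 the best mu is ~0; once sigma < 1, the clusters separate (mu moves toward 1)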
# Model and log-likelihood function
Throughout the investigation, our model is an equal mixture of two univariate Gaussians with symmetric means $\mu,-\mu$ and common variance $\sigma^2$ with $\sigma > 0$. The log-density of the mixture is $\log \tfrac{1}{2}\left[\mathcal{N}(x;\mu,\sigma^2) + \mathcal{N}(x;-\mu,\sigma^2)\right]$, which we compute in the log domain for numerical stability:
In [1]:
# addition in the log domain to avoid underflow.
# This version can add two vectors in parallel.
logadd <- function (a,b) { ifelse(a==-Inf, b,  # the ifelse test avoids the problem that -Inf minus -Inf is NaN
                           pmax(a,b) + log1p(exp(pmin(a,b)-pmax(a,b)))) }
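A quick sanity check (an added usage example, not a cell from the original notebook):

log(exp(-1000) + exp(-1000))  # -Inf, since exp(-1000) underflows to 0
logadd(-1000, -1000)          # -999.3069 = -1000 + log(2), as desired
logadd(-Inf, -Inf)            # -Inf, handled by the ifelse test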
In [2]:
# mixture of univariate Gaussians
ldmix <- function (x,mu,sigma) {
sigma <- pmax(0,sigma) # since an unbounded optimizer might explore sigma < 0, just treat that like 0
# Loses precision:
# log((dnorm(x,mu,pmax(0,sigma))+dnorm(x,-mu,pmax(0,sigma)))/2)
logadd( dnorm(x,mu,sigma,log=T) , dnorm(x,-mu,sigma,log=T) ) - log(2)
}
We'll also assume that the only observed datapoints are at 1 and -1. Then the log-likelihood is
In [3]:
ll <- function (mu,sigma) ldmix(1,mu,sigma)+ldmix(-1,mu,sigma)
Let's visualize that log-likelihood function:
In [4]:
mu <- seq(-2,2,length.out=50)
sigma <- seq(0.5,2,length.out=50)
log_lik <- outer(mu,sigma,Vectorize(ll))
persp(mu, sigma, log_lik, theta=30, phi=30, ticktype="detailed")
At each fixed $\sigma$:
• The symmetry in $\mu$ comes from the fact that we can swap the clusters and get the same likelihood.
• There is an extremum at $\mu=0$,
• which is a local minimum for $\sigma < 1$ (so the clusters like to move apart)
• but a global maximum for $\sigma \geq 1$ (where the clusters like to move together).
• These correspond to low and high temperature in deterministic annealing.
• The phase shift happens at the "critical temperature" where $\sigma = 1$, probably because that's the sample stdev.
We can see that transition by zooming in on $\sigma \approx 1$:
In [5]:
sigma <- seq(0.9,1.1,length.out=50)
log_lik <- outer(mu,sigma,Vectorize(ll))
persp(mu, sigma, log_lik, theta=30, phi=30, ticktype="detailed")
If we allow $\sigma$ to vary, however:
• The global maximum puts one cluster at each point, with $\sigma$ as small as possible. (Actually there are two symmetric maxima of this sort.)
• So $\sigma$ wants to become small, allowing the clusters to specialize.
# Alternating optimization
But can we computationally find this maximum? There are no other local maxima on the whole surface. But finding a global maximum might still be hard! The problem is that $(\mu=0,\sigma=1)$ is a very stable and somewhat attractive saddle point.
Let's see what goes wrong in the case of alternating optimization.
First we hold $\sigma > 1$ fixed while optimizing $\mu$. The argmax is $\mu=0$ (the Gaussian mixture becomes a single Gaussian):
In [6]:
library(repr)
options(repr.plot.width=3, repr.plot.height=3) # smaller plots
In [7]:
plot(function (mu) ll(mu,1.1), xlab="mu", xlim=c(-2,2))
Now we hold $\mu=0$ fixed while optimizing $\sigma$. It moves to the sample stdev, namely 1. (In fact, this is true regardless of where $\sigma$ starts.)
In [8]:
plot(function (sigma) ll(0,sigma), xlab="sigma", xlim=c(0.5, 2))
We're now stuck at this saddle point $(\mu,\sigma)=(0,1)$: for this $\sigma$, our current $\mu=0$ is already optimal.
In [9]:
plot(function (mu) ll(mu,1), xlab="mu", xlim=c(-2,2))
plot(function (mu) ll(mu,1), xlim=c(-.01,.01)) # zooming in
# The Hessian
Pause for a moment to notice how flat the top of that graph is. Not only is this a critical point, but the Hessian is singular, so if we start near the critical point, even a second-order optimizer will be slow to get away. The Hessian is $\left( \begin{array}{cc}\frac{\partial^2}{\partial\mu^2} & \frac{\partial^2}{\partial\sigma\partial\mu} \\ \frac{\partial^2}{\partial\mu\partial\sigma} & \frac{\partial^2}{\partial\sigma^2}\end{array} \right)L = \left( \begin{array}{rr}0 & 0 \\ 0 & -4\end{array} \right)$:
In [10]:
library(numDeriv)
grad(function (x) ll(x[1],x[2]), c(0,1))     # numerical gradient: essentially (0,0), so (0,1) is a critical point
hessian(function (x) ll(x[1],x[2]), c(0,1))  # numerical Hessian
det(.Last.value)                             # it's singular, obviously
0 -9.99709819140423e-12
-1.04952e-08 -2.64677e-09
-2.64677e-09 -4
4.19809200064392e-08
This really is a saddle point, not a local maximum, since there are better points arbitrarily close by (to find one, we have to nudge $\mu$ more than $\sigma$):
In [11]:
ll(0.1,0.999) - ll(0,1)
1.31106506362499e-06
In [12]:
ll(0.01,0.99999) - ll(0,1)
1.3331113990489e-10
It seems that alternating optimization led us right into the maw of a pathological case. Maybe gradient ascent will be better? There's no hope if we start with $\mu=0$ exactly -- we can't break the symmetry and we'll immediately jump to the saddle point $(0,1)$ and stay there. But as long as $\mu \neq 0$, we do prefer $\sigma < 1$. So the hope is that as long as we start at $\mu \neq 0$, then we'll eventually get to $\sigma < 1$, at which point $\mu$ will want to move away from 0 to specialize the clusters.
More precisely, the optimal $\sigma \neq 1$ for any $\mu \neq 0$. (You might expect that if $\mu=0.1$, then $\sigma=0.9$ so that the Gaussian at 0.1 can best explain the point at 1, but actually $\sigma > 0.9$ in this case so that it can help explain the point at -1 as well.)
Here are the optimal $\sigma$ values for $\mu=.1, .4, .6, .8, 1$. (The limiting case of $\mu=1$ is interesting because then the optimal clusters are Dirac deltas, i.e., $\sigma = 0$. For $\mu > 1$, which isn't shown here, the optimal $\sigma$ grows again ... in fact without bound as $\mu \rightarrow \infty$.)
In [13]:
for (mu in c(.1,.4,.6,.8, 1)) plot(function (sigma) ll(mu,sigma), ylab="log lik", xlab="sigma", main=paste("mu=",mu), xlim=c(0.1,2))
Again, if we could keep $\mu \neq 0$ long enough to get to $\sigma < 1$, we'd be out of danger -- at least if $\sigma$ stayed $< 1$ -- since then the Gaussians would prefer to separate, leading us to the global max. To see the separation, here's what $\mu$ prefers for $\sigma=0.9$:
In [14]:
plot(function (mu) ll(mu,.9), ylab="log lik", xlim=c(-2,2), xlab="mu")
This doesn't guarantee that we'll be saved from the saddle point. One might worry that if we start with $\sigma > 1$, which prefers $\mu = 0$, then $\mu$ might approach 0 so quickly that $\sigma$ doesn't have a chance to pull away from 0 and achieve "escape velocity." However, the plots below seem to show that this problem does not arise: $\sigma$ changes more quickly than $\mu$ if we follow the gradient.
In [15]:
options(repr.plot.width=7, repr.plot.height=7)
In [16]:
# Show the vector field of a function's gradients (in gray),
# as well as some trajectories followed by gradient ascent (in dark green with green arrowheads).
# The function must be from R^2 -> R.
gradient_field <- function (fun, xlim, ylim, scale, scaletraj=scale, nn=c(20,20), nntraj=c(6,6), iters=20) {  # (default values for nn, nntraj, iters are guesses)
# partial derivatives of fun, by central differences (should probably use grad from numDeriv package instead, but that won't eval at many points in parallel)
eps <- 1e-6
gradx <- function (x,y) (fun(x+eps,y) - fun(x-eps,y)) / (2*eps)
grady <- function (x,y) (fun(x,y+eps) - fun(x,y-eps)) / (2*eps)
# methods that will be used below to draw gradient steps
drawfield <- function (x0,y0,x1,y1) { suppressWarnings(arrows(x0,y0,x1,y1, length=0.05, angle=15, col="gray65")) }
drawtraj <- function (x0,y0,x1,y1) { segments(x0,y0,x1,y1, col="darkgreen") } # line segments with no head
drawtrajlast <- function (x0,y0,x1,y1) { suppressWarnings(arrows(x0,y0,x1,y1, length=0.1, col="green")) }
# grid of points
x <- seq(xlim[1],xlim[2],length.out=nn[1])
y <- seq(ylim[1],ylim[2],length.out=nn[2])
xx <- outer(x,y, function (x,y) x)
yy <- outer(x,y, function (x,y) y)
# draw the function using filled.contour, and imposed the arrows
filled.contour(x,y,fun(xx,yy),
plot.axes = { # how to add to a filled.contour graph
axis(1); axis(2) # label the axes
# draws a bunch of gradient steps in parallel starting at (xx,yy), and sets (newxx,newyy) to endpoints
segs <- function (xx,yy,scale,draw) { draw( xx, yy,
newxx <<- xx + scale*gradx(xx,yy),
newyy <<- yy + scale*grady(xx,yy) ) }
# vector field
segs(xx,yy, scale,drawfield)
# grid of points where we'll start trajectories
seqmid <- function (lo,hi,len) { nudge <- (hi-lo)/(2*len); seq(lo+nudge,hi-nudge,length.out=len) } # e.g., seqmid(0,1,5) gives 0.1,0.3,...,0.9, the same as the odd positions in seq(0,1,length.out=11)
x <- seqmid(xlim[1],xlim[2],nntraj[1])
y <- seqmid(ylim[1],ylim[2],nntraj[2])
xx <- outer(x,y, function (x,y) x)
yy <- outer(x,y, function (x,y) y)
# draw those trajectories
for (i in 1:iters) {
segs(xx,yy, scaletraj, if (i < iters) drawtraj else drawtrajlast)
done <- gradx(xx,yy)==0 & grady(xx,yy)==0 # which trajectories won't move any more?
xx <- ifelse(done, xx, newxx) # update for next segment except where there wouldn't be a next segment
yy <- ifelse(done, yy, newyy)
}
# hack for this application: plot markers at specific critical points
points( c(0,1,-1),c(1,0,0),col=2)
})
}
In [17]:
gradient_field(ll,c(-2,2),c(0.5,1.5), scale=0.03)
https://renatocunha.com/2012/04/playstation-eye-audio-linux/
# Making the Playstation Eye’s microphones work on Linux
## Introduction
I'm the proud owner of a PlayStation® Move system and also a proud user of the Arch Linux distribution. As far as pride is concerned, though, it's not very “proudful” when you decide to give your PlayStation® Eye (PS3 Eye) a try as a webcam, and you realize it doesn't quite work as you expected. This post investigates this issue and proposes a solution.
The PS3 Eye has been supported by Linux since 2.6.29¹, and the video worked fine for me. The audio, however, did not… :/
All I wanted was to see something similar to the following image. Keep reading if you want to know how to do it.
## The problem
As one can see from the output of the following command, my system had no trouble recognizing the PS3 Eye as a sound “card”:
$ cat /proc/asound/cards
 0 [PCH            ]: HDA-Intel - HDA Intel PCH
                      HDA Intel PCH at 0xfbff8000 irq 43
 1 [CameraB409241  ]: USB-Audio - USB Camera-B4.09.24.1
                      OmniVision Technologies, Inc. USB Camera-B4.09.24.1 at usb-0000:00:1d.0-1.1, hi

(Un)Fortunately, I don't really use ALSA directly. Instead, as most kids these days, I use the PulseAudio sound server to manage the access to my sound card. So, even though ALSA can recognize the PS3 Eye microphone array just fine, PulseAudio can't². Once you understand what you want to do, and who is responsible for it, the solution is quite easy.

## The solution

Fortunately, PulseAudio supports configuration files and comes with good documentation. Upon startup, its daemon reads and interprets the files ~/.pulse/default.pa and /etc/pulse/default.pa. Thus, all we have to do is to instruct PulseAudio to look for the PS3 Eye when loading. In our case, we want to add a new ALSA audio source, and adding the line "load-module module-alsa-source device=hw:1,0" to /etc/pulse/default.pa does the trick for the whole system. Or, in the shell (using tee, because with a plain sudo echo ... >> the redirection would run without root privileges):

$ echo "load-module module-alsa-source device=hw:1,0" | sudo tee -a /etc/pulse/default.pa
### How can I know what to pass as an argument to device?
You might be asking yourself that question. Fear not, dear friend. With ALSA, you can always take a look at the /proc/asound/cards file. There, you will see the index of your device. In my case, the PS3 Eye was on line 1 with the label CameraB409241. Then, for the PS3 Eye, you will want the stream with index zero. Thus, hw:1,0 is our device. The directory /proc/asound/card<index> will contain all the information you might need about your card.
### A small problem
It seems that the PS3 Eye must be connected when the PulseAudio daemon is brought up. To avoid having to log out and back in, you can use the pacmd command, which supports the same syntax as the /etc/pulse/default.pa file. So, calling

$ pacmd load-module module-alsa-source device=hw:1,0

will work just fine.
1. And with a kernel module available for older versions.
2. More precisely, the problem seems to be in PulseAudio's module-udev-detect. I don't really want to understand where the problem is, just want to know why it happens and how to solve it.
http://mathhelpforum.com/discrete-math/105833-graph-theory-trees.html
Math Help - Graph theory: Trees
1. Graph theory: Trees
Let $T$ be a tree of order $n$. Show that the size of the complement $\overline{T}$ of $T$ is the same as the size of $K_{n-1}$.
Thanks
2. Originally Posted by zhupolongjoe
Let $T$ be a tree of order $n$. Show that the size of the complement $\overline{T}$ of $T$ is the same as the size of $K_{n-1}$.
If $T$ is a tree with $n$ vertices then you know that $T$ has $n-1$ edges.
The complement $\overline{T}$ has $\binom{n}{2}-(n-1)$ edges.
The graph $K_{n-1}$ has $\binom{n-1}{2}$ edges.
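To make the final step explicit: these two counts agree, since $\binom{n}{2}-(n-1)=\frac{n(n-1)}{2}-(n-1)=\frac{(n-1)(n-2)}{2}=\binom{n-1}{2}$.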
https://www.physicsoverflow.org/24221/mathematical-expression-for-map-from-%24-0-1-%24-to-%24s-2%24
# Mathematical expression for map from $[0,1]$ to $S^2$
A topological space is called arcwise connected if, for any points $x,y\in X$, there exists a continuous map $f: [0,1]\rightarrow X$ such that $f(0)=x$ and $f(1)=y$. Although this is intuitively understandable, what does such a map actually look like, mathematically, for $S^2$?
According to this definition, is there a way to show that $SU(2)$ is connected but $O(3)$ is not? As I continuously change the group parameters (up to their allowed ranges), can I show in the first case that I can reach all points on the $SU(2)$ manifold, while in the case of $O(3)$ I cannot exhaust all points? Only this would prove connectedness in the sense of this definition, right?
This post imported from StackExchange Mathematics at 2014-10-05 10:04 (UTC), posted by SE-user Roopam Sinha
closed Jun 22, 2015
Undergrad level question, voted to close, but one way to show that a Lie group is connected and construct explicit connecting curves is using the exponential map, so that every point can be written (nonuniquely) as exp(A) for A in the Lie algebra. The Lie algebra is a real vector space, so you can connect any two points A and B by the straight line tA + (1-t)B, and then exponentiating, exp(tA + (1-t)B) interpolates between exp(A) and exp(B).
SO(3) and SU(2) are both connected. O(3) is disconnected because there is a determinant for the rotation matrix which can be either +1 or -1. This is a complete answer, but I made it a comment because I voted to close this question.
Also, this is an early duplicate of the much clearer and well answered: http://physicsoverflow.org/24226/connectedness-of-%24o-3-%24-group-manifold , which is also technically undergrad level, but who cares, it's a well phrased question with clear answers.
(This answer was written to address the original version of the question, before the second paragraph was added.)
As an example, on $S^2$ it's always possible to define a coordinate patch
$$\psi:S^2\rightarrow R^3$$
using typical spherical coordinates $(r,\theta,\phi)$, defined such that $\psi(x)=(1,0,0)$ and $\psi(y)=(1,0,\phi_y)$. The coordinate patch can be defined everywhere on $S^2$ except in some neighborhoods of the poles. Like any coordinate patch, $\psi$ is injective (although it isn't surjective), so $\psi^{-1}$ is defined on $\psi$'s image. Then the simplest possible definition for the map $f$ using those coordinates would be
$$f(\lambda)=\psi^{-1}(1,0,\lambda \phi_y)\ .$$
There are of course many other possible ways to define $f$, which take the image of $f$ along different paths on $S^2$ between $x$ and $y$.
This post imported from StackExchange Mathematics at 2014-10-05 10:04 (UTC), posted by SE-user Red Act
answered Oct 5, 2014
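A concrete supplement (not part of the original thread): for $x \neq \pm y$ an explicit connecting path is the great-circle arc
$$f(t)=\frac{\sin((1-t)\theta)}{\sin\theta}\,x+\frac{\sin(t\theta)}{\sin\theta}\,y,\qquad \theta=\arccos(x\cdot y),$$
which satisfies $f(0)=x$, $f(1)=y$ and $\|f(t)\|=1$ for all $t\in[0,1]$; for $y=-x$, concatenate two such arcs through any intermediate point.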
$f:[0,1]\to S^2$ given by $f(x)=(\sin x\cos x,\cos^2 x,\sin x)\in S^2$. I think it is very easy to check now: take any two arbitrary points of the sphere and connect them by a path like $tx+(1-t)y$, with $x,y\in S^2$.
This post imported from StackExchange Mathematics at 2014-10-05 10:04 (UTC), posted by SE-user Une Femme Douce
answered Oct 5, 2014
This is nonsense -- you just mapped a line to a particular curve in the sphere. The correct version of this argument uses the exponential map, or on the sphere, the manifold exponential map, not this, which is meaningless.
https://math.stackexchange.com/questions/1098053/proof-of-exists-xpx-rightarrow-forall-y-py
Proof of $\exists x(P(x) \Rightarrow \forall y P(y))$
Exercise 31 of chapter 3.5 in How To Prove It by Velleman is proving this statement: $\exists x(P(x) \Rightarrow \forall y P(y))$.
(Note: The proof shouldn't be formal, but in the "usual" theorem-proving style in mathematics)
Of course I've given it a try and came up with this:
Proof: Suppose $\neg \exists x(P(x) \Rightarrow \forall y P(y))$. This is equivalent to $\forall x(P(x) \wedge \neg \forall y P(y))$, and since the universal quantifier distributes over conjunctions, it follows that $\forall x P(x)$ and $\forall x \neg \forall y P(y)$. Thus, for any $x_0, \neg \forall y P(y)$. But this contradicts $\forall x P(x)$, therefore $\exists x(P(x) \Rightarrow \forall y P(y))$.
I'm not sure if the contradiction is legal, so I'd like to know if there are any flaws in my proof.
Thanks!
• It's correct. This is the Drinker's Paradox. Edit: See alternative informal proofs here and here. – Git Gud Jan 9 '15 at 19:10
• math.stackexchange.com/questions/412387/… – MJD Jan 9 '15 at 19:28
• For a slight variation of DP (more intuitive IMHO), see my blog posting "The Drinker's Paradox" (originally posted June 3, 2014) at dcproof.wordpress.com – Dan Christensen Jan 11 '15 at 18:01
• I actually already read all those links (including the blog posting) before asking here, looking for clues about the correctness of my proof – user2103480 Jan 11 '15 at 18:41
"In every (populated) bar there is a person such that, if that person is drinking, then everyone is drinking".
It takes advantage of the 2 cases of vacuous implication:
• (1) "False implies anything"
• (2) "Anything implies true"
So divide the theorem into 2 cases:
Case (1): Someone is not drinking. Then that person is an example of vacuous implication; specifically, if a person who is not drinking is drinking, then anything follows.
Case (2): Everyone is drinking. That is the other case of vacuous implication, if "anything" then everyone is drinking.
The error in your given proof is that you haven't explicitly stated the domain of $x$. In an empty universe, $\forall x ~~(p(x))$ is true no matter what $p$ is.
• There was no specified domain given in the exercise...Does that mean I have to divide my proof into "Case 1: Universe is empty, then vacuously true" and "Case 2: Universe is not empty, ... (rest of my proof above)" for the proof to be valid? – user2103480 Jan 9 '15 at 19:31
• Case 1, universe is empty means that the statement is false. Remember that you negated your claim and it resulted in a $\forall$. But yes, you are otherwise correct. – DanielV Jan 9 '15 at 20:23
• Oh, so the the theorem is only correct if the universe isn't empty? – user2103480 Jan 9 '15 at 20:29
• You are the one studying proofs, why do you ask my opinion? It is true if you can prove it and it is false if you can disprove it. I can say that the definition of $\forall x ~~(p(x))$ makes it false when the universe is empty, no matter what $p$ is. The rest you should be able to prove. – DanielV Jan 9 '15 at 20:33
• I'd say that, if the universe is empty, since $\forall x \in \emptyset (P(x))$ is always true and the negation of the statement is of the form $\forall x (P(x))$, the negation is always true. Thanks for your time, I'll probably accept the answer later for others to have a shot (you already got my +1). – user2103480 Jan 9 '15 at 21:12
To confirm the result, observe:
\begin{align} \exists x\Big(P(x) &\to \big(\forall y \;P(y)\big)\Big) \\ &\Updownarrow &\text{implication equivalence}\\ \exists x \Big(\neg P(x) &\vee \big(\forall y \;P(y)\big)\Big) \\ & \Updownarrow & \text{quantifier movement}\\ \big(\exists x \;\neg P(x)\big) &\vee \big(\forall y\; P(y)\big) \\ & \Updownarrow & \text{quantifier negation}\\ \neg \big(\forall x\; P(x)\big) &\vee \big(\forall y\;P(y)\big) \\ &\Updownarrow & \text{change of variable}\\ \neg \big(\forall x\; P(x)\big) &\vee \big(\forall x\;P(x)\big) \\ &\Updownarrow & \text{tautology: } \neg A \vee A \\ & {\large\top} \end{align}
Remark:
The statement looks like it says "if there is one example then it's true for all". However, that would be: $\big(\exists x\; P(x)\big)\to\big(\forall y\;P(y)\big)$, which is not the same thing at all.
• I believe the quantifier movement step is a bit of misleading. I think the relevant "axioms" would be $\exists x ~ (P(x) \lor Q(x)) \vdash (\exists x ~ P(x)) \lor (\exists x ~ Q(x))$, and that $\exists x \forall y Q(y) \vdash \forall y Q(y)$ in the case of a non empty universe. – DanielV Jan 13 '15 at 4:45
• In a similar manner $\forall x ~ (P(x) \land Q(x)) \vdash (\forall x ~ P(x)) \land (\forall x ~ Q(x))$ and $\forall x \neg \forall y P(y) \vdash \neg \forall y P(y)$ are the "hinges" of my proof I guess – user2103480 Jan 13 '15 at 13:21
http://bib-pubdb1.desy.de/collection/PUB_FS-SCS-20131031?ln=en&as=1
# FS-SCS
- 2018-01-05 [PUBDB-2018-00199] Journal Article, et al.: Photodissociation of Aligned $\mathrm{CH_3I}$ and $\mathrm{C_{6}H_{3}F_{2}I}$ Molecules Probed with Time-Resolved Coulomb Explosion Imaging by Site-Selective XUV Ionization. Structural Dynamics 5(1), 014301 (2018) [10.1063/1.4998648]. We explore time-resolved Coulomb explosion induced by intense, extreme ultraviolet (XUV) femtosecond pulses from a free-electron laser as a method to image photo-induced molecular dynamics in two molecules, iodomethane and 2,6-difluoroiodobenzene. At an excitation wavelength of 267 nm, the dominant reaction pathway in both molecules is neutral dissociation via cleavage of the carbon–iodine bond. [...]
- 2017-11-13 [PUBDB-2017-11924] Poster, et al.: Ultrafast fragmentation dynamics of Phenanthrene and Pyrene after photoionization at 30.3 nm wavelength. DESY Users Meeting 2017, Hamburg, Germany, 25 Jan 2017 - 27 Jan 2017.
- 2017-11-09 [PUBDB-2017-11808] Journal Article, et al.: Coulomb-explosion imaging of concurrent $\mathrm{CH_2BrI}$ photodissociation dynamics. Physical Review A 96(4), 043415 (2017) [10.1103/PhysRevA.96.043415]. The dynamics following laser-induced molecular photodissociation of gas-phase CH$_{2}$BrI at 271.6 nm were investigated by time-resolved Coulomb-explosion imaging using intense near-IR femtosecond laser pulses. The observed delay-dependent photofragment momenta reveal that CH$_{2}$BrI undergoes C-I cleavage, depositing 65.6% of the available energy into internal product states, and that absorption of a second UV photon breaks the C-Br bond of CH$_{2}$Br. [...]
- 2017-10-25 [PUBDB-2017-11558] Journal Article, et al.: Near $L$-edge Single and Multiple Photoionization of Singly Charged Iron Ions. The Astrophysical Journal 849(1), 5 (2017) [10.3847/1538-4357/aa8fcc]. Absolute cross sections for m-fold photoionization (m=1,...,6) of Fe+ by a single photon were measured employing the photon-ion merged-beams setup PIPE at the PETRA III synchrotron light source, operated by DESY in Hamburg, Germany. Photon energies were in the range 680-920 eV, which covers the photoionization resonances associated with 2p and 2s excitation to higher atomic shells as well as the thresholds for 2p and 2s ionization. [...]
- 2017-08-22 [PUBDB-2017-09619] Journal Article, et al.: From micromolecules to macromolecules structural dynamics properties: ultrafast chiroscopy with synchrotron and free-electron laser radiation. Acta Crystallographica A 72(a1), s410 (2016) [10.1107/S2053273316094018].
- 2017-08-21 [PUBDB-2017-09608] Journal Article, et al.: Electron- and proton-impact excitation of He-like uranium. Journal of Physics: Conference Series 635(2), 022063 (2015) [10.1088/1742-6596/635/2/022063]. In this contribution, we present an experimental and theoretical study of the proton- and electron-impact excitation effects in helium-like uranium ions in relativistic collisions with different gaseous targets. The experiment was carried out at the experimental storage ring at GSI Darmstadt. [...]
- 2017-08-21 [PUBDB-2017-09601] Journal Article, et al.: An endstation for resonant inelastic X-ray scattering studies of solid and liquid samples. Journal of Synchrotron Radiation 24(1), 302-306 (2017) [10.1107/S1600577516016611]. A novel experimental setup is presented for resonant inelastic X-ray scattering investigations of solid and liquid samples in the soft X-ray region for studying the complex electronic configuration of (bio)chemical systems. The uniqueness of the apparatus is its high flexibility combined with optimal energy resolution and energy range ratio. [...]
- 2017-08-19 [PUBDB-2017-09551] Journal Article/Contribution to a conference proceedings, et al.: Photoionization and photofragmentation of ${{\rm{Sc}}}_{3}{{\rm{N@C}}}_{80}^{+}$ at energies from the carbon K edge to the scandium L and nitrogen K edges. XXX International Conference on Photonic, Electronic, and Atomic Collisions, Cairns, Australia, 25 Jul 2017 - 1 Aug 2017. Photoionization of endohedral ${{\rm{Sc}}}_{3}{{\rm{N@C}}}_{80}^{+}$ fullerene ions is investigated using synchrotron radiation in the energy range 280 to 420 eV. In the product channels ${{\rm{Sc}}}_{3}{{\rm{N@C}}}_{80}^{2+}$ and ${{\rm{Sc}}}_{3}{{\rm{N@C}}}_{78}^{2+}$, clear signatures of the cross section contributions arising from L-shell photoabsorption by the encapsulated Sc atoms have been detected. [...]
- 2017-08-19 [PUBDB-2017-09549] Journal Article/Contribution to a conference proceedings, et al.: Multi-electron processes in K-shell double and triple photodetachment of oxygen anions. XXX International Conference on Photonic, Electronic, and Atomic Collisions, Cairns, Australia, 25 Jul 2017 - 1 Aug 2017. Absolute cross sections for double and triple photodetachment of O- ions have been measured at photon energies around the threshold for K-shell ionization using the photon-ion merged-beams technique at a synchrotron light source. In addition, corresponding large-scale ab-initio calculations were carried out at a level of complexity that has never been invoked before. [...]
- 2017-08-18 [PUBDB-2017-09548] Journal Article, et al.: Transmission zone plates as analyzers for efficient parallel 2D RIXS-mapping. Scientific Reports 7(1), 8849 (2017) [10.1038/s41598-017-09052-0]. We have implemented and successfully tested an off-axis transmission Fresnel zone plate as spectral analyzer for resonant inelastic X-ray scattering (RIXS). The imaging capabilities of zone plates allow for advanced two-dimensional (2D) mapping applications. [...]
http://math.okstate.edu/people/lebl/osu4233-f18/
# Math 4233 - Intermediate Differential Equations
http://math.okstate.edu/people/lebl/osu4233-f18/
Lecture: TR 2:00-3:15pm MSCS 514
## Lecturer:
Jiří Lebl
Web: http://math.okstate.edu/people/lebl/
Office: MSCS 505
Office hours: 3:30-5:00pm Tu Th, and by appointment at other times.
Office phone: (405) 744-7750
Email: lebl at okstate dot edu
## Text:
Notes on Diffy Qs, by Yours Truly,
or on amazon as a cheap paperback.
The plan is to cover chapters 3, 8, 4, (and perhaps 5 if there's any time left) in that order.
## Grading:

We will be using Gradescope (http://gradescope.com) for all exams. Create an account. I will provide (in class) an Add code that will add you to the class.
The grading scheme is given below:
$\begin{multline} \text{Grade} = 0.2 \times \text{(Homework)} + 0.2 \times \text{(Exam 1)} \\ + 0.2 \times \text{(Exam 2)} + 0.4 \times \text{(Final Exam)} \end{multline}$
To account for a bad exam day, etc., an alternative grade will be computed as follows:
$\begin{multline} \text{Grade} = 0.2 \times \text{(Homework)} + 0.1 \times \text{(Exam 1)} \\ + 0.1 \times \text{(Exam 2)} + 0.58 \times \text{(Final Exam)} \end{multline}$
A second alternative (to account for a bad final exam day) will be as follows:
$\begin{multline} \text{Grade} = 0.2 \times \text{(Homework)} + 0.3 \times \text{(Exam 1)} \\ + 0.3 \times \text{(Exam 2)} + 0.18 \times \text{(Final Exam)} \end{multline}$
The highest of the three will be used for your grade. Notice that in the alternative schemes, the weights do not sum to 100 percent. That is on purpose! You should count on the first scheme; the second and third schemes are only there to account for things going terribly, terribly wrong on one of your exams.
## Exams:
Exam 1: Tuesday, September 25, (in class)
Exam 2: Thursday, November 1, (in class)
Final Exam: Thursday, December 13, 2:00-3:50pm (same room as the class), Comprehensive, think of the final exam as half exam 3 and half comprehensive final
Exam Policies: No books, calculators or computers allowed on the exams or the final. One page (one sided) of notes allowed on the exams.
## Homework:
Assigned weekly (some weeks may be skipped). Homework will be done using WeBWorK. Go to:
https://webwork.math.okstate.edu/webwork2/MATH-4233-F18/
You have been/will be sent instruction on how to log in by email sometime in the first week of class.
Lowest 2 homework grades dropped (so no late homeworks).
## Missed Work:
No makeup or late homework (the two lowest are dropped anyhow). For exams, there will be reasonable accommodation if you have a valid and documented reason, and the documentation is provided in advance unless absolutely impossible. If you have a university approved (see the syllabus attachment) final exam conflict, you must tell me at least two weeks before the final exam week, so that we can figure out what to do.
https://www.physicsforums.com/threads/transition-amplitudes-and-relation-between-wavefunctions.157299/
# Transition amplitudes and relation between wavefunctions
**The dipole transition amplitude for the transition (nlm) → (n'l'm') is given by**
$$\int \psi_{n'l'm'}^* \vec{r} \psi_{nlm} d\tau$$
Is the dipole transition amplitude simply a measure of how likely a certain transition is?
Here's another question:
In converting $$\psi_{nlm_{l}m_{s}} = C_{n}\psi_{nljm_{j}}$$
my prof said that finding the $C_{n}$ would involve a rather messy calculation involving group theory... how does that come about? How does a simple relation like j = l + s bring about something like that?
dextercioby (Homework Helper):
> **The dipole transition amplitude for the transition (nlm) → (n'l'm') is given by**
> $$\int \psi_{n'l'm'}^* \vec{r} \psi_{nlm} d\tau$$
> Is the dipole transition amplitude simply a measure of how likely a certain transition is?
It's a probability amplitude. Its square modulus gives the transition probability.
> Here's another question:
> In converting $$\psi_{nlm_{l}m_{s}} = C_{n}\psi_{nljm_{j}}$$
> my prof said that finding the $C_{n}$ would involve a rather messy calculation involving group theory... how does that come about? How does a simple relation like j = l + s bring about something like that?
The Clebsch-Gordan coefficients that you need are a result of group theory. Angular momentum theory (including the addition of angular momenta) is a result of group theory.
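For what it's worth, individual Clebsch-Gordan coefficients can nowadays be computed symbolically; for example, with sympy (my own illustration, not part of the original thread):

```python
from sympy import S
from sympy.physics.quantum.cg import CG

# <l=1, m_l=0; s=1/2, m_s=1/2 | j=3/2, m_j=1/2> for an l=1 electron
coeff = CG(1, 0, S(1)/2, S(1)/2, S(3)/2, S(1)/2)
print(coeff.doit())  # sqrt(6)/3, i.e. sqrt(2/3)
```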
https://astrophytheory.com/page/4/
# Simple Harmonic Oscillators (SHOs) (Part I)
We all see this happening in everyday life: objects moving back and forth. In physics, these objects are called simple harmonic oscillators. While I was taking my undergraduate physics course, one of my favorite topics was SHOs because of the way the mathematics and physics work in tandem to explain something we see every day. The purpose of this post is to engage followers and get them to think about this phenomenon in a more critical manner.
Every object has an equilibrium position at which it tends to remain at rest; if it is subjected to some perturbation, the object will oscillate about this equilibrium point until it resumes its state of rest. If we pull or push an object, the restoring force obeys Hooke's law of elasticity, that is, $F_{A}=-k\textbf{r}$. If we consider other forces we also find that there exists a force balance between the restoring force, a resistance force, and a forcing function, so the total force has the form
$F=F_{forcing}+F_{A}-F_{R}= F_{forcing}-k\textbf{r}-\beta \dot{\textbf{r}}; (1)$
note that we are assuming that the resistance force is proportional to the speed of the object. Suppose further that we are inducing these oscillations in a periodic manner, given by
$F_{forcing}=F_{0}\cos{\omega t}. (2)$
Now, to be more precise, we really should define the position vector. So, $\textbf{r}=x\hat{i}+y\hat{j}+z\hat{k}$. Therefore, we actually have a system of three second order linear non-homogeneous ordinary differential equations in three variables:
$m\ddot{ x}+\beta \dot{x}+kx=F_{0}\cos{\omega t}, (3.1)$
$m\ddot{y}+\beta \dot{y}+ky=F_{0}\cos{\omega t}, (3.2)$
$m\ddot{z}+\beta \dot{z}+kz=F_{0}\cos{\omega t}. (3.3)$
(QUICK NOTE: In the above equations, I am using the Newtonian notation for derivatives, only for convenience.) I will just make some simplifications. I will divide both sides by the mass, and I will define the following parameters: $\gamma \equiv \beta/m$, $\omega_{0}^{2} \equiv k/m$, and $\alpha \equiv F_{0}/m$. Furthermore, I am only going to consider the $y$ component of this system. Thus, the equation that we seek to solve is
$\ddot{y}+\gamma \dot{y}+\omega_{0}^{2}y=\alpha\cos{\omega t}. (4)$
Now, in order to solve this non-homogeneous equation, we use the method of undetermined coefficients. By this we mean to say that the general solution to the non-homogeneous equation is of the form
$y = Ay_{1}(t)+By_{2}(t)+Y(t), (5)$
where $Y(t)$ is the particular solution to the non-homogeneous equation and the other two terms are the fundamental solutions of the homogeneous equation:
$\ddot{y}_{h}+\gamma \dot{y}_{h}+\omega_{0}^{2} y_{h} = 0. (6)$
Let $y_{h}(t)=D\exp{(\lambda t)}$. Taking the first and second time derivatives, we get $\dot{y}_{h}(t)=\lambda D\exp{(\lambda t)}$ and $\ddot{y}_{h}(t)=\lambda^{2}D\exp{(\lambda t)}$. Therefore, Eq. (6) becomes, after factoring out the exponential term,
$D\exp{(\lambda t)}[\lambda^{2}+\gamma \lambda +\omega_{0}^{2}]=0. (7)$
Since $D\exp{(\lambda t)}\neq 0$, it follows that
$\lambda^{2}+\gamma \lambda +\omega_{0}^{2}=0. (8)$
This is a quadratic equation in $\lambda$ whose solution is obtained by the quadratic formula:
$\lambda =\frac{-\gamma \pm \sqrt[]{\gamma^{2}-4\omega_{0}^{2}}}{2}. (9)$
Part II of this post will discuss the three distinct cases in which the discriminant $\gamma^{2}-4\omega_{0}^{2}$ is greater than, equal to, or less than zero, and the consequent solutions. I will also obtain the solution to the non-homogeneous equation in that post.
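Pending the analytic treatment in Part II, Eq. (4) is also easy to explore numerically; here is a minimal sketch of mine using scipy, with purely illustrative parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, omega0, alpha, omega = 0.5, 2.0, 1.0, 1.8  # illustrative values only

def rhs(t, u):
    y, ydot = u
    # Eq. (4): y'' + gamma*y' + omega0^2 * y = alpha*cos(omega*t)
    return [ydot, alpha*np.cos(omega*t) - gamma*ydot - omega0**2*y]

sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0], max_step=0.01)
print(sol.y[0, -5:])  # late-time displacement: the transient has decayed, the forced response remains
```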
# Monte Carlo Simulations of Radiative Transfer: Project Overview
The goal of this project was to simulate radiative transfer in the circumbinary disk of GG Tau by using statistical methods (i.e. Monte Carlo methods). To provide some context for the project, I will describe the astrophysical system that I considered, and I will state the objectives of the project.
Of the many stars in the night sky, a significant number, when you are able to resolve them, are in fact binary stars. These are star systems in which two stars orbit a common center of mass known as a barycenter. GG Tau is a close binary system: one in which the separation between the two stars is comparable to the diameters of the stars themselves.
The system of GG Tau has four known components with a hypothesized fifth component. There is a disk of material encircling the entire system which we call a circumbinary disk. Additionally, there is also an inner disk of material that surrounds the primary and secondary components referred to as a circumstellar disk.
The goal of this project was to use Monte Carlo methods to simulate radiative transfer processes present in GG Tau. This is accomplished by sampling two essential quantities from cumulative distribution functions of the form
$\displaystyle \xi=\int_{0}^{L} \mathcal{P}(x)dx\equiv \psi(x_{0}), (1)$
using FORTRAN code. I will discuss the mathematical background of probability and Monte Carlo methods (stochastic processes) in more detail in a later post. The general process of radiative transfer is relatively simple. Starting with an emitter, radiation leaves its emission point and travels a certain distance, at which point it is either scattered into an angle different from the incidence angle or absorbed by the material. In the context of this project, the emitters are the primary and secondary components of GG Tau. The radiation emitted is electromagnetic radiation, or light. This light travels a distance $L$, after which it is either scattered or absorbed. The code uses Eq.(1) (a cumulative distribution function) to sample optical depths and scattering angles and thereby calculate the distance traveled $L$. I will reserve the exact algorithm for a future post as well. In the next post, I will discuss the basics of radiative transfer theory as presented in the text “Radiative Transfer” by S. Chandrasekhar, 1960.
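To give a flavor of how Eq. (1) is used (this is my own Python illustration, not the project's FORTRAN code): for photon path lengths the probability density is $\mathcal{P}(\tau)=e^{-\tau}$, so the cumulative distribution $\xi = 1-e^{-\tau}$ inverts to $\tau=-\ln(1-\xi)$, which can be sampled with uniform random numbers:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
xi = rng.uniform(size=100_000)   # uniform deviates on [0, 1)
tau = -np.log(1.0 - xi)          # optical depths distributed as P(tau) = exp(-tau)
print(tau.mean())                # ~1.0, the mean of the exponential distribution
```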
The following are the sources used in this project:
Wood, K., Whitney, B., Bjorkman, J., and Wolff M., 2013. Introduction to Monte Carlo Radiative Transfer.
Whitney, B. A., 2011. Monte Carlo Radiative Transfer. Bull. Astr. Soc. India.
Kitamura, Y., Kawabe, R., Omodaka, T., Ishiguro, M., and Miyama, S., 1994. Rotating Protoplanetary Gas Disk in GG Tau. ASP Conference Series. Vol. 59.
Piètu, V., Gueth, F., Hily-Blant, P., Schuster, K.F., and Pety, J., 2011. High Resolution Imaging of the GG Tauri system at 267 GHz. Astronomy & Astrophysics. 528. A81.
Skrutskie, M.F., Snell, R.L., Strom, K.M., Strom, S.E., and Edwards, S., 1993. Detection of Circumstellar Gas Associated with GG Tauri. The Astrophysical Journal. 409:422-428.
Choudhuri, A.R., 2010. Astrophysics for Physicists. 2.
Carroll B.W., Ostlie, D.A., 2007. An Introduction to Modern Astrophysics. 9,10.
Chandrasekhar, S., 1960. Radiative Transfer. 1.
Schroeder, D., 2000. Introduction to Thermal Physics. 8.
Ross, S., 2010. Introduction to Probability Models. 2,4,11.
I obtained the FORTRAN code from K. Wood via the following link. (For the sake of completeness the URL is: www-star.st-and.ac.uk/~kw25/research/montecarlo/montecarlo.html)
# Monte Carlo Simulations of Radiative Transfer: Overview
FEATURED IMAGE: This is one of the plots that I will be explaining in Post IV. This plot represents the radiation flux emanating from the primary and secondary components of the system considered.
I realize I haven’t posted anything new recently. This is because I have been working on finishing the research project that I worked on during my final semester as an undergraduate.
However, I now intend to share the project on this blog. I will be starting a series of posts in which I will attempt to explain my project and the results I have obtained and open up a dialogue as to any improvements and/or questions that you may have.
Here is the basic overview of the series and projected content:
Post I: Project Overview
In this first post, I will introduce the project. I will describe the goals of the project and, in particular, I will describe the nature of the system considered. I will also give a list of references used and mention any and all acknowledgements.
Posts II & III: Radiative Transfer Theory
These posts will most likely contain the bulk of the concepts used in the project. I will be defining the quantities used in the research, developing the radiative transfer equation, formulating the problems of radiative transfer, and using the more elementary methods to solve the radiative transfer equation.
Post IV: Monte Carlo Simulations
This post will largely be concerned with the mathematics and physics of Monte Carlo simulations, both in the context of probability theory and Ising models. I will describe both and explain how they apply to Monte Carlo simulations of radiative transfer.
Post V: Results
I will discuss the results that the simulations produced and explain the data displayed in the posts. I will then discuss the conclusions that I have made and I will also explain the shortcomings of the methods used to produce the results I obtained. I will then open it up to questions or suggestions.
# Basic Equations of Ideal One-Fluid Magnetohydrodynamics (Part III & IV)
FEATURED IMAGE CREDIT: U.R. Christensen from the Nature article: “Earth Science: A Sheet-Metal Dynamo”
The image shows the overall distortion of magnetic field lines inside the core and its effect on the magnetic field outside.
This post shall continue to derive the principal equations of ideal one-fluid magnetohydrodynamics. Here I shall derive the continuity equation and the vorticity equation. Furthermore, I shall show that the Boussinesq approximation results in a divergence-free velocity field. I have consulted the following works while researching this topic:
Davidson, P.A., 2001. An Introduction to Magnetohydrodynamics. 3-4,6.
Murphy, N., 2016. Ideal Magnetohydrodynamics. Lecture presentation. Harvard-Smithsonian Center for Astrophysics.
Consider a fluid element through which fluid passes. The mass of this element can be represented as the volume integral of the material density $\rho$:
$\displaystyle m=\iiint \rho dV. (1)$
Taking the first-order time derivative of the mass, we arrive at
$\displaystyle \dot{m}=\frac{d}{dt}\iiint\rho dV = \iiint \frac{\partial \rho}{\partial t}dV. (2)$
We now make a slight notation change; let the triple integral be represented as
$\displaystyle \dot{m} = \int_{V}\frac{\partial \rho}{\partial t}dV. (3)$
Now, the mass flux through a surface element $d\textbf{S} = \hat{n}\,dS$ of the bounding surface is $\rho \textbf{v} \cdot d\textbf{S}$. Since outward flux decreases the mass inside the volume, the integral of the mass flux gives
$\displaystyle \dot{m} = -\oint_{\partial V}\rho \textbf{v}\cdot d\textbf{S}. (4)$
We may write this as
$\displaystyle -\oint_{\partial V}\rho \textbf{v}\cdot d\textbf{S}= \int_{V}\frac{\partial \rho}{\partial t}dV. (5)$
Now, we invoke Gauss’s theorem (also known as the Divergence Theorem) of the form
$\displaystyle \int_{V}(\nabla \cdot \rho \textbf{v})dV=\oint_{\partial V}\rho \textbf{v}\cdot d\textbf{S}, (6)$
we get
$\displaystyle \int_{V}\bigg\{\frac{\partial \rho}{\partial t}+(\nabla \cdot \rho \textbf{v})\bigg\}dV=0. (7)$
Since Eq.(7) must hold for an arbitrary volume $V$, the integrand itself must vanish. Therefore, we get the continuity equation:
$\displaystyle \frac{\partial \rho}{\partial t}+ (\nabla \cdot \rho \textbf{v})=0. (8)$
Recall the following momentum equation from a previous post (specifically Eq. (6) of that post):
$\displaystyle \frac{\partial \textbf{v}}{\partial t}+(\textbf{v}\cdot \nabla)\textbf{v}=-\frac{1}{\rho}\nabla\bigg\{P+\frac{B^{2}}{2\mu_{0}}\bigg\}+\frac{(\textbf{B}\cdot \nabla)\textbf{B}}{\mu_{0}\rho}.$
Now we define the concept of vorticity. Conceptually, this refers to the rotation of the fluid within its velocity field. Mathematically, we define the vorticity $\Omega$ to be the curl of the fluid velocity:
$\Omega \equiv \nabla \times \textbf{v}. (9)$
Now recall the vector identity
$\displaystyle \nabla \frac{\textbf{v}^{2}}{2}=(\textbf{v}\cdot \nabla)\textbf{v}+\textbf{v}\times \Omega, (10.1)$
which upon rearrangement is
$(\textbf{v}\cdot \nabla)\textbf{v}=\nabla \frac{\textbf{v}^{2}}{2}-\textbf{v}\times\Omega. (10.2)$
Using Eq.(10.2) in the momentum equation above (dropping the magnetic terms for this hydrodynamic step and restoring the viscous term $\nu \nabla^{2}\textbf{v}$), we get
$\displaystyle \frac{\partial \textbf{v}}{\partial t}=\textbf{v} \times \Omega -\nabla \bigg\{\frac{P}{\rho}+\frac{\textbf{v}^{2}}{2} \bigg\}+\nu \nabla^{2} \textbf{v}. (11)$
Recall our definition of vorticity. Upon taking the curl of Eq.(11), the gradient term drops out and we arrive at a variation of the induction equation (see the earlier post):
$\displaystyle \frac{\partial \Omega}{\partial t}=\nabla \times (\textbf{v} \times \Omega)+\nu \nabla^{2}\Omega. (12.1)$
Next, we invoke another vector identity, valid here since $\nabla\cdot\Omega=0$ identically and the flow will be shown below to be incompressible:
$\displaystyle \nabla \times (\textbf{v} \times \Omega) = (\Omega \cdot \nabla)\textbf{v}-(\textbf{v} \cdot \nabla)\Omega. (12.2)$
Using Eq. (12.2) in Eq. (12.1) yields the vorticity equation
$\displaystyle \frac{\partial \Omega}{\partial t}+(\textbf{v} \cdot \nabla)\Omega = (\Omega \cdot \nabla)\textbf{v}+\nu \nabla^{2}\Omega. (13)$
Returning to the continuity equation:
$\displaystyle \frac{\partial \rho}{\partial t}+(\nabla \cdot \rho \textbf{v})=0.$
I will now show that the velocity field is divergence-free. In other words, there are no sources in the fluid velocity field. The Boussinesq approximation's main assertion is that of isopycnal flow (i.e. flow of constant density). Therefore, let $\rho = \rho_{0}>0$. Substitution into the continuity equation yields the following
$\displaystyle 0+ \rho_{0}(\nabla \cdot \textbf{v})=0. (14)$
The temporal derivative vanishes, since the derivative of a constant is zero. This just leaves $\rho_{0}(\nabla \cdot \textbf{v})=0$. Now, since $\rho_{0}\neq 0$, this then means that
$\displaystyle (\nabla \cdot \textbf{v})=0. (15)$
Hence, the divergence of the fluid’s velocity field is zero.
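As a quick numerical illustration of Eq. (15) (my own sketch, not from the references above): the two-dimensional Taylor-Green vortex $\textbf{v}=(\sin{x}\cos{y},\,-\cos{x}\sin{y})$ is exactly divergence-free, and a finite-difference divergence confirms this up to truncation error:

```python
import numpy as np

# 2D Taylor-Green vortex: an exactly divergence-free (incompressible) velocity field.
x = np.linspace(0.0, 2.0*np.pi, 256)
y = np.linspace(0.0, 2.0*np.pi, 256)
X, Y = np.meshgrid(x, y, indexing='ij')
u = np.sin(X)*np.cos(Y)
v = -np.cos(X)*np.sin(Y)

# Central-difference divergence: du/dx + dv/dy
div = np.gradient(u, x, axis=0) + np.gradient(v, y, axis=1)
print(np.abs(div).max())  # ~1e-4: zero up to finite-difference error
```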
# Solution to Laplace’s Equation
This post deals with the familiar (to the physics student) Laplace’s equation. I am solving this equation in the context of physics, instead of a pure mathematical perspective. This problem is considered most extensively in the context of electrostatics. This equation is usually considered in the spherical polar coordinate system. A lot of finer details are also considered in a mathematical physics course where the topic of spherical harmonics is discussed. Assuming spherical-polar coordinates, Laplace’s equation is
$\displaystyle \frac{1}{r^{2}}\frac{\partial}{\partial r}\bigg\{r^{2}\frac{\partial \psi}{\partial r}\bigg\}+\frac{1}{r^{2}\sin{\theta}}\frac{\partial}{\partial \theta}\bigg\{\sin{\theta}\frac{\partial \psi}{\partial \theta}\bigg\}+\frac{1}{r^{2}\sin^{2}{\theta}}\frac{\partial^{2}\psi}{\partial \phi^{2}}=0. (1)$
Suppose that the function $\psi\rightarrow \psi(r,\theta,\phi)$. Furthermore, by the method of separation of variables we suppose that the solution is a product of eigenfunctions of the form $R(r)Y(\theta,\phi)$. Hence, Laplace’s equation becomes
$\displaystyle \frac{Y}{r^{2}}\frac{d}{dr}\bigg\{r^{2}\frac{dR}{dr}\bigg\}+\frac{R}{r^{2}\sin{\theta}}\frac{\partial}{\partial \theta}\bigg\{\sin{\theta}\frac{\partial Y}{\partial \theta}\bigg\}+\frac{R}{r^{2}\sin^{2}{\theta}}\bigg\{\frac{\partial^{2}Y}{\partial \phi^{2}}\bigg\}=0. (2)$
Furthermore, we can separate the term $Y(\theta, \phi)$ into $\Theta(\theta)\Phi(\phi)$. Rewriting (2) and multiplying by $r^{2}\sin^{2}{\theta}\,R^{-1}\Theta^{-1}\Phi^{-1}$, we get
$\displaystyle \frac{\sin^{2}{\theta}}{R}\frac{d}{dr}\bigg\{r^{2}\frac{dR}{dr}\bigg\}+\frac{\sin{\theta}}{\Theta}\frac{d}{d\theta}\bigg\{\sin{\theta}\frac{d\Theta}{d\theta}\bigg\}+\frac{1}{\Phi}\frac{d^{2}\Phi}{d\phi^{2}}=0.(3)$
Bringing the radial and polar components to the other side of the equation and setting the azimuthal component equal to a separation constant $-m^{2}$ yields
$\displaystyle -\frac{\sin^{2}{\theta}}{R}\frac{d}{dr}\bigg\{r^{2}\frac{dR}{dr}\bigg\}-\frac{\sin{\theta}}{\Theta}\frac{d}{d\theta}\bigg\{\sin{\theta}\frac{d\Theta}{d\theta}\bigg\}=\frac{1}{\Phi}\frac{d^{2}\Phi}{d\phi^{2}}=-m^{2}.$
Solving the right-hand side of the equation (with $m$ an integer so that $\Phi$ is single-valued), we get
$\displaystyle \Phi(\phi)=A\exp({+im\phi}). (4)$
Now, dividing the remaining equation by $\sin^{2}{\theta}$, separating the radial and polar parts with a second constant $l(l+1)$, and carrying out the derivative in the polar component, we get the Associated Legendre equation:
$\displaystyle \frac{d}{d\mu}\bigg\{(1-\mu^{2})\frac{d\Theta}{d\mu}\bigg\}+\bigg\{l(l+1)-\frac{m^{2}}{1-\mu^{2}}\bigg\}\Theta=0, (5)$
where I have let $\mu=\cos{\theta}$. The solutions to this equation are known as the associated Legendre functions. However, instead of solving this difficult equation by brute force methods (i.e. power series method), we consider the case for which $m^{2}=0$. In this case, Eq.(5) simplifies to Legendre’s differential equation discussed previously. Instead of quoting the explicit form of the Legendre polynomials, we equivalently state the Rodrigues formula for these polynomials
$\displaystyle P_{l}(\mu)=\frac{1}{2^{l}l!}\frac{d^{l}}{d\mu^{l}}(\mu^{2}-1)^{l}. (6)$
However, the solutions to Eq.(5) are the associated Legendre functions, not polynomials. Therefore, we use the following to determine the associated Legendre functions from the Legendre polynomials:
$\displaystyle {\Theta_{l}}^{m}(\mu) = (1-\mu^{2})^{\frac{|m|}{2}}\frac{d^{|m|}}{d\mu^{|m|}}P_{l}(\mu). (7)$
For real solutions $0\leq m\leq l$ and for complex solutions $-l \leq m \leq l$; thus we can write the solutions as sums whose $m$-indices run from $0$ to $l$ in the real-valued case and from $-l$ to $l$ in the complex-valued case.
Now we turn to the radial part which upon substitution of $-l(l+1)$ in place of the angular component, we get
$\displaystyle \frac{d}{dr}\bigg\{r^{2}\frac{dR}{dr}\bigg\}-l(l+1)R=0,$
and if we evaluate the derivatives in the first term we get an Euler equation of the form
$\displaystyle r^{2}\frac{d^{2}R}{dr^{2}}+2r\frac{dR}{dr}-l(l+1)R=0. (8)$
Let us assume that the solution can be represented as
$\displaystyle R(r)=\sum_{j=0}^{\infty}a_{j}r^{j}.$
Taking the necessary derivatives and substituting into Eq.(8) gives
$\displaystyle r^{2}\sum_{j=0}^{\infty}j(j-1)a_{j}r^{j-2}+\sum_{j=0}^{\infty}2rja_{j}r^{j-1}-l(l+1)\sum_{j=0}^{\infty}a_{j}r^{j}=0. (9)$
Simplifying the powers of r we find that
$\displaystyle \sum_{j=0}^{\infty}[j(j-1)+2j-l(l+1)]a_{j}r^{j}=0.$
Since this must hold for all $r$, the coefficient of each power must vanish, which means that
$\displaystyle j(j-1)+2j-l(l+1)=0. (10)$
We can simplify this further to get
$\displaystyle j^{2}-l^{2}+j-l=0.$
Factoring out $(j-l)$, we arrive at
$\displaystyle (j-l)((j+l)+1)=0 \implies j=l ; j=-l-1.$
Since each value of $l$ gives an admissible pair of exponents, superposing them gives the radial solution
$\displaystyle R(r)= \sum_{l=0}^{\infty}\big[B_{l}r^{l}+C_{l}r^{-l-1}\big]. (11)$
Thus, we can write the solution to Laplace’s equation as
$\displaystyle \psi(r,\theta,\phi) = \sum_{l=0}^{\infty} \sum_{m=0}^{l}[B_{l}r^{l}+C_{l}r^{-l-1}]\Theta_{l}^{m}(\mu)A_{lm}\exp({im\phi}), (12)$
for real solutions, and
$\displaystyle \psi(r,\theta,\phi)=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}[B_{l}r^{l}+C_{l}r^{-l-1}]\Theta_{l}^{m}(\mu)A_{lm}\exp({im\phi}), (13)$
for complex solutions.
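To evaluate one of these modes numerically, scipy exposes the associated Legendre functions directly; here is a small sketch of mine (the mode, coefficient, and evaluation point are arbitrary):

```python
import numpy as np
from scipy.special import lpmv

def interior_mode(r, theta, phi, l, m, B=1.0):
    """One regular-at-the-origin real solution: B * r^l * P_l^m(cos(theta)) * cos(m*phi)."""
    return B * r**l * lpmv(m, l, np.cos(theta)) * np.cos(m*phi)

print(interior_mode(r=0.5, theta=np.pi/3, phi=0.0, l=2, m=1))
```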
# Solution to Legendre’s Differential Equation
Typically covered in a first course on ordinary differential equations, this problem finds applications in the solution of the Schrödinger equation for a one-electron atom (i.e. Hydrogen). In fact, this equation is a smaller problem that results from using separation of variables to solve Laplace’s equation. One finds that the angular equation is satisfied by the Associated Legendre functions. However, if it is assumed that $m=0$ then the equation reduces to Legendre’s equation.
The equation can be stated as
$\displaystyle (1-x^{2})\frac{d^{2}y}{dx^{2}}-2x\frac{dy}{dx}+l(l+1)y(x)=0. (1)$
The power series method starts with the assumption
$\displaystyle y(x)=\sum_{j=0}^{\infty}a_{j}x^{j}. (2)$
Next, we require the first and second order derivatives
$\displaystyle \frac{dy}{dx}=\sum_{j=1}^{\infty}ja_{j}x^{j-1}, (3)$
and
$\displaystyle \frac{d^{2}y}{dx^{2}}=\sum_{j=2}^{\infty}j(j-1)a_{j}x^{j-2}. (4)$
Substitution yields
$\displaystyle (1-x^{2})\sum_{j=2}^{\infty}j(j-1)a_{j}x^{j-2}-2x\sum_{j=1}^{\infty}ja_{j}x^{j-1}+l(l+1)\sum_{j=0}^{\infty}a_{j}x^{j}=0, (5)$
Distributing the $(1-x^{2})$ factor and collecting terms gives
$\displaystyle \sum_{j=2}^{\infty}j(j-1)a_{j}x^{j-2}-\sum_{j=0}^{\infty}\big[j(j-1)+2j-l(l+1)\big]a_{j}x^{j}=0, (6)$
where in the second summation we have rewritten the index to start at zero, since the terms for which $j=0,1$ vanish and hence have no effect on the overall sum. Next, we shift the index in the first sum: let $k=j-2$, so that $j=k+2$, and then rename $k$ back to $j$. Thus, the equation becomes
$\displaystyle \sum_{j=0}^{\infty}\bigg\{(j+2)(j+1)a_{j+2}-j(j-1)a_{j}-2ja_{j}+l(l+1)a_{j}\bigg\}x^{j}=0. (7)$
In order for this to be true for all values of $x$, we require the coefficient of each $x^{j}$ to equal zero. Solving for $a_{j+2}$ we get
$\displaystyle a_{j+2}=\frac{j(j+1)-l(l+1)}{(j+2)(j+1)}\,a_{j}. (8)$
It becomes evident that the terms $a_{2},a_{3},a_{4},...$ are dependent on the terms $a_{0}$ and $a_{1}$. The first term deals with the even solution and the second deals with the odd solution. If we let $p=j+2$ and solve for $a_{p}$, we arrive at the term $a_{p-2}$ and we can obtain the next term $a_{p-4}$. (I am not going to go through the details. The derivation is far too tedious. If one cannot follow there is an excellent video on YouTube that goes through a complete solution of Legendre’s ODE where they discuss all finer details of the problem. I am solving this now so that I can solve more advanced problems later on.) A pattern begins to emerge which we may express generally as:
$\displaystyle a_{p-2n}=\frac{(-1)^{n}(2p-2n)!}{(n!)(2^{p})(p-n)!(p-2n)!}. (9)$
Now, for even terms $0\leq n \leq \frac{p}{2}$ and for odd terms $0 \leq n \leq \frac{p-1}{2}$. Thus for the even solution we have
$\displaystyle P_{p}(x)=\sum_{n=0}^{\frac{p}{2}}\frac{(-1)^{n}(2p-2n)!}{(n!)(2^{p})(p-n)!(p-2n)!}\,x^{p-2n}, (10.1)$
and for the odd solution we have
$\displaystyle P_{p}(x)=\sum_{n=0}^{\frac{p-1}{2}}\frac{(-1)^{n}(2p-2n)!}{(n!)(2^{p})(p-n)!(p-2n)!}\,x^{p-2n}. (10.2)$
These two equations make up the even and odd solution to Legendre’s equation. They are an explicit general formula for the Legendre polynomials. Additionally we see that they can readily be used to derive Rodrigues’ formula
$\displaystyle P_{p}(x)=\frac{1}{2^{p}p!}\frac{d^{p}}{dx^{p}}\bigg\{(x^{2}-1)^{p}\bigg\}, (11)$
and that we can relate Legendre polynomials to the Associated Legendre function via the equation
$\displaystyle P_{l}^{m}(x)= (1-x^{2})^{\frac{|m|}{2}}\frac{d^{|m|}P_{l}(x)}{dx^{|m|}}, (12)$
where I have let $p=l$ so as to preserve a more standard notation. This is the general rule that we will use to solve the associated Legendre differential equation when solving the Schrödinger equation for a one-electron atom.
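As a sanity check of Eqs. (10.1) and (10.2) (my own snippet; it assumes nothing beyond the explicit formula above), sympy's built-in Legendre polynomials should agree with the explicit sum:

```python
import sympy as sp

x = sp.symbols('x')

def legendre_explicit(p):
    # Eqs. (10.1)/(10.2): floor(p/2) covers both the even and odd cases at once.
    return sum(
        sp.Integer((-1)**n) * sp.factorial(2*p - 2*n)
        / (sp.factorial(n) * 2**p * sp.factorial(p - n) * sp.factorial(p - 2*n))
        * x**(p - 2*n)
        for n in range(p//2 + 1)
    )

for p in range(8):
    assert sp.expand(legendre_explicit(p) - sp.legendre(p, x)) == 0
print("explicit formula matches sympy.legendre for p = 0..7")
```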
# Solution to the three-dimensional Heat Equation
After taking a topics course in applied mathematics (partial differential equations), I found that there were certain equations I should solve, since I would later see them embedded in other larger-scale equations. One such equation was Laplace's equation (future post). Once I solved that equation, I realized that the Laplacian acts as a differential operator on a function of at least two variables. Thus, I could solve equations such as the Schrödinger equation using a three-dimensional laplacian in spherical-polar coordinates (another future post) and the three-dimensional heat equation. I will be solving the latter.
The heat equation initial-boundary-value-problem is therefore
$\frac{\partial u}{\partial t}=\frac{1}{c\rho}\nabla^{2}u+Q(x,y,z), (1.1)$
subjected to the boundary conditions
$u(0,0,0,t)=0, (1.2)$
$u(X,Y,Z,t)=0, (1.3)$
and the initial condition
$u(x,y,z,0)= \xi(x,y,z). (1.4)$
Now, this solution is not specific to a single thermodynamic system, but rather it is a more general solution in a mathematical context. However, I will be appropriating certain concepts from physics for reasons that are well understood (i.e. that time exists on the interval $I=0\leq t < +\infty$). It is nonphysical or nonsensical to speak of negative time.
To start, consider any rectangular prism in which heat flows through the volume, from the origin to the point $(X,Y,Z)$. At a time $t=0$, the overall heat of the volume can be regarded to be a function $\xi(x,y,z)$. After a time period $\delta t=t$ has passed, the heat will have traversed to the point $(X,Y,Z)$ from the origin (think of the heat traveling along the diagonal of a cube). The goal is to find the heat as a function of the three spatial components and a single time component. Furthermore, the boundary conditions maintain that at the origin and the final point, the heat vanishes. In other words, these points act as heat sinks. (A sink is a point where energy can leave the system.)
To simplify the notation, we define $\textbf{r}=x\hat{i}+y\hat{j}+z\hat{k}.$ Thus the initial-boundary-value-problem becomes
$\frac{\partial u}{\partial t}=\frac{1}{c\rho}\frac{\partial^{2}u}{\partial \textbf{r}^{2}}+Q(\textbf{r}), (2.1)$
where $u\rightarrow u(\textbf{r},t)$. This definition also reduces the three-dimensional laplacian to a second-order partial derivative of $u$. The boundary and initial conditions are then
$u(0,t)=0,(2.2)$
$u(R,t)=0, (2.3)$
$u(\textbf{r},0)=\xi(\textbf{r}). (2.4)$
Now, we assume that the solution is a product of eigenfunctions of the form
$u(\textbf{r},t)=\alpha(\textbf{r})\Gamma(t). (3)$
Taking the respective derivatives and dividing by the assumed form of the solution, we get
$\frac{\Gamma^{\prime}(t)}{\Gamma(t)}=\frac{1}{c\rho}\frac{\alpha^{\prime\prime}(\textbf{r})}{\alpha(\textbf{r})}+Q(\textbf{r}). (4)$
Now, the left side of Eq.(4) depends only on $t$ while the right side depends only on $\textbf{r}$ (treating $Q$ as a constant), so both sides must equal a common separation constant. A constant of zero produces only the trivial solution, which is not physically significant, so we take the constant to be $-\lambda^{2}$ with $\lambda \neq 0$, the choice that yields decaying (physically sensible) solutions. The time-dependence equation then becomes
$\frac{\Gamma^{\prime}(t)}{\Gamma(t)}=-\lambda^{2}, (5.1)$
whose solution is
$\Gamma(t)=\Gamma_{0}\exp({-\lambda^{2}t}). (5.2)$
The same constant allows us to write the spatial equation upon rearrangement as
$\alpha^{\prime\prime}(\textbf{r})+c\rho(\lambda^{2}+Q)\alpha(\textbf{r})=0, (6.1)$
whose solution is
$\alpha(\textbf{r})=c_{1}\cos({\sqrt[]{c\rho(\lambda^{2}+Q)}\,\textbf{r}})+c_{2}\sin({\sqrt[]{c\rho(\lambda^{2}+Q)}\,\textbf{r}}). (6.2)$
Next we apply the boundary conditions. Let $\textbf{r}=0$ in $\alpha(\textbf{r})$:
$\alpha(0)=c_{1}+0=0 \implies c_{1}=0 \implies \alpha(\textbf{r})=C\sin(\sqrt[]{c\rho(\lambda^{2}+Q)}\, \textbf{r}), (7.1)$ where $C\equiv c_{2}$,
and let $\textbf{r}=R$ in (7.1) to get
$\alpha(R)=0\implies \sin({\sqrt[]{c\rho(\lambda^{2}+Q)}R})=0\implies c\rho(\lambda^{2}+Q)R^{2}=(n\pi)^{2}.$
Solving for $\lambda^{2}$ above gives
$\lambda^{2}=\frac{1}{c\rho}\bigg\{\frac{n\pi}{R}\bigg\}^{2}-Q. (8)$
Substituting into Eq.(7.1) and simplifying gives
$\alpha_{n}(\textbf{r})=C\sin\bigg\{\frac{n\pi \textbf{r}}{R}\bigg\}. (9)$
To find as many solutions as possible we construct a superposition of solutions of the form
$u(\textbf{r},t)=\sum_{l=1}^{\infty}\alpha_{l}(\textbf{r})\Gamma_{l}(t). (10)$
Rewriting $\alpha(\textbf{r})$ as $\alpha_{l}(\textbf{r})$, using the solution for the time dependence, and letting the constants combine into the indexed coefficient $A_{l}$, we arrive at the solution of the heat equation:
$u(\textbf{r},t)=\sum_{l=1}^{\infty}A_{l}\sin\bigg\{\frac{l\pi \textbf{r}}{R}\bigg\}\exp({-\lambda_{l}^{2}t}), (11)$
where each $\lambda_{l}^{2}$ is given by Eq.(8) with $n=l$.
Now suppose that $t=0$ in Eq.(11). In this case we get the initial heat distribution given by $\xi(\textbf{r})$:
$\xi(\textbf{r})=\sum_{l=1}^{\infty} A_{l}\sin\bigg\{\frac{l\pi \textbf{r}}{R}\bigg\}, (12)$ so the coefficients $A_{l}$ are the Fourier sine coefficients of the initial distribution.
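A minimal numerical sketch of Eqs. (11) and (12) (my own; it treats $\textbf{r}$ as the single coordinate of this post, takes $Q=0$, and uses illustrative constants):

```python
import numpy as np
from scipy.integrate import trapezoid

R, c, rho = 1.0, 1.0, 1.0                 # illustrative constants; Q = 0
r = np.linspace(0.0, R, 400)
xi = r*(R - r)                            # an initial distribution vanishing at both ends

L = 50                                    # number of retained modes
ls = np.arange(1, L + 1)
A = np.array([2.0/R * trapezoid(xi*np.sin(l*np.pi*r/R), r) for l in ls])  # Fourier sine coefficients

def u(t):
    lam2 = (ls*np.pi/R)**2 / (c*rho)      # Eq. (8) with Q = 0
    return np.sin(np.outer(r, ls)*np.pi/R) @ (A*np.exp(-lam2*t))

print(np.max(np.abs(u(0.0) - xi)))        # small: the series reproduces xi at t = 0
```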
# Deriving the speed of light from Maxwell’s equations
We are all familiar with the concept of the speed of light. It is the speed beyond which no object may travel. Many associate this universal constant with Einstein, and while it is inherent to his special and general theories of relativity, it was not something he discovered. It is, in fact, a consequence of the Maxwell equations from my first post. I will derive the speed of light using the four field equations of electrodynamics, and I will explain how Einstein used this fact to challenge Newtonian relativity in his theory of special relativity (I am not as familiar with general relativity). The reason for this post is just to demonstrate the origin of a well-known concept: the speed of light.
$\nabla \cdot \textbf{E}=\frac{\rho}{\epsilon_{0}}, (1)$
$\nabla \cdot \textbf{B}=0, (2)$
$\nabla \times \textbf{E}=-\frac{\partial \textbf{B}}{\partial t}, (3)$
$\nabla \times \textbf{B}=\mu_{0}\textbf{j}+\mu_{0}\epsilon_{0}\frac{\partial \textbf{E}}{\partial t}. (4)$
Now, we let $\rho =0$, which means that the charge density must be zero, and we also let the current density $\textbf{j}=0$. Moreover, note the form of the wave equation:
$\nabla^{2}u=\frac{1}{v^{2}}\frac{\partial^{2} u}{\partial t^{2}}. (5)$
This equation describes a disturbance $u$ propagating in three dimensions (choose whichever coordinate system you like) through time, with some speed $v$.
After making these assumptions, we arrive at
$\nabla \cdot \textbf{E}=0, (6)$
$\nabla \cdot \textbf{B}=0, (7)$
$\nabla \times \textbf{E}=-\frac{\partial \textbf{B}}{\partial t}, (8)$
$\nabla \times \textbf{B}=\mu_{0}\epsilon_{0}\frac{\partial \textbf{E}}{\partial t}. (9)$
Also note the vector identity $\nabla \times (\nabla \times \textbf{A})=\nabla(\nabla\cdot\textbf{A})-\nabla^{2}\textbf{A}$. Now, take the curl of Eqs.(8) and (9), and we get
$\frac{1}{\mu_{0}\epsilon_{0}}\nabla^{2}\textbf{E}=\frac{\partial^{2}\textbf{E}}{\partial t^{2}}, (10)$
and
$\frac{1}{\mu_{0}\epsilon_{0}}\nabla^{2}\textbf{B}=\frac{\partial^{2}\textbf{B}}{\partial t^{2}}, (11)$
where we have used Eqs. (6), (7), (8), and (9) to simplify the expressions. Eqs. (10) and (11) are the electromagnetic wave equations. Note the form of these equations and how they compare to Eq. (5). They are identical, and upon inspection one can see that the velocity with which light travels is
$\frac{1}{v^{2}}=\mu_{0}\epsilon_{0} \implies c=\frac{1}{\sqrt[]{\mu_{0}\epsilon_{0}}}, (12)$
where $\mu_{0}$ is the permeability of free space and $\epsilon_{0}$ is the permittivity of free space.
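Eq. (12) can be checked directly against the tabulated constants; a quick verification of mine using scipy:

```python
from scipy.constants import epsilon_0, mu_0

c = 1.0 / (mu_0*epsilon_0)**0.5
print(f"c = {c:.6e} m/s")  # ~2.997925e+08 m/s, the speed of light
```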
Most waves on Earth require a medium to travel. Sound waves, for example, are actually pressure waves that propagate by collisions of the individual molecules in the air. For some time, light was thought to require a medium as well, so it was proposed that, since light can travel through the vacuum of space, there must exist a universal medium dubbed “the ether”. This “ether” was sought after most notably in the famous Michelson-Morley experiment, in which an interferometer was constructed to measure the Earth’s velocity through this medium. When they failed to find any evidence that the “ether” existed, the new way of thinking was that it didn’t exist. It turned out that light doesn’t need a material medium to travel through space. Technically speaking, space itself acts as the medium through which light travels.
In Newtonian relativity, it was assumed that time and space were separate constructs and were regarded as absolute; it was the speed that changed. What this meant is that even as speeds became very large, space and time remained the same. What Einstein did was regard the speed implied by Maxwell’s equations as absolute, and allow space and time (really spacetime) to vary. In Einstein’s theory of special relativity, as one approaches the speed of light, time slows down and objects become contracted. These phenomena are known as time dilation and length contraction:
$\delta t = \frac{\delta t_{0}}{\sqrt[]{1-v^{2}/c^{2}}}, (13)$
$\delta l = l_{0}\sqrt[]{1-v^{2}/c^{2}}. (14)$
These phenomena will be discussed in more detail in a future post. Thus, Maxwell’s formulation of the electrodynamic field equations led Einstein to change the way we perceive the fundamental concepts of space and time.
# Electron Scattering of a Step Potential
The following post was initially one of my assignments for an independent study in modern physics in my penultimate year as an undergraduate. While studying this problem the text that I used to verify my answer was:
R. Eisberg and R. Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles. John Wiley & Sons. 1985. 6.
One of the hallmarks of quantum theory is the Schrödinger equation. There are two forms: the time-dependent and the time-independent equation. The former can be turned into the latter by assuming stationary states, in which case there is no time evolution (i.e. the time derivative $\partial_{t} \Psi(x,t)=0$, where $\partial_{t}\Psi(x,t)$ denotes the derivative of the wavefunction $\Psi(x,t)$ with respect to time).
The wavefunction describes the state of a system, and it is found by solving the Schrödinger equation. In this post, I’ll be considering a step potential in which
$V(x) = 0, x < 0, (1.1)$
$V(x) = V_{0}, x > 0. (1.2)$
After solving for the wavefunction, I will calculate the reflection and transmission coefficients for the case where the energy of the electron is less than that of the step potential $E < V_{0}$.
First, we assume that we are dealing with stationary states; by doing so we assume that there is no time-dependence. The wavefunction $\Psi(x,t)$ becomes an eigenfunction ($eigen$- is German for “characteristic”, e.g. characteristic function, characteristic value (eigenvalue), and so on), $\psi(x)$. There are requirements for this eigenfunction in the context of quantum mechanics: the eigenfunction $\psi(x)$ and its first-order spatial derivative $\frac{d\psi(x)}{dx}$ must be finite, single-valued, and continuous. Using the wavefunction, we write the time-independent Schrödinger equation as
$-\frac{\hbar^{2}}{2m}\frac{d^{2}\psi(x)}{dx^{2}}+V(x)\psi(x) = E\psi(x), (2)$
where $\hbar$ is the reduced Planck’s constant $\hbar \equiv h(2\pi)^{-1}$, m is the mass of the particle, $V(x)$ represents the potential, and $E$ is the energy.
Now, in electron scattering there are two cases regarding a step potential: the case for which $E < V_{0}$ and the case for which $E>V_{0}$. The focus of this post is the former case. In such a case, the potential is given mathematically by Eqs. (1.1) and (1.2) and can be depicted by the image below:
Image Credit: http://physics.gmu.edu/~dmaria/590%20Web%20Page/public_html/qm_topics/potential/barrier/STUDY-GUIDE.htm
The first part of this problem is to solve for $\psi(x)$ when $x<0$. Then Eq.(2) becomes
$\frac{d^{2}\psi_{I}(x)}{dx^{2}}=-\kappa_{I}^{2}\psi_{I}(x), (3)$
where $\kappa_{I}^{2}\equiv \frac{2mE}{\hbar^{2}}$. Eq.(3) is a second order linear homogeneous ordinary differential equation with constant coefficients and can be solved using a characteristic equation. We assume that the solution is of the general form
$\psi_{I}(x)=\exp{(rx)},$
which upon taking the first and second order spatial derivatives and substituting into Eq.(3) yields
$r^{2}\exp{(rx)}+\kappa_{I}^{2}\exp{(rx)}=0.$
We can factor out the exponential and, recalling that $e^{rx} \neq 0$ for any finite $x$ (it vanishes only in the limit $x\rightarrow -\infty$), we can then conclude that
$r^{2}+\kappa_{I}^{2}=0$.
Hence, $r = \pm i\kappa_{I}$. Therefore, we can write the solution Schrödinger equation in the region $x < 0$ as
$\psi_{I}(x) = A\exp{(+i\kappa_{I}x)}+B\exp{(-i\kappa_{I}x)}. (4)$
This is the eigenfunction for the first region. Coefficients A and B will be determined later.
Now we can use the same logic for the Schrödinger equation in the region $x > 0$, where $V(x)=V_{0}$:
$\frac{d^{2}\psi_{II}(x)}{dx^{2}}= \kappa_{II}^{2}\psi_{II}(x), (5)$
where $\kappa_{II}\equiv \frac{\sqrt[]{2m(V_{0}-E)}}{\hbar}$, which is real since $E<V_{0}$. The general solution for this region is
$\psi_{II}(x) = C\exp{(+\kappa_{II}x)}+D\exp{(-\kappa_{II}x)}. (6)$
Now the next step is taken using two different approaches: the first using a mathematical argument and the other from a conceptual interpretation of the problem at hand. The former is this: Suppose we let $x\rightarrow \infty$. The first term on the right hand side of Eq.(6) diverges (i.e. becomes arbitrarily large), while the second term decays to zero. The eigenfunction must remain finite everywhere, so to suppress the divergence of the first term we let $C=0$, while $D$ may remain nonzero. Thus we arrive at the eigenfunction for $x>0$
$\psi_{II}(x) = D\exp{(-\kappa_{II}x)}. (7)$
The latter argument is this: In the region $x < 0$, the first term of the solution represents a wave propagating in the positive x-direction (the incident wave), while the second denotes a wave traveling in the negative x-direction (the reflected wave). In the region $x > 0$, however, the energy of the electron is not sufficient to overcome the potential, so there can be no propagating wave; the growing exponential cannot describe a physical state, and only the decaying term is relevant, representing the penetration of the wave a short distance into the step.
We now determine the coefficients A and B. Recall that the eigenfunction must satisfy the following continuity requirements
$\psi_{I}(x)=\psi_{II}(x), (8.1)$
$\psi^{\prime}_{I}(x)=\psi^{\prime}_{II}(x), (8.2)$
evaluated when $x=0$. Doing so in Eqs. (4) and (7), and equating them we arrive at the continuity condition for $\psi(x)$
$A+B = D. (9)$
Taking the derivatives of $\psi_{I}(x)$ and $\psi_{II}(x)$ and evaluating them at $x=0$, we arrive at
$A-B = \frac{i\kappa_{II}}{\kappa_{I}}D. (10)$
If we add Eqs. (9) and (10) we get the value for the coefficient A in terms of the arbitrary constant D
$A =\frac{D}{2}\bigg(1+\frac{i\kappa_{II}}{\kappa_{I}}\bigg). (11)$
Conversely, if we subtract (9) and (10) we get the value for B in terms of D
$B = \frac{D}{2}\bigg(1-\frac{i\kappa_{II}}{\kappa_{I}}\bigg). (12)$
Now the reflection coefficient is defined as
$R = \frac{B^{*}B}{A^{*}A}, (13)$
which is the probability that an incident electron (wave) will be reflected. On the other hand, the transmission coefficient is the probability that an electron will be transmitted through the potential (e.g. barrier potential). These two also must satisfy the relation
$R+T =1. (14)$
This means that the probability that the electron will be reflected or transmitted is 100%. Therefore, to evaluate R (the reason why I don't calculate T will become apparent shortly), we take the complex conjugates of A and B and use them in Eq.(13) to get
$R = \frac{\frac{D^{*}}{2}\big(1+\frac{i\kappa_{II}}{\kappa_{I}}\big)\frac{D}{2}\big(1-\frac{i\kappa_{II}}{\kappa_{I}}\big)}{\frac{D^{*}}{2}\big(1-\frac{i\kappa_{II}}{\kappa_{I}}\big)\frac{D}{2}\big(1+\frac{i\kappa_{II}}{\kappa_{I}}\big)} = 1. (15)$
What this conceptually means is that the probability that the electron is reflected is 100%. This implies that it is impossible for an electron to be transmitted through the potential for this system.
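A quick numerical sanity check of Eq. (15), in natural units (my own sketch; any energies with $E<V_{0}$ will do):

```python
import numpy as np

hbar = m = 1.0                      # natural units, for illustration only
E, V0 = 0.5, 1.0                    # any E < V0 works

k1 = np.sqrt(2.0*m*E)/hbar          # region I wavenumber
k2 = np.sqrt(2.0*m*(V0 - E))/hbar   # region II decay constant

D = 1.0
A = D/2.0*(1.0 + 1j*k2/k1)          # Eq. (11)
B = D/2.0*(1.0 - 1j*k2/k1)          # Eq. (12)

print(abs(B)**2 / abs(A)**2)        # 1.0: total reflection when E < V0
```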
# Basic Equations of Ideal One-Fluid Magnetohydrodynamics (Part II)
Continuing with the derivation of the ideal one-fluid MHD equations, the next equation governs the motion of a parcel of fluid (in this case plasma). This momentum equation stems from the Navier-Stokes equation. The derivation of that equation will be reserved for a future post. However, the solution of this equation will not be attempted. (Incidentally, the proof of the existence and uniqueness of solutions to the Navier-Stokes equations is one of the Millennium Prize Problems described by the Clay Mathematics Institute.)
The purpose of this post is to derive the momentum equation from the Navier-Stokes’ equation.
The Navier-Stokes’ equation has the form
$\frac{\partial \textbf{v}}{\partial t} + (\nabla \cdot \textbf{v})\textbf{v} = \textbf{F} - \frac{1}{\rho}\nabla P+\nu \nabla^{2}\textbf{v}, (1)$
where $\textbf{F}$ represents a source of external forces, $\textbf{v}$ is the velocity field, $\nabla P$ is the pressure gradient, $\rho$ is the material density, $\nu$ is the kinematic viscosity, and $\nabla^{2}\textbf{v}$ is the laplacian of the velocity field. More specifically, the viscous term is a consequence of the viscous stress tensor, whose components can cause the parcel of fluid to experience stresses and strains.
Defining the magnetic force per unit mass as
$\textbf{F}=\frac{\textbf{J}\times \textbf{B}}{\rho}, (2)$
where we recall that $\textbf{J}$ is the current density defined by Ohm’s law in a previous post, and also recall from Basic Equations of Ideal One-Fluid Magnetohydrodynamics (Part I) the equation
$\nabla \times \textbf{B}=\mu_{0}\textbf{J}, (3)$
if we solve for the current density $\textbf{J}$, we get $\textbf{J}=\frac{1}{\mu_{0}}(\nabla \times \textbf{B})$, so that the magnetic force becomes
$\textbf{F}=\frac{1}{\mu_{0}\rho}[(\nabla \times \textbf{B})\times \textbf{B}], (4)$
where $\mu_{0}$ is the permeability of free space. Now we invoke the vector identity
$[(\nabla \times \textbf{B})\times \textbf{B}]=(\textbf{B}\cdot \nabla)\textbf{B}-\nabla \bigg\{\frac{B^{2}}{2}\bigg\}. (5)$
At this point, we assume that we are dealing with laminar flows, in which case we set $\nu=0$; the Navier-Stokes equation becomes the Euler equation. (Despite our great understanding of classical mechanics, one phenomenon for which we cannot fully account is turbulence and its sources of friction, so this assumption is made out of necessity as well as simplicity. For processes in which turbulence cannot be neglected, the best we can do in this regard is to parameterize turbulence in numerical models.) Using the vector identity as well as our assumption of laminar flow, the ideal one-fluid momentum equation is
$\frac{\partial \textbf{v}}{\partial t}+ (\textbf{v}\cdot \nabla)\textbf{v}=-\frac{1}{\rho}\nabla \bigg\{P + \frac{B^{2}}{2\mu_{0}}\bigg\}+\frac{(\textbf{B}\cdot \nabla)\textbf{B}}{\mu_{0}\rho}, (6)$
where the additive term to the pressure, $\frac{B^{2}}{2\mu_{0}}$, is the magnetic pressure, and the additional term $\frac{(\textbf{B}\cdot \nabla)\textbf{B}}{\mu_{0}\rho}$ is the magnetic tension acting along the magnetic field lines.
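The vector identity in Eq. (5) is easy to verify symbolically; a small sympy check of mine (the test field is arbitrary):

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrary, non-trivial field to test Eq. (5) on.
B = (y*z)*N.i + (x*z**2)*N.j + (sp.sin(x)*y)*N.k

lhs = curl(B).cross(B)  # (curl B) x B

# (B . grad)B assembled component-wise, minus grad(B^2 / 2)
adv = (B.dot(gradient(B.dot(N.i))))*N.i \
    + (B.dot(gradient(B.dot(N.j))))*N.j \
    + (B.dot(gradient(B.dot(N.k))))*N.k
rhs = adv - gradient(B.dot(B)/2)

diff = lhs - rhs
print(sp.simplify(diff.dot(N.i)), sp.simplify(diff.dot(N.j)), sp.simplify(diff.dot(N.k)))  # 0 0 0
```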