Management Accounting Ts Grewal 2017 for Class 12 Commerce Accountancy Chapter 2 - Cash Flow Statement Based On As 3 Revised
Management Accounting Ts Grewal 2017 Solutions for Class 12 Commerce Accountancy Chapter 2, Cash Flow Statement Based On As 3 Revised, are provided here with simple step-by-step explanations. These solutions are extremely popular among Class 12 Commerce Accountancy students, and they come in handy for quickly completing your homework and preparing for exams. All questions and answers from the Management Accounting Ts Grewal 2017 Book of Class 12 Commerce Accountancy Chapter 2 are provided here for you for free. You will also love the ad-free experience on Meritnation's Management Accounting Ts Grewal 2017 Solutions. All Management Accounting Ts Grewal 2017 Solutions for Class 12 Commerce Accountancy are prepared by experts and are 100% accurate.
Particulars Details (Rs) Amount
+ Provision for Tax 1,50,000
+ Reserve 2,00,000
Depreciation 2,50,000
Goodwill written off 80,000
Loss on Sale of Machinery 2,50,000 5,80,000
Operating Profit Before Working Capital Changes 11,80,000
Add: Decrease in Current Assets & Increase in Current Liabilities
O/standing Expenses 15,000
Prepaid Expenses 20,000 1,63,000
Less: Increase in Current Assets & Decrease in Current Liabilities
Bills Payable (47,000)
Current Investments (1,50,000) (1,97,000)
Cash From Operating Activities 11,46,000
Net Cash From Operating Activities (A) 7,96,000
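The add-back arithmetic behind the first statement above can be sketched in a few lines. This is a minimal sketch: the Rs 2,50,000 net profit is inferred from the stated totals (11,80,000 less the add-backs), not given explicitly in the extract.

```python
# Indirect-method sketch: net profit plus non-cash / non-operating add-backs
# gives operating profit before working capital changes.
net_profit = 250_000  # inferred, not stated in the extract
add_backs = {
    "Provision for Tax": 150_000,
    "Reserve": 200_000,
    "Depreciation": 250_000,
    "Goodwill written off": 80_000,
    "Loss on Sale of Machinery": 250_000,
}
operating_profit = net_profit + sum(add_backs.values())
print(operating_profit)  # → 1180000 (Rs 11,80,000, matching the statement)
```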
For the Year Ending March 31, 2015
Particulars Details (Rs) Amount (Rs)
+ Provision for Tax 50,000
+ General Reserve 30,000
+ Proposed Dividend 50,000
Goodwill written off 25,000 25,000
Debtors (92,000)
O/Standing Current Liabilities (4,000) (96,000)
Cash From Operating Activities 1,45,000
Add: Tax Expense
Office Expenses Outstanding
Selling Expenses Outstanding
Accrued Commission
For the Year Ending …….
Purchase of Investment (5,00,000)
Proceeds from Sale of Investments 6,00,000
Proceeds from Sale of Machinery 1,00,000
Dividend Received on Shares 40,000
Rent Received ( Land used for Commercial Purpose) 80,000
Purchase of Patents (40,000)
Proceeds from Sale of Land 40,000
Purchase of Furniture (4,60,000)
Proceeds from Sale of Investment 40,000
Accrued Interest on Investment 20,000
Purchase of Machinery (WN1)
Sale of Patents (WN3)
WN 1:
Profit and Loss A/c (Profit on Sale)
Bank A/c (Purchase)
Bank A/c (Sale) (Balancing Figure)
Profit and Loss A/c (Depreciation charged during the year)
Proceeds from Sale of Investments (2,50,000 + 10%) 2,75,000
Dividend Received on Investment 30,000
Rent Received 20,000
Purchase of Fixed Assets (23,80,000 + 2,00,000 – 19,50,000) (6,30,000)
For the Year Ending …..
Proceeds from Sale of Building 6,00,000
Interest received 20,000
Rent Received 90,000 (1,10,000)
Purchase of Plant & Machinery (2,60,000)
Proceeds from Sale of Plant & Machinery 40,000
Proceeds from Sale of Land 1,60,000
Purchase of Investment (60,000) (1,20,000)
Particulars | Amount (Rs) | Particulars | Amount (Rs)
Balance b/d 8,50,000 Depreciation A/c 50,000
Bank A/c (Purchases)
(Balancing Figure) 2,60,000 Sales A/c 40,000
P& L A/c ( Loss) 20,000
Balance b/d 2,00,000 Sales A/c (Balancing Figure) 1,60,000
P& L A/c (Profit) 60,000 Balance c/d 1,00,000
Interest on Investment received 7,200
Net Cash From Investing Activities 4,32,800
Balance b/d 10,20,000 Depreciation A/c 1,40,000
Bank A/c ( Purchases)
(Balancing Figure) 4,40,000 Bank A/c 50,000
Balance b/d 3,80,000 P& L A/c ( Loss) 40,000
P& L A/c ( Profit) 20,000 Sales A/c (Balancing Figure) 1,00,000
Balance b/d 60,000 Sales A/c (Balancing Figure) 1,00,000
Bank A/c 1,80,000 Balance c/d 1,60,000
P& L A/c ( Profit) 20,000
Proceeds from Issue of Share Capital 2,00,000
Proceeds from Issue of Debenture 1,00,000 2,31,000
Proceeds from Sale of Plant & Machinery 2,50,000
Purchase of Plant & Machinery (8,00,000 + 2,50,000 – 6,00,000) (4,50,000)
Purchase of Non Current Investment (50,000)
Dividend Received on Investment 5,000
Sale of Non Current Investments 2,00,000
Particulars Details (Rs)
Proceeds from Issue of Share Capital (1,00,000 +10,000) 1,10,000
Redemption of Debenture (50,000 + 5,000) (55,000)
Redemption of 10% Debenture (2,00,000)
Proceeds from Issue of 8% Debenture 3,00,000
Repayment of Bank Loan (1,50,000)
Interest paid on Bank Overdraft (1,000)
Dividend Paid on Preference Share Capital (4,00,000 × 10%) (40,000)
Interim Dividend Paid (6,00,000 × 15%) (90,000)
Redemption of Preference Shares (2,00,000 + 5%) (2,10,000)
Proceeds from Issue of Equity Share Capital 2,00,000
Premium Received from Equity Share 25,000
Interest Paid on Debentures (48,000 – 9,600) (38,400)
Proceeds from Issue of Equity Share Capital (15,00,000+ 75,000) 15,75,000
Share Issue Expenses Paid (1,75,000)
Net Cash From Financing Activities 15,00,000
Calculation of Cash from Financing Activities:
Issue of 11% Debentures
Preference Dividend paid
Interest on Debentures paid
Interest on Bank Loan Paid
1. If Equity dividend is paid, it is assumed Preference Dividend is also paid.
2. Interest on Debentures Paid = [3,75,000 × 8/100 × 6/12 + 2,75,000 × 8/100 × 6/12] = Rs 41,000
3. Interest on Bank Loan Paid = [1,25,000 × 8/100 × 6/12 + 1,00,000 × 8/100 × 6/12] = Rs 25,000
Bank A/c (Tax Paid)
+ Provision for Tax
Premium on Redemption of Preference Share 25,000 25,000
Interim Dividend 2,40,000 2,40,000
Proposed Dividend (10,00,000 × 10%) 1,00,000 1,00,000
Less: Non Operating Incomes –
Less: Tax Paid -
Redemption of Preference Shares Capital (5,00,000 + 5%) (5,25,000)
Interim Dividend Paid (30,00,000 × 8%) (2,40,000)
Preference Dividend Paid (1,00,000)
Proceeds from Public Deposits (1,00,000)
Proceeds from Issue of Equity Share Capital 10,00,000
Proceeds from Security Premium 2,00,000
Net Cash From Financing Activities (C) 3,35,000
Cash Credit 25,000
Note: There is a misprint in the answer given in the textbook. It should be 'Cash Flows from Financing Activities' instead of 'Cash Used in Financing Activities'.
For the Year Ending March 31, 2014
O/standing Expenses (12,000)
Cash From Operating Activities (4,000)
Net Cash From Operating Activities (A) (4,000)
Net Cash From Investing Activities (B) (1,23,000)
Proceeds from Mortgage Loan 30,000
Repayment of Public Deposits (60,000)
Net Cash From Financing Activities (C) 70,000
Net decrease in Cash (A + B + C) (57,000)
Add: Opening Cash and Cash Equivalents 90,000
(Cash + Marketable Investments − Bank Overdraft)
Closing Cash and Cash Equivalents 33,000
Other Current Liabilities (2,000) (35,000)
Cash From Operating Activities 90,000
Less: Tax Paid (28,000)
Net Cash From Operating Activities (A) 62,000
Purchase of Plant (1,01,000)
Purchase of Investment (25,000)
Proceeds from Sale of Land & Building 53,000 (73,000)
Net Cash From Investing Activities (B) (73,000)
Proposed Dividend (28,000)
Proceeds from Share Capital 50,000 22,000
Net Increase in Cash (A + B + C) 11,000
Creditors (18,000) (60,000)
Purchase of Plant (60,000)
Purchase of Building (30,000) (90,000)
Proceeds from Issue of Share Capital 50,000
Redemption of Preference Share Capital (10,000) 40,000
Net Increase in Cash (A + B + C) 2,000
Balance b/d 40,000 Depreciation 30,000
Cash A/c (Balancing Figure) 60,000 Balance c/d 70,000
Balance b/d 1,00,000 Depreciation 5,00,000
+ Interim Dividend 20,000
+ Share Issue Expenses 5,000
Bills Payable (4,000) (86,000)
Purchase of Plant (1,20,000) (1,20,000)
Payment of Share Issue Expenses (5,000)
Interim Dividend (20,000)
Payment of Proposed Dividend (42,000)
Payment of Land & Building (50,000) (17,000)
Net Decrease in Cash (A + B + C) (7,000)
Cash A/c (Balancing Figure) 1,20,000 Balance c/d 1,90,000
Particulars | Amount (Rs) | Particulars | Amount (Rs)
Balance c/d 50,000 P/ L A/c 45,000
+ Provision for Tax –
+ Interim Dividend –
Debenture Interest 10,000 60,000
Current Investments (20,000)
Share Issue Expenses Paid (7,000)
Debenture Interest Paid (10,000)
Repayment of Land & Building (10,000) (7,000)
Net Cash From Financing Activities (C) (7,000)
Add: Opening Cash and Cash Equivalents 3,000
Closing Cash and Cash Equivalents 2,000
Cash A/c (Balancing Figure) 93,000 Balance c/d 4,40,000
+ General Reserve –
Debenture Interest 10,600
Creditors (5,000)
Redemption of 8% Debentures (90,000) (10,600)
Net Decrease in Cash (A + B + C) (27,000)
Add: Opening Cash and Cash Equivalents (68,000)
Closing Cash and Cash Equivalents (95,000)
Cash A/c (Balancing Figure) 18,000 Balance b/d 20,000
Balance c/d 25,000 Profit & Loss A/c 23,000
Interest on Debentures @ 8%:
on Rs 2,00,000 for 3 months = Rs 4,000
on Rs 1,10,000 for 9 months = Rs 6,600
Total = Rs 10,600
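The interest figure can be checked arithmetically. This is a sketch assuming the Rs 90,000 redemption of the 8% debentures took place after three months, so that the reduced balance of Rs 1,10,000 carried interest for the remaining nine months:

```python
# Interest on 8% debentures: Rs 2,00,000 outstanding for the first
# 3 months, Rs 1,10,000 for the remaining 9 months (assumed timing).
first_tranche = 200_000 * 0.08 * 3 / 12    # Rs 4,000
second_tranche = 110_000 * 0.08 * 9 / 12   # Rs 6,600
total_interest = first_tranche + second_tranche
print(round(total_interest))  # → 10600
```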
Note: Answer for Cash flow from Financing Activities calculated above is different from the answer given in the book.
+ General Reserve
+ Interim Dividend
Add: Non Operating Expenses –
Creditors (25,000)
Other Current Investments (10,000) (45,000)
Cash From Operating Activities (27,000)
Net Cash Used in Operating Activities (A) (27,000)
Net Cash From Investing Activities (B) NIL
Proceeds from Issue of Share Capital 50,000 50,000
Add: Opening Cash and Cash Equivalents (30,000 + 10,000) 40,000
(Cash + Short-term Investments)
Closing Cash and Cash Equivalents (47,000 + 16,000) 63,000
Depreciation on Plant & Machinery 10,000 10,000
Proceeds from sale of Plant & Machinery 50,000 50,000
Net Cash From Investing Activities (B) 50,000
Payment of Land & Building (75,000)
Proposed Dividend (20,000) (95,000)
(Rs) Particulars Amount (Rs)
Bank A/c (Balancing Figure) 50,000
|
Two inline skate wheels with different durometer – 85A and 83A
The Shore durometer is a device for measuring the hardness of a material, typically of polymers, elastomers, and rubbers.[1]
Higher numbers on the scale indicate a greater resistance to indentation and thus harder materials. Lower numbers indicate less resistance and softer materials.
The term is also used to describe a material's rating on the scale, as in an object having a "'Shore durometer' of 90."
The scale was defined by Albert Ferdinand Shore, who developed a suitable device to measure hardness in the 1920s. It was neither the first hardness tester nor the first to be called a durometer (ISV duro- and -meter; attested since the 19th century), but today that name usually refers to Shore hardness; other devices use other measures, which return corresponding results, such as for Rockwell hardness.
Durometer scales
There are several scales of durometer, used for materials with different properties. The two most common scales, using slightly different measurement systems, are the ASTM D2240 type A and type D scales.
The A scale is for softer materials, while the D scale is for harder ones.
However, the ASTM D2240-00 testing standard calls for a total of 12 scales, depending on the intended use: types A, B, C, D, DO, E, M, O, OO, OOO, OOO-S, and R. Each scale results in a value between 0 and 100, with higher values indicating a harder material.[2]
Diagram of a durometer indenter or presser foot used for Shores A and D
Durometer, like many other hardness tests, measures the depth of an indentation in the material created by a given force on a standardized presser foot. This depth depends on the hardness of the material, its viscoelastic properties, the shape of the presser foot, and the duration of the test. ASTM D2240 durometers allow for a measurement of the initial hardness, or of the indentation hardness after a given period of time. The basic test requires applying the force in a consistent manner, without shock, and measuring the hardness (depth of the indentation). If a timed hardness is desired, the force is applied for the required time and then read. The material under test should be a minimum of 6 mm (0.25 inches) thick.[3] The theoretical background of the test is considered in, e.g., [4].
Test setup for types A and D:[3]
Type A: indenting foot is a hardened steel rod 1.1 mm – 1.4 mm in diameter, with a truncated 35° cone, 0.79 mm diameter; applied mass 0.822 kg; resulting force 8.064 N.
Type D: indenting foot is a hardened steel rod 1.1 mm – 1.4 mm in diameter, with a 30° conical point, 0.1 mm radius tip; applied mass 4.550 kg; resulting force 44.64 N.
The ASTM D2240 standard recognizes twelve different durometer scales using combinations of specific spring forces and indentor configurations. These scales are properly referred to as durometer types; i.e., a durometer type is specifically designed to determine a specific scale, and the scale does not exist separately from the durometer. The table below provides details for each of these types, with the exception of Type R.[5]
Durometer type | Indenter configuration | Diameter | Extension | Spring force[6]
A | 35° truncated cone (frustum) | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf)
B | 30° cone | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf)
C | 35° truncated cone (frustum) | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 44.45 N (4,533 gf)
D | 30° cone | 1.40 mm (0.055 in) | 2.54 mm (0.100 in) | 44.45 N (4,533 gf)
E | 2.5 mm (0.098 in) spherical radius | 4.50 mm (0.177 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf)
M | 30° cone | 0.79 mm (0.031 in) | 1.25 mm (0.049 in) | 0.765 N (78.0 gf)
O | 1.20 mm (0.047 in) spherical radius | 2.40 mm (0.094 in) | 2.54 mm (0.100 in) | 8.05 N (821 gf)
OO | 1.20 mm (0.047 in) spherical radius | 2.40 mm (0.094 in) | 2.54 mm (0.100 in) | 1.111 N (113.3 gf)
DO | 1.20 mm (0.047 in) spherical radius | 2.40 mm (0.094 in) | 2.54 mm (0.100 in) | 44.45 N (4,533 gf)
OOO | 6.35 mm (0.250 in) spherical radius | 10.7–11.6 mm (0.42–0.46 in) | 2.54 mm (0.100 in) | 1.111 N (113.3 gf)
OOO-S | 10.7 mm (0.42 in) radius disk | 11.9 mm (0.47 in) | 5.0 mm (0.20 in) | 1.932 N (197.0 gf)
Note: Type R is a designation, rather than a true "type". The R designation specifies a presser foot diameter (hence the R, for radius; obviously D could not be used) of 18 ± 0.5 mm (0.71 ± 0.02 in) in diameter, while the spring forces and indenter configurations remain unchanged. The R designation is applicable to any D2240 Type, with the exception of Type M; the R designation is expressed as Type xR, where x is the D2240 type, e.g., aR, dR, etc.; the R designation also mandates the employment of an operating stand.[5]
Some conditions and procedures that have to be met according to the DIN ISO 7619-1 standard are:
For measuring Shore A the foot indents the material, while for Shore D the foot penetrates the surface of the material.
Material for testing needs to be stored in the laboratory climate for at least one hour before testing.
Measuring time is 15 s.
The applied mass is 1 kg ± 0.1 kg for Shore A, and 5 kg ± 0.5 kg for Shore D.
Five measurements need to be taken.
The durometer is calibrated once per week with elastomer blocks of different hardness.
The final value of the hardness depends on the depth of the indenter after it has been applied for 15 seconds to the material. If the indenter penetrates 2.54 mm (0.100 inch) or more into the material, the durometer reads 0 for that scale. If it does not penetrate at all, then the durometer reads 100 for that scale. It is for this reason that multiple scales exist. If the hardness is below 10 °Sh or above 90 °Sh, the results are not to be trusted, and the measurement must be redone with the adjacent scale type.
Durometer is a dimensionless quantity, and there is no simple relationship between a material's durometer in one scale, and its durometer in any other scale, or by any other hardness test.[1]
Shore Durometers of Common Materials
Bicycle gel seat 15–30 OO
Chewing gum 20 OO
Sorbothane 30–70 OO
Rubber band 25 A
Door seal 55 A
Automotive tire tread 70 A
Soft wheels of roller skates and skateboard 78 A
Hydraulic O-ring 70–90 A
Pneumatic O-ring 65–75 A
Hard wheels of roller skates and skateboard 98 A
Ebonite rubber 100 A
Solid truck tires 50 D
Hard hat (typically HDPE) 75 D
Cast urethane plastic 80 D
ASTM D2240 hardness and elastic modulus
Using linear elastic indentation hardness, a relation between the ASTM D2240 hardness and the Young's modulus for elastomers has been derived by Gent[7] and by Mix and Alan Jeffrey Giacomin.[8] Gent's relation has the form
E = 0.0981(56 + 7.62336 S) / [0.137505(254 − 2.54 S)],
where E is the Young's modulus in MPa and S is the ASTM D2240 type A hardness.
This relation gives E = ∞ at S = 100 but departs from experimental data for S < 40. Mix and Giacomin derive comparable equations for all 12 scales that are standardized by ASTM D2240.[8]
Another relation that fits the experimental data slightly better is[9]
S = 100 erf(3.186 × 10⁻⁴ √E),
where erf is the error function and E is in units of Pa.
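The two fits above can be cross-checked numerically. This is a sketch (function names are illustrative); for a nominal 70A rubber, Gent's relation gives roughly 5.5 MPa, and feeding that modulus back into the erf fit recovers a hardness close to 70:

```python
import math

def gent_modulus_mpa(shore_a):
    # Gent's relation: Young's modulus E in MPa from type A hardness S.
    return (0.0981 * (56 + 7.62336 * shore_a)
            / (0.137505 * (254 - 2.54 * shore_a)))

def shore_a_from_modulus(E_pa):
    # The erf fit: type A hardness from E in Pa.
    return 100 * math.erf(3.186e-4 * math.sqrt(E_pa))

E = gent_modulus_mpa(70)           # ≈ 5.5 MPa
S = shore_a_from_modulus(E * 1e6)  # ≈ 71, close to the input hardness
```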
A first-order estimate of the relation between ASTM D2240 type D hardness (for a conical indenter with a 15° half-cone angle) and the elastic modulus of the material being tested is[10]
S_D = 100 − 20(−78.188 + √(6113.36 + 781.88 E)) / E,
where S_D is the ASTM D2240 type D hardness and E is in MPa.
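As a quick sketch of this first-order estimate (the 100 MPa sample value is an illustrative assumption, not from the source):

```python
import math

def shore_d_from_modulus(E_mpa):
    # First-order estimate of type D hardness from Young's modulus E in MPa,
    # per the conical-indenter relation quoted above.
    return 100 - 20 * (-78.188 + math.sqrt(6113.36 + 781.88 * E_mpa)) / E_mpa

S_D = shore_d_from_modulus(100)  # a stiff material around 100 MPa
```

Note that as E grows, the subtracted term shrinks like 1/√E, so S_D approaches 100 from below, consistent with the hardest materials reading near the top of the scale.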
Another Neo-Hookean linear relation between the ASTM D2240 hardness value and the material elastic modulus has the form[10]
log₁₀ E = 0.0235 S − 0.6403, where S = S_A for 20 < S_A < 80 and S = S_D + 50 for 30 < S_D < 85,
with S_A the ASTM D2240 type A hardness, S_D the type D hardness, and E the Young's modulus in MPa.
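The piecewise log-linear fit is easy to encode. A minimal sketch (function name and sample hardness values are illustrative):

```python
def modulus_from_hardness(S, scale="A"):
    # E (MPa) from the log-linear fit; quoted as valid for
    # 20 < S_A < 80 (type A) and 30 < S_D < 85 (type D, shifted by +50).
    s = S if scale == "A" else S + 50
    return 10 ** (0.0235 * s - 0.6403)

E_a70 = modulus_from_hardness(70)             # type A hardness 70
E_d40 = modulus_from_hardness(40, scale="D")  # type D hardness 40 -> s = 90
```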
US patent 1770045, A. F. Shore, "Apparatus for Measuring the Hardness of Materials", issued 1930-07-08
US patent 2421449, J. G. Zuber, "Hardness Measuring Instrument", issued 1947-06-03
^ a b "Shore (Durometer) Hardness Testing of Plastics". Retrieved 2006-07-22.
^ "Material Hardness". CALCE and the University of Maryland. 2001. Archived from the original on 2007-07-07. Retrieved 2006-07-22.
^ a b "Rubber Hardness". National Physical Laboratory, UK. 2006. Retrieved 2006-07-22.
^ Willert, Emanuel (2020). Stoßprobleme in Physik, Technik und Medizin: Grundlagen und Anwendungen (in German). Springer Vieweg. doi:10.1007/978-3-662-60296-6.
^ a b "DuroMatters! Basic Durometer Testing Information" (PDF). CCSi, Inc. Retrieved 29 May 2011.
^ "Standard Test Method for Rubber Property—Durometer Hardness1". ASTM International. November 2017. p. 5. doi:10.1520/D2240-15E01.
^ A. N. Gent (1958), On the relation between indentation hardness and Young's modulus, Institution of Rubber Industry -- Transactions, 34, pp. 46–57. doi:10.5254/1.3542351
^ a b A. W. Mix and A. J. Giacomin (2011), Standardized Polymer Durometry, Journal of Testing and Evaluation, 39(4), pp. 1–10. doi:10.1520/JTE103205
^ British Standard 903 (1950, 1957), Methods of testing vulcanised rubber Part 19 (1950) and Part A7 (1957).
^ a b Qi, H. J., Joyce, K., Boyce, M. C. (2003), Durometer hardness and the stress-strain behavior of elastomeric materials, Rubber Chemistry and Technology, 76(2), pp. 419–435. doi:10.5254/1.3547752
Растеряев Ю.К., Агальцов Г.Н. Связь между твёрдостью и модулем упругости резин (Connection between hardness and a modulus of gums)
|
Vectors | Brilliant Math & Science Wiki
Sravanth C., Ram Mohith, Sharky Kesa, and Matheus Jahnke
Most quantities with which you are probably familiar are represented by single numbers: \frac{\sqrt{2}}{3}, 15 kilograms, or 25.63 seconds, for instance. Such quantities are known as scalar quantities or simply scalars.
However, some quantities have both a size and a direction and thus require two or more numbers to specify completely. Such quantities are known as vector quantities or vectors. ^\text{[1]} For instance, while the speed of an object is simply its rate of motion (18 \, \text{km}/\text{h}), an object's velocity reflects not just its rate of motion but also the direction of its motion (18 \, \text{km}/\text{h} due north, say). As such, speed is a scalar quantity while velocity is a vector quantity. Other vector quantities include force, displacement, and electric field.
Many important physical and mathematical quantities are vectors, and the analysis of generalized vectors and their properties (a subject known as linear algebra) forms part of the core of modern mathematics.
Before going further, we first need to know what a vector is.
Mathematically, a directed line segment is called a vector. Or, in other words, a line segment with a specified magnitude and direction is called a vector.
Representation on a Coordinate Plane
One elementary way to specify a vector is simply to give its size and direction (5 \, \text{meters} north, or a magnitude of 12 at 45^\circ). However, in some cases, neither quantity may be immediately apparent. Sometimes it is easier to specify two points, an initial point A and a terminal point B, and represent a vector as the (directed) displacement from one to the other.
On a Cartesian plane, one can take A = (x_A, y_A) and B = (x_B, y_B), in which case the horizontal part of the displacement is x_B - x_A and the vertical part is y_B - y_A. Generally, one notates a vector \vec{V} by indicating both parts, also called components, in the form of an ordered pair: \vec{V} = (x_B - x_A, y_B - y_A), or \vec{V} = (v_x, v_y) with v_x = x_B - x_A and v_y = y_B - y_A.
To differentiate a vector from a scalar, one generally writes a vector with an arrow, as shown (although this notation is not strictly required).
In general, one is only interested in the displacement—that is to say, one does not differentiate between vectors with different initial and terminal points as long as each component is the same. Vectors with identical components are considered equal or congruent. (Indicating the initial and terminal points is simply a visual and calculation aid.)
Suppose \vec{V} has initial point A = (-1,2) and terminal point B = (3,2). Determine the components of \vec{V}.
We have \vec{V} = \big(3- (-1), 2 - 2\big) = (4, 0).\ _\square
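The component computation in this example is a one-liner. A minimal sketch in Python:

```python
# Components of a vector from its initial point A and terminal point B:
# subtract coordinates componentwise.
A = (-1, 2)  # initial point
B = (3, 2)   # terminal point
V = (B[0] - A[0], B[1] - A[1])
print(V)  # → (4, 0)
```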
The size, length, or magnitude of a vector is usually taken as the Euclidean distance given by the Pythagorean theorem and written as
| \vec{V} | = \sqrt{(x_B - x_A)^2 + (y_B - y_A)^2} = \sqrt{v_x^2 + v_y^2}.
The direction is often given as the angle \theta with respect to the positive x-axis, which satisfies
\tan{\theta} = \frac{y_B - y_A}{x_B - x_A} = \frac{v_y}{v_x}.
A certain vector \vec{V} is 6\hat{\imath}+8\hat{\jmath}. Find its magnitude and the angle it makes with respect to the positive x-axis.
The magnitude is \Big|\vec{V}\Big|=\sqrt{v_x^2+v_y^2}=\sqrt{6^2+8^2}=\sqrt{36+64}=\sqrt{100}=10,
and the angle satisfies \tan(\theta_{\text{w.r.t. } x\text{-axis}})=\dfrac{v_y}{v_x}=\dfrac{8}{6}=\dfrac{4}{3} \implies \theta_{\text{w.r.t. } x\text{-axis}}=\tan^{-1}\left(\dfrac{4}{3}\right).\ _\square
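The magnitude-and-angle computation maps directly onto the standard library. A sketch using `math.hypot` and `math.atan2` (the latter handles all quadrants, unlike a bare arctangent):

```python
import math

vx, vy = 6, 8
magnitude = math.hypot(vx, vy)            # Euclidean length sqrt(vx^2 + vy^2)
angle = math.degrees(math.atan2(vy, vx))  # angle w.r.t. the positive x-axis
print(magnitude)  # → 10.0
```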
Plane vectors can be specified either by indicating both components or by giving the magnitude and angle. In both cases, the two values are sufficient to provide all information about the vector and to move between the "component" and "magnitude and angle" representations. (Equivalently, one can think of the "magnitude and angle" representation as simply the coordinates of a point in plane polar coordinates.)
Which of the following quantities are vectors?
(a) The cost of a theater ticket
(b) The current in a river
(c) The flight path from Houston to Dallas
(d) The population of the world
Answer choices: (a), (b); (a), (d); (b); (b), (c)
Suppose there are two vectors \vec{u} and \vec{v} with |\vec{v}| = 16 and |\vec{u}| = 30, and the angle between the vectors is 90^\circ. What is |\vec{u}+\vec{v}|?
If \vec{v} = 2\mathbf{i} + 4\mathbf{j} + 7\mathbf{k}, what is |\vec{v}|^2?
In three dimensions, one adds a third component for the z-axis: \vec{V} = (v_x, v_y, v_z). If V = (v_x, v_y, v_z) is any point in 3D space, then the vector \vec{OV} having the origin O(0,0,0) and V as its initial and terminal points, respectively, is called the position vector of V, and its length is given by
\Big|\vec{V}\Big| = \sqrt{v_x^2 + v_y^2 + v_z^2}.
Suppose \vec{V} has initial point A = (-1,2,5) and terminal point B = (3,2,2). Find the magnitude of \vec{V}.
\Big|\vec{V}\Big| = \sqrt{\big(3-(-1)\big)^2 + (2 - 2)^2 + (2 - 5)^2} = 5.\ _\square
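The same example in three dimensions, as a minimal sketch:

```python
import math

# Components from initial point A to terminal point B, then the 3D magnitude.
A = (-1, 2, 5)
B = (3, 2, 2)
V = tuple(b - a for a, b in zip(A, B))        # (4, 0, -3)
magnitude = math.sqrt(sum(c * c for c in V))  # sqrt(16 + 0 + 9)
print(magnitude)  # → 5.0
```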
One may "scale" a vector by multiplying it by a scalar value. Such a product
\big(
between a scalar
and a vector
\vec{V}\big)
a\vec{V} = a(v_x, v_y, v_z)
and defined as
a\vec{V} = (av_x, av_y, av_z ).
Such a product merely multiplies the length by a factor
a
\Big|a\vec{V}\Big| = \sqrt{(av_x)^2 + (av_y)^2 + (av_z)^2} = \sqrt{a^2\big(v_x^2 + v_y^2 + v_z^2\big)} = a \sqrt{v_x^2 + v_y^2 + v_z^2} = a \Big|\vec{V}\Big|.
For \vec V=(3,4,12), compute \Big|3\vec V\Big| in two ways.
The first way: \Big|3\vec V\Big|=\big|3\times(3,4,12)\big|=\big|(9,12,36)\big|=\sqrt{9^2+12^2+36^2}=39.
The second way: \Big|3\vec V\Big|=3\Big|\vec V\Big|=3\big|(3,4,12)\big|=3\times 13=39.\ _\square
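The two-way check above can be sketched as follows (the helper `norm` is illustrative):

```python
import math

def norm(v):
    # Euclidean length of a vector given as a tuple of components.
    return math.sqrt(sum(c * c for c in v))

V = (3, 4, 12)
a = 3
way1 = norm(tuple(a * c for c in V))  # scale first, then measure
way2 = abs(a) * norm(V)               # measure first, then scale by |a|
print(way1, way2)  # → 39.0 39.0
```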
Note: The vector \vec{AB} = \vec{OB} - \vec{OA}, where \vec{OA}, \vec{OB} are the position vectors of A and B, respectively.
The following are various types of vectors:
Null vector: A vector whose initial and terminal points are the same is called a null vector \big(\vec{0}\big) or zero vector. Its magnitude is zero and its direction is indeterminate.
Unit vector: A vector whose magnitude is unity (1 unit) is called a unit vector. If \vec{a} is any vector, then its unit vector is denoted by \hat{a} and is given by \hat{a} = \dfrac{\vec{a}}{|\vec{a}|}.
Equal vector: Two vectors \vec{a} and \vec{b} are said to be equal if they have the same magnitude and direction.
Negative of a vector: A vector whose magnitude is the same as that of a given vector but whose direction is opposite to it is called the negative of that vector. If \vec{a} is any vector, then its negative is denoted by -\vec{a}; if \vec{AB} is any vector, then its negative is given by \vec{BA}. Always remember that \vec{AB} = - \vec{BA}.
Parallel vector: Two vectors are said to be parallel if and only if they have the same support or parallel supports. In other words, if \vec{a} and \vec{b} are any two parallel vectors, then \vec{a} = \alpha \vec{b}, where \alpha is some scalar constant. If the two vectors have the same direction, they are said to be parallel vectors (or like vectors); but if the two vectors are in opposite directions, they are said to be anti-parallel vectors (or unlike vectors).
Co-linear vector: Two vectors \vec{a} and \vec{b} are said to be co-linear if they have the same direction, parallel or anti-parallel.
Co-planar vector: Vectors whose supports are in the same plane or parallel to the same plane are called co-planar vectors. Vectors which are not co-planar are called non-co-planar vectors.
Co-initial vector: Two or more vectors having the same initial point are called co-initial vectors.
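Two of the definitions above, the unit vector and the negative of a vector, can be sketched directly (the helper names are illustrative):

```python
import math

def unit(v):
    # a-hat = a / |a|; undefined for the null vector (division by zero).
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def negative(v):
    # Same magnitude, opposite direction.
    return tuple(-c for c in v)

print(unit((3, 4)))      # → (0.6, 0.8)
print(negative((3, 4)))  # → (-3, -4)
```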
Let a_1,a_2,a_3, \cdots , a_n be vectors and x_1,x_2,x_3, \cdots , x_n be scalars. Then the vector x_1a_1 + x_2a_2 + x_3a_3 + \cdots + x_na_n is called a linear combination of the vectors a_1,a_2,a_3, \cdots , a_n. For example, 2a - b + 3c is a linear combination of the vectors a,b,c.
Three vectors are co-planar if and only if one of them is a linear combination of the other two.
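One standard numerical test for co-planarity, equivalent to the linear-combination criterion for vectors in 3D, is the scalar triple product: a · (b × c) vanishes exactly when a, b, c lie in a common plane. A minimal sketch:

```python
def cross(b, c):
    # Cross product b × c of two 3D vectors.
    return (b[1] * c[2] - b[2] * c[1],
            b[2] * c[0] - b[0] * c[2],
            b[0] * c[1] - b[1] * c[0])

def scalar_triple(a, b, c):
    # a · (b × c): zero exactly when a, b, c are co-planar.
    return sum(x * y for x, y in zip(a, cross(b, c)))

print(scalar_triple((1, 0, 0), (0, 1, 0), (2, 3, 0)))  # → 0  (co-planar)
print(scalar_triple((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # → 1  (not co-planar)
```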
[1] Strictly speaking, in this article we refer to geometric or Euclidean vectors. Generalized vectors need not be multi-dimensional.
Cite as: Vectors. Brilliant.org. Retrieved from https://brilliant.org/wiki/vector-terminology/
|
MR. GLADSTONE CORRECTS MR. SPENCER.
IN reply to Herbert Spencer's last paper on the "Study of Sociology" (in Popular Science Monthly for December, 1873, p. 134), Mr. Gladstone sent the following letter to the editor of the Contemporary Review:
November 3, 1873.
My dear Sir: I observe in the Contemporary Review for October, p. 670, that the following words are quoted from an address of mine at Liverpool:
"Upon the ground of what is termed evolution, God is relieved of the labor of creation: in the name of unchangeable laws, He is discharged from governing the world."
The distinguished writer in the Review says that by these words I have made myself so conspicuously the champion (or exponent) of the anti-scientific view, that the words may be regarded as typical.
To go as directly as may be to my point, I consider this judgment upon my declaration to be founded on an assumption or belief that it contains a condemnation of evolution, and of the doctrine of unchangeable laws. I submit that it contains no such thing. Let me illustrate by saying, What if I wrote as follows:
"Upon the ground of what is termed liberty, flagrant crimes have been committed: and (likewise) in the name of law and order, human rights have been trodden under foot."
I should not by thus writing condemn liberty, or condemn law and order; but condemn only the inferences that men draw, or say they draw, from them. Up to that point the parallel is exact: and I hope it will be seen that Mr. Spencer has inadvertently put upon my words a meaning they do not bear.
Using the parallel thus far for the sake of clearness, I carry it no farther. For while I am ready to give in my adhesion to liberty, and likewise to law and order, on evolution and on unchangeable laws I had rather be excused.
The words with which I think Madame de Staël ends "Corinne," are the best for me: "Je ne veux ni la blâmer, ni l'absoudre." Before I could presume to give an opinion on evolution, or on unchangeable laws, I should wish to know, more clearly and more fully than I yet know, the meaning attached to those phrases by the chief apostles of the doctrines: and very likely, even after accomplishing this preliminary stage, I might find myself insufficiently supplied with the knowledge required to draw the line between true and false.
I have, then, no repugnance to any conclusions whatever, legitimately arising upon well-ascertained facts or well-tested reasonings: and my complaint is that the functions of the Almighty as Creator and Governor of the world are denied upon grounds, which, whatever be the extension given to the phrases I have quoted, appear to me to be utterly and manifestly insufficient to warrant such denial.
I am desirous to liberate myself from a supposition alien, I think, to my whole habit of mind and life. But I do not desire to effect this by the method of controversy; and if Mr. Spencer does not see, or does not think, that he has mistaken the meaning of my words, I have no more darts to throw; and will do myself, indeed, the pleasure of concluding with a frank avowal that his manner of handling what he must naturally consider to be a gross piece of folly is as far as possible from being offensive.
Believe me, most faithfully yours,
W. E. Gladstone.
MR. SPENCER ON THE CORRECTION.
To the second edition of Mr. Spencer's "Study of Sociology," he appends the foregoing letter, and remarks as follows (page 425):
Mr. Gladstone's explanation of his own meaning must, of course, be accepted; and,
|
On the Theory of Relativity (Kaluza) - Wikisource, the free online library
Translation:On the Theory of Relativity (Kaluza)
On the Theory of Relativity (1910)
by Theodor Kaluza, translated from German by Wikisource
In German: Zur Relativitätstheorie, Physikalische Zeitschrift, 11: 977-978, Source
On the Theory of Relativity.
Th. Kaluza (Königsberg/Pr.).
Not read by the author himself due to illness.
Regarding the peculiar position attained by the rotation of a rigid body (in the sense of Born) in the theory of relativity, it seemed to be interesting to investigate two closely related questions, on one side the question concerning the geometry to be established upon a body moving in this way, and then the question concerning the laws of time-comparison which hold at this place.
In accordance with the relativity principle, in general one has to see the geometry of the relevant orthogonal-intersection of the worldline-bundle as the "proper geometry" (at a certain proper time) of a body moving in any way; furthermore one can admit two space-time points as "simultaneous", when they are located upon the same orthogonal-intersection. In the case of rigid translation, these orthogonal-spaces are linear; the proper geometry is Euclidean throughout. It is essentially different in the simplest special case of rigid motion (see Herglotz, Noether), i.e. in the case of rigid rotation. If one remains in
R_{3} for simplicity's sake (a circular disc rotating in its plane around the center, angular velocity = 1), then one has a bundle of coaxial helical lines of the same pitch ("helical bundle"). The helical bundle is not straightly intersected [anorthotom]; consequently we can at first speak neither of a proper geometry in the previous sense nor of a "rest shape" of the rotating disc. One can only speak of a constantly Euclidean "individual geometry" and an "individual shape", related to a certain point of the disc (see the theory of linear complexes). For example, if one has three points 1, 2, 3, and one draws the connecting line 23 in the individual geometry of 1, then 23 in general does not appear as a line in the geometries of 2 and 3.
However, it is possible to formulate the general geometry, which shortly shall also be called proper geometry. For that, one has to state the following requirement: If one considers any point P of the disc, then the lines of the proper geometry which pass through P, shall also be lines of the individual geometry of P. The proper geometry is formed in the helical bundle as follows: the individual helical lines are the points, and the lines are one-dimensional manifolds of helical lines which are orthogonally intersected by one and the same line. The uniqueness postulate of the line requires special attention at this place. If one intersects (perpendicular to the axis) the helical bundle with the plane (individual geometry of the center), then the lines of the proper geometry represent themselves in this plane as spirals of type
\varphi -\varphi _{0}=\arccos {\frac {r_{0}}{r}}\pm r_{0}{\sqrt {r^{2}-r_{0}^{2}}}

(where r and \varphi are polar coordinates; depending on the reality relations, the upper or lower sign holds).
The task of laying a "proper line" (i.e. one of the previous spirals) through two points of the plane leads to the equation for \Psi

\Phi =\Psi +R\sin \Psi ,

which is known from astronomy as "Kepler's equation". The question concerning the uniqueness of the solutions is to be affirmed for R<1 (simple geometric interpretation). Since the radius of a rotating disc must not exceed a certain finite limit due to the confinement to subluminal velocities, one can conclude that the uniqueness postulate of the line is indeed satisfied (which would not generally be the case if superluminal velocities were admitted). A closer examination then shows the proper geometry of the rotating disc to be a non-Euclidean, specifically Lobachevskian, geometry.
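As an illustration of the uniqueness claim, the Kepler-type equation Φ = Ψ + R sin Ψ can be solved numerically; the sketch below (my own, not part of Kaluza's paper) uses Newton's method and relies on the derivative 1 + R cos Ψ staying positive for R < 1, which is precisely what makes the solution unique:

```python
import math

def solve_kepler(phi, R, tol=1e-12, max_iter=100):
    """Solve Phi = Psi + R*sin(Psi) for Psi by Newton's method.

    For R < 1 the left-hand side is strictly increasing in Psi
    (its derivative 1 + R*cos(Psi) >= 1 - R > 0), so the root
    exists and is unique -- the analogue of Kaluza's uniqueness
    claim for the "proper lines" of the rotating disc.
    """
    psi = phi  # reasonable starting guess
    for _ in range(max_iter):
        f = psi + R * math.sin(psi) - phi
        fprime = 1.0 + R * math.cos(psi)  # positive whenever R < 1
        step = f / fprime
        psi -= step
        if abs(step) < tol:
            break
    return psi

# The residual of the solved equation should vanish at the root.
psi = solve_kepler(2.0, 0.5)
residual = psi + 0.5 * math.sin(psi) - 2.0
```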
As arc-length one has to define

\int {\sqrt {1+{\frac {r^{2}}{1\pm r^{2}}}\left({\frac {d\varphi }{dr}}\right)^{2}}}\,dr ("proper arc"),

and as area measure

\int {\frac {r^{2}}{\sqrt {1\pm r^{2}}}}\,d\varphi ("proper content").
The second of the questions posed at the beginning, gives rise to the following considerations:
If a time-comparison is done between two disc points, by comparing the two points directly on one occasion, and on another occasion by inclusion of an intermediary point located outside the connecting line of both points, then the results of the two time-comparisons will mutually deviate: upon the rotating disc, time-comparison appears to depend on the path. If a point is "differentially" compared with itself along a closed curve, then the point indicates a time-difference with respect to itself ("error of closure"). A more detailed investigation gives the magnitude of the error of closure (measured in proper time) as equal to double the proper-content of the time-comparison curve. From the existence of the error of closure follows the theoretical possibility of a demonstration of Earth's rotation by purely optical or electromagnetic experiments. (No contradiction with the theory of relativity.) Yet the idea can probably not be realized practically, since the effect is, in the best case, only about 2\cdot 10^{-7}.
|
Clause (logic)
In logic, a clause is a propositional formula formed from a finite collection of literals (atoms or their negations) and logical connectives. A clause is true either whenever at least one of the literals that form it is true (a disjunctive clause, the most common use of the term), or when all of the literals that form it are true (a conjunctive clause, a less common use of the term). That is, it is a finite disjunction[1] or conjunction of literals, depending on the context. Clauses are usually written as follows, where the symbols
l_{i} are literals:

l_{1}\vee \cdots \vee l_{n}
Empty clauses
A clause can be empty (defined from an empty set of literals). The empty clause is denoted by various symbols such as \emptyset, \bot, or \Box. The truth evaluation of an empty disjunctive clause is always false. This is justified by considering that false is the neutral element of the monoid (\{false,true\},\vee). The truth evaluation of an empty conjunctive clause is always true. This is related to the concept of a vacuous truth.
Implicative form
Every nonempty (disjunctive) clause is logically equivalent to an implication of a head from a body, where the head is an arbitrary literal of the clause and the body is the conjunction of the negations of the other literals. That is, if a truth assignment causes a clause to be true, and none of the literals of the body satisfy the clause, then the head must also be true.
This equivalence is commonly used in logic programming, where clauses are usually written as an implication in this form. More generally, the head may be a disjunction of literals. If b_{1},\ldots ,b_{m} are the literals in the body of a clause and h_{1},\ldots ,h_{n} are those of its head, the clause is usually written as follows:

h_{1},\ldots ,h_{n}\leftarrow b_{1},\ldots ,b_{m}.
If n = 1 and m = 0, the clause is called a (Prolog) fact.
If n = 1 and m > 0, the clause is called a (Prolog) rule.
If n = 0 and m > 0, the clause is called a (Prolog) query.
If n > 1, the clause is no longer Horn.
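The implicative reading and the fact/rule/query classification above can be sketched in Python; the encoding of a clause as a set of signed integers is an assumption of this sketch, not from the article:

```python
# A disjunctive clause as a set of signed integers: a positive integer i
# stands for the atom x_i, a negative integer -i for its negation.
# E.g. {1, -2, 3} represents x1 OR (NOT x2) OR x3.

def implicative_form(clause):
    """Split a disjunctive clause into (head, body) so it reads head <- body:
    the head collects the positive literals, the body the atoms that
    appear negated (their negations move across the implication)."""
    head = sorted(l for l in clause if l > 0)
    body = sorted(-l for l in clause if l < 0)
    return head, body

def classify(clause):
    """Prolog-style classification by head size n and body size m."""
    head, body = implicative_form(clause)
    n, m = len(head), len(body)
    if n == 1 and m == 0:
        return "fact"
    if n == 1 and m > 0:
        return "rule"
    if n == 0 and m > 0:
        return "query"
    return "non-Horn" if n > 1 else "empty"
```

For example, `classify({3, -1, -2})` reads the clause x3 OR NOT x1 OR NOT x2 as the rule x3 <- x1, x2.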
Conjunctive normal form
Disjunctive normal form
Horn clause
^ Chang, Chin-Liang; Richard Char-Tung Lee (1973). Symbolic Logic and Mechanical Theorem Proving. Academic Press. p. 48. ISBN 0-12-170350-9.
|
cutout - Maple Help
cut windows in POLYGONS
cutout(p, r)
cutout([p1, p2, ... ], r)
r - cut ratio, ranging from 0 to 1
For each polygon in a given POLYGONS structure, this command cuts out a smaller polygon from the middle of the given one. The polygon being cut out is similar to the original and its center and orientation are the same as the original. The ratio of similarity is specified with the parameter r. This command complements the cutin command.
The cutout command forces a display style of PATCHNOGRID. Other options to style are ignored.
with(plottools):
with(plots):
display(cutout(octahedron([1, 1, 1]), 1/3))
p := convert(plot3d(sin(x*y), x = -2 .. 2, y = -1 .. 1, grid = [4, 4]), POLYGONS):
display(cutout(p, 1/3), axes = frame, orientation = [-30, 70])
p := display(cutout(tetrahedron([0, 0, 0]), 3/4)):
a := [[0, Pi, 0], [0, 0, Pi], [Pi, 0, 0], [Pi, Pi, 0], [Pi, 0, Pi], [0, Pi, Pi], [Pi, Pi, Pi], [0, 0, 0]]:
display(seq(rotate(p, op(i)), i = a), scaling = constrained, shading = zgrayscale, lightmodel = light2)
|
Physics | James's Knowledge Graph
Fundamental Terms of Physics
Energy, Work, and Displacement
Energy is the ability to do work. Work is the energy transferred to an object by a force that causes a displacement. A displacement is the distance moved in a specific direction, i.e. the shortest distance from an initial to a final position. A simple way to think of work is as the amount of energy given to or taken away from an object.
Joules, Force, and Newtons
Energy and work are most commonly measured in joules (J). One joule is equal to the energy required for one newton (N) of force to perform a displacement of one meter (m):

J = Nm
Force is an influence on an object (a "push" or a "pull") that will change the motion of the object if unopposed by another force. Force is measured in newtons. One newton is the force required to accelerate one kilogram (kg) by one meter per second squared, or

1N = \frac{1kg*1m}{1s^2}

The equation for force can also be written as the product of mass (M) and acceleration (A):

F = MA
Video: How Much Energy is One Joule?
Mass, Velocity, and Acceleration
Mass is the amount of matter that makes up an object. Mass is measured in kilograms (kg). A kilogram was originally defined as the mass of one litre of water, or 1,000 cubic centimeters of water (thus 1g is the mass of 1cm^3 of water). It's still useful to think of the kilogram as defined this way; however, the formal definition of the kilogram is more complicated.
Velocity is the distance (d) traveled in a given direction over time (t):

V = d/t

It is often expressed as meters per second (m/s).
Acceleration is the rate of change in velocity (V) over time (T):

A = \Delta{V}/\Delta{T}

In other words, acceleration is how much an object is speeding up, slowing down, or changing direction.
Video: Acceleration, One-Dimensional Motion
Kinetic energy is the energy an object possesses by being in motion (moving)
Potential energy is stored energy an object has due to its position or configuration, available to be converted into kinetic energy
Gravitational potential energy is the energy an object has by being at some height; it is converted to kinetic energy when the object is dropped. The acceleration due to gravity on Earth (G) is roughly constant at 9.8m/s^2, so to calculate the gravitational potential energy of an object we multiply its mass (M), the gravitational acceleration (G), and its height (H):

J = MGH
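The relations above (J = Nm and J = MGH) can be tied together in a short Python sketch; this is my own illustration with hypothetical numbers, not part of the original notes:

```python
# Illustration of the relations in the text: work J = N*m and
# gravitational potential energy J = M*G*H (names are this sketch's own).
G = 9.8  # acceleration due to gravity on Earth, m/s^2

def work(force_newtons, displacement_m):
    """Work in joules: one newton acting over one meter is one joule."""
    return force_newtons * displacement_m

def gravitational_pe(mass_kg, height_m):
    """J = M*G*H: the weight M*G acting over the height H."""
    return mass_kg * G * height_m

# A 2 kg object lifted 5 m stores 2 * 9.8 * 5 = 98 J of potential
# energy, which becomes kinetic energy when the object is dropped.
energy = gravitational_pe(2.0, 5.0)
```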
Physics Course on Khan Academy
Deeper Knowledge on Physics
The basics of angular motion
Learn about mechanical advantage
Richard Feynman: Lectures and Other Works
Information about the famous scientist, Richard Feynman
Learn about torque: Force that causes an object to rotate around an axis.
Broader Topics Related to Physics
Physics Knowledge Graph
|
Integer Equations - Stars and Bars | Brilliant Math & Science Wiki
A frequently occurring problem in combinatorics arises when counting the number of ways to group identical objects, such as placing indistinguishable balls into labelled urns. We discuss a combinatorial counting technique known as stars and bars or balls and urns to solve these problems, where the indistinguishable objects are represented by stars and the separation into groups is represented by bars.
This allows us to transform the set to be counted into another, which is easier to count. As we have a bijection, these sets have the same size.
Stars and Bars Theorem
Consider the equation a+b+c+d=12, where a, b, c, d are non-negative integers. We're looking for the number of solutions this equation has. At first, it's not exactly obvious how we can approach this problem. One way is brute force: fixing possibilities for one variable and analyzing the result for the other variables. But we want something nicer, something really elegant. We use the above-noted strategy: transforming a set into another by exhibiting a bijection so that the second set is easier to count.
Consider 15 places, where we put 12 stars and 3 bars, one item per place. The key idea is that such a configuration stands for a solution to our equation. For example,

\{*|*****|****|**\}

stands for the solution 1+5+4+2=12, because we have 1 star, then a bar (standing for a plus sign), then 5 stars, again a bar, and similarly 4 and then 2 stars follow. Likewise,

\{|*****|***|****\}

denotes the solution 0+5+3+4=12, because we have no star at first, then a bar, and so on by the same reasoning.
We see that any such configuration stands for a solution to the equation, and any solution to the equation can be converted to such a stars-bars series. So we've established a bijection between the solutions to our equation and the configurations of 12 stars and 3 bars. Our problem thus reduces to: in how many ways can we place 12 stars and 3 bars in 15 places? This is the same as fixing 3 places out of 15 places and filling the rest with stars. We can do this in, of course,

\dbinom{15}{3}

ways. So the number of solutions to our equation is

\dbinom{15}{3}=455.
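The count above can be double-checked by brute force; a short Python sketch (not part of the original wiki page):

```python
from math import comb

# Count solutions to a + b + c + d = 12 in non-negative integers two ways:
# brute-force enumeration, and the stars-and-bars formula C(12+4-1, 3).
total = 12

# For each (a, b, c) with a + b + c <= 12, the value d = 12 - a - b - c
# is forced, so we only need to enumerate the first three variables.
brute = sum(1
            for a in range(total + 1)
            for b in range(total + 1 - a)
            for c in range(total + 1 - a - b))

stars_and_bars = comb(total + 4 - 1, 4 - 1)  # C(15, 3)
```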
The stars and bars/balls and urns technique is as stated below.
The number of ways to place n indistinguishable balls into k labelled urns is

\binom{n+k-1}{n} = \binom{n+k-1}{k-1}. \ _\square
Here is the proof of the above theorem.
We represent the n balls by n adjacent stars and consider inserting k-1 bars in between the stars to separate them into k groups. For example, for n=12 and k=5, the following is a representation of a grouping of 12 indistinguishable balls in 5 urns, where the sizes of urns 1, 2, 3, 4, and 5 are 2, 4, 0, 3, and 3, respectively:

* * | * * * * | \, | * * * | * * *
Note that in the grouping, there may be empty urns. There are a total of n+k-1 positions, of which n are stars and k-1 are bars. Thus, the number of ways to place n indistinguishable balls into k labelled urns is the same as the number of ways of choosing n positions among the n+k-1 spaces for the stars, with all remaining positions taken as bars. The number of ways this can be done is

\binom{n+k-1}{n}. \ _\square
\binom{n+k-1}{n} = \binom{n+k-1}{k-1} can be interpreted as the number of ways to instead choose the positions for the k-1 bars and take all remaining positions to be stars.
How many ordered sets of non-negative integers (a, b, c, d) are there such that a + b + c + d = 10?
We first create a bijection between the solutions to a+b+c+d = 10 and the sequences of length 13 consisting of ten 1's and three 0's. In other words, we will associate each solution with a unique sequence, and vice versa.
Given a set of 4 integers (a, b, c, d), we create the sequence that starts with a 1's, then has a 0, then has b 1's, then a 0, then c 1's, then a 0, and finally d 1's. For example, if (a, b, c, d) = (1, 4, 0, 2), then the associated sequence is 1 0 1 1 1 1 0 0 1 1. Now, if we add the restriction that a + b + c + d = 10, the associated sequence will consist of ten 1's (from a, b, c, d) and three 0's (from our manual insert), and thus has total length 13.
Conversely, given a sequence of length 13 that consists of ten 1's and three 0's, let a be the length of the initial string of 1's (before the first 0), b be the length of the next string of 1's (between the first and second 0), c be the length of the third string of 1's (between the second and third 0), and d be the length of the last string of 1's (after the third 0). These values give a solution to the equation a + b + c + d = 10.
This construction associates each solution with a unique sequence, and vice versa, and hence gives a bijection.
Now that we have a bijection, the problem is equivalent to counting the number of sequences of length 13 that consist of ten 1's and three 0's, which we count using the stars and bars technique. There are 13 positions from which we choose 10 positions as 1's and let the remaining positions be 0's. By stars and bars, there are

{13 \choose 10} = {13 \choose 3} = 286

different choices. _\square
Note: Another approach for solving this problem is the method of generating functions.
This section contains examples followed by problems to try.
Find the number of non-negative integer solutions to

a+b+c+d+e+f=23.

There are 6 variables, thus 5 plus signs (bars). So by stars and bars, the answer is

\dbinom{23+5}{5}=\dbinom{28}{5}=98280. \ _\square
How many ways are there to choose a 5-letter word from the 26-letter English alphabet with replacement, where words that are anagrams are considered the same?
Observe that since anagrams are considered the same, the feature of interest is how many times each letter appears in the word (ignoring the order in which the letters appear). To translate this into a stars and bars problem, we consider writing 5 as a sum of 26 integers c_A, c_B, \ldots, c_Y, c_Z, where c_A is the number of times letter A is chosen, c_B the number of times letter B is chosen, etc. This is a stars and bars problem with n = 5 stars and k = 26 urns. Then by stars and bars, the number of 5-letter words is

\binom{26 +5 -1}{5} = \binom{30}{25} = 142506. \ _\square
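A direct enumeration in Python (my own check, not from the article) confirms the count:

```python
from itertools import combinations_with_replacement
from math import comb

# A 5-letter word up to anagram is a multiset of 5 letters drawn from a
# 26-letter alphabet with replacement.
ALPHABET = 26
WORD_LEN = 5

# Enumerate every multiset explicitly and count them.
multisets = sum(1 for _ in combinations_with_replacement(range(ALPHABET), WORD_LEN))

# Stars and bars: C(26 + 5 - 1, 5) = C(30, 5).
formula = comb(ALPHABET + WORD_LEN - 1, WORD_LEN)
```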
For some problems, the stars and bars technique does not apply immediately. In these instances, the solutions to the problem must first be mapped to solutions of another problem which can then be solved by stars and bars. We illustrate one such problem in the following example:
How many ordered sets of positive integers (a_1, a_2, a_3, a_4, a_5, a_6) are there such that a_i \geq i for each i = 1,2, \ldots, 6 and

a_1 + a_2 + a_3 + a_4 + a_5 + a_6 \leq 100 ?

Because of the inequality, this problem does not map directly to the stars and bars framework. To proceed, consider a bijection between the integers (a_1, a_2, a_3, a_4, a_5, a_6) satisfying the conditions and the integers (a_1, a_2, a_3, a_4, a_5, a_6, c) satisfying a_i \geq i, c \geq 0, and

a_1 + a_2 + a_3 + a_4 + a_5 + a_6 + c = 100 .
Now, by setting b_i = a_i - i for each i = 1,2, \ldots, 6, we equivalently count the sets of integers (b_1, b_2, b_3, b_4, b_5, b_6, c) satisfying b_i \geq 0, c \geq 0, and

b_1 + b_2 + b_3 + b_4 + b_5 + b_6 + c = 100 - (1 + 2 + 3 + 4 + 5 + 6) = 79.

By stars and bars, this is equal to

\binom{79+7-1}{79} = \binom{85}{79}. \ _\square
Find the number of ordered triples of positive integers (a,b,c) such that a+b+c=8.

Find the number of non-negative integer solutions of 3x + y + z = 24.

Find the number of positive integer solutions of the equation x + y + z = 12.

Find the number of non-negative integers x_1, x_2, \ldots, x_5 satisfying x_1 + x_2 + x_3 + x_4 + x_5 = 17.
Cite as: Integer Equations - Stars and Bars. Brilliant.org. Retrieved from https://brilliant.org/wiki/integer-equations-star-and-bars/
|
Complete and incomplete elliptic integrals of the second kind - MATLAB ellipticE - MathWorks Benelux
Find Complete Elliptic Integrals of Second Kind
Differentiate Elliptic Integrals of Second Kind
Elliptic Integral for Matrix Input
Plot Complete and Incomplete Elliptic Integrals of Second Kind
Complete and incomplete elliptic integrals of the second kind
ellipticE(m)
ellipticE(phi,m)
ellipticE(m) returns the complete elliptic integral of the second kind.
ellipticE(phi,m) returns the incomplete elliptic integral of the second kind.
Compute the complete elliptic integrals of the second kind for these numbers. Because these numbers are not symbolic objects, you get floating-point results.
s = [ellipticE(-10.5), ellipticE(-pi/4),...
ellipticE(0), ellipticE(1)]
Compute the complete elliptic integral of the second kind for the same numbers converted to symbolic objects. For most symbolic (exact) numbers, ellipticE returns unresolved symbolic calls.
s = [ellipticE(sym(-10.5)), ellipticE(sym(-pi/4)),...
ellipticE(sym(0)), ellipticE(sym(1))]
[ ellipticE(-21/2), ellipticE(-pi/4), pi/2, 1]
[ 3.70961391, 1.844349247, 1.570796327, 1.0]
Differentiate these expressions involving elliptic integrals of the second kind. ellipticK and ellipticF represent the complete and incomplete elliptic integrals of the first kind, respectively.
diff(ellipticE(pi/3, m))
diff(ellipticE(m^2), m, 2)
ellipticE(pi/3, m)/(2*m) - ellipticF(pi/3, m)/(2*m)
2*m*((ellipticE(m^2)/(2*m^2) -...
ellipticK(m^2)/(2*m^2))/m - ellipticE(m^2)/m^3 +...
ellipticK(m^2)/m^3 + (ellipticK(m^2)/m +...
ellipticE(m^2)/(m*(m^2 - 1)))/(2*m^2)) +...
ellipticE(m^2)/m^2 - ellipticK(m^2)/m^2
Call ellipticE for this symbolic matrix. When the input argument is a matrix, ellipticE computes the complete elliptic integral of the second kind for each element.
ellipticE(sym([1/3 1; 1/2 0]))
[ ellipticE(1/3), 1]
[ ellipticE(1/2), pi/2]
Plot the incomplete elliptic integrals ellipticE(phi,m) for phi = pi/4 and phi = pi/3. Also plot the complete elliptic integral ellipticE(m).
fplot([ellipticE(pi/4,m) ellipticE(pi/3,m) ellipticE(m)])
title('Elliptic integrals of the second kind')
legend('E(\pi/4|m)','E(\pi/3|m)','E(m)','Location','Best')
The incomplete elliptic integral of the second kind is defined as follows:

E(\phi\,|\,m) = \int_{0}^{\phi} \sqrt{1 - m\sin^{2}\theta}\,d\theta

The complete elliptic integral of the second kind is defined as follows:

E(m) = E\left(\tfrac{\pi}{2}\,\middle|\,m\right) = \int_{0}^{\pi/2} \sqrt{1 - m\sin^{2}\theta}\,d\theta
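As a sanity check on these definitions, here is a rough numerical sketch in Python (plain trapezoidal quadrature, not MATLAB's ellipticE) reproducing the special values E(0) = π/2 and E(1) = 1 seen in the earlier output:

```python
import math

def elliptic_e_inc(phi, m, n=10000):
    """E(phi|m) = integral from 0 to phi of sqrt(1 - m*sin(t)^2) dt,
    approximated with the composite trapezoidal rule on n panels."""
    f = lambda t: math.sqrt(1.0 - m * math.sin(t) ** 2)
    h = phi / n
    return h * (0.5 * f(0.0)
                + sum(f(i * h) for i in range(1, n))
                + 0.5 * f(phi))

def elliptic_e(m):
    """Complete integral: E(m) = E(pi/2 | m)."""
    return elliptic_e_inc(math.pi / 2, m)
```

With this, `elliptic_e(0)` approximates π/2 ≈ 1.570796 and `elliptic_e(1)` approximates 1.0, matching the symbolic results above.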
ellipticE returns floating-point results for numeric arguments that are not symbolic objects.
For most symbolic (exact) numbers, ellipticE returns unresolved symbolic calls. You can approximate such results with floating-point numbers using vpa.
If m is a vector or a matrix, then ellipticE(m) returns the complete elliptic integral of the second kind, evaluated for each element of m.
At least one input argument must be a scalar or both arguments must be vectors or matrices of the same size. If one input argument is a scalar and the other one is a vector or a matrix, then ellipticE expands the scalar into a vector or matrix of the same size as the other argument with all elements equal to that scalar.
ellipticE(pi/2, m) = ellipticE(m).
You can use ellipke to compute elliptic integrals of the first and second kinds in one function call.
ellipke | ellipticCE | ellipticCK | ellipticCPi | ellipticF | ellipticK | ellipticPi | vpa
|
With the discovery of quantum mechanics via the quantization of the emission lines of the hydrogen atom, Bohr updated Rutherford's model so that the electrons lay in orbits of integral angular momentum in units of \hbar. All of the elements in the periodic table that comprise all matter were thus ordered by the energies of their constituent electrons.
The reason for this divergence was that classical statistical mechanics assumed every classical electromagnetic mode to have the same energy, dictated by the temperature. Planck's solution to this problem was to assume that radiation was quantized in discrete packets with energy E = h \nu, with \nu the radiation frequency and h some constant. Einstein later interpreted this result as a demonstration of the particle nature of light, since the quantization of light energy into discrete packets suggests that each packet is a light particle or photon.
A few years after Planck, Einstein's interpretation of a different experiment, the photoelectric effect, would further substantiate this claim. The photoelectric effect refers to the fact that high-frequency light causes emission of electrons from metals regardless of how low the intensity of the light may be. Treating light as quantized packets or photons and using E = h \nu explained this effect, because the intensity of light specifies only the number of photons, not their energy. Thus a small number of photons at high energy would still cause electronic emission.
\frac{1}{c^2} \frac{\partial^2 E}{\partial t^2} = \frac{\partial^2 E}{\partial x^2}, \qquad \frac{1}{c^2} \frac{\partial^2 B}{\partial t^2} = \frac{\partial^2 B}{\partial x^2},

i.e., the wave equation in each of the electric and magnetic fields. Maxwell thus established light as an electromagnetic wave, since the constant c = \frac{1}{\sqrt{\mu_0 \epsilon_0}} in the above equation was numerically exactly the speed of light in vacuum.
\lambda = \frac{h}{p}.

This relation says that matter particles with momentum p could be equally well described as waves of wavelength \lambda, with a proportionality constant h equal to Planck's constant. The equation was motivated by the corresponding equation for light, E = \frac{hc}{\lambda}, which follows from E = pc for light. Observing the wave-particle duality of light, de Broglie suggested that matter ought to obey the same relation. This hypothesis was experimentally justified in the following years with interference and diffraction experiments performed using electrons.
For a non-relativistic particle, which of the following gives the correct relationship between the de Broglie wavelength \lambda_{dB} and \lambda_c = \frac{h}{mc}, which is also a useful wavelength in quantum mechanics?

\lambda_{dB} > \lambda_c
\lambda_{dB} < \lambda_c
\lambda_{dB} = \lambda_c
Not enough information.

Compute the de Broglie wavelength of a massive particle with mass 1 \text{ g} traveling at 1 \text{ m}/\text{s}.

6.63 \times 10^{-28} \text{ m}
6.63 \times 10^{-13} \text{ m}
6.63 \times 10^{-34} \text{ m}
6.63 \times 10^{-31} \text{ m}
Even when fired one at a time, as in an experiment performed much later in the twentieth century, one finds the same results for electrons in interference and diffraction experiments as predicted by wave mechanics. In recent decades the wave-particle duality of matter has been confirmed for even larger objects such as buckyballs, which are C_{60} carbon allotropes.
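The arithmetic behind the 1 g example can be spelled out directly from λ = h/p; a small Python sketch (constant rounded to four significant figures):

```python
# de Broglie wavelength lambda = h / p for a non-relativistic particle.
H_PLANCK = 6.626e-34  # Planck's constant, J*s (rounded)

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / (m * v), using the non-relativistic momentum p = m*v."""
    return H_PLANCK / (mass_kg * speed_m_s)

# A 1 g mass (1e-3 kg) at 1 m/s gives roughly 6.63e-31 m -- far smaller
# than any atom, which is why macroscopic objects show no wave behavior.
lam = de_broglie_wavelength(1e-3, 1.0)
```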
\sigma_x \sigma_p \geq \frac{\hbar}{2}.
|
Magnetometer sensor parameters - MATLAB - MathWorks
Generate Magnetometer Data from Stationary Inputs
Magnetometer sensor parameters
The magparams class creates a magnetometer sensor parameters object. You can use this object to model a magnetometer when simulating an IMU with imuSensor. See the Algorithms section of imuSensor for details of magparams modeling.
params = magparams
params = magparams(Name,Value)
params = magparams returns an ideal magnetometer sensor parameters object with default values.
params = magparams(Name,Value) configures magparams object properties using one or more Name,Value pair arguments. Name is a property name and Value is the corresponding value. Name must appear inside single quotes (''). You can specify several name-value pair arguments in any order as Name1,Value1,...,NameN,ValueN. Any unspecified properties take default values.
MeasurementRange — Maximum sensor reading (μT)
Maximum sensor reading in μT, specified as a real positive scalar.
Resolution — Resolution of sensor measurements (μT/LSB)
Resolution of sensor measurements in μT/LSB, specified as a real nonnegative scalar. Here, LSB is the acronym for least significant bit.
ConstantBias — Constant sensor offset bias (μT)
Constant sensor offset bias in μT, specified as a real scalar or 3-element row vector. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
v_{measure} = \frac{1}{100} M\, v_{true} = \frac{1}{100} \left[\begin{array}{ccc} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{array}\right] v_{true}
NoiseDensity — Power spectral density of sensor noise (μT/√Hz)
Power spectral density of sensor noise in μT/√Hz, specified as a real scalar or 3-element row vector. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
BiasInstability — Instability of the bias offset (μT)
Instability of the bias offset in μT, specified as a real scalar or 3-element row vector. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
RandomWalk — Integrated white noise of sensor (μT/√Hz)
Integrated white noise of sensor in (μT/√Hz), specified as a real scalar or 3-element row vector. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
TemperatureBias — Sensor bias from temperature (μT/℃)
Sensor bias from temperature in (μT/℃), specified as a real scalar or 3-element row vector. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
TemperatureScaleFactor — Scale factor error from temperature (%/℃)
Scale factor error from temperature in (%/℃), specified as a real scalar or 3-element row vector with values ranging from 0 to 100. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
Generate magnetometer data for an imuSensor object from stationary inputs.
Generate a magnetometer parameter object with a maximum sensor reading of 1200 μT and a resolution of 0.1 μT/LSB. The constant offset bias is 1 μT. The sensor has a power spectral density of [0.6 0.6 0.9]/√100 μT/√Hz. The bias from temperature is [0.8 0.8 2.4] μT/°C. The scale factor error from temperature is 0.1 %/°C.
params = magparams('MeasurementRange',1200,'Resolution',0.1,'ConstantBias',1,'NoiseDensity',[0.6 0.6 0.9]/sqrt(100),'TemperatureBias',[0.8 0.8 2.4],'TemperatureScaleFactor',0.1);
Use a sample rate of 100 Hz spaced out over 1000 samples. Create the imuSensor object using the magnetometer parameter object.
Fs = 100;
imu = imuSensor('accel-mag','SampleRate', Fs, 'Magnetometer', params);
Generate magnetometer data from the imuSensor object.
[~, magData] = imu(acc, angvel, orient);
Plot the resultant magnetometer data.
plot(t, magData)
ylabel('\mu T')
accelparams | gyroparams | imuSensor
|
Too Much Sasss – John Siwicki
For such a long time, I wasn't buying into this. I can write CSS just fine; I didn't need another layer on top of that, another step in my workflow. I was also worried that this would be just a small movement and then it would be "hot" again to write vanilla CSS, which is what is going on with JavaScript. I feel like I just read four posts titled "You don't need jQuery anymore."
But, blog post after blog post got me to finally give it a chance. I started with some very basic syntax. I just wanted to dip my toes in and see how I liked it. Within about 30 seconds, I saw and felt all this power that I was missing. Yes, SASS does compile to CSS, but this layer will give you more power and will clean up your code and make your work easier to scale and easier to maintain.
I want to walk through some small tweaks that you can implement today: some ways to get it into your workflow, even if your development environment is not ready for it. I'll help you get comfortable with these basic SASS concepts to get your feet wet and get you looking at your CSS a little differently.

Nesting
When I first read about nesting, I got upset. I thought to myself: “This is dumb. Why would I ever use this? It would just mess up my CSS.” Literally, the nav project I worked on looked like this:
.nav ul { } .nav ul li {} .nav ul li a{}
That went on for easily 50 lines, but with Sass it can be taken care of with just a small tweak.
.nav { ul {} ul li {} ul li a {} }
It makes your file so much more readable, and makes it easier to follow the story that is unfolding in your stylesheet. But you need to be careful of nesting too deep: if you go more than three levels deep, you are going to have a bad time. If you want to go deeper than the span in the example below, it might be worth taking a step back and looking at your markup. That is going to be a hyper-specific rule, and you might be able to work around it.
.nav {
  ul { }
  ul li {
    span { }
  }
}
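For reference, here is roughly what a Sass compiler emits for a nested block like the one above. This is a hand-written sketch, not the output of a specific compiler run, and the `color: red` declaration is just a placeholder (Sass drops rules with empty bodies from its output):

```scss
/* Nested source */
.nav {
  ul {
    li {
      span { color: red; }
    }
  }
}

/* Compiles to plain descendant selectors: */
.nav ul li span { color: red; }
```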
$font-stack: Helvetica, sans-serif;
$white-main: #FF0;
$link-color: #CACACA;
SASS uses $ to set a variable. The best way to think about a variable is as something that is stored and can be accessed throughout your document.
Variables allow something like the snippet below to happen. We have all had that project where you have to change your color scheme, or handle some last-second updates. Set variables for your colors, fonts, and anything else that might change over the course of a project, and updating them is as simple as changing one line.
.nav { color: $link-color; }
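As a sketch of the substitution at work (assuming the $link-color value defined earlier), the compiled output simply inlines the variable's value:

```scss
$link-color: #CACACA;

/* Source */
.nav { color: $link-color; }

/* Compiled CSS */
.nav { color: #CACACA; }
```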
The agency I work for has a very specific LAMP stack that runs a very customized content management system. We really can't roll out a full SASS workflow without a major rewrite of the system. But I found myself using SASS every day. There are a number of tools that make this easy.

Codepen
This is for when you are building a little component for an existing page or just need to make a quick mockup. Codepen has become an essential part of my mockup process. Codepen also has great SASS support and will let you pull in Compass which, as stated on its homepage, is "an open source CSS authoring framework"; it allows you to pull a lot of goodies into your SASS file.
Codepen will compile your SASS into CSS, let you pull in Compass, and really expand what your CSS can do, all in the browser. It will become an essential tool early in the learning process; it is the best way to jump in and start trying some of these SASS rules.

Apps
There are a number of apps that will watch your local files and compile SASS to CSS. My favorite is Prepros. It is cross-platform, with both Windows and Mac apps, and the options are endless. Prepros compiles not only SCSS but also CoffeeScript, and even Haml and Markdown. It is free to try, and the pro version is about 30 bucks; what you get makes it well worth the price. It changed my workflow forever. The live refresh and Sass error reporting make it one hard thing to give up.
|
Congruence modulo n is written

a ≡ b (mod n).

The parentheses mean that (mod n) applies to the entire equation, not just to the right-hand side (here b). This notation is not to be confused with the notation b mod n (without parentheses), which refers to the modulo operation: b mod n denotes the unique integer a such that 0 ≤ a < n and a ≡ b (mod n) (i.e., the remainder of b when divided by n).

The congruence relation may be rewritten as

a = kn + b,

explicitly showing its relationship with Euclidean division. Equivalently, if a and b leave the same remainder r on division by n, say

a = pn + r and b = qn + r,

then subtracting the two expressions gives

a − b = kn, where k = p − q.

For example, 38 ≡ 14 (mod 12), since 38 − 14 = 24 is a multiple of 12, and likewise

2 ≡ −3 (mod 5)
−8 ≡ 7 (mod 5)
−3 ≡ −8 (mod 5).
The multiplicative inverse x ≡ a⁻¹ (mod n) may be efficiently computed by solving Bézout's equation

ax + ny = 1

for integers x, y using the extended Euclidean algorithm. By Euler's criterion, for an odd prime p, a number a coprime to p is a quadratic residue modulo p exactly when

a^((p−1)/2) ≡ 1 (mod p).
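The modular inverse computation described above can be sketched in a few lines. The function names here are illustrative, not from a particular library:

```python
# Modular inverse via the extended Euclidean algorithm,
# solving Bezout's equation a*x + n*y = 1 for x.

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    """Return x with (a * x) % n == 1, or raise if gcd(a, n) != 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {n}")
    return x % n

print(mod_inverse(3, 7))   # 5, since 3 * 5 = 15 ≡ 1 (mod 7)
```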
The set of all congruence classes of the integers for a modulus n is called the ring of integers modulo n,[6] and is denoted Z/nZ, Z/n, or Z_n.[7] The notation Z_n is, however, not recommended because it can be confused with the set of n-adic integers. The ring Z/nZ is fundamental to various branches of mathematics (see § Applications below).

Its elements are the residue classes:

\mathbb{Z}/n\mathbb{Z} = \{ \overline{a}_n \mid a \in \mathbb{Z} \} = \{ \overline{0}_n, \overline{1}_n, \overline{2}_n, \ldots, \overline{n-1}_n \}.

(When n = 0, Z/0Z is not an empty set; rather, it is isomorphic to Z, since \overline{a}_0 = \{a\}.)
We define addition, subtraction, and multiplication on Z/nZ by the following rules:

\overline{a}_n + \overline{b}_n = \overline{(a+b)}_n
\overline{a}_n - \overline{b}_n = \overline{(a-b)}_n
\overline{a}_n \cdot \overline{b}_n = \overline{(ab)}_n.

With these operations, Z/nZ becomes a commutative ring. For example, in the ring Z/24Z one has

\overline{12}_{24} + \overline{21}_{24} = \overline{33}_{24} = \overline{9}_{24}.
The notation Z/nZ is used because this is the quotient ring of Z by the ideal nZ, the set containing all integers divisible by n, where 0Z is the singleton set {0}. Thus Z/nZ is a field when nZ is a maximal ideal (i.e., when n is prime).

This can also be constructed from the group Z under the addition operation alone. The residue class \overline{a}_n is the group coset of a in the quotient group Z/nZ, a cyclic group.[8]
Rather than excluding the special case n = 0, it is more useful to include Z/0Z (which, as mentioned before, is isomorphic to the ring Z of integers). In fact, this inclusion is useful when discussing the characteristic of a ring.

The ring of integers modulo n is a finite field if and only if n is prime (this ensures that every nonzero element has a multiplicative inverse). If n = p^k is a prime power with k > 1, there exists a unique (up to isomorphism) finite field GF(n) = F_n with n elements, but this is not Z/nZ, which fails to be a field because it has zero-divisors.
The multiplicative group of integers modulo n is denoted (Z/nZ)^×. It consists of the classes \overline{a}_n with a coprime to n, which are precisely the classes possessing a multiplicative inverse. They form a commutative group under multiplication, with order φ(n).

Modular products a·b (mod m) and modular powers a^b (mod m) can both be computed efficiently; the standard algorithmic way to compute a^b (mod m) is binary (square-and-multiply) exponentiation.
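Square-and-multiply modular exponentiation can be sketched as follows (Python's built-in three-argument pow does the same thing):

```python
# Binary (square-and-multiply) modular exponentiation:
# computes a**b % m in O(log b) multiplications.

def mod_pow(a, b, m):
    result = 1
    a %= m
    while b > 0:
        if b & 1:              # current low bit of the exponent is set
            result = (result * a) % m
        a = (a * a) % m        # square the base
        b >>= 1                # move to the next bit of the exponent
    return result

print(mod_pow(7, 128, 13))     # 3, matching Python's built-in pow(7, 128, 13)
```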
Algebraic number theory (Class field theory, Non-abelian class field theory)
Diophantine geometry (Arakelov theory, Hodge–Arakelov theory)
Arithmetic geometry (Anabelian geometry)
p-adic numbers (p-adic analysis)
Retrieved from "https://en.wikipedia.org/w/index.php?title=Modular_arithmetic&oldid=1086334234#Integers_modulo_n"
|
[http://en.wikipedia.org/wiki/Pansharpened_image Pan-Sharpening] / [http://en.wikipedia.org/wiki/Image_fusion Fusion] is the process of merging high-resolution panchromatic and lower-resolution multispectral imagery. [http://grass.osgeo.org/grass70/ GRASS 7] offers a dedicated pan-sharpening module, {{cmd|i.pansharpen}}, which features three sharpening techniques: the [http://wiki.awf.forst.uni-goettingen.de/wiki/index.php/Brovey_Transformation Brovey transformation], the classical IHS method, and one based on [[Principal Components Analysis]] (PCA).
Another algorithm that derives excellent detail and a realistic representation of the original multispectral scene's colors is the High-Pass Filter Addition (HPFA) technique. It is implemented via the {{AddonSrc|imagery|i.fusion.hpf|version=7}} add-on (for GRASS 6, please refer to the bash shell script https://github.com/NikosAlexandris/i.fusion.hpf.sh which is, however, unmaintained).
Spectral radiance is measured in

W / (m² · sr · nm).

The conversion of a pixel's digital number to spectral radiance is

L_λ(Pixel, Band) = ( K_Band · q(Pixel, Band) ) / Δλ_Band

where L_λ(Pixel, Band) is the spectral radiance of the pixel in the given band, K_Band the band-specific calibration factor, q(Pixel, Band) the pixel's digital number, and Δλ_Band the bandwidth of the band.

The top-of-atmosphere (planetary) reflectance is then

ρ_p = ( π · L_λ · d² ) / ( ESUN_λ · cos(θ_S) )

where ρ_p is the unitless planetary reflectance, π the mathematical constant, L_λ the spectral radiance at the sensor's aperture, d the Earth-Sun distance in astronomical units, ESUN_λ the mean solar exoatmospheric irradiance in W / (m² · µm), and θ_S the solar zenith angle.
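A minimal numeric sketch of the two conversions above. The calibration factor, bandwidth, irradiance, distance, and zenith angle below are made-up illustration values, not values for any real sensor:

```python
import math

def dn_to_radiance(q, k_band, bandwidth):
    """Spectral radiance L = K * q / delta_lambda, in W / (m^2 * sr * nm)."""
    return k_band * q / bandwidth

def toa_reflectance(radiance, esun, d_au, zenith_deg):
    """Top-of-atmosphere reflectance rho = pi * L * d^2 / (ESUN * cos(theta))."""
    return (math.pi * radiance * d_au**2) / (esun * math.cos(math.radians(zenith_deg)))

# Hypothetical values for illustration only.
L = dn_to_radiance(q=120, k_band=2.5, bandwidth=60.0)            # 5.0
rho = toa_reflectance(L, esun=1550.0, d_au=1.0, zenith_deg=30.0)
print(round(L, 3), round(rho, 4))
```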
|
Rectangular Grid Walk - Walls Practice Problems Online | Brilliant
An ant crawling along the coordinate plane wishes to travel from the origin to (3, −2), but there is a wall between (1, −1) and (2, −1). If each move the ant makes must be 1 unit right or down, then how many possible paths could the ant take?
In a game of basketball, the underdog's advantage says that whenever one team has one more point than the other, the losing team will score a point. Given that teams score one point at a time in this basketball game until one team has 8 points, in how many different ways could the game unfold?
Pablo is at the point (4, 3) in the coordinate plane and is able to walk 1 unit left or down each step he takes. There is a wall between (2, 1) and (2, 2), and another between (1, 0) and (1, 1). How many possible paths could Pablo take from (4, 3) to the origin?
In a game of soccer, the home team advantage says that whenever the home team is one point away from winning, the other team is unable to score. Given that teams score one point at a time in a soccer game until one team has 5 points, in how many different ways could the game unfold?
Carmen San Diego is hiding in the coordinate plane at (5, 3). She has a trap set between (3, 2) and (4, 2), and another between (4, 1) and (4, 2). If you can start at the origin and can only move 1 unit right or up each turn, in how many ways can you get to her position?
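Problems like the ones above can be checked with a short dynamic program that counts monotone lattice paths while skipping blocked edges. The grid size and blocked edge below are illustration values, not taken from any specific problem above:

```python
# Count right/up lattice paths from (0, 0) to (w, h), where `blocked`
# is a set of forbidden directed edges ((x1, y1), (x2, y2)).

def count_paths(w, h, blocked=frozenset()):
    ways = {(0, 0): 1}
    for x in range(w + 1):
        for y in range(h + 1):
            if (x, y) == (0, 0):
                continue
            total = 0
            for prev in [(x - 1, y), (x, y - 1)]:      # came from left or below
                if prev in ways and (prev, (x, y)) not in blocked:
                    total += ways[prev]
            ways[(x, y)] = total
    return ways[(w, h)]

print(count_paths(3, 2))                               # 10 = C(5, 2): no walls
print(count_paths(3, 2, {((1, 0), (1, 1))}))           # 7: three paths used that edge
```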
|
Quantum Tunneling | Brilliant Math & Science Wiki
Matt DeCross, Andrew Ellinor, Lee Film, and
Quantum tunneling refers to the nonzero probability that a particle in quantum mechanics can be measured to be in a state that is forbidden in classical mechanics. Quantum tunneling occurs because there exists a nontrivial solution to the Schrödinger equation in a classically forbidden region, which corresponds to the exponential decay of the magnitude of the wavefunction.
Tunneling of an electron wavefunction through a potential barrier. A nonzero amount of the wavefunction transmits through the barrier [1].
To illustrate the concept of tunneling, consider trying to confine an electron in a box. One could try to pin down the location of the particle by shrinking the walls of the box, which will result in the electron wavefunction acquiring greater momentum uncertainty by the Heisenberg uncertainty principle. As the box gets smaller and smaller, the probability of measuring the location of the electron to be outside the box increases towards one, despite the fact that classically the electron is confined inside the box.
The easiest solvable example of quantum tunneling is in one dimension. However, tunneling is responsible for a wide range of physical phenomena in three dimensions such as radioactive decay, the behavior of semiconductors and superconductors, and scanning tunneling microscopy.
Scattering from a Potential Barrier in One Dimension
Gamow Model of Radioactive Decay
More Applications of Quantum Tunneling
Scattering particles off of a potential barrier in one dimension looks like this:
Tunneling through a one-dimensional potential barrier [2]
Suppose that the height of the potential barrier is V_0, its width is L, and the scattering particles have energy E < V_0. Then the picture can be divided into three regions:

Region 1 (-\infty < x < 0), where E > V(x);
Region 2 (0 \le x \le L), where E < V(x);
Region 3 (L < x < \infty), where E > V(x).

That is, the potential is

V(x) = 0 for x < 0; \quad V(x) = V_0 for 0 \le x \le L; \quad V(x) = 0 for x > L.
In Region 1, the potential is zero, so a moving wave has energy greater than the potential. This is also true in Region 3. However, in Region 2, the energy of the wave is less than the potential. Therefore, the Schrödinger equation yields two different differential equations depending on the region:

Region 1 and Region 3:
\frac{{d}^{2}\psi}{d{x}^{2}}=-{k}^{2}\psi, \quad k=\sqrt{\frac{2mE}{{\hbar}^{2}}}

Region 2:
\frac{{d}^{2}\psi}{d{x}^{2}}={\kappa}^{2}\psi, \quad \kappa=\sqrt{\frac{2m(V-E)}{{\hbar}^{2}}}.
The general solutions can be written as linear combinations of oscillatory terms in Regions 1 and 3, and as linear combinations of growing and decaying exponentials in Region 2:
\psi (x)=\begin{cases} { Ae }^{ ikx }+{ Be }^{ -ikx } \quad &\text{: Region 1} \\ { Ce }^{ \kappa x }+{ De }^{ -\kappa x } \quad &\text{: Region 2} \\ { Fe }^{ ikx } &\text{: Region 3}. \end{cases}
Note that plane-waves that travel to the right are of the form {e}^{ikx} and plane-waves that travel to the left are of the form {e}^{-ikx}. In this experiment, a particle (plane-wave) enters from the left and will partially transmit and partially reflect. However, no particle enters from the right heading towards the left; therefore, there is no Ge^{-ikx} term above in Region 3.
The coefficients above are fixed by the continuity of the wavefunction and its derivative at each point where the potential changes. One obtains two conditions from continuity at x=0 and x=L:

1) A+B=C+D
2) {Ce}^{\kappa L}+{De}^{-\kappa L}= {Fe}^{ikL}

and two conditions from continuity of the derivative at x=0 and x=L:

3) Aik-Bik=C\kappa-D\kappa
4) {C\kappa e}^{\kappa L}-{D\kappa e}^{-\kappa L}= {Fike}^{ikL}.
Dividing 3) by ik and adding to 1) obtains

5) 2A=\left(1+\frac{\kappa}{ik}\right)C+\left(1-\frac{\kappa}{ik}\right)D.

Similarly, dividing 4) by \kappa and adding to or subtracting from 2) obtains

6) 2C{e}^{\kappa L}=\left(1+\frac{ik}{\kappa}\right)F{e}^{ikL}
7) 2D{e}^{-\kappa L}=\left(1-\frac{ik}{\kappa}\right)F{e}^{ikL}.

Combining 5), 6), and 7) yields an equation relating A and F:

2A=\left(1+\frac{\kappa}{ik}\right)\left(1+\frac{ik}{\kappa}\right)\frac{F{e}^{ikL}{e}^{-\kappa L}}{2}+\left(1-\frac{\kappa}{ik}\right)\left(1-\frac{ik}{\kappa}\right)\frac{F{e}^{ikL}{e}^{\kappa L}}{2},

which simplifies to

\frac{A{e}^{-ikL}}{F}=\cosh{(\kappa L)}+i\left(\frac{{\kappa}^{2}-{k}^{2}}{2k\kappa}\right)\sinh{(\kappa L)}.
Now the probability for a wave to tunnel through the barrier is the probability density of the wavefunction in Region 3 divided by that in Region 1. Multiplying the above equation by its conjugate and taking the inverse, the probability of transmission is

T=\frac{|F|^2}{|A|^2}={\left[{\text{cosh}}^{2}(\kappa L)+{\left(\frac{{\kappa}^{2}-{k}^{2}}{2k\kappa}\right)}^{2}{\text{sinh}}^{2}(\kappa L)\right]}^{-1}.

Using the identity {\text{cosh}}^{2}(x)-{\text{sinh}}^{2}(x)=1, this becomes

T={\left[1+{\left(\frac{{k}^{2}+{\kappa}^{2}}{2k\kappa}\right)}^{2}{\text{sinh}}^{2}(\kappa L)\right]}^{-1}.

Defining \beta=\left(\frac{{k}^{2}+{\kappa}^{2}}{2k\kappa}\right) makes the solution more compact:

T=\frac{1}{1+{\beta}^{2}{\text{sinh}}^{2}(\kappa L)}.
This can also be rewritten in terms of the energies:
T = \Bigg(1+ \frac{V_0^2}{4E(V_0 -E)} \sinh^2 \left(\frac{L}{\hbar} \sqrt{2m(V_0 - E)}\right)\Bigg)^{-1} .
Naturally, the probability of reflection is R = 1-T:

R=\frac{{\beta}^{2}{\text{sinh}}^{2}(\kappa L)}{1+{\beta}^{2}{\text{sinh}}^{2}(\kappa L)}.
Macroscopically, objects colliding against a wall are deflected; this is analogous to a reflection probability of 100% and a transmission probability of 0%. The above example shows that it is possible for matter waves to "go through walls" with some probability, provided the matter wave has sufficient energy or the barrier is sufficiently narrow (small L).

Note that for a very wide or tall barrier (L very large or V_0 \gg E), the \sinh term in the expression for T diverges to \infty, so T \approx 0:
The below animation shows a localized wavefunction tunneling through the one-dimensional barrier by evolving the time-dependent Schrödinger equation:
Wavepacket scattering through a very high, very narrow potential barrier. Note the presence of both reflected and transmitted components [3].
Challenge problem: derive the transmission coefficient for the rectangular potential well

V(x) = \begin{cases} -V_0 \quad & 0<x<L \\ 0 \quad & \text{ otherwise}, \end{cases}

showing that

T = \Bigg(1+ \frac{V_0^2}{4E(V_0 +E)} \sin^2 \left(\frac{L}{\hbar} \sqrt{2m(V_0 + E)}\right)\Bigg)^{-1}.\ _\square
Practice problem: an electron with energy 1 \text{ eV} is incident on a potential barrier of height 3 \text{ eV} and width 1 \text{ nm}. Which of the following is its transmission probability?

0
1.81 \times 10^{-6}
7.24 \times 10^{-4}
3.62 \times 10^{-3}
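The closed-form expression for T above can be evaluated numerically. This sketch plugs in E = 1 eV, V_0 = 3 eV, and L = 1 nm for an electron, the parameters quoted above:

```python
import math

# Transmission through a rectangular barrier:
#   T = [1 + V0^2 / (4 E (V0 - E)) * sinh^2(kappa * L)]^(-1),
#   kappa = sqrt(2 m (V0 - E)) / hbar.

hbar = 1.054571817e-34      # reduced Planck constant, J*s
m_e = 9.1093837015e-31      # electron mass, kg
eV = 1.602176634e-19        # J per electronvolt

def transmission(E_eV, V0_eV, L_m, m=m_e):
    E, V0 = E_eV * eV, V0_eV * eV
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + V0**2 / (4 * E * (V0 - E)) * math.sinh(kappa * L_m)**2)

print(transmission(1, 3, 1e-9))   # ~1.81e-6
```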
One of the first applications of quantum tunneling was to explain alpha decay, the radioactive decay of a nucleus leading to emission of an alpha particle (helium nucleus). The relevant model is called the Gamow model after its creator George Gamow [4]. Gamow modeled the potential experienced by an alpha particle in the nucleus as a finite square well in the nuclear region and Coulombic repulsion outside the nucleus, as displayed in the diagram:
An alpha particle at the energy indicated by the red line, confined in Gamow's potential. The well near r = 0 is an approximation to the attractive dynamics of the strong nuclear force.
The corresponding potential can be written formally as below:
V = \begin{cases} -V_0 \quad &r < r_0\\\\ \frac{1}{4\pi \epsilon_0} \frac{2Z e^2}{r} \quad & r>r_0. \end{cases}
A fact in advanced quantum mechanics is that the transmission probability T is well approximated by

T = e^{-2\gamma},

where \gamma is defined via

\gamma = \frac{1}{\hbar} \int_{r_0}^{r_1} \sqrt{2m \big(V(r) - E\big)}\, dr,

with r_0, r_1 the classical turning points at which E = V(r). Performing the integration for the Gamow model under the assumption that r_1 \gg r_0, one obtains the result

\gamma = \frac{\sqrt{2mE}}{\hbar} \left(\frac{\pi}{2} r_1 - 2\sqrt{r_0 r_1} \right).
Although the radii r_0 and r_1 are not known a priori, the important part is the dependence of the logarithm of the lifetime on E^{-1/2}, which can be confirmed experimentally. The energy of emitted alpha particles can be computed using

E = (m_p - m_d - m_{\alpha}) c^2,

where m_p is the mass of the nucleus before decay, m_d is the mass of the nucleus of the decay product, and m_{\alpha} is the alpha particle mass.
Quantum tunneling is responsible for many physical phenomena that baffled scientists in the early 20th century. One of the first was radioactivity, both via Gamow's model of alpha decay discussed above and via electrons tunneling into the nucleus to be captured by protons. Another wide area of applicability of quantum tunneling has been the dynamics of electrons in materials, as in microscopy, semiconductors, and superconductors.
Diagrammatic setup of the scanning tunneling microscope [5].
A scanning tunneling microscope is an incredibly sensitive device used to map the topography of materials at the atomic level. It works by running an extremely sharp tip, ideally only a single atom across, over the surface of the material, with the tip at a higher voltage than the material. This voltage allows a non-negligible tunneling current to flow from electrons that tunnel from the surface of the material, through the potential barrier represented by the air, to the tip of the microscope, completing a circuit. By measuring the amount of current that flows at a given distance, the microscope can resolve where the atoms are on the surface of the material.
In a tunnel diode, a p-type and an n-type semiconductor are separated by a thin insulating region called the depletion region. Recall that a p-type semiconductor is one that has been doped with impurity atoms that carry one less valence electron, while an n-type semiconductor has been doped with impurities carrying one more valence electron; both allow conduction to occur more easily due to the extra electrons or "holes" provided by the dopant. In the depletion region there are no conduction electrons; the electrons have been depleted to other regions. The main effect of a tunnel diode is that an applied voltage can make electrons from the n-type semiconductor tunnel through the depletion region, causing a unidirectional current towards the p-type semiconductor at low voltages. As voltage increases, the current drops as the depletion region widens, and then increases again at high voltages, so that the device functions as a normal diode. The ability of tunnel diodes to conduct at low voltages due to tunneling allows them to operate at very high AC frequencies.
Some semiconducting materials are superconductors, meaning that in certain temperature ranges a current can flow indefinitely without resistive heating occurring. In Josephson junctions, two superconducting semiconductors are separated by a thin insulating barrier. In the Josephson effect, superconducting pairs of electrons (Cooper pairs) can tunnel through this barrier to carry the superconducting current through the junction.
[1] By The original uploader was Jean-Christophe BENOIST at French Wikipedia - Transferred from fr.wikipedia to Commons., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=653747.
[2] Image from https://upload.wikimedia.org/wikipedia/commons/1/1d/TunnelEffektKling1.png under Creative Commons licensing for reuse and modification.
[3] By Yuvalr (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
[5] Illustration by Kristian Molhave for the Opensource Handbook of Nanoscience and Nanotechnology.
Cite as: Quantum Tunneling. Brilliant.org. Retrieved from https://brilliant.org/wiki/quantum-tunneling/
|
Price Elasticity of Demand and Total Revenue - Course Hero
Microeconomics/Elasticity/Price Elasticity of Demand and Total Revenue
Learn all about price elasticity of demand and total revenue in just a few minutes! Professor Jadrian Wooten of Penn State University explains how price elasticity of demand information impacts decisions that affect a firm's total revenue.
One beneficial use of the price elasticity of demand is to determine what impact changes in a good's or service's price will have on a firm's total revenue. Total revenue (TR) is the total amount earned by selling a good or service, and is equal to the price of a good (P) multiplied by the quantity of units sold (Q).
\text{TR}=\text{P}\times \text{Q}
The law of demand states that when price rises, the quantity sold will fall. The opposite case holds as well. Because these movements in price and quantity occur in opposite directions, the effect on total revenue can be ambiguous. However, when it is known whether the demand is elastic or inelastic, this ambiguity disappears. Knowledge of elasticity enables economists to determine whether, and by how much, the quantity demanded will change in response to a change in price, and this enables them to establish the impact on total revenue. For goods with elastic demand, firms should lower prices to increase total revenue by increasing the quantity demanded. For goods with inelastic demand, firms should increase price to raise total revenue. Calculating price elasticity helps a firm make these choices.
Demand Is Elastic: Effect on Total Revenue
Suppose that the absolute value of the price elasticity of demand (E_D) is greater than 1, or |E_D| > 1. Mathematically, this occurs because the percentage change in quantity demanded (%ΔQ_D, the numerator) is greater than the percentage change in price (%ΔP, the denominator):

|%ΔQ_D| > |%ΔP|
When demand is elastic, quantity demanded changes by a larger percentage than the percentage change in price; thus, by following the total revenue formula (quantity of goods sold multiplied by the price of the good), the quantity effect will be greater than the price effect.
For example, for a firm manufacturing keyboards, suppose that the price elasticity of demand is equal to 1.5. This means that a 10% increase in price will reduce the quantity demanded by 15%. In this case, total revenue will fall when there is an increase in price. If the price of a keyboard is raised from $100 to $110, the quantity demanded will drop by 15%.
In contrast, if the price of the keyboard were to fall by 10% (from $100 to $90), the quantity demanded would rise by 15%. In this case, even though the price is falling, total revenue will rise. If a company faces elastic demand, it will want to lower prices in order to increase total revenue.
The demand curve is close to horizontal when demand is elastic. A small increase in price (from P1 to P2) results in a large decrease in demand. A small decrease in price (from P3 to P2) results in a large increase in demand.
Demand Is Inelastic: Effect on Total Revenue
Inelastic demand has a price elasticity of less than 1 in absolute value, or |E_D| < 1. Mathematically, the percentage change in price (%ΔP, the denominator) is greater than the percentage change in quantity demanded (%ΔQ_D, the numerator):

|%ΔQ_D| < |%ΔP|
Suppose a company sells T-shirts, and the absolute value of the elasticity of demand for T-shirts is 0.625. Here, a 10% increase in price will result in only a 6.25% decline in quantity demanded, so total revenue will rise. For example, suppose the company sells 100 T-shirts for $10 each, for $1,000 in revenue. If it increases the price by 10%, to $11, sales decrease by 6.25%, to 93.75 T-shirts. The new revenue is higher, at $1,031.25. In contrast, a 10% decrease in price would lead to a 6.25% increase in quantity demanded but a fall in total revenue. If a firm faces inelastic demand, it will want to raise prices in order to increase total revenue.
The demand curve is close to vertical when demand is inelastic. A large increase in price (from P1 to P2) results in a small decrease in demand. A large decrease in price (from P3 to P2) results in a small increase in demand.
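The T-shirt arithmetic above can be checked in a few lines; the 0.625 elasticity and the $10, 100-unit starting point are the example's own numbers:

```python
# Total revenue before and after a price change, given the absolute
# value of a point elasticity of demand.

def new_revenue(price, quantity, pct_price_change, elasticity):
    pct_qty_change = -elasticity * pct_price_change   # law of demand: opposite signs
    p = price * (1 + pct_price_change)
    q = quantity * (1 + pct_qty_change)
    return p * q

base = 10 * 100                                        # $1,000.00 before the change
raised = new_revenue(10, 100, 0.10, 0.625)             # $1,031.25 after a 10% price rise
print(base, raised)
```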
Demand Is Unit Elastic: Effect on Total Revenue
Unit elastic demand is a situation in which the percentage change in quantity is equal to the percentage change in price; it occurs when the absolute value of the price elasticity of demand is equal to 1, or |E_D| = 1. In this case, the numerator and denominator are equal:

|%ΔQ_D| = |%ΔP|
With unit elasticity, a 10% rise in the price of a good or service will be exactly offset by a 10% decline in quantity demanded, leaving the total revenue unchanged. The same occurs when price falls by 10%; the quantity demanded rises by 10% and total revenue remains the same.
Relationship between the Price Elasticity of Demand and Total Revenue

Demand         Price    Quantity Demanded    Total Revenue
Elastic        Rises    Falls                Falls
Elastic        Falls    Rises                Rises
Inelastic      Rises    Falls                Rises
Inelastic      Falls    Rises                Falls
Unit elastic   Rises    Falls                Unchanged
Unit elastic   Falls    Rises                Unchanged
Knowing the type of demand that applies to their product helps firms understand how to maximize total revenue.
|
Max-flow Min-cut Algorithm | Brilliant Math & Science Wiki
Alex Chumbley, Zandra Vinegar, Eli Ross, and
The max-flow min-cut theorem is a network flow theorem. It states that the maximum flow through any network from a given source to a given sink is exactly equal to the minimum total weight of the edges that, if removed, would totally disconnect the source from the sink. In other words, for any network graph and a selected source and sink node, the max-flow from source to sink = the min-cut necessary to separate source from sink. Flow can represent many things: it could mean the amount of water that can pass through network pipes, or the amount of data that can pass through a computer network like the Internet.
Max-flow min-cut has a variety of applications. In computer science, networks rely heavily on this algorithm. Network reliability, availability, and connectivity use max-flow min-cut. In mathematics, matching in graphs (such as bipartite matching) uses this same algorithm. In less technical areas, this algorithm can be used in scheduling. For example, airlines use this to decide when to allow planes to leave airports to maximize the "flow" of flights.
What is the fewest number of green tubes that need to be cut so that no water will be able to flow from the hydrant to the bucket?
Assume that the gray pipes in this system have a much greater capacity than the green tubes, such that it's the capacity of the green network that limits how much water makes it through the system per second. Additionally, assume that all of the green tubes have the same capacity as each other.
The network can be severed in 5 cuts:
How to know where to cut and a proof that five cuts are required:
If this system were real, a fast way to solve this puzzle would be to allow water to blast from the hydrant into the green hose system. Once water is flowing through the network at the highest capacity the system can manage, look at how the water is flowing through the system and follow these two steps repeatedly until the network is fully severed:
1) Find a tube-segment that water is flowing through at full capacity. Somewhere along the path that each stream of water takes, there will be at least one such tube (otherwise, the system isn't really being used at full capacity).
2) Once you've found such a tube-segment, test squeezing it shut. If squeezing it shut reduces the capacity of the system because the water can't find another way to get through, then cut it. Again, somewhere along the path each stream of water takes, there will be at least one such tube-segment, otherwise, the system isn't really being used at full capacity.
With each cut, the capacity of the system will decrease until, at last, it decreases to 0.
An illustration of how knowing the "Max-Flow" of a network allows us to prove that the "Min-Cut" of the network is, in fact, minimal:
In the center image above, you can see one example of how the hose system might be used at full capacity. Each of the black lines represents a stream of water totally filling the tubes it passes through. In this image, as many distinct paths as possible have been drawn in across the system. The distinct paths can share vertices but they cannot share edges. The maximum number of paths that can be drawn given these restrictions is the "max-flow" of this network. In this example, the max flow of the network is five (five times the capacity of a single green tube).
The final picture illustrates how cutting through each of these paths once along a single 'cutting path' will sever the network. Five cuts are required, otherwise there would be at least one unaffected stream of water. In other words, being able to find five distinct paths for water to stream through the system is proof that at least five cuts are required to sever the system. Therefore, five is also the "min-cut" of the network. The water-pushing technique explained above will always allow you to identify a set of segments to cut that fully severs the network with the 'source' on one side and the 'sink' on the other.
All networks, whether they carry data or water, operate pretty much the same way. The network wants to get some type of object (data or water) from the source to the sink. The amount of that object that can be passed through the network is limited by the smallest connection between disjoint sets of the network. Even if other edges in this network have bigger capacities, those capacities will not be used to their fullest.
In the example below, you can think about those networks as networks of water pipes. Each arrow can only allow 3 gallons of water to pass by. So, the network is limited by whatever partition has the lowest potential flow.
Look at the following graphic. It is a network with four edges. The source is on top of the network, and the sink is below the network. Each edge has a maximum flow (or weight) of 3. How much flow can pass through this network at any given time?
Network example 1
The answer is 3. The bottom three edges can pass 9 among the three of them, true. However, the limiting factor here is the top edge, which can only pass 3 at a time. This is the intuition behind max-flow min-cut. The minimum cut will be the limiting factor.
As you can see in the following graphic, by splitting the network into disjoint sets, we can see that one set is clearly the limiting factor, the top edge. The top set's maximum weight is only 3, while the bottom is 9. The top half limits the flow of this network.
The same network split into disjoint sets
\
What's the maximum flow for this network?
The answer is still 3! The limiting factor is now on the bottom of the network, but the weights are still the same, so the maximum flow is still 3. The same network, partitioned by a barrier, shows that the bottom edge is limiting the flow of the network.
The same network, partitioned
Let's look at another water network that has edges of different capacities.
In this graphic, each edge represents the amount of water, in gallons, that can pass through it at any given time.
\
What is the max-flow of this network?
The answer is 10 gallons. Let's walk through the process starting at the source, taking things level by level:
1) 6 gallons of water can pass from the source to both vertices at the next level down. That makes a total of 12 gallons so far.
2) From here, only 4 gallons can pass down the outside edges. However, there is another edge coming out of each edge that has a capacity of 3. 4 gallons plus 3 gallons is more than the 6 gallons that arrived at each node, so we can pass all of the water through this level.
3) From this level, our only path to the sink is through an edge with capacity 5. That means we can only pass 5 gallons of water per vertex, coming out to 10 gallons total. That is the max-flow of this network.
It's important to understand that not every edge will be carrying water at full capacity. This is one example of how the network might look from a capacity perspective. Now, every edge displays how much water it is currently carrying over its total capacity.
Water flow network capacity
There are a few key definitions for this algorithm. First, the network itself is a directed, weighted graph. That is, it is composed of a set of vertices connected by edges. These edges only flow in one direction (because the graph is directed) and each edge also has a maximum flow that it can handle (because the graph is weighted).
There are two special vertices in this graph, though. The source is where all of the flow is coming from. All edges that touch the source must be leaving the source. And, there is the sink, the vertex where all of the flow is going. Similarly, all edges touching the sink must be going into the sink.
A cut is a partitioning of the network,
G
, into two disjoint sets of vertices. These sets are called
S
T
S
is the set that includes the source, and
T
is the set that includes the sink. The only rule is that the source and the sink cannot be in the same set. A cut has two important properties. The first is the cut-set, which is the set of edges that start in
S
and end in
T
. The second is the capacity, which is the sum of the weights of the edges in the cut-set. Look at the following graphic for a visual depiction of these properties.
Cut S with capacity 7
In this picture, the two vertices that are circled are in the set
S
, and the rest are in
T
S
has three edges in its cut-set, and their combined weights are 7, the capacity of this cut. The goal of max-flow min-cut, though, is to find the cut with the minimum capacity.
There are many specific algorithms that implement this theorem in practice. The most famous algorithm is the Ford-Fulkerson algorithm, named after the two scientists that discovered the max-flow min-cut theorem in 1956.
First, there are some important initial logical steps to proving that the maximum flow of any network is equal to the minimum cut of the network.
For any flow
f
and any cut
(S, T)
on a network, it holds that
f \leq \text{capacity}(S, T)
. This makes sense because it is impossible for there to be more flow than there is room for that flow (or, for there to be more water than the pipes can fit).
Due to Lemma 1, we have a clear next step. For the maximum flow
f^{*}
and the minimum cut
(S, T)^{*}
f^{*} \leq \text{capacity}\big((S, T)^{*}\big).
These two mathematical statements place an upper bound on our maximum flow.
Begin with any flow
f
. This is possible because the zero flow is possible (where there is no flow through the network). Then the following process of residual graph creation is repeated until no augmenting paths remain.
Define augmenting path
p_a
as a path from the source to the sink of the network in which more flow could be added (thus augmenting the total flow of the network).
We want to create, at each step of this process, a residual graph
G_f
. To do so, first find an augmenting path
p_a
with a given minimum capacity
c_p
c_p
is the lowest capacity of all the edges along path
p_a
For each edge with endpoints
(u, v)
p_a
, increase the flow from
u
v
c_p
and decrease the flow from
v
u
c_p
. This might require the creation of a new edge in the backward direction. This process does not change the capacity constraint of an edge and it preserves non-negativity of flows. Also, this increases the flow from the source to the sink by exactly
c_p
. This is how a residual graph is created.
Now, it is important to note that our new flow
f^{*} = f + c_p
no longer contains the augmenting path
c_p
. This is because the process of augmenting our flow by
c_p
has either given one of the forward edges a maximum capacity or one of the backward edges a flow of zero.
This process is repeated until no augmenting paths remain. Once that happens, denote all vertices reachable from the source as
V
and all of the vertices not reachable from the source as
V^c
. Trivially, the source is in
V
and the sink is in
V^c
. Importantly, the sink is not in
V
because there are no augmenting paths and therefore no paths from the source to the sink.
Consider a pair of vertices,
u
v
u
V
v
V^c
. The flow of
(u, v)
must be maximized, otherwise we would have an augmenting path. And, the flow of
(v, u)
must be zero for the same reason. Therefore,
\text{flow}(u, v) = \text{capacity}(u, v)
for all edges with
u
V
v
V^c
\text{flow}(V, V^{c}) = \text{capacity}(V, V^{c}).
Then, by Corollary 2,
f^{*} = \text{capacity}(S, T)^{*}.
Networks can look very different from the basic ones shown in this wiki. However, the max-flow min-cut theorem can still handle them. What about networks with multiple sources like the one below (each source vertex is labeled S)?
Flow network with multiple sources
With no trouble at all, a new network can be created with just one source. This source connects to all of the sources from the original version, and the capacity of each edge coming from the new source is infinity. The same process can be done to deal with multiple sink vertices.
Flow network with consolidated source vertex
This small change does nothing to affect the flow potential for the network because these only added edges having an infinite capacity and they cannot contribute to any bottleneck. This allows us to still run the max-flow min-cut theorem.
Cite as: Max-flow Min-cut Algorithm. Brilliant.org. Retrieved from https://brilliant.org/wiki/max-flow-min-cut-algorithm/
|
How to Calculate Accounts Receivable Collection Period: 12 Steps
1 Gathering Your Data
2 Calculating the Accounts Receivable Collection Period
Businesses both large and small often sell their product to their customers on credit. Credit sales, unlike cash transactions, must be carefully managed in order to ensure prompt payment. Mismanaged accounts can lead to slow or late payments and default. One way to keep track of credit sales is to analyze the related financial ratios, such as the average collection period. Learning how to calculate the accounts receivable collection period will help your business keep track of how quickly payments can be expected.
Gathering Your Data Download Article
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/d\/d7\/Calculate-Accounts-Receivable-Collection-Period-Step-1-Version-3.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-1-Version-3.jpg","bigUrl":"\/images\/thumb\/d\/d7\/Calculate-Accounts-Receivable-Collection-Period-Step-1-Version-3.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-1-Version-3.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Know what data you need. The average accounts receivable collection period can be calculated from the following equation:
{\displaystyle Period={\frac {Days}{ReceivablesTurnover}}}
. In the equation, "days" refers to the number of days in the period being measures (usually a year or half of a year). However, the bottom of the equation, receivables turnover, must also be calculated from other data. This requires measurement of net credit sales during the period and average accounts receivable balance during the period.[1] X Research source Both can be calculated from sales and returns entries in the general ledger.
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/e\/ec\/Calculate-Accounts-Receivable-Collection-Period-Step-2-Version-3.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-2-Version-3.jpg","bigUrl":"\/images\/thumb\/e\/ec\/Calculate-Accounts-Receivable-Collection-Period-Step-2-Version-3.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-2-Version-3.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Determine net credit sales. Net credit sales equals all of the sales on credit less all sales returns and sales allowances. Sales on credit are non-cash sales where the customer is allowed to pay at a later date. Sales returns are credits issued to a customer due to a problem with the purchase. Sales allowances are reductions in price granted to a customer due to problems with the sales transaction. If a company grants a large amount of credit, even to customers with a poor credit history, its net credit sales will be higher.[2] X Research source
Use this equation: sales on credit – sales returns – sales allowances = net credit sales.[3] X Research source
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/8\/85\/Calculate-Accounts-Receivable-Collection-Period-Step-3-Version-3.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-3-Version-3.jpg","bigUrl":"\/images\/thumb\/8\/85\/Calculate-Accounts-Receivable-Collection-Period-Step-3-Version-3.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-3-Version-3.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Calculate the average accounts receivable balance. Use the month-end accounts receivable balance for each month in the measurement period. This information is always recorded on the company's balance sheet. For seasonal businesses, the best practice is to use 12 months of data to account for the effects of seasonality. Rapidly growing or declining businesses, on the other hand, should use a shorter measurement period, such as three months. Using 12 months of data would understate the average accounts receivable for a growing company and overstate it for a declining company.[4] X Research source
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/0\/0f\/Calculate-Accounts-Receivable-Collection-Period-Step-4-Version-3.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-4-Version-3.jpg","bigUrl":"\/images\/thumb\/0\/0f\/Calculate-Accounts-Receivable-Collection-Period-Step-4-Version-3.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-4-Version-3.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Calculate the accounts receivable turnover ratio. This is a company's annual net credit sales divided by its average balance in accounts receivable for the same time period. This calculation tells how many times a company's accounts receivable turns over.[5] X Research source
For example, suppose a company has $730,000 in net credit sales and an average balance in accounts receivable of $70,000. Use the equation $730,000 / $80,000 = 9.125 This means that the company's accounts receivable turns over about 9 times every year.
Calculating the Accounts Receivable Collection Period Download Article
Understand the equation for calculating the accounts receivable collection period. Again, the equation for this calculation is as follows:
{\displaystyle Period={\frac {Days}{ReceivablesTurnover}}}
. The variables can be explained as follows:
"Period" refers to the average accounts receivable collection period.
"Days" refers to the number of days in the period being measured.
"Receivables Turnover" refers to the receivables turnover ratio calculated earlier using net credit sales and average account receivable over the period.[6] X Research source
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/f\/f7\/Calculate-Accounts-Receivable-Collection-Period-Step-6.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-6.jpg","bigUrl":"\/images\/thumb\/f\/f7\/Calculate-Accounts-Receivable-Collection-Period-Step-6.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-6.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Input the variables. Using the information from the earlier example, we have a company with $730,000 in net credit sales and an average accounts receivable of $80,000. This results in a receivable turnover ratio of 9.125. This data was measured over a year, so 365 will be used for the top of the equation. The completed equation now looks like this:
{\displaystyle Period={\frac {365}{9.125}}}
The top of the equation, the number of days, should be substituted for the number of days in the period being measured. 365 is usually used for a whole year, 180 for a half year.
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/0\/0a\/Calculate-Accounts-Receivable-Collection-Period-Step-7.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-7.jpg","bigUrl":"\/images\/thumb\/0\/0a\/Calculate-Accounts-Receivable-Collection-Period-Step-7.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-7.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Solve the equation. Once you have your variables in the equation, you can simply divide to solve the equation. In the example, the equation solves as 365/9.125= 40 days.
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/0\/0d\/Calculate-Accounts-Receivable-Collection-Period-Step-8.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-8.jpg","bigUrl":"\/images\/thumb\/0\/0d\/Calculate-Accounts-Receivable-Collection-Period-Step-8.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-8.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Understand your result. The result of 40 indicates that the average accounts receivable collection period is 40 days. This means that the business owner can expect a credit sale to be paid by the customer within 40 days. This can help them plan for how much cash they need to have on hand for expenses and bills.
Using the Data Download Article
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/0\/01\/Calculate-Accounts-Receivable-Collection-Period-Step-9.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-9.jpg","bigUrl":"\/images\/thumb\/0\/01\/Calculate-Accounts-Receivable-Collection-Period-Step-9.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-9.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Understand the significance of the accounts receivable collection period. Calculating the accounts receivable collection period tells companies how long customers are taking to pay the company for their credit sales. A lower figure is better. This means that customers are paying the company in a timely manner. If customers pay in a shorter amount of time, the company then has less funds tied up in accounts receivable and more funds available to use for other purposes. A low number also indicates that customers are less likely to default on credit payments.[7] X Research source
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/7\/72\/Calculate-Accounts-Receivable-Collection-Period-Step-10.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-10.jpg","bigUrl":"\/images\/thumb\/7\/72\/Calculate-Accounts-Receivable-Collection-Period-Step-10.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-10.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Compare accounts receivable collection period to the standard number of days customers are allowed before a payment is due. For example, suppose a company has an accounts receivable collection period of 40 days. This means its accounts receivable is turning over approximately 9 times per year. On the face of it, this seems beneficial to the company. However, suppose the company's credit terms require customers to pay within 20 days. This difference between the credit terms and the accounts receivable collections period means the company does not have diligent collections procedures.[8] X Research source
Know how to keep the accounts receivable collection period short. Companies must grant credit prudently. Customers' credit should be screened before a credit sale is approved. Customers with poor credit histories should not be approved for credit sales. Also, companies should have vigorous collections activities. Accounts should not be allowed to linger unpaid beyond the company's credit terms.[9] X Research source
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/3\/3e\/Calculate-Accounts-Receivable-Collection-Period-Step-12.jpg\/v4-460px-Calculate-Accounts-Receivable-Collection-Period-Step-12.jpg","bigUrl":"\/images\/thumb\/3\/3e\/Calculate-Accounts-Receivable-Collection-Period-Step-12.jpg\/aid1640338-v4-728px-Calculate-Accounts-Receivable-Collection-Period-Step-12.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>License: <a target=\"_blank\" rel=\"nofollow noreferrer noopener\" class=\"external text\" href=\"https:\/\/creativecommons.org\/licenses\/by-nc-sa\/3.0\/\">Creative Commons<\/a><br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Consider the correlation between the annual sales figure and the average accounts receivable. Companies with seasonal sales may have unusually high or low average accounts receivable figures, depending on where they are in their seasonal billings. Companies should either annualize the receivables data or use a shorter measuring period to account for seasonal differences in the average accounts receivable balance.[10] X Research source
To annualize receivables, companies should average the accounts receivable balance for each month of an entire 12-month year.
Companies can calculate the accounts receivable collection period using a rolling average accounts receivable balance that changes every three months. The calculated accounts receivable collection period will fluctuate each quarter based on seasonal sales activity.
↑ http://www.financeformulas.net/Average-Collection-Period.html
↑ http://www.accountingtools.com/questions-and-answers/what-are-net-credit-sales.html
↑ http://www.accountingtools.com/questions-and-answers/how-do-i-calculate-average-accounts-receivable.html
↑ http://www.exinfm.com/board/accounts_receivable_ratios.htm
↑ http://www.investopedia.com/terms/a/average_collection_period.asp
↑ http://www.accountingtools.com/receivables-collection-period
Español:calcular el período de cobranza de las cuentas por cobrar
Français:calculer le délai de recouvrement des comptes clients
Bahasa Indonesia:Menghitung Periode Penagihan Piutang
Waliullah Farahany
"I am engaged in debt collection. This article helps me very much on calculating days in T/R."
|
Does an object accelerate under uniform circular motion? | Brilliant Math & Science Wiki
An object undergoing uniform circular motion does not accelerate.
Why some people say it's true: In uniform circular motion, speed remains constant.
Why some people say it's false: In uniform circular motion, the direction of motion is ever-changing.
Argument: why there is acceleration
To cut through the confusion, let's look at the definition of acceleration: the time rate of change of velocity. Whenever velocity changes, there must be a corresponding acceleration.
Car in uniform circular motion about a cul-de-sac.
The confusion comes from the difference between velocity in 1-dimension and velocity in multiple dimensions. In one dimension, velocity has a magnitude (e.g.
\SI[per-mode=symbol]{5}{\meter\per\second}
) and a direction (e.g. toward the northeast). However, as the direction can only be toward the left or the right, it isn't possible to smoothly vary the direction of velocity—as is the case in circular motion—we can only have discrete shifts. Such motion isn't usually encountered except during collisions, where few would doubt the existence of significant acceleration.
d \gt 1
dimensions, velocity is a full-fledged vector quantity and its direction can be varied naturally. One such case is uniform circular motion where the direction of velocity varies smoothly as we move about the circle. Despite the constancy of speed, the direction of motion is changing and therefore the time rate of change of velocity is nonzero—which constitutes an acceleration.
In what direction does the object accelerate? As its speed is unchanging, the acceleration must be perpendicular to the direction of motion, and thus toward the center of the circle. Its magnitude can be found in a number of ways, and is given by
\mathbf{a}_\textrm{cent} = v^2/R,
R
Rebuttal: In the case of uniform circular motion, what is the angle between velocity and acceleration?
Reply: If the speed remains constant, then the component of acceleration which is parallel to the velocity is zero. This component is called as tangential acceleration. The direction changes due to the centripetal acceleration which is radially inward. Thus, the net acceleration in the case of uniform circular motion is perpendicular to the velocity.
Rebuttal: In the case of uniform circular motion, can the magnitude of acceleration be written as equal to the rate of change of speed?
Reply: No, the rate of change of speed is entirely different from the rate of change of velocity. Acceleration is defined as the rate of change of velocity.
Only (a) and (b) are correct Only (a), (b), and (c) are correct Only (c) and (d) are correct All the options are correct
A particle is moving on a circular track with constant non-zero speed. Which of the following options are correct?
(b) The rate of change of speed equals the magnitude of the rate of change of velocity.
(c) Instantaneous speed equals the magnitude of instantaneous velocity.
(d) The angle between velocity and acceleration has to be
90^\circ
Uniform circular motion - medium
Velocity and acceleration - problem solving - medium
Cite as: Does an object accelerate under uniform circular motion?. Brilliant.org. Retrieved from https://brilliant.org/wiki/is-uniform-circular-motion-a-uniform-motion/
|
Gnomonic_projection Knowpia
A gnomonic map projection is a map projection which displays all great circles as straight lines, resulting in any straight line segment on a gnomonic map showing a geodesic, the shortest route between the segment's two endpoints. This is achieved by casting surface points of the sphere onto a tangent plane, each landing where a ray from the center of the sphere passes through the point on the surface and then on to the plane. No distortion occurs at the tangent point, but distortion increases rapidly away from it. Less than half of the sphere can be projected onto a finite map.[1] Consequently, a rectilinear photographic lens, which is based on the gnomonic principle, cannot image more than 180 degrees.
Gnomonic projection of a portion of the north hemisphere centered on the geographic North Pole
The gnomonic projection with Tissot's indicatrix of deformation
The gnomonic projection is said to be the oldest map projection, developed by Thales for star maps in the 6th century BC[1]: 164 . The path of the shadow-tip or light-spot in a nodus-based sundial traces out the same hyperbolae formed by parallels on a gnomonic map.
The gnomonic projection is from the centre of a sphere to a plane tangent to the sphere (Fig 1 below). The sphere and the plane touch at the tangent point. Great circles transform to straight lines via the gnomonic projection. Since meridians (lines of longitude) and the equator are great circles, they are always shown as straight lines on a gnomonic map. Since the projection is from the centre of the sphere, a gnomonic map can represent less than half of the area of the sphere. Distortion of the scale of the map increases from the centre (tangent point) to the periphery.[1]
If the tangent point is one of the poles then the meridians are radial and equally spaced (Fig 2 below). The equator cannot be shown as it is at infinity in all directions. Other parallels (lines of latitude) are depicted as concentric circles.
If the tangent point is on the equator then the meridians are parallel but not equally spaced (Fig 3 below). The equator is a straight line perpendicular to the meridians. Other parallels are depicted as hyperbolae.
If the tangent point is not on a pole or the equator, then the meridians are radially outward straight lines from a pole, but not equally spaced (Fig 4 below). The equator is a straight line that is perpendicular to only one meridian, indicating that the projection is not conformal. Other parallels are depicted as conic sections.
Fig 1. A great circle projects to a straight line in the gnomonic projection
Fig 2. Gnomonic projection centred on the North Pole
Fig 3. Gnomonic projection centred on the equator
Fig 4. Gnomonic projection centred on latitude 40 deg North
Figs 2 - 4 are from Snyder (1987) Figure 34[1]: 166 .
As with all azimuthal projections, angles from the tangent point are preserved. The map distance from that point is a function r(d) of the true distance d, given by
{\displaystyle r(d)=R\,\tan {\frac {d}{R}}}
where R is the radius of the Earth. The radial scale is
{\displaystyle r'(d)={\frac {1}{\cos ^{2}{\frac {d}{R}}}}}
and the transverse scale
{\displaystyle {\frac {1}{\cos {\frac {d}{R}}}}}
so the transverse scale increases outwardly, and the radial scale even more.
Gnomonic projections are used in seismic work because seismic waves tend to travel along great circles. They are also used by navies in plotting direction finding bearings, since radio signals travel along great circles. Meteors also travel along great circles, with the Gnomonic Atlas Brno 2000.0 being the IMO's recommended set of star charts for visual meteor observations. Aircraft and ship pilots use the projection to find the shortest route between start and destination.
The gnomonic projection is used extensively in photography, where it is called rectilinear projection. Because they are equivalent, the same viewer used for photographic panoramas can be used to render gnomonic maps (view as a 360° interactive panorama).
The gnomonic projection is used in astronomy where the tangent point is centered on the object of interest. The sphere being projected in this case is the celestial sphere, R = 1, and not the surface of the Earth.
Comparison of the Gnomonic projection and some azimuthal projections centred on 90° N at the same scale, ordered by projection altitude in Earth radii. (click for detail)
Beltrami–Klein model, the analogous mapping of the hyperbolic plane
^ a b c d Snyder, John P. (1987). Map Projections – A Working Manual. U.S. Geological Survey Professional Paper 1395. Washington, D.C: United States Government Printing Office. pp. 164–168. doi:10.3133/pp1395.
Calabretta, Mark R.; Greisen, Eric W. (July 19, 2002). "Representations of celestial coordinates in FITS (Paper II)". Astronomy & Astrophysics. 395: 1077–1122. arXiv:astro-ph/0207413. doi:10.1051/0004-6361:20021327. S2CID 18019255.
Wikimedia Commons has media related to Gnomonic projection.
|
CODING THEORY - Encyclopedia Information
Coding theory Information
Study of the properties of codes and their fitness
There are four types of coding: [1]
Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, ZIP data compression makes data files smaller, for purposes such as to reduce Internet traffic. Data compression and error correction may be studied in combination.
Error correction adds useful redundancy to the data from a source to make the transmission more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes.
4 Cryptographic coding
In 1948, Claude Shannon published " A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory.
In 1972, Nasir Ahmed proposed the discrete cosine transform (DCT), which he developed with T. Natarajan and K. R. Rao in 1973. [2] The DCT is the most widely used lossy compression algorithm, the basis for multimedia formats such as JPEG, MPEG and MP3.
Main article: Data compression
Data can be seen as a random variable {\displaystyle X:\Omega \to {\mathcal {X}}}, where each value {\displaystyle x\in {\mathcal {X}}} appears with probability {\displaystyle \mathbb {P} [X=x]}. Data are encoded by strings (words) over an alphabet {\displaystyle \Sigma }.
A code is a function {\displaystyle C:{\mathcal {X}}\to \Sigma ^{*}} (or {\displaystyle \Sigma ^{+}} if the empty string is not part of the alphabet). {\displaystyle C(x)} is the code word associated with {\displaystyle x}, and its length is written {\displaystyle l(C(x)).} The expected length of a code is
{\displaystyle l(C)=\sum _{x\in {\mathcal {X}}}l(C(x))\mathbb {P} [X=x].}
The concatenation of code words is {\displaystyle C(x_{1},\ldots ,x_{k})=C(x_{1})C(x_{2})\cdots C(x_{k})}, and the code word of the empty string is the empty string itself: {\displaystyle C(\epsilon )=\epsilon }.
A code {\displaystyle C:{\mathcal {X}}\to \Sigma ^{*}} is non-singular if it is injective. It is uniquely decodable if its extension {\displaystyle C:{\mathcal {X}}^{*}\to \Sigma ^{*}} is injective. It is instantaneous (prefix-free) if no code word {\displaystyle C(x_{1})} is a prefix of another code word {\displaystyle C(x_{2})} (and vice versa).
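As a quick illustration of these definitions, the following Python sketch (not from the article; the example code and probabilities are made up) checks whether a code is instantaneous and computes its expected length:

```python
def is_prefix_free(code):
    """True if no code word is a prefix of another (instantaneous code)."""
    words = list(code.values())
    return not any(
        w1 != w2 and w2.startswith(w1) for w1 in words for w2 in words
    )

def expected_length(code, prob):
    """l(C) = sum over x of l(C(x)) * P[X = x]."""
    return sum(len(code[x]) * p for x, p in prob.items())

# A simple instantaneous code over Sigma = {0, 1}.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}
prob = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

print(is_prefix_free(code))         # True
print(expected_length(code, prob))  # 1.75
```

For this source distribution the expected length 1.75 matches the entropy, which is why this particular code is optimal.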
CDs use cross-interleaved Reed–Solomon coding to spread the data out over the disk. [3]
Other codes are more appropriate for different applications. Deep space communications are limited by the thermal noise of the receiver, which is more of a continuous nature than a bursty nature. Likewise, narrowband modems are limited by the noise present in the telephone network, which is also modeled better as a continuous disturbance. Cell phones are subject to rapid fading. The high frequencies used can cause rapid fading of the signal even if the receiver is moved a few inches. Again, there is a class of channel codes designed to combat fading.
Main article: Linear code
The term algebraic coding theory denotes the sub-field of coding theory where the properties of codes are expressed in algebraic terms and then further researched.
Algebraic coding theory is basically divided into two major types of codes: linear block codes and convolutional codes.
It analyzes three main properties of a code: the code word length, the total number of valid code words, and the minimum Hamming distance between two valid code words.
Main article: Block code
Linear block codes have the property of linearity, i.e. the sum of any two codewords is also a code word, and they are applied to the source bits in blocks, hence the name linear block codes. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property. [4]
Linear block codes are summarized by their symbol alphabets (e.g., binary or ternary) and parameters (n,m,dmin) [5] where
n is the length of the codeword, in symbols,
m is the number of source symbols that will be used for encoding at once,
dmin is the minimum Hamming distance for the code.
There are many types of linear block codes, such as
Cyclic codes (e.g., Hamming codes)
Polynomial codes (e.g., BCH codes)
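Since a linear code's minimum distance equals the minimum weight of its nonzero codewords, dmin for a small code can be found by brute force. The following Python sketch does this for one systematic generator matrix of the (7,4) Hamming code (the particular matrix is an illustrative choice, not taken from the text):

```python
from itertools import product

G = [  # generator matrix [I_4 | P] of a (7,4) Hamming code
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg, G):
    """Multiply a message row vector by G over GF(2)."""
    n = len(G[0])
    return tuple(sum(m * G[i][j] for i, m in enumerate(msg)) % 2 for j in range(n))

def min_distance(G):
    """Minimum weight over all nonzero codewords (= dmin for a linear code)."""
    weights = [
        sum(encode(msg, G))
        for msg in product([0, 1], repeat=len(G))
        if any(msg)
    ]
    return min(weights)

print(min_distance(G))  # 3
```

The result dmin = 3 is exactly the distance of the "perfect" Hamming codes discussed below, so this code can correct any single-bit error.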
Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful (24,12) Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is) the dimensions refer to the length of the codeword as defined above.
The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called "perfect" codes. The only nontrivial and useful perfect codes are the distance-3 Hamming codes with parameters satisfying (2^r - 1, 2^r - 1 - r, 3), and the [23,12,7] binary and [11,6,5] ternary Golay codes. [4] [5]
Another code property is the number of neighbors that a single codeword may have. [6] Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly. The result is the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers. [6]
Properties of linear block codes are used in many applications. For example, the syndrome-coset uniqueness property of linear block codes is used in trellis shaping, [7] one of the best-known shaping codes.
Main article: Convolutional code
Cryptography or cryptographic coding is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). [8] More generally, it is about constructing and analyzing protocols that block adversaries; [9] various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation [10] are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
Other applications of coding theory
Another concern of coding theory is designing codes that help synchronization. A code may be designed so that a phase shift can be easily detected and corrected and that multiple signals can be sent on the same channel.
Another application of codes, used in some mobile phone systems, is code-division multiple access (CDMA). Each phone is assigned a code sequence that is approximately uncorrelated with the codes of other phones. When transmitting, the code word is used to modulate the data bits representing the voice message. At the receiver, a demodulation process is performed to recover the data. The properties of this class of codes allow many users (with different codes) to use the same radio channel at the same time. To the receiver, the signals of other users will appear to the demodulator only as low-level noise.
Another general class of codes are the automatic repeat-request (ARQ) codes. In these codes the sender adds redundancy to each message for error checking, usually by adding check bits. If the check bits are not consistent with the rest of the message when it arrives, the receiver will ask the sender to retransmit the message. All but the simplest wide area network protocols use ARQ. Common protocols include SDLC (IBM), TCP (Internet), X.25 (International) and many others. There is an extensive field of research on this topic because of the problem of matching a rejected packet against a new packet: is it a new one or is it a retransmission? Typically numbering schemes are used, as in TCP ("RFC 793", Internet Engineering Task Force (IETF), September 1981).
Group testing uses codes in a different way. Consider a large group of items in which a very few are different in a particular way (e.g., defective products or infected test subjects). The idea of group testing is to determine which items are "different" by using as few tests as possible. The origin of the problem has its roots in the Second World War when the United States Army Air Forces needed to test its soldiers for syphilis. [11]
Analog coding
Information is encoded analogously in the neural networks of brains, in analog signal processing, and analog electronics. Aspects of analog coding include analog error correction, [12] analog data compression [13] and analog encryption. [14]
Neural coding is a neuroscience-related field concerned with how sensory and other information is represented in the brain by networks of neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble. [15] It is thought that neurons can encode both digital and analog information, [16] and that neurons follow the principles of information theory and compress information, [17] and detect and correct [18] errors in the signals that are sent throughout the brain and wider nervous system.
^ James Irvine; David Harle (2002). "2.4.4 Types of Coding". Data Communications and Networks. p. 18. ISBN 9780471808725. There are four types of coding
^ Nasir Ahmed. "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing, Vol. 1, Iss. 1, 1991, pp. 4-5.
^ Todd Campbell. "Answer Geek: Error Correction Rule CDs".
^ a b Terras, Audrey (1999). Fourier Analysis on Finite Groups and Applications. Cambridge University Press. p. 195. ISBN 978-0-521-45718-7.
^ a b Blahut, Richard E. (2003). Algebraic Codes for Data Transmission. Cambridge University Press. ISBN 978-0-521-55374-2.
^ a b Christian Schlegel; Lance Pérez (2004). Trellis and turbo coding. Wiley-IEEE. p. 73. ISBN 978-0-471-22755-7.
^ Forney, G.D., Jr. (March 1992). "Trellis shaping". IEEE Transactions on Information Theory. 38 (2 Pt 2): 281–300. doi: 10.1109/18.119687.
^ Dorfman, Robert (1943). "The detection of defective members of large populations". Annals of Mathematical Statistics. 14 (4): 436–440. doi: 10.1214/aoms/1177731363.
^ Chen, Brian; Wornell, Gregory W. (July 1998). "Analog Error-Correcting Codes Based on Chaotic Dynamical Systems" (PDF). IEEE Transactions on Communications. 46 (7): 881–890. CiteSeerX 10.1.1.30.4093. doi: 10.1109/26.701312. Archived from the original (PDF) on 2001-09-27. Retrieved 2013-06-30.
^ Novak, Franc; Hvala, Bojan; Klavžar, Sandi (1999). "On Analog Signature Analysis". Proceedings of the conference on Design, automation and test in Europe. CiteSeerX 10.1.1.142.5853. ISBN 1-58113-121-6.
^ Shujun Li; Chengqing Li; Kwok-Tung Lo; Guanrong Chen (April 2008). "Cryptanalyzing an Encryption Scheme Based on Blind Source Separation" (PDF). IEEE Transactions on Circuits and Systems I. 55 (4): 1055–63. arXiv: cs/0608024. doi: 10.1109/TCSI.2008.916540.
^ Brown EN, Kass RE, Mitra PP (May 2004). "Multiple neural spike train data analysis: state-of-the-art and future challenges" (PDF). Nature Neuroscience. 7 (5): 456–461. doi: 10.1038/nn1228. PMID 15114358.
^ Thorpe, S.J. (1990). "Spike arrival times: A highly efficient coding scheme for neural networks" (PDF). In Eckmiller, R.; Hartmann, G.; Hauske, G. (eds.). Parallel processing in neural systems and computers (PDF). North-Holland. pp. 91–94. ISBN 978-0-444-88390-2. Retrieved 30 June 2013.
^ Gedeon, T.; Parker, A.E.; Dimitrov, A.G. (Spring 2002). "Information Distortion and Neural Coding". Canadian Applied Mathematics Quarterly. 10 (1): 10. CiteSeerX 10.1.1.5.6365.
^ Stiber, M. (July 2005). "Spike timing precision and neural error correction: local behavior". Neural Computation. 17 (7): 1577–1601. arXiv: q-bio/0501021. doi: 10.1162/0899766053723069. PMID 15901408.
Elwyn R. Berlekamp (2014), Algebraic Coding Theory, World Scientific Publishing (revised edition), ISBN 978-9-81463-589-9.
MacKay, David J. C. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1
Vera Pless (1982), Introduction to the Theory of Error-Correcting Codes, John Wiley & Sons, Inc., ISBN 0-471-08684-3.
Randy Yates, A Coding Theory Tutorial.
|
Control Format Indicator (CFI) Channel - MATLAB & Simulink - MathWorks Australia
Control Format Indicator Values
PCFICH Resourcing
The PCFICH
When transmitting data on the downlink in an OFDM communication system, it is important to specify how many OFDM symbols are used to transmit the control channels so the receiver knows where to find control information. In LTE, the Control Format Indicator (CFI) value defines the time span, in OFDM symbols, of the Physical Downlink Control Channel (PDCCH) transmission (the control region) for a particular downlink subframe. The CFI is transmitted using the Physical Control Format Indicator Channel (PCFICH).
The CFI is limited to the value 1, 2, or 3. For bandwidths greater than ten resource blocks, the number of OFDM symbols used to contain the downlink control information is the same as the actual CFI value. Otherwise, the span of the downlink control information (DCI) is equal to CFI+1 symbols.
The PCFICH is mapped in terms of Resource Element Groups (REGs) and is always mapped onto the first OFDM symbol. The number of REGs allocated to the PCFICH transmission is fixed at 4, i.e., 16 Resource Elements (REs). A PCFICH is only transmitted when the number of OFDM symbols for PDCCH is greater than zero.
The CFI value undergoes channel coding to form the PCFICH payload, as shown in the following figure.
The following table contains the CFI codeword for each CFI value. These codewords correspond to a block encoding rate of 1/16, expanding a 2-bit CFI value into a 32-bit codeword.
CFI	CFI codeword <b0, b1, ... , b31>
1	<0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1>
2	<1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0>
3	<1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1,0,1,1>
4 (Reserved)	<0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0>
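The three valid codewords are each a 3-bit base pattern repeated out to 32 bits, so the encoding can be sketched as follows (Python; consistent with the table above, not an official implementation):

```python
# Base 3-bit patterns for each valid CFI value; repeating them to 32 bits
# reproduces the rate-1/16 codewords in the table above.
CFI_PATTERNS = {1: [0, 1, 1], 2: [1, 0, 1], 3: [1, 1, 0]}

def cfi_codeword(cfi):
    """Return the 32-bit PCFICH codeword for CFI = 1, 2, or 3."""
    pattern = CFI_PATTERNS[cfi]
    return [pattern[i % 3] for i in range(32)]

print(cfi_codeword(1))
```

The large pairwise Hamming distance between these three repeated patterns is what lets the receiver detect the CFI reliably from such a short channel.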
The coded CFI is scrambled before undergoing QPSK modulation, layer mapping and precoding as shown in the following figure.
The 32-bit coded CFI block undergoes a bit-wise exclusive-or (XOR) operation with a cell-specific scrambling sequence. The scrambling sequence is a pseudo-random sequence created using a length-31 Gold sequence generator. At the start of each subframe, it is initialized using the slot number within the radio frame, {n}_{s}, and the physical layer cell identity, {N}_{ID}^{cell}:
{c}_{init}=\left(⌊\frac{{n}_{s}}{2}⌋+1\right)×\left(2{N}_{ID}^{cell}+1\right)×{2}^{9}+{N}_{ID}^{cell}
Scrambling with a cell specific sequence serves the purpose of intercell interference rejection. When a UE descrambles a received bit stream with a known cell specific scrambling sequence, interference from other cells will be descrambled incorrectly and will only appear as uncorrelated noise.
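The scrambling stage can be sketched as follows, using the generic length-31 Gold sequence generator defined in 3GPP TS 36.211 (the slot number and cell identity in the demo call are illustrative assumptions):

```python
def prs(c_init, length, Nc=1600):
    """Pseudo-random (Gold) sequence bits c(0..length-1) for a given c_init."""
    x1 = [1] + [0] * 30                         # fixed first m-sequence seed
    x2 = [(c_init >> i) & 1 for i in range(31)]  # second seed from c_init
    for n in range(Nc + length - 31):
        x1.append((x1[n + 3] + x1[n]) % 2)
        x2.append((x2[n + 3] + x2[n + 2] + x2[n + 1] + x2[n]) % 2)
    return [(x1[Nc + n] + x2[Nc + n]) % 2 for n in range(length)]

def pcfich_cinit(ns, ncellid):
    """c_init from the equation above (slot number ns, cell identity ncellid)."""
    return (ns // 2 + 1) * (2 * ncellid + 1) * 2**9 + ncellid

def scramble(bits, seq):
    """Bit-wise XOR of the 32 coded CFI bits with the scrambling sequence."""
    return [b ^ c for b, c in zip(bits, seq)]

seq = prs(pcfich_cinit(ns=0, ncellid=1), length=32)
```

Because scrambling is a plain XOR, descrambling at the UE is the same operation with the same sequence, which is what makes other cells' signals look like uncorrelated noise.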
The scrambled bits are then QPSK modulated to create a block of complex-valued modulation symbols, {d}^{\left(0\right)}\left(i\right).
The modulation symbols are mapped onto one, two, or four transmission layers, {x}^{\left(0\right)}\left(i\right),{x}^{\left(1\right)}\left(i\right),\dots ,{x}^{\left(v-1\right)}\left(i\right). For transmission on a single antenna port the layer mapping is trivial: {x}^{\left(0\right)}\left(i\right)={d}^{\left(0\right)}\left(i\right).
The layers are then precoded to give the symbols {y}^{\left(p\right)}\left(i\right) for each antenna port p. For transmission on a single antenna port, {y}^{\left(p\right)}\left(i\right)={x}^{\left(0\right)}\left(i\right). For transmit diversity on two antenna ports, the precoding is
\left(\begin{array}{c}{y}^{\left(0\right)}\left(2i\right)\\ {y}^{\left(1\right)}\left(2i\right)\\ {y}^{\left(0\right)}\left(2i+1\right)\\ {y}^{\left(1\right)}\left(2i+1\right)\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cccc}1& 0& j& 0\\ 0& -1& 0& j\\ 0& 1& 0& j\\ 1& 0& -j& 0\end{array}\right)\left(\begin{array}{c}\mathrm{Re}\left\{{x}^{\left(0\right)}\left(i\right)\right\}\\ \mathrm{Re}\left\{{x}^{\left(1\right)}\left(i\right)\right\}\\ \mathrm{Im}\left\{{x}^{\left(0\right)}\left(i\right)\right\}\\ \mathrm{Im}\left\{{x}^{\left(1\right)}\left(i\right)\right\}\end{array}\right)
where {x}^{\left(0\right)}\left(i\right) and {x}^{\left(1\right)}\left(i\right) are the two layers. For transmit diversity on four antenna ports, the precoding is
\left(\begin{array}{c}{y}^{\left(0\right)}\left(4i\right)\\ {y}^{\left(1\right)}\left(4i\right)\\ {y}^{\left(2\right)}\left(4i\right)\\ {y}^{\left(3\right)}\left(4i\right)\\ {y}^{\left(0\right)}\left(4i+1\right)\\ {y}^{\left(1\right)}\left(4i+1\right)\\ {y}^{\left(2\right)}\left(4i+1\right)\\ {y}^{\left(3\right)}\left(4i+1\right)\\ {y}^{\left(0\right)}\left(4i+2\right)\\ {y}^{\left(1\right)}\left(4i+2\right)\\ {y}^{\left(2\right)}\left(4i+2\right)\\ {y}^{\left(3\right)}\left(4i+2\right)\\ {y}^{\left(0\right)}\left(4i+3\right)\\ {y}^{\left(1\right)}\left(4i+3\right)\\ {y}^{\left(2\right)}\left(4i+3\right)\\ {y}^{\left(3\right)}\left(4i+3\right)\end{array}\right)=\frac{1}{\sqrt{2}}\left(\begin{array}{cccccccc}1& 0& 0& 0& j& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& -1& 0& 0& 0& j& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& j& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 0& 0& 0& -j& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& j& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& -1& 0& 0& 0& j\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 0& 0& j\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& -j& 0\end{array}\right)\left(\begin{array}{c}\mathrm{Re}\left\{{x}^{\left(0\right)}\left(i\right)\right\}\\ \mathrm{Re}\left\{{x}^{\left(1\right)}\left(i\right)\right\}\\ \mathrm{Re}\left\{{x}^{\left(2\right)}\left(i\right)\right\}\\ \mathrm{Re}\left\{{x}^{\left(3\right)}\left(i\right)\right\}\\ \mathrm{Im}\left\{{x}^{\left(0\right)}\left(i\right)\right\}\\ \mathrm{Im}\left\{{x}^{\left(1\right)}\left(i\right)\right\}\\ \mathrm{Im}\left\{{x}^{\left(2\right)}\left(i\right)\right\}\\ \mathrm{Im}\left\{{x}^{\left(3\right)}\left(i\right)\right\}\end{array}\right)
The complex valued symbols for each antenna are divided into quadruplets for mapping to resource elements. Each quadruplet is mapped to a Resource element Group (REG) within the first OFDM symbol. There are sixteen complex symbols to be mapped therefore four quadruplets are created.
The first quadruplet is mapped onto a REG with subcarrier index k=\overline{k}, given by the following equation:
\overline{k}=\left(\frac{{N}_{sc}^{RB}}{2}\right)×\left({N}_{ID}^{cell}mod2{N}_{RB}^{DL}\right)
The subsequent three quadruplets are mapped to REGs spaced at intervals of
⌊{N}_{RB}^{DL}/2⌋×\left({N}_{sc}^{RB}/2\right)
from the first quadruplet and each other. This spreads the quadruplets, and hence the PCFICH, over the entire subframe as illustrated in the following figure.
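The quadruplet placement described above can be sketched as follows (Python). The wrap-around modulo the total number of subcarriers is an assumption of this sketch for offsets that run past the channel edge; it is an illustration, not the normative 3GPP mapping rule:

```python
def pcfich_reg_subcarriers(ncellid, n_rb_dl, n_sc_rb=12):
    """Starting subcarrier index of each of the four PCFICH quadruplets."""
    total = n_rb_dl * n_sc_rb
    # First REG position k-bar, from the equation above.
    k_bar = (n_sc_rb // 2) * (ncellid % (2 * n_rb_dl))
    # Spacing between consecutive quadruplets, from the text above.
    spacing = (n_rb_dl // 2) * (n_sc_rb // 2)
    # Wrap-around past the channel edge is an assumption of this sketch.
    return [(k_bar + m * spacing) % total for m in range(4)]

print(pcfich_reg_subcarriers(ncellid=0, n_rb_dl=6))  # [0, 18, 36, 54]
```

Note how the four positions divide the bandwidth roughly into quarters, which is what spreads the PCFICH across the whole subframe for frequency diversity.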
lteCFI | ltePCFICH | ltePCFICHInfo | ltePCFICHIndices | lteDLResourceGrid | lteSymbolModulate | lteSymbolDemodulate | ltePCFICHPRBS | lteLayerMap | lteLayerDemap | lteDLPrecode | lteDLDeprecode
|
numtheory(deprecated)/rootsunity - Maple Help
rootsunity(p, r)
Important: The numtheory package has been deprecated. Use the superseding command NumberTheory[RootsOfUnity] instead.
This function will calculate all the pth roots of unity mod r and return the result as an expression sequence.
The order, p, of the root must be prime.
Note that there will always be at least the root 1.
The command with(numtheory,rootsunity) allows the use of the abbreviated form of this command.
> with(numtheory):
> rootsunity(5, 11);
        1, 3, 4, 5, 9
> rootsunity(3, 11);
        1
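The same computation can be sketched outside Maple by brute force (Python; a hypothetical helper, not part of the numtheory package):

```python
def roots_of_unity(p, r):
    """All pth roots of unity modulo r: the x in [1, r) with x^p = 1 (mod r)."""
    return [x for x in range(1, r) if pow(x, p, r) == 1]

print(roots_of_unity(5, 11))  # [1, 3, 4, 5, 9]
print(roots_of_unity(3, 11))  # [1]
```

The second call returns only 1 because the multiplicative group mod 11 has order 10, and gcd(3, 10) = 1, so no element other than 1 has order 3.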
numtheory(deprecated)[cyclotomic]
|
The following calculation is used to size the stone storage bed (reservoir) used as a base course. It is assumed that the footprint of the stone bed will be equal to the footprint of the pavement. The following equations are derived from the ICPI Manual [1]
{\displaystyle d_{r,max}={\frac {(RVC_{T}\times A_{p})+(RVC_{T}\times A_{i}\times C)-(f'\times D\times A_{p})}{n}}}
{\displaystyle RVC_{T}=D\times i}
C = Runoff coefficient of impervious contributing drainage area (e.g., 0.9 for asphalt)
R = the ratio of impervious contributing drainage area (Ai) to permeable pavement area (Ap); Ai/Ap
{\displaystyle d_{r}={\frac {f'\times t}{n}}}
Where the total contributing drainage area (Ac) and the total depth of clear stone aggregate needed for load bearing capacity are known (i.e., the storage reservoir depth is fixed), or if available space is constrained in the vertical dimension due to water table or bedrock elevation, the minimum footprint area of the water storage reservoir, Ar, can be calculated as follows:
{\displaystyle A_{r}={\frac {D(i-f')\times A_{c}}{d_{r}\times n}}}
Ac = Ai + Ap
Then increase Ar accordingly to keep R between 0 and 2, which reduces hydraulic loading and helps avoid premature clogging. This assumes that the water storage reservoir area and permeable pavement area are the same (Ar = Ap).
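The two sizing equations above can be transcribed directly into code (Python). The formulas are implemented exactly as printed; symbol names follow the equations, and any specific input values used with them are the designer's, not defaults:

```python
def max_reservoir_depth(D, i, Ap, Ai, C, f_prime, n):
    """d_r,max from the first equation above, with RVC_T = D * i."""
    rvc_t = D * i  # runoff volume capture target
    return (rvc_t * Ap + rvc_t * Ai * C - f_prime * D * Ap) / n

def min_reservoir_area(D, i, f_prime, Ac, d_r, n):
    """Minimum storage reservoir footprint A_r from the last equation."""
    return D * (i - f_prime) * Ac / (d_r * n)
```

Keeping both helpers side by side makes it easy to check that a fixed-depth design (second equation) and a fixed-footprint design (first equation) describe the same storage volume.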
↑ Smith, D. 2006. Permeable Interlocking Concrete Pavements; Selection, Design, Construction, Maintenance. 3rd Edition. Interlocking Concrete Pavement Institute. Burlington, ON.
|
Dirichlet Series | Brilliant Math & Science Wiki
Dirichlet series are functions of a complex variable s that are defined by certain infinite series. They are generalizations of the Riemann zeta function, and are important in number theory due to their deep connections with the distribution of prime numbers. They have interesting connections with multiplicative functions and Dirichlet convolution.
Products and Dirichlet Convolution
A Dirichlet series is an expression of the form
\sum_{n=1}^{\infty} \frac{a_n}{n^s},
where s is a complex variable and a_n is a sequence of complex numbers. If we write the a_n as the values of an arithmetic function f(n) = a_n, we say that the above series is the Dirichlet series associated with f.
When a_n = 1 for all n, the associated Dirichlet series is \sum \frac1{n^s}, the Riemann zeta function \zeta(s). So the Dirichlet series associated with the constant function \mathbf{1} is \zeta(s).
Given two Dirichlet series F(s) and G(s), the Dirichlet series representation of the product F(s)G(s) turns out to be related to Dirichlet convolution. Let
F(s) = \sum_{n=1}^{\infty} \frac{f(n)}{n^s}, \ \ \ G(s) = \sum_{n=1}^{\infty} \frac{g(n)}{n^s}.
Suppose that the series both converge for a given value of s and at least one of them converges absolutely. Then
\sum_{n=1}^{\infty} \frac{(f*g)(n)}{n^s}
converges, where f*g is the Dirichlet convolution of f and g, and
F(s)G(s) = \sum \frac{(f*g)(n)}{n^s}.
Partial proof: Ignoring questions of convergence, consider the product F(s)G(s):
F(s)G(s) = \left( f(1) + \frac{f(2)}{2^s} + \cdots \right) \left( g(1) + \frac{g(2)}{2^s} + \cdots \right).
This expands to give a sum of fractions of the form \frac{f(i)g(j)}{(ij)^s}. The terms with denominator n^s will be the ones where j =\frac ni. So the coefficient of \frac1{n^s} is
\sum_{i|n} f(i)g\left(\frac ni\right),
which is exactly (f*g)(n). _\square
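The coefficient computed in this proof is just the Dirichlet convolution, which is easy to check numerically. A Python sketch, verified on the identity \mu * \mathbf{1} = e:

```python
def dirichlet_convolve(f, g, n):
    """(f*g)(n) = sum over divisors d of n of f(d) * g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def mobius(n):
    """Mobius function via trial factorization (fine for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

one = lambda n: 1  # the constant function 1

print([dirichlet_convolve(mobius, one, n) for n in range(1, 9)])
# [1, 0, 0, 0, 0, 0, 0, 0]  -> e(n): 1 at n = 1, else 0
```

This is exactly the convolution identity used in the mu example below the product formula.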
The Riemann zeta function has a well-known product formula, which is derived from unique factorization and the formula for the sum of a geometric series:
\begin{aligned} \sum_{n=1}^{\infty} \frac1{n^s} &= 1 + \frac1{2^s} + \frac1{3^s} + \cdots \\ &= \left(1+\frac1{2^s}+\frac1{4^s}+\cdots\right)\left(1+\frac1{3^s}+\frac1{9^s} +\cdots\right) \left(1+\frac1{5^s}+\frac1{25^s}+\cdots\right)\left(\cdots\right)\\ &= \frac1{1-\frac{1}{2^s}} \cdot \frac1{1-\frac{1}{3^s}} \cdot \frac1{1-\frac{1}{5^s}} \cdots \\ &= \prod_{p \ \text{prime}} \left( 1-\frac1{p^s} \right)^{-1}. \end{aligned}
We can generalize this to any Dirichlet series associated with a multiplicative function. Let
F(s) = \sum_{n=1}^{\infty} \frac{f(n)}{n^s}.
If f is a multiplicative function, then
F(s) = \prod_{p \ \text{prime}} \left( 1+\frac{f(p)}{p^s}+\frac{f(p^2)}{p^{2s}}+\cdots \right);
if f is completely multiplicative, then
F(s) = \prod_{p \ \text{prime}} \left( 1-\frac{f(p)}{p^s} \right)^{-1}.
The proof is essentially the same as the above derivation for the Riemann zeta function.
Example: show that
\sum_{n=1}^\infty \frac{\mu(n)}{n^s}= \frac1{\zeta(s)}.
The product formula gives
\sum_{n=1}^\infty \frac{\mu(n)}{n^s} = \prod_{p \ \text{prime}} \left( 1 - \frac1{p^s} \right),
which is the reciprocal of the product formula for \zeta(s). Another way to solve the problem is to use the result on products of Dirichlet series: note that \mu * \mathbf{1} = e, the convolution identity function, so the product of the Dirichlet series associated with \mu and \mathbf{1} is
\sum \frac{e(n)}{n^s}= 1. \ _\square
Example: show that
\sum_{n=1}^\infty \frac{\phi(n)}{n^s} = \frac{\zeta(s-1)}{\zeta(s)},
where \phi is the Euler phi function. Here \phi = \mu * I, where I(n) = n (see Dirichlet convolution for details), so the theorem above gives
\begin{aligned} \sum_{n=1}^{\infty} \frac{\phi(n)}{n^s} &= \left( \sum_{n=1}^\infty \frac{\mu(n)}{n^s} \right) \left( \sum_{n=1}^{\infty} \frac{n}{n^s} \right) \\ &= \frac1{\zeta(s)} \sum_{n=1}^\infty \frac1{n^{s-1}} \\ &= \frac{\zeta(s-1)}{\zeta(s)}. \ _\square \end{aligned}
(Exercise: do this example a different way, by using the product formula as in the previous example.)
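A quick numerical sanity check of this identity, as a Python sketch using partial sums at s = 4 (all three series converge rapidly there); this is evidence, not a proof:

```python
def phi_sieve(N):
    """Euler phi for 0..N via a standard sieve."""
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:  # p is prime (still untouched)
            for k in range(p, N + 1, p):
                phi[k] -= phi[k] // p
    return phi

N, s = 2000, 4
phi = phi_sieve(N)
lhs = sum(phi[n] / n**s for n in range(1, N + 1))                 # sum phi(n)/n^s
zeta_sm1 = sum(1 / n**(s - 1) for n in range(1, N + 1))           # ~ zeta(s-1)
zeta_s = sum(1 / n**s for n in range(1, N + 1))                   # ~ zeta(s)
print(abs(lhs - zeta_sm1 / zeta_s) < 1e-6)  # True
```

At s = 4 both sides are approximately zeta(3)/zeta(4), about 1.1106, and the truncation error at N = 2000 is far below the tolerance used.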
Let
S=\sum_{n=1}^{\infty}\frac{\sigma_2(n)}{n^6},
where \sigma_2(n) denotes the sum of the squares of all the positive integer divisors of n \big(e.g., \sigma_2(6)=1^2+2^2+3^2+6^2=50\big). Find \frac{\pi^{10}}{S}.
Evaluate
\large \sum_{n=1}^\infty \frac{2^{\omega(n)}}{n^2},
where \omega(n) denotes the number of distinct prime divisors of n. If the series above can be expressed as \frac{a}{b}, where a and b are coprime positive integers, find a+b.
|
Factor square Hermitian positive definite matrix into triangular components - Simulink - MathWorks Switzerland
Input Requirements for Valid Output
Response to Nonpositive Definite Input
Performance Comparisons with Other Blocks
Factor square Hermitian positive definite matrix into triangular components
Math Functions / Matrices and Linear Algebra / Matrix Factorizations
dspfactors
The Cholesky Factorization block uniquely factors the square Hermitian positive definite input matrix S as
S=L{L}^{*}
where L is a lower triangular square matrix with positive diagonal elements and L* is the Hermitian (complex conjugate) transpose of L. The block outputs a matrix with lower triangle elements from L and upper triangle elements from L*. The output is not in the same form as the output of the MATLAB® chol function. In order to convert the output of the Cholesky Factorization block to the MATLAB form, use the following equation:
R = triu(LL');
In order to extract the L matrix exclusively, pass the output of the Cholesky Factorization block, LL', to the Extract Triangular Matrix block. Setting the Extract parameter of the Extract Triangular Matrix block to Lower extracts the L matrix. Setting the Extract parameter to Upper extracts the L' matrix.
Here, LL' is the output of the Cholesky Factorization block. Due to roundoff error, these equations do not produce a result that is exactly the same as the MATLAB result.
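The composite output convention and the conversion back to the MATLAB chol form can be mirrored in NumPy (a sketch, not the Simulink block itself):

```python
import numpy as np

S = np.array([[4.0, 2.0], [2.0, 3.0]])   # Hermitian positive definite input

# S = L @ L*, with L lower triangular and a positive real diagonal.
L = np.linalg.cholesky(S)

# Block-style output: lower triangle from L, upper triangle from L*,
# with the shared real diagonal counted once.
LL = np.tril(L) + np.triu(L.conj().T, k=1)

# Conversion to the MATLAB chol form: R = triu(LL'), so that R* @ R == S.
R = np.triu(LL)
assert np.allclose(R.conj().T @ R, S)
```

Extracting `np.tril(LL)` recovers L, mirroring what the Extract Triangular Matrix block does with the Extract parameter set to Lower.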
Block Output Composed of L and L*
The block output is valid only when its input has the following characteristics:
Hermitian — The block does not check whether the input is Hermitian; it uses only the diagonal and upper triangle of the input to compute the output.
Real-valued diagonal entries — The block disregards any imaginary component of the input's diagonal entries.
Positive definite — Set the block to notify you when the input is not positive definite as described in Response to Nonpositive Definite Input.
To generate a valid output, the block algorithm requires a positive definite input (see Input Requirements for Valid Output). Set the Non-positive definite input parameter to determine how the block responds to a nonpositive definite input:
Ignore — Proceed with the computation and do not issue an alert. The output is not a valid factorization. A partial factorization will be present in the upper left corner of the output.
Warning — Display a warning message in the MATLAB Command Window, and continue the simulation. The output is not a valid factorization. A partial factorization will be present in the upper left corner of the output.
Error — Display an error dialog and terminate the simulation.
The Non-positive definite input parameter is a diagnostic parameter. Like all diagnostic parameters on the Configuration Parameters dialog box, it is set to Ignore in the code generated for this block by Simulink® Coder™ code generation software.
Note that L and L* share the same diagonal in the output matrix. Cholesky factorization requires half the computation of Gaussian elimination (LU decomposition), and is always stable.
Response to nonpositive definite matrix inputs: Ignore, Warning, or Error. See Response to Nonpositive Definite Input.
Golub, G. H., and C. F. Van Loan. Matrix Computations. 3rd ed. Baltimore, MD: Johns Hopkins University Press, 1996.
Cholesky Inverse DSP System Toolbox
Cholesky Solver DSP System Toolbox
LDL Factorization DSP System Toolbox
LU Factorization DSP System Toolbox
QR Factorization DSP System Toolbox
See Matrix Factorizations for related information.
|
Microsoft Word - Curvenote Docs
After creating an article in Curvenote, you can export and download your article as a docx file for use with Microsoft Word.
The export feature is currently in beta. Please check back for more details or stay up to date with all Curvenote updates ➡️ Release Notes. If you experience issues please reach out via email or our Community Slack!
Export and download Word
Only owners or collaborators can download articles.
Choose Word format.
In the Exporting Article pop-up:
Click the ☁️⬇️ icon for the Download DOCX option.
Click ✓ Download DOCX.
You can also download the log file for the Word export.
Your exported zip file will be available for download until you save a new version.
|
Pentation Knowpia
In mathematics, pentation (or hyper-5) is the next hyperoperation after tetration and before hexation. It is defined as iterated (repeated) tetration, just as tetration is iterated exponentiation.[1] It is a binary operation defined with two numbers a and b, where a is tetrated to itself b-1 times. For instance, using hyperoperation notation for pentation and tetration,
{\displaystyle 2[5]3} means tetrating 2 to itself 2 times, or {\displaystyle 2[4](2[4]2)}. This can then be reduced to {\displaystyle 2[4](2^{2})=2[4]4=2^{2^{2^{2}}}=2^{2^{4}}=2^{16}=65536.}
The first three values of the expression x[5]2. The value of 3[5]2 is about 7.626 × 1012; values for higher x are much too large to appear on the graph.
The word "pentation" was coined by Reuben Goodstein in 1947 from the roots penta- (five) and iteration. It is part of his general naming scheme for hyperoperations.[2]
There is little consensus on the notation for pentation; as such, there are many different ways to write the operation. However, some are used more widely than others, and some have clear advantages or disadvantages compared to others.
Pentation can be written as a hyperoperation as
{\displaystyle a[5]b}
. In this format,
{\displaystyle a[3]b}
may be interpreted as the result of repeatedly applying the function
{\displaystyle x\mapsto a[2]x}
, for
{\displaystyle b}
repetitions, starting from the number 1. Analogously,
{\displaystyle a[4]b}
, tetration, represents the value obtained by repeatedly applying the function
{\displaystyle x\mapsto a[3]x}
, for
{\displaystyle b}
repetitions, starting from the number 1, and the pentation
{\displaystyle a[5]b}
represents the value obtained by repeatedly applying the function
{\displaystyle x\mapsto a[4]x}
, for
{\displaystyle b}
repetitions, starting from the number 1.[3][4] This will be the notation used in the rest of the article.
In Knuth's up-arrow notation,
{\displaystyle a[5]b}
can be written as
{\displaystyle a\uparrow \uparrow \uparrow b}
or as
{\displaystyle a\uparrow ^{3}b}
. In this notation,
{\displaystyle a\uparrow b}
represents the exponentiation function
{\displaystyle a^{b}}
, and
{\displaystyle a\uparrow \uparrow b}
represents tetration. The operation can be easily adapted for hexation by adding another arrow.
In Conway chained arrow notation,
{\displaystyle a[5]b=a\rightarrow b\rightarrow 3}
.[5]
Another proposed notation is
{\displaystyle {_{b}a}}
, though this is not extensible to higher hyperoperations.[6]
The values of the pentation function may also be obtained from the values in the fourth row of the table of values of a variant of the Ackermann function: if
{\displaystyle A(m,n)}
is defined by the Ackermann recurrence
{\displaystyle A(m,n)=A(m-1,A(m,n-1))}
with the initial conditions
{\displaystyle A(1,n)=an}
and
{\displaystyle A(m,1)=a}
, then
{\displaystyle a[5]b=A(4,b)}
.[7]
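This variant can be written out directly; a minimal sketch, with the base a passed as a parameter (feasible only for very small arguments):

```python
def A(m, n, a):
    """Ackermann-style variant with A(1, n) = a*n and A(m, 1) = a."""
    if m == 1:
        return a * n
    if n == 1:
        return a
    return A(m - 1, A(m, n - 1, a), a)

# Row m = 4 reproduces pentation: a[5]b = A(4, b).
print(A(4, 2, 2), A(4, 3, 2))  # 4 65536
```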
As tetration, its base operation, has not been extended to non-integer heights, pentation
{\displaystyle a[5]b}
is currently only defined for integer values of a and b where a > 0 and b ≥ −1, and a few other integer values which may be uniquely defined. As with all hyperoperations of order 3 (exponentiation) and higher, pentation has the following trivial cases (identities) which hold for all values of a and b within its domain:
{\displaystyle 1[5]b=1}
{\displaystyle a[5]1=a}
Additionally, we can also define:
{\displaystyle a[5]0=1}
{\displaystyle a[5](-1)=0}
Other than the trivial cases shown above, pentation generates extremely large numbers very quickly such that there are only a few non-trivial cases that produce numbers that can be written in conventional notation, as illustrated below:
{\displaystyle 2[5]2=2[4]2=2^{2}=4}
{\displaystyle 2[5]3=2[4](2[4]2)=2[4]4=2^{2^{2^{2}}}=2^{2^{4}}=2^{16}=65,536}
{\displaystyle 2[5]4=2[4](2[4](2[4]2))=2[4](2[4]4)=2[4]65536=2^{2^{2^{\cdot ^{\cdot ^{\cdot ^{2}}}}}}{\mbox{ (a power tower of height 65,536) }}\approx \exp _{10}^{65,533}(4.29508)}
(shown here in iterated exponential notation as it is far too large to be written in conventional notation; note that
{\displaystyle \exp _{10}(n)=10^{n}}
.)
{\displaystyle 3[5]2=3[4]3=3^{3^{3}}=3^{27}=7,625,597,484,987}
{\displaystyle 3[5]3=3[4](3[4]3)=3[4]7,625,597,484,987=\underbrace {3^{3^{.^{.^{.^{3}}}}}} _{3{\text{ is repeated }}3[4]3=7,625,597,484,987{\text{ times}}}{\mbox{ (a power tower of height 7,625,597,484,987) }}\approx \exp _{10}^{7,625,597,484,986}(1.09902)}
{\displaystyle 4[5]2=4[4]4=4^{4^{4^{4}}}=4^{4^{256}}\approx \exp _{10}^{3}(2.19)}
(a number with over 10^153 digits)
{\displaystyle 5[5]2=5[4]5=5^{5^{5^{5^{5}}}}=5^{5^{5^{3125}}}\approx \exp _{10}^{4}(3.33928)}
(a number with more than 10^2184 digits)
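The non-trivial cases above can be checked with a direct recursion over the hyperoperation sequence; a sketch, with multiplication and exponentiation short-circuited to keep the recursion depth small:

```python
def hyper(a, n, b):
    """Compute a[n]b for n >= 2 and integer b >= 0."""
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    if b == 0:
        return 1          # a[n]0 = 1 for n >= 3
    return hyper(a, n - 1, hyper(a, n, b - 1))

print(hyper(2, 5, 2))  # 4
print(hyper(2, 5, 3))  # 65536
print(hyper(3, 5, 2))  # 7625597484987
```

Anything beyond these arguments (e.g. 2[5]4) overflows any realistic memory, as the article's estimates make clear.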
^ Perstein, Millard H. (June 1962), "Algorithm 93: General Order Arithmetic", Communications of the ACM, 5 (6): 344, doi:10.1145/367766.368160, S2CID 581764 .
^ Goodstein, R. L. (1947), "Transfinite ordinals in recursive number theory", The Journal of Symbolic Logic, 12 (4): 123–129, doi:10.2307/2266486, JSTOR 2266486, MR 0022537 .
^ Knuth, D. E. (1976), "Mathematics and computer science: Coping with finiteness", Science, 194 (4271): 1235–1242, Bibcode:1976Sci...194.1235K, doi:10.1126/science.194.4271.1235, PMID 17797067, S2CID 1690489 .
^ Blakley, G. R.; Borosh, I. (1979), "Knuth's iterated powers", Advances in Mathematics, 34 (2): 109–136, doi:10.1016/0001-8708(79)90052-5, MR 0549780 .
^ Conway, John Horton; Guy, Richard (1996), The Book of Numbers, Springer, p. 61, ISBN 9780387979939 .
^ http://www.tetration.org/Tetration/index.html
^ Nambiar, K. K. (1995), "Ackermann functions and transfinite ordinals", Applied Mathematics Letters, 8 (6): 51–53, doi:10.1016/0893-9659(95)00084-4, MR 1368037 .
|
PUSCH uplink channel estimation - MATLAB lteULChannelEstimate - MathWorks Nordic
lteULChannelEstimate
Estimate Channel Characteristics for PUSCH
PUSCH uplink channel estimation
[hest, noiseest] = lteULChannelEstimate(ue,chs,rxgrid)
[hest, noiseest] = lteULChannelEstimate(ue,chs,cec,rxgrid)
[hest, noiseest] = lteULChannelEstimate(ue,chs,cec,rxgrid,refgrid)
[hest, noiseest] = lteULChannelEstimate(ue,chs,rxgrid,refgrid)
[hest, noiseest] = lteULChannelEstimate(ue,chs,rxgrid) returns an estimate for the channel by averaging the least squares estimates of the reference symbols across time and copying these estimates across the allocated resource elements within the time frequency grid. It returns the estimated channel between each transmit and receive antenna and an estimate of the noise power spectral density. See Algorithms.
[hest, noiseest] = lteULChannelEstimate(ue,chs,cec,rxgrid) returns the estimated channel using the method and parameters defined by the user in the channel estimator configuration cec structure.
[hest, noiseest] = lteULChannelEstimate(ue,chs,cec,rxgrid,refgrid) returns the estimated channel using the method and parameters defined by the channel estimation configuration structure and the additional information about the transmitted symbols found in refgrid.
When cec.InterpType is set to 'None', values in refgrid are treated as reference symbols and the resulting hest contains non-zero values in their locations.
[hest, noiseest] = lteULChannelEstimate(ue,chs,rxgrid,refgrid) returns the estimated channel using the estimation method as described in TS 36.101 [1], Annex F4. The method described utilizes extra channel information obtained through information of the transmitted symbols found in refgrid. This additional information allows for an improved estimate of the channel and is required for accurate EVM measurements. rxgrid and refgrid must only contain a whole subframe worth of SC-FDMA symbols.
Use lteULChannelEstimate to estimate the channel characteristics for a received resource grid.
Initialize a UE configuration structure to RMC A3-2. Initialize the channel estimation configuration structure. Generate a transmission waveform. For the purpose of this example, we bypass the channel stage of the system model and copy txWaveform to rxWaveform.
ue = lteRMCUL('A3-2');
ue.TotSubframes = 1;
cec = struct('FreqWindow',7,'TimeWindow',1,'InterpType','cubic');
txWaveform = lteRMCULTool(ue,[1;0;0;1]);
rxWaveform = txWaveform;  % bypass the channel stage
Demodulate the SC-FDMA waveform and perform channel estimation operation on rxGrid.
rxGrid = lteSCFDMADemodulate(ue,rxWaveform);
hest = lteULChannelEstimate(ue,ue.PUSCH,cec,rxGrid);
ue — UE-specific configuration
UE-specific configuration, specified as a structure. ue can contain the following fields.
{N}_{\text{RB}}^{\text{UL}}
Only used if NDMRSID or NPUSCHID is absent.
{n}_{DMRS}^{\left(1\right)}
{n}_{ID}^{csh_DMRS}
chs — PUSCH channel settings
PUSCH channel settings, specified as a structure that can contain the following fields. The parameter field PMI is only required if ue.NTxAnts is set to 2 or 4.
Physical resource block set, specified as a 1-column or 2-column matrix. This parameter field contains the zero-based physical resource block (PRB) indices corresponding to the slot-wise resource allocations for this PUSCH.
If PRBSet is a column vector, the resource allocation is the same in both slots of the subframe. To specify differing PRBs for each slot in a subframe, use a 2-column matrix.
NLayers Optional 1 (default), 2, 3, 4 Number of transmission layers
{n}_{DMRS}^{\left(2\right)}
The following field is required only when ue.NTxAnts is set to 2 or 4.
nonnegative scalar integer (0,...,23)
of the DRS reference symbols
FreqWindow Required
Size of window in resource elements used to average over frequency during channel estimation
The window size must be either an odd number or a multiple of 12.
TimeWindow Required
Size of window in resource elements used to average over time during channel estimation
The window size must be an odd number.
InterpType Required
Type of 2-D interpolation used during channel estimation (see griddata)
Reference Optional
'Antennas' (default), 'Layers', 'None'
Specifies point of reference (signals to internally generate) for channel estimation
The following field is required only when rxgrid contains more than one subframe.
Setting cec.Reference to 'Antennas' uses the PUSCH DMRS after precoding onto the transmission antennas as the reference for channel estimation. In this case, the precoding matrix indicated in chs.PMI is used to precode the DMRS layers onto antennas, and the channel estimate, hest, is a matrix of size M-by-N-by-NRxAnts-by-chs.NTxAnts. Setting cec.Reference to 'Layers' uses the PUSCH DMRS without precoding as the reference for channel estimation. The channel estimate, hest, is of size M-by-N-by-NRxAnts-by-chs.NLayers. Setting cec.Reference to 'None' generates no internal reference signals, and the channel estimation can be performed on arbitrary known REs as given by the refgrid argument. This approach can be used to provide a refgrid containing the SRS signals created on all NTxAnts, allowing for full-rank channel estimation for the purposes of PMI selection when the PUSCH is transmitted with less than full rank.
Channel estimate between each transmit and receive antenna, returned as an NSC-by-NSym-by-NR-by-NT array of complex symbols.
NSC is the number of subcarriers.
NSym is the number of SC-FDMA symbols.
NR is the number of receive antennas.
NT is the number of transmit antennas, ue.NTxAnts.
Optionally, the channel estimator can be configured to use the DM-RS layers as the reference signal. In this case, the 4-D array is an NSC-by-NSym-by-NR-by-NLayers array of complex symbols, where NLayers is the number of transmission layers.
The channel estimation algorithm is described in the following steps.
Extract the demodulation reference signals, or pilot symbols, for a transmit-receive antenna pair from the allocated physical resource blocks within the received subframe.
Using the cleaned pilot symbol estimates, interpolate to obtain an estimate of the channel for the entire number of subframes passed into the function.
Then, the averaged pilot symbol estimates are used to perform a 2-D interpolation across the allocated physical resource blocks. The location of pilot symbols within the subframe is not ideally suited to interpolation, so to account for this positioning, virtual pilots are created and placed outside the area of the current subframe. This placement allows complete and accurate interpolation to be performed.
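The extract/least-squares/average/copy steps can be illustrated outside MATLAB with a toy example. This is a Python sketch of the general technique, not the toolbox implementation; the grid dimensions, pilot values, and noise level are all invented for illustration.

```python
import numpy as np

# Toy resource grid: 12 subcarriers x 14 symbols, one Tx/Rx antenna pair,
# with a flat channel so the estimate can be checked by eye.
rng = np.random.default_rng(0)
h_true = 0.8 * np.exp(1j * 0.3)
pilots = np.exp(1j * rng.uniform(0, 2 * np.pi, 12))   # known DMRS-like symbols
rx_pilots = h_true * pilots + 0.01 * (rng.standard_normal(12)
                                      + 1j * rng.standard_normal(12))

ls = rx_pilots / pilots           # least-squares estimate at each pilot RE
h_avg = ls.mean()                 # average the estimates across the pilots
hest = np.full((12, 14), h_avg)   # copy across the allocated REs
print(abs(h_avg - h_true) < 0.05)  # True: averaging suppresses the noise
```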
The PUSCH channel estimator is only able to deal with contiguous allocation of resource blocks in time and frequency.
lteEqualizeMIMO | lteEqualizeMMSE | lteEqualizeZF | lteSCFDMADemodulate | lteULFrameOffset | lteULPerfectChannelEstimate | griddata | lteEqualizeULMIMO
|
CATEGORY THEORY - Encyclopedia Information
Category theory Information
A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. One often says that a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one, and morphism composition has properties similar to function composition (associativity and existence of identity morphisms). Morphisms are often some sort of function, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid.
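The monoid example can be made concrete. The sketch below models (ℕ, +, 0) as a category with a single object: the morphisms are numbers, composition is addition, and the identity morphism is 0; the category axioms then reduce to the monoid laws. All names here are illustrative, not from any library.

```python
# One-object category built from the monoid (N, +, 0):
# every morphism goes from the single object to itself.
identity = 0

def compose(g, f):
    """Composition of morphisms = the monoid operation."""
    return g + f

morphisms = [1, 2, 5]

# Associativity: h o (g o f) == (h o g) o f
assert all(compose(h, compose(g, f)) == compose(compose(h, g), f)
           for f in morphisms for g in morphisms for h in morphisms)

# Identity laws: 1 o f == f == f o 1
assert all(compose(identity, f) == f == compose(f, identity)
           for f in morphisms)
print("category axioms hold")
```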
{\displaystyle C_{1}}
{\displaystyle C_{2}:}
{\displaystyle C_{1}}
{\displaystyle C_{2}}
{\displaystyle C_{1}}
{\displaystyle C_{2}}
A binary operation ∘, called composition of morphisms, such that for any three objects a, b, and c, we have ∘ : hom(b, c) × hom(a, b) → hom(a, c). The composition of f : a → b and g : b → c is written as g ∘ f or gf, [a] governed by two axioms:
A morphism f : a → b is called an isomorphism if there exists a morphism g : b → a such that f ∘ g = 1b and g ∘ f = 1a. [b]
— Samuel Eilenberg and Saunders Mac Lane, General theory of natural equivalences [1]
Stanislaw Ulam, and some writing on his behalf, have claimed that related ideas were current in the late 1930s in Poland. Eilenberg was Polish, and studied mathematics in Poland in the 1930s. Category theory is also, in some sense, a continuation of the work of Emmy Noether (one of Mac Lane's teachers) in formalizing abstract processes; Noether realized that understanding a type of mathematical structure requires understanding the processes that preserve that structure (homomorphisms). Eilenberg and Mac Lane introduced categories for understanding and formalizing the processes (functors) that relate topological structures to algebraic structures (topological invariants) that characterize them.
Category theory was originally introduced for the need of homological algebra, and widely extended for the need of modern algebraic geometry (scheme theory). Category theory may be viewed as an extension of universal algebra, as the latter studies algebraic structures, and the former applies to any kind of mathematical structure and studies also the relationships between structures of different nature. For this reason, it is used throughout mathematics. Applications to mathematical logic and semantics (categorical abstract machine) came later.
Category theory has been applied in other fields as well. For example, John Baez has shown a link between Feynman diagrams in physics and monoidal categories.[2] Another application of category theory, more specifically of topos theory, has been made in mathematical music theory; see for example the book The Topos of Music, Geometric Logic of Concepts, Theory, and Performance by Guerino Mazzola.
^ Eilenberg, Samuel; MacLane, Saunders (1945). "General theory of natural equivalences". Transactions of the American Mathematical Society. 58: 247. doi: 10.1090/S0002-9947-1945-0013131-6. ISSN 0002-9947.
^ Baez, J.C.; Stay, M. (2009). "Physics, topology, logic and computation: A Rosetta stone". New Structures for Physics. Lecture Notes in Physics. Vol. 813. pp. 95–172. arXiv: 0903.0340. doi: 10.1007/978-3-642-12821-9_2. ISBN 978-3-642-12820-2. S2CID 115169297.
Leinster, Tom (2004). Higher Operads, Higher Categories. Higher Operads. London Math. Society Lecture Note Series. Vol. 298. Cambridge University Press. p. 448. Bibcode: 2004hohc.book.....L. ISBN 978-0-521-53215-0. Archived from the original on 2003-10-25. Retrieved 2006-04-03.
Leinster, Tom (2014). Basic Category Theory. Cambridge Studies in Advanced Mathematics. Vol. 143. Cambridge University Press. arXiv: 1612.09375. ISBN 9781107044241.
Lurie, Jacob (2009). Higher Topos Theory. Annals of Mathematics Studies. Vol. 170. Princeton University Press. arXiv: math.CT/0608040. ISBN 978-0-691-14049-0. MR 2522659.
Simpson, Carlos (2010). Homotopy theory of higher categories. arXiv: 1001.4071. Bibcode: 2010arXiv1001.4071S. , draft of a book.
|
Is the Michelson Experiment Conclusive? - Wikisource, the free online library
Translation:Is the Michelson Experiment Conclusive?
Is the Michelson Experiment Conclusive? (1910)
In German: Ist der Michelsonversuch beweisend?, Annalen der Physik, 338 (11), 186-191. Online
Is the Michelson experiment conclusive?
Since the interference experiment of Michelson – as the foundation of the theory of relativity – has achieved a unique importance for the development of physics, it is necessary to ensure its meaning against all doubts, thus also against the reservations expressed by Kohl[1]. It lies in the nature of things, that only well-known theorems from the theory of interference phenomena can be debated here; the intention, which is to establish clarity, may excuse it when we should become somehow long-winded here.
We start with stating the essential parts of Michelson's interferometer; in neglecting the other parts of the much more complicated apparatus, we are in agreement with Kohl. Those are: the (half-permeably silvered) plane glass plate
{\displaystyle P}
(which we can assume as very thin against the wavelength of light), at which the light-ray incident under ca. 45° is split into a reflected and a passing ray; furthermore two mirrors
{\displaystyle I}
{\displaystyle II}
, upon which the latter impinge approximately perpendicular, so that they come back to plate
{\displaystyle P}
after reflection. A repeated splitting of them at
{\displaystyle P}
, gives (besides other things) two rays propagating to telescope
{\displaystyle F}
, whose interference can be observed in
{\displaystyle F}
The interference image now can be of two totally different types; one studies them best when one asks for the location of the mirror image cast from
{\displaystyle P}
by mirror
{\displaystyle II}
(indicated in the Fig. by
{\displaystyle II'}
I. That mirror image is exactly parallel to mirror
{\displaystyle I}
. Then one sees (in the focal plane of the telescope objective) the interference phenomenon at the plan-parallel layer of air
{\displaystyle I}
{\displaystyle II'}
The physical difference between the front- and the back-surface of a real plan-parallel plate, causing the phase shift
{\displaystyle \pi }
at one of the interfering rays, is replaced here by the difference of the reflections from the silvered surface of plate
{\displaystyle P}
; because one of the rays becomes reflected in glass, the other one in air.
These interference curves of same inclination are known to be concentric circles[2]. The rate-difference amounts
{\displaystyle (a-b)\cos \beta \,}
when we denote by
{\displaystyle \beta }
the angle between the ray direction and the normal of the plates, by
{\displaystyle a}
and
{\displaystyle b}
the distances of mirrors
{\displaystyle I}
{\displaystyle II}
{\displaystyle P}
, measured along the midmost ray. From that it follows first, that the rate-difference cannot be zero for any of the fringes, except when
{\displaystyle a=b}
, but in this case no interference fringes occur at all. In white light these fringes are therefore, if at all, only to be seen as blurred. If one also changes the difference
{\displaystyle a-b}
, then every fringe is moving, so that
{\displaystyle (a-b)\cos \beta \,}
remains constant, thus in the vicinity of the center of the ring systems it is moving, so that
{\displaystyle (a-b)\left(1-{\frac {\beta ^{2}}{2}}\right)}
doesn't change. The fringe displacement is thus not even approximately proportional to the change of
{\displaystyle a-b}
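The non-proportionality described above is easy to check numerically. The sketch below uses assumed values for the wavelength, the arm-length difference, and the ring inclination; it tracks a fringe of fixed order by holding (a−b)cosβ constant while a−b changes.

```python
import math

lam = 500e-9            # assumed wavelength, metres
d0 = 100 * lam          # initial arm-length difference a - b
beta0 = 0.05            # initial ring inclination, radians

const = d0 * math.cos(beta0)   # invariant for a given fringe
for d in (d0, 1.001 * d0, 1.01 * d0):
    beta = math.acos(const / d)
    # beta grows much faster than d: a 0.1% change in a-b already
    # moves the ring visibly, a 1% change far more than 10x as much.
    print(f"a-b = {d/lam:7.3f} lam  ->  beta = {beta:.4f} rad")
```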
II. The mirror image
{\displaystyle II'}
forms a certain angle with
{\displaystyle I}
. Then in the air-wedge between
{\displaystyle II'}
{\displaystyle I}
, interference fringes occur that are complementary to the Newtonian fringes of equal thickness. They are located in that plane, in which the telescope objective is projecting the wedge, and they are lines parallel to the edge of the wedge. If
{\displaystyle a}
is chosen as equal to
{\displaystyle b}
{\displaystyle II'}
intersects mirror
{\displaystyle I}
. Then we have a fringe of rate-difference zero in the middle of the visual field, thus a minimum of intensity being the only one that is not displaced when changing the wave length, and remains clear and colorless under illumination by white light. If one of the arm lengths
{\displaystyle a}
{\displaystyle b}
is slightly changed, then this is the cause for the displacement of the mentioned double-wedge in its plane, and the interference fringes are traveling along with it perpendicular to their direction, by a distance proportional to the difference
{\displaystyle a-b}
. Another difference of these fringes with respect to the plan-parallel rings is the circumstance, that this whole system also occurs under the illumination by a single wave, while as regards the former one, only one point of the visual field is illuminated by that.
As to the relevant experiment concerning the influence of Earth's motion, interference fringes of the second kind have been used without doubt. This can be unequivocally seen from Michelson's representation in his book "Light waves and their uses"[3]. From the later papers of Morley and Miller, I literally quote as references: "In one of the usual adjustments of distances and angles, parallel fringes are seen when the eye or the telescope is made to give distinct vision of one of the mirrors I or II. The fringes apparently coincide with these surfaces. A central fringe is black; on either side are coloured fringes, less and less distinct till they fade away into uniform illumination. If the path of either ray is shortened, the fringes move rapidly to one side. If we engrave a scale on I or II, we can, after any alteration of one of the paths, restore with great accuracy and ease the former relations by bringing the central dark fringe to its original place on this scale." Furthermore: "A telescope magnifying thirty-five diameters gave distinct vision of mirror 8, at whose surface the interference-fringes are apparently localized."[4] From this quotation it also emerges, that the authors have seen, that a change of the arm lengths causes the fringe displacement required by the theory. Since the influence of motion changes the traversing times of
{\displaystyle a}
{\displaystyle b}
, it is thus acting as a mechanical change of the distances, thus by this alone the conclusiveness of the Michelson experiment is proven without respect to all other considerations.
Now to the work of Kohl. In its first part, it deals with the influence of Earth's motion upon the geometric law of reflection; namely it proves, that the images produced by reflection at
{\displaystyle I}
{\displaystyle II}
{\displaystyle P}
fall apart into
{\displaystyle F}
visible images of the light source in consequence of motion, so that the light source is thus apparently doubled for the observer. This contradicts the relativity principle, yet it is a necessary consequence of the theory of the stationary aether. If the distance of both mirror images were not unobservably small, then one would have another criterion for experimentally deciding between both theories. This appears to be paradoxical at first sight, because the geometric law of reflection (shared by all theories) is entirely based on the assumption of linearity of the limiting condition, thus it is common to all of them. Though the agreement is missing between them with respect to the form, that has to be ascribed to the moving interferometer. According to the theory of stationary aether, the angle between plate
{\displaystyle P}
and the central fringe, is 45° in motion as well as at rest. According to relativity theory, however, due to Lorentz contraction it deviates in the case of motion by magnitudes of second order from that value, when it amounts 45° in the state of rest. Conversely, if it would have exactly this value in the case of motion, then it would be different with respect to the co-moving system. One recognizes without further ado, that in this case two mirror images must occur, and thus the discussed result is confirmed in this way. Moreover, this question is of minor importance, since the distance of both images amounts ca.
{\displaystyle 10^{-5}}
, thus it (at best) could be observed only by the best microscope with the greatest aperture; as regards the telescope of the interferometer, however, the circles of diffraction (of which the image consists) have diameters that exceed the geometric projection of such a distance by a large multiple.[5]
The error of Kohl lies at another place. The author namely assumes, that it is about interferences of same inclination at a plan-parallel plate; namely, according to him the boundaries of the telescope- and the collimator objective should constrict the ray path, so that only the midmost spot of the whole ring-system should be visible, so that
{\displaystyle (a-b)\cos \beta }
has (with optical precision) the same value everywhere in the visual field.[6] This emerges with full certainty from his Fig. 3 and 4; because while discussing them on the basis of the doubling of the light source under consideration of smallest angles (
{\displaystyle 10^{-8}}
), the angles between the mirrors
{\displaystyle I}
{\displaystyle P}
{\displaystyle II}
{\displaystyle P}
are always assumed as being exactly 45°; this also follows from the fact, that he always sees the arm lengths
{\displaystyle a}
and
{\displaystyle b}
as slightly different, and eventually from the fact, that (neglecting the difference
{\displaystyle a-b}
) the inclination
{\displaystyle i}
of the rays against the central ray occurs as the only variable that determines the intensity. To explain the occurrence of rectilinear parallel rays, he makes the assumption not supported by any comment in the publications of Michelson or Morley, that light experiences diffraction at a slit being fixed at the collimator lens.[7] As one can see, according to him there is an interference phenomenon of constant phase difference between two equal diffraction images. Now, that under these assumptions no fringe displacement arises as the consequence of a change of arm lengths or as the effect of motion, but (with homogeneous light) only a uniform change of brightness in the entire image, can be seen without calculation.[8] Though it is near at hand, that it has nearly nothing to do with the real Michelson experiment any more, whose conclusiveness is in no way affected by that.
With authorization of Kohl I shall add, that after taking notice of these considerations, he completely agrees with the given explanations regarding the formation of the system of interference fringes.
Munich, Institute for theoretical physics, May 1910.
(Received May 12, 1910)
↑ E. Kohl, Ann. d. Phys. 28. p. 259. 1909
↑ See e.g. E. Gehrcke, Die Anwendung der Interferenzen in der Spektroskopie und Metrologie. Braunschweig 1906, Fig. 31.
↑ A. A. Michelson, Chicago 1903.
↑ E. W. Morley and D. C. Miller, Phil. Mag. (6) 9. p. 669 a. 680. 1905. Esp. p. 670 a. 681. Mirror 8 corresponds to
{\displaystyle I}
↑ See Kohl's paper, p. 285, section e.
↑ See p. 285, section d; 287, line 5 from below until 288, line 9 from above; 289, line 11 ff; p. 306, summary 1.
↑ p. 292, line 12 from above.
↑ p. 298 a. 307, section 2.
|
Warmup Puzzles Practice Problems Online | Brilliant
Joseph, Kevin, and Nicholas are 3 brothers, and the following statements about them are all true:
One of the women above is named Kaylee and the other is named Inara. They each make a statement about who they are as shown.
We know that at least one of them is lying. What color is Inara's dress?
Black White There isn't enough information to be certain.
The next three problems gradually increase in difficulty, and all of them are more challenging than the warm-ups you just solved.
It's worth the effort. The most effective learning experiences are often those times when you get a problem wrong, and then challenge yourself to read, understand, and learn from the solution.
Arrange the cards to make the following true:
The king is in one of the two middle spaces.
The queen is left of the jack and right of the ace.
The ace is directly next to the queen.
(Note: Left and right are from the player's perspective).
Five friends competed in a race.
Pyrrha finished faster than Blake.
The smallest difference in finishing times was between Pyrrha and Ruby.
The largest difference in finishing times was between Ruby and Weiss.
Yang finished either 1st or 3rd.
There is exactly 1 false statement in this list.
There is exactly 2 false statements in this list.
There is exactly 3 false statements in this list.
How many false statements are there in the list above?
0 1 2 3
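Self-referential lists like this one can be brute-forced: a sketch that tries every possible count of false statements and keeps only the consistent ones.

```python
# Statement k (k = 1, 2, 3) claims: "exactly k statements in this list are false".
consistent = []
for false_count in range(4):
    truth = [k == false_count for k in (1, 2, 3)]
    if truth.count(False) == false_count:
        consistent.append(false_count)
print(consistent)  # [2]
```

Only one count is self-consistent, so the puzzle has a unique answer.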
|
Reduced mass Knowpia
In physics, the reduced mass is the "effective" inertial mass appearing in the two-body problem of Newtonian mechanics. It is a quantity which allows the two-body problem to be solved as if it were a one-body problem. Note, however, that the mass determining the gravitational force is not reduced. In the computation, one mass can be replaced with the reduced mass, if this is compensated by replacing the other mass with the sum of both masses. The reduced mass is frequently denoted by
{\displaystyle \mu }
(mu), although the standard gravitational parameter is also denoted by
{\displaystyle \mu }
(as are a number of other physical quantities). It has the dimensions of mass, and SI unit kg.
Given two bodies, one with mass m1 and the other with mass m2, the equivalent one-body problem, with the position of one body with respect to the other as the unknown, is that of a single body of mass[1][2]
{\displaystyle \mu ={\cfrac {1}{{\cfrac {1}{m_{1}}}+{\cfrac {1}{m_{2}}}}}={\cfrac {m_{1}m_{2}}{m_{1}+m_{2}}},\!\,}
where the force on this mass is given by the force between the two bodies.
The reduced mass is always less than or equal to the mass of each body:
{\displaystyle \mu \leq m_{1},\quad \mu \leq m_{2}\!\,}
and has the reciprocal additive property:
{\displaystyle {\frac {1}{\mu }}={\frac {1}{m_{1}}}+{\frac {1}{m_{2}}}\,\!}
which upon rearrangement shows that the reduced mass is half of the harmonic mean of the two masses.
If
{\displaystyle m_{1}=m_{2}}
, then
{\displaystyle {\mu }={\frac {m_{1}}{2}}={\frac {m_{2}}{2}}\,\!}
If
{\displaystyle m_{1}\gg m_{2}}
, then
{\displaystyle \mu \approx m_{2}}
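A quick numeric check of the definition and the two limiting cases (a sketch):

```python
def reduced_mass(m1, m2):
    """mu = m1*m2 / (m1 + m2)."""
    return m1 * m2 / (m1 + m2)

print(reduced_mass(2.0, 2.0))     # 1.0   (equal masses: mu = m/2)
print(reduced_mass(1000.0, 1.0))  # ~0.999 (m1 >> m2: mu approaches m2)

# mu never exceeds either mass.
assert reduced_mass(3.0, 6.0) <= min(3.0, 6.0)
```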
The equation can be derived as follows.
Newtonian mechanics
Using Newton's second law, the force exerted by a body (particle 2) on another body (particle 1) is:
{\displaystyle \mathbf {F} _{12}=m_{1}\mathbf {a} _{1}}
The force exerted by particle 1 on particle 2 is:
{\displaystyle \mathbf {F} _{21}=m_{2}\mathbf {a} _{2}}
According to Newton's third law, the force that particle 2 exerts on particle 1 is equal and opposite to the force that particle 1 exerts on particle 2:
{\displaystyle \mathbf {F} _{12}=-\mathbf {F} _{21}}
{\displaystyle m_{1}\mathbf {a} _{1}=-m_{2}\mathbf {a} _{2}\;\;\Rightarrow \;\;\mathbf {a} _{2}=-{m_{1} \over m_{2}}\mathbf {a} _{1}}
The relative acceleration a_rel between the two bodies is given by:
{\displaystyle \mathbf {a} _{\rm {rel}}:=\mathbf {a} _{1}-\mathbf {a} _{2}=\left(1+{\frac {m_{1}}{m_{2}}}\right)\mathbf {a} _{1}={\frac {m_{2}+m_{1}}{m_{1}m_{2}}}m_{1}\mathbf {a} _{1}={\frac {\mathbf {F} _{12}}{\mu }}}
Note that (since the derivative is a linear operator), the relative acceleration
{\displaystyle \mathbf {a} _{\rm {rel}}}
is equal to the acceleration of the separation
{\displaystyle \mathbf {x} _{\rm {rel}}}
between the two particles.
{\displaystyle \mathbf {a} _{\rm {rel}}=\mathbf {a} _{1}-\mathbf {a} _{2}={\frac {d^{2}\mathbf {x} _{1}}{dt^{2}}}-{\frac {d^{2}\mathbf {x} _{2}}{dt^{2}}}={\frac {d^{2}}{dt^{2}}}(\mathbf {x} _{1}-\mathbf {x} _{2})={\frac {d^{2}\mathbf {x} _{\rm {rel}}}{dt^{2}}}}
This simplifies the description of the system to one force (since
{\displaystyle \mathbf {F} _{12}=-\mathbf {F} _{21}}
), one coordinate
{\displaystyle \mathbf {x} _{\rm {rel}}}
, and one mass
{\displaystyle \mu }
. Thus we have reduced our problem to a single degree of freedom, and we can conclude that particle 1 moves with respect to the position of particle 2 as a single particle of mass equal to the reduced mass,
{\displaystyle \mu }
Lagrangian mechanics
Alternatively, a Lagrangian description of the two-body problem gives a Lagrangian of
{\displaystyle {\mathcal {L}}={1 \over 2}m_{1}\mathbf {\dot {r}} _{1}^{2}+{1 \over 2}m_{2}\mathbf {\dot {r}} _{2}^{2}-V(|\mathbf {r} _{1}-\mathbf {r} _{2}|)\!\,}
where
{\displaystyle {\mathbf {r} }_{i}}
is the position vector of mass
{\displaystyle m_{i}}
(of particle
{\displaystyle i}
). The potential energy V is a function of only the absolute distance between the particles. If we define
{\displaystyle \mathbf {r} =\mathbf {r} _{1}-\mathbf {r} _{2}}
and let the centre of mass coincide with our origin in this reference frame, i.e.
{\displaystyle m_{1}\mathbf {r} _{1}+m_{2}\mathbf {r} _{2}=0}
{\displaystyle \mathbf {r} _{1}={\frac {m_{2}\mathbf {r} }{m_{1}+m_{2}}},\;\mathbf {r} _{2}=-{\frac {m_{1}\mathbf {r} }{m_{1}+m_{2}}}.}
Then substituting above gives a new Lagrangian
{\displaystyle {\mathcal {L}}={1 \over 2}\mu \mathbf {\dot {r}} ^{2}-V(r),}
{\displaystyle \mu ={\frac {m_{1}m_{2}}{m_{1}+m_{2}}}}
is the reduced mass. Thus we have reduced the two-body problem to that of one body.
Reduced mass can be used in a multitude of two-body problems, where classical mechanics is applicable.
Moment of inertia of two point masses in a line
Two point masses rotating around the center of mass.
In a system with two point masses
{\displaystyle m_{1}}
{\displaystyle m_{2}}
such that they are co-linear, the two distances
{\displaystyle r_{1}}
{\displaystyle r_{2}}
to the rotation axis may be found with
{\displaystyle r_{1}=R{\frac {m_{2}}{m_{1}+m_{2}}}}
{\displaystyle r_{2}=R{\frac {m_{1}}{m_{1}+m_{2}}}}
{\displaystyle R}
is the sum of both distances
{\displaystyle R=r_{1}+r_{2}}
This holds for a rotation around the center of mass. The moment of inertia around this axis can be then simplified to
{\displaystyle I=m_{1}r_{1}^{2}+m_{2}r_{2}^{2}=R^{2}{\frac {m_{1}m_{2}^{2}}{(m_{1}+m_{2})^{2}}}+R^{2}{\frac {m_{1}^{2}m_{2}}{(m_{1}+m_{2})^{2}}}=\mu R^{2}.}
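The identity above can be spot-checked numerically; a quick sketch with arbitrary example masses (the values are purely illustrative):

```python
# Verify I = m1*r1^2 + m2*r2^2 = mu*R^2 for arbitrary example masses.
m1, m2, R = 3.0, 5.0, 2.0
mu = m1 * m2 / (m1 + m2)
r1 = R * m2 / (m1 + m2)  # distance of m1 from the center of mass
r2 = R * m1 / (m1 + m2)  # distance of m2 from the center of mass
I_direct = m1 * r1**2 + m2 * r2**2
I_reduced = mu * R**2
print(I_direct, I_reduced)  # both 7.5
```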
Collisions of particles
In a collision with a coefficient of restitution e, the change in kinetic energy can be written as
{\displaystyle \Delta K={\frac {1}{2}}\mu v_{\rm {rel}}^{2}(e^{2}-1)}
where vrel is the relative velocity of the bodies before collision.
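This relation can be checked against a direct two-body computation. The sketch below uses arbitrary example values; the post-collision velocities follow from momentum conservation plus the restitution law:

```python
# Compare Delta-K = 1/2 * mu * v_rel^2 * (e^2 - 1) with a direct calculation.
m1, m2 = 2.0, 3.0
v1, v2 = 4.0, -1.0
e = 0.5  # coefficient of restitution
mu = m1 * m2 / (m1 + m2)
v_rel = v1 - v2

# Post-collision velocities from momentum conservation and the restitution law.
v1p = (m1 * v1 + m2 * v2 + m2 * e * (v2 - v1)) / (m1 + m2)
v2p = (m1 * v1 + m2 * v2 + m1 * e * (v1 - v2)) / (m1 + m2)

dK_direct = (0.5 * m1 * v1p**2 + 0.5 * m2 * v2p**2) - (0.5 * m1 * v1**2 + 0.5 * m2 * v2**2)
dK_formula = 0.5 * mu * v_rel**2 * (e**2 - 1)
print(dK_direct, dK_formula)  # both -11.25
```

Note that e = 1 (elastic) gives ΔK = 0 and e = 0 (perfectly inelastic) gives the maximum kinetic-energy loss, as the formula predicts.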
For typical applications in nuclear physics, where one particle's mass is much larger than the other's, the reduced mass can be approximated as the smaller mass of the system. The limit of the reduced-mass formula as one mass goes to infinity is the smaller mass, so this approximation is used to ease calculations, especially when the larger particle's exact mass is not known.
Motion of two massive bodies under their gravitational attraction
In the case of the gravitational potential energy
{\displaystyle V(|\mathbf {r} _{1}-\mathbf {r} _{2}|)=-{\frac {Gm_{1}m_{2}}{|\mathbf {r} _{1}-\mathbf {r} _{2}|}}\,,}
we find that the position of the first body with respect to the second is governed by the same differential equation as the position of a body with the reduced mass orbiting a body with a mass equal to the sum of the two masses, because
{\displaystyle m_{1}m_{2}=(m_{1}+m_{2})\mu \!\,}
Non-relativistic quantum mechanics
Consider the electron (mass me) and proton (mass mp) in the hydrogen atom.[3] They orbit each other about a common centre of mass, a two body problem. To analyze the motion of the electron, a one-body problem, the reduced mass replaces the electron mass
{\displaystyle m_{e}\rightarrow {\frac {m_{e}m_{p}}{m_{e}+m_{p}}}}
and the proton mass becomes the sum of the two masses
{\displaystyle m_{p}\rightarrow m_{e}+m_{p}}
This idea is used to set up the Schrödinger equation for the hydrogen atom.
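As a quick numeric illustration (the mass values below are standard CODATA-style constants, not taken from the text above):

```python
# Reduced mass of the electron-proton system in hydrogen.
m_e = 9.1093837015e-31   # electron mass, kg
m_p = 1.67262192369e-27  # proton mass, kg

mu = m_e * m_p / (m_e + m_p)

# Since m_p >> m_e, the reduced mass is only slightly below m_e.
print(mu / m_e)  # ~0.99946
```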
"Reduced mass" may also refer more generally to an algebraic term of the form[citation needed]
{\displaystyle x^{*}={1 \over {1 \over x_{1}}+{1 \over x_{2}}}={x_{1}x_{2} \over x_{1}+x_{2}}\!\,}
that simplifies an equation of the form
{\displaystyle \ {1 \over x^{*}}=\sum _{i=1}^{n}{1 \over x_{i}}={1 \over x_{1}}+{1 \over x_{2}}+\cdots +{1 \over x_{n}}.\!\,}
The reduced mass is typically used as a relationship between two system elements in parallel, such as resistors; whether these be in the electrical, thermal, hydraulic, or mechanical domains. A similar expression appears in the transversal vibrations of beams for the elastic moduli.[4] This relationship is determined by the physical properties of the elements as well as the continuity equation linking them.
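The harmonic form above can be sketched as a small helper; the resistor values here are arbitrary examples:

```python
def harmonic_combination(*xs):
    """x* = 1 / (1/x_1 + ... + 1/x_n), the generalized "reduced" form."""
    return 1.0 / sum(1.0 / x for x in xs)

# Two resistors in parallel: 4*12/(4+12) = 3 ohms.
print(harmonic_combination(4.0, 12.0))  # ~3.0
```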
Chirp mass, a relativistic equivalent used in the post-Newtonian expansion
^ Encyclopaedia of Physics (2nd Edition), R. G. Lerner, G. L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, (VHC Inc.) 0-89573-752-3
^ A. Díaz-de-Anda, J. Flores, L. Gutiérrez, R. A. Méndez-Sánchez, G. Monsivais, and A. Morales, "Experimental study of the Timoshenko beam theory predictions", Journal of Sound and Vibration, Volume 331, Issue 26, 17 December 2012, Pages 5732-5744. https://doi.org/10.1016/j.jsv.2012.07.041
|
Find the dimensions of the rectangle of largest area that
Find the dimensions of the rectangle of largest area that has its base on the x-axis and its other two vertices above the x-axis and lying on the parabola. (Round your answers to the nearest hundredth.)
y=6-{x}^{2}
Area of the rectangle is A = xy. Substitute
y=6-{x}^{2}
in the formula of the area.
A=x\left(6-{x}^{2}\right)
=6x-{x}^{3}
Differentiate the area with respect to x and equate it to 0.
{A}^{\prime }=\frac{d}{dx}\left(6x-{x}^{3}\right)
=6-3{x}^{2}
6-3{x}^{2}=0
3{x}^{2}=6
x=±\sqrt{2}
Substitute
x=\sqrt{2}
in
y=6-{x}^{2}
to get the value of the height:
y=6-{x}^{2}
=6-{\left(\sqrt{2}\right)}^{2}
=4
Hence, the dimensions of the rectangle are
2\sqrt{2}×4.
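The maximization above can be confirmed with a brute-force numeric scan (purely illustrative):

```python
# Brute-force check of the maximum of A(x) = x*(6 - x^2) over 0 < x < 3.
best_x, best_area = max(
    ((x, x * (6 - x**2)) for x in (i * 1e-4 for i in range(1, 30000))),
    key=lambda pair: pair[1],
)
print(best_x)         # ~1.4142, i.e. sqrt(2)
print(6 - best_x**2)  # ~4, the height
print(2 * best_x)     # ~2.83, the base 2*sqrt(2)
```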
Find an equation of the tangent plane to the given surface at the specified point.
z=3{y}^{2}-2{x}^{2}+x,\text{ }\left(2,-1,-3\right)
f\left(x\right)=\mathrm{tan}x+3
\left(\frac{\pi }{4},4\right)
Find all values of x such that the tangent line to
f\left(x\right)={\left({x}^{2}-9\right)}^{2}
∣−6∣
z=\left(x+2\right)2-2\left(y-1\right)2-5,\left(2,3,3\right)
h + 9.7 = -9.7
|
Publish/Subscribe Pattern (Pub-sub) | James's Knowledge Graph
The publish/subscribe pattern (pub-sub) is a software engineering design pattern used commonly in distributed systems to communicate asynchronously and parallelize tasks across applications, data pipelines, and services in a decoupled way.
Events are usually published by a single publisher and consumed by multiple subscribers. Publishers are typically system-of-record applications that publish events, while subscribers consume and process them. Publishers generally publish events without regard to how or when subscribers will process them. Events are simply records of "something that happened" (e.g. "customer added", "order placed", and so on).
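A minimal, synchronous in-process sketch of the pattern (real systems deliver events asynchronously through a broker; the `Broker` class and topic names here are illustrative):

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Toy in-process broker: publishers and subscribers share only the
    broker and the event schema, never references to each other."""

    def __init__(self) -> None:
        self._handlers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], Any]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher neither knows nor cares how each handler reacts.
        for handler in self._handlers[topic]:
            handler(event)

broker = Broker()
seen: list = []
broker.subscribe("order placed", seen.append)
broker.subscribe("order placed", lambda event: seen.append(event))
broker.publish("order placed", {"orderId": 1})
print(len(seen))  # 2: one event, delivered to both subscribers
```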
Designing event models
For events to be useful, they must carry enough information for subscribers to process them successfully. Given that it's impractical, and maybe impossible, to foresee all the potential subscribers of an event, it's useful to take a methodical approach to event model design.
One such approach is to use a transitive closure which calculates the relationships that would need to be added between entities to directly (as opposed to indirectly) associate related data. For example, take the following relational schema used to book musical acts at various venues:
erDiagram BANDS { int id string name } BAND-MEMBERS { int bandId int personId } PEOPLE { int id string name } SHOWS { int bandId int venueId string name date start date end } SHOW-CHECKLIST { int id int showId string toDo } VENUES { int id string street string postalCode } BANDS ||--o{ SHOWS : schedule BANDS ||--|{ BAND-MEMBERS : have BAND-MEMBERS }o--|| PEOPLE : are SHOWS }o--|| VENUES : have SHOWS }|--o{ SHOW-CHECKLIST: has
To generate an "show scheduled" event, we need to determine what information a subscriber would need to know whenever an event is booked. We can calculate that by identifying all of the direct and indirect relationships to the SHOWS entities and then demoralize them into the event structure.
The first step is to follow the relationships from the SHOWS table. In this case, we see that a show is defined by the associated BANDS and VENUES entities, thus some data from the BANDS and VENUES records is likely to be useful to consumers of the "show scheduled" event. Additionally, BANDS have BAND-MEMBERS, which map to PEOPLE, thus it likely makes sense to include data from these entities as well.
The second step is to follow the relationships to the SHOWS table. In this case, we see that each SHOW can have zero or more SHOW-CHECKLIST items, thus some data from SHOW-CHECKLIST is likely to be useful as well.
Finally, with all the required information identified, we can design a reasonable schema for the "show scheduled" event:
erDiagram SHOW_SCHEDULED_EVENT { int showId string showName date startDate date endDate string bandName list peopleNames string venueName list toDos }
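A hypothetical instance of this event might look as follows (every field value here is invented purely for illustration):

```python
import json

# Hypothetical "show scheduled" event instance matching the schema above.
show_scheduled = {
    "showId": 42,
    "showName": "Summer Kickoff",
    "startDate": "2024-06-01",
    "endDate": "2024-06-01",
    "bandName": "The Examples",
    "peopleNames": ["Ada", "Grace"],
    "venueName": "Main Street Hall",
    "toDos": ["confirm sound check", "print set lists"],
}
print(json.dumps(show_scheduled, indent=2))
```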
Though this can seem excessive or a violation of YAGNI because this data can be looked up with only the id of the added SHOWS entity, it's not. Such lookups can create excessive "chatter" that actually consumes more network, storage, and compute cycles than the larger payload would tend to save. There is also the risk that the underlying data changes between when the event was published and when it was processed, leaving subscribers with inaccurate data about the event when it occurred.
Finally, this level of de-normalization can reduce dependence on message order. If, for example, we relied on the "venue created" event to be processed before the "show scheduled" event, ensuring the messages are processed in sequence can add complexity and reduce scalability.
Competing consumer pattern
The competing consumer pattern organizes subscribers by type to simplify how services scale. When there are multiple types of subscribers, each event should be consumed by every type of subscriber. However, when there are multiple instances of a single type of subscriber, each event should only be consumed by a single subscriber instance.
To manage this distinction, subscribers can be grouped by queue behind an exchange. Each queue serves a single type of subscriber, and the exchange pushes every published event to each queue. This allows the system to scale based on the workload (i.e. queue size) and avoid redundant event processing.
flowchart TD Publisher -->|publish| Exchange((Exchange)) Exchange -->|push| Q1[(Queue 1)] Exchange -->|push| Q2[(Queue 2)] Exchange -->|push| Q3[(Queue 3)] Q1 -->|pull| Q1P[Q1 Subscriber Pool] Q2 -->|pull| Q2P[Q2 Subscriber Pool] Q3 -->|pull| Q3P[Q3 Subscriber Pool]
Each subscriber in the subscriber pool pulls from the same queue, ensuring each message is only processed once.
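The pattern can be sketched in-process with one queue and a pool of competing workers (a simplified stand-in for a real broker; the pool and message counts are illustrative):

```python
import queue
import threading

# One queue per subscriber type; several identical workers compete for
# messages from the same queue, so each message is handled exactly once.
q = queue.Queue()
processed = []
lock = threading.Lock()

def worker() -> None:
    while True:
        try:
            msg = q.get(timeout=0.2)
        except queue.Empty:
            return  # queue drained; this worker instance exits
        with lock:
            processed.append(msg)
        q.task_done()

for msg in range(100):
    q.put(msg)

pool = [threading.Thread(target=worker) for _ in range(4)]
for t in pool:
    t.start()
for t in pool:
    t.join()

print(len(processed), len(set(processed)))  # 100 100: once each, no repeats
```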
Commutative message handling
Commutative message handling is a desirable property of subscribers where the order in which messages are processed doesn't matter, similar to the commutative property in algebra (i.e.
a + b = b + a
). Though message queues typically deliver messages in the order they were received, message processing may error out and have to be retried as later messages are successfully processed, or a later message may simply get processed faster than an earlier message.
Messages of the same type can be made commutative by attaching a timestamp to the event. For example, if a "venue description changed" event is processed out of order, it could cause the subscriber to revert its venue description to an older version. However, if the event includes a timestamp and the stored data is newer than the message timestamp, then the message can be safely discarded.
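A sketch of this timestamp check (the event shape and store layout are illustrative):

```python
# Keep the write timestamp next to the stored value; discard stale events.
store: dict[int, tuple[str, int]] = {}  # venueId -> (description, timestamp)

def on_venue_description_changed(event: dict) -> None:
    venue_id, ts = event["venueId"], event["timestamp"]
    current = store.get(venue_id)
    if current is not None and current[1] >= ts:
        return  # stored data is newer (or as new); safe to discard
    store[venue_id] = (event["description"], ts)

# Out-of-order delivery still converges on the newest description.
on_venue_description_changed({"venueId": 1, "description": "new", "timestamp": 20})
on_venue_description_changed({"venueId": 1, "description": "old", "timestamp": 10})
print(store[1])  # ('new', 20)
```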
Messages of different types can be made commutative by storing related information as it arrives. For example, if a show is scheduled for a venue and then the venue description is changed, we can't know for sure that the related messages will be processed in that order. Thus, if the "venue description changed" event is processed before the "show scheduled" event (even though it happened afterward), we could process stale venue data when the "show scheduled" event is processed. To prevent this, the venue data should also be saved by the subscriber, along with its timestamp. Then, when the "show scheduled" event is processed, its timestamp can be compared to the timestamps of the related data and only "new" information processed.
Deeper Knowledge on Publish/Subscribe Pattern (Pub-sub)
Broader Topics Related to Publish/Subscribe Pattern (Pub-sub)
Publish/Subscribe Pattern (Pub-sub) Knowledge Graph
|
Analysis of deviance for generalized linear regression model - MATLAB devianceTest - MathWorks Italia
devianceTest
Analysis of deviance for generalized linear regression model
tbl = devianceTest(mdl)
tbl = devianceTest(mdl) returns an analysis of deviance table for the generalized linear regression model mdl. The table tbl gives the result of a test that determines whether the model mdl fits significantly better than a constant model.
Perform a deviance test on a generalized linear regression model.
Test whether the model differs from a constant in a statistically significant way.
The small p-value indicates that the model significantly differs from a constant. Note that the model display of mdl includes the statistics shown in the second row of the table.
tbl — Analysis of deviance summary statistics
Analysis of deviance summary statistics, returned as a table.
tbl contains analysis of deviance statistics for both a constant model and the model mdl. The table includes these columns for each model.
Deviance is twice the difference between the loglikelihoods of the corresponding model (mdl or constant) and the saturated model. For more information, see Deviance.
Degrees of freedom for the error (residuals), equal to n – p, where n is the number of observations, and p is the number of estimated coefficients
F-statistic or chi-squared statistic, depending on whether the dispersion is estimated (F-statistic) or not (chi-squared statistic)
F-statistic is the difference between the deviance of the constant model and the deviance of the full model, divided by the estimated dispersion.
Chi-squared statistic is the difference between the deviance of the constant model and the deviance of the full model.
p-value associated with the test: chi-squared statistic with p – 1 degrees of freedom, or F-statistic with p – 1 numerator degrees of freedom and DFE denominator degrees of freedom, where p is the number of estimated coefficients
{D}_{1}=-2\left(\mathrm{log}L\left({b}_{1},y\right)-\mathrm{log}L\left({b}_{S},y\right)\right),
\begin{array}{l}D={D}_{2}-{D}_{1}=-2\left(\mathrm{log}L\left({b}_{2},y\right)-\mathrm{log}L\left({b}_{S},y\right)\right)+2\left(\mathrm{log}L\left({b}_{1},y\right)-\mathrm{log}L\left({b}_{S},y\right)\right)\\ \text{ }\text{ }\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}=-2\left(\mathrm{log}L\left({b}_{2},y\right)-\mathrm{log}L\left({b}_{1},y\right)\right).\end{array}
GeneralizedLinearModel | CompactGeneralizedLinearModel | coefTest
|
Gyroscope sensor parameters - MATLAB - MathWorks 한국
gyroparams
AccelerationBias
Generate Gyroscope Data from Stationary Inputs
Gyroscope sensor parameters
The gyroparams class creates a gyroscope sensor parameters object. You can use this object to model a gyroscope when simulating an IMU with imuSensor. See the Algorithms section of imuSensor for details of gyroparams modeling.
params = gyroparams(Name,Value)
params = gyroparams returns an ideal gyroscope sensor parameters object with default values.
params = gyroparams(Name,Value) configures gyroparams object properties using one or more Name,Value pair arguments. Name is a property name and Value is the corresponding value. Name must appear inside single quotes (''). You can specify several name-value pair arguments in any order as Name1,Value1,...,NameN,ValueN. Any unspecified properties take default values.
MeasurementRange — Maximum sensor reading (rad/s)
Resolution — Resolution of sensor measurements ((rad/s)/LSB)
Resolution of sensor measurements in (rad/s)/LSB, specified as a real nonnegative scalar. Here, LSB is the acronym for least significant bit.
ConstantBias — Constant sensor offset bias (rad/s)
{v}_{measure}=\frac{1}{100}M{v}_{true}=\frac{1}{100}\left[\begin{array}{ccc}{m}_{11}& {m}_{12}& {m}_{13}\\ {m}_{21}& {m}_{22}& {m}_{23}\\ {m}_{31}& {m}_{32}& {m}_{33}\end{array}\right]{v}_{true}
NoiseDensity — Power spectral density of sensor noise ((rad/s)/√Hz)
Power spectral density of sensor noise in (rad/s)/√Hz, specified as a real scalar or 3-element row vector. This property corresponds to the angle random walk (ARW). Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
BiasInstability — Instability of the bias offset (rad/s)
RandomWalk — Integrated white noise of sensor ((rad/s)(√Hz))
Integrated white noise of sensor in (rad/s)(√Hz), specified as a real scalar or 3-element row vector. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
TemperatureBias — Sensor bias from temperature ((rad/s)/℃)
Sensor bias from temperature in ((rad/s)/℃), specified as a real scalar or 3-element row vector. Any scalar input is converted into a real 3-element row vector where each element has the input scalar value.
AccelerationBias — Sensor bias from linear acceleration (rad/s)/(m/s2)
Generate gyroscope data for an imuSensor object from stationary inputs.
Generate a gyroscope parameter object with a maximum sensor reading of 4.363
\mathrm{rad}/\mathrm{s}
and a resolution of 1.332e-4
\left(\mathrm{rad}/\mathrm{s}\right)/\mathrm{LSB}
. The constant offset bias is 0.349
\mathrm{rad}/\mathrm{s}
. The sensor has a power spectral density of 8.727e-4
\left(\mathrm{rad}/\mathrm{s}\right)/\sqrt{\mathrm{Hz}}
. The bias from temperature is 0.349
\left(\mathrm{rad}/\mathrm{s}\right)/℃
. The scale factor error from temperature is 0.2%/℃
. The sensor axes are skewed by 2%. The sensor bias from linear acceleration is 0.178e-3
\left(rad/s\right)/\left(m/{s}^{2}\right)
params = gyroparams('MeasurementRange',4.363,'Resolution',1.332e-04,'ConstantBias',0.349,'NoiseDensity',8.727e-4,'TemperatureBias',0.349,'TemperatureScaleFactor',0.02,'AxesMisalignment',2,'AccelerationBias',0.178e-3);
Use a sample rate of 100 Hz spaced out over 1000 samples. Create the imuSensor object using the gyroscope parameter object.
imu = imuSensor('accel-gyro','SampleRate', Fs, 'Gyroscope', params);
Generate gyroscope data from the imuSensor object.
[~, gyroData] = imu(acc, angvel, orient);
Plot the resultant gyroscope data.
plot(t, gyroData)
accelparams | magparams | imuSensor
|
A car at point
A
on a straight road goes west for
20
seconds, arriving at point
B
200
m away from
A.
The car then heads back to the east for
30
C
800
B.
What is the displacement of the car from point
A
+
is east and
-
is west.
One morning, you wake up and decide to take a jog through the town. You take
54 \text{ seconds}
to run
250 \text{ m}
straight north, then you turn right and take
42 \text{ seconds}
178 \text{ m}.
Then you turn right again and run down the street for
9 \text{ seconds}
30 \text{ m}
until you to stop. Assuming that the coordinates of your home are the origin
(0, 0),
find the position vector of the place where you stop.
Take eastward as
+\hat{i}
and northward as
+\hat{j} .
178 \text{ m} \hat{i} - 220 \text{ m} \hat{j}
178 \text{ m} \hat{i} + 280 \text{ m} \hat{j}
220 \text{ m} \hat{i} + 178 \text{ m} \hat{j}
178 \text{ m} \hat{i} + 220 \text{ m} \hat{j}
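The stopping point can be computed by summing the three legs as (east, north) components:

```python
# Legs of the jog as (east, north) displacement components:
# 250 m north, then 178 m east (right turn), then 30 m south (right again).
legs = [(0, 250), (178, 0), (0, -30)]
x = sum(dx for dx, _ in legs)
y = sum(dy for _, dy in legs)
print(x, y)  # 178 220, i.e. (178 m) i-hat + (220 m) j-hat
```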
A soccer player undergoes two successive displacements:
\Delta \vec{r_A} = (28 \text{ m})\hat{i} + (8\text{ m})\hat{j} \text{ followed by } \Delta \vec{r_B} = (-28\text{ m}) \hat{i}+(7\text{ m}) \hat{j}.
What is the total displacement of the soccer player?
30 \text{ m} \hat{i} - 5 \text{ m} \hat{j}
15 \text{ m} \hat{i}
20 \text{ m} \hat{i} - 5 \text{ m} \hat{j}
15 \text{ m} \hat{j}
The motion of a creature in three dimensions can be described by the following equations for positions in
x, y
z
\begin{aligned} x(t)&=3t^2 + 6 \\ y(t)&=- t^2 + 3t - 2 \\ z(t)&= 3t + 1. \end{aligned}
Find the position vector of the creature at
t = 3.
( {36}, -6, {8})
( {33}, -2, {10})
( {38}, -5, {12})
( {28}, -6, {15})
The Andromeda galaxy is a giant spiral cluster of stars whose mass is that of
300
billion Suns. You can see it with the naked eye as a faint elongated cloud in the night sky. Inasmuch as it subtends an angle of
4.1^{\circ}
and is known to be larger than our own galaxy [
163 \times 10^3
light-years (units of ly) in diameter for Andromeda as compared to
100 \times 10^3
light-years for our galaxy], how far away is it in light-years?
4.6 \times 10^6 \text{ ly}
1.7 \times 10^6 \text{ ly}
5.9 \times 10^5 \text{ ly}
2.3 \times 10^6 \text{ ly}
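The distance follows from the small-angle relation distance ≈ diameter / angle (in radians):

```python
import math

# Small-angle estimate: distance ~ physical diameter / angular size (rad).
diameter_ly = 163e3
angle_rad = math.radians(4.1)
distance_ly = diameter_ly / angle_rad
print(f"{distance_ly:.2e} ly")  # ~2.28e+06 ly, matching the 2.3e6 ly choice
```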
|
Use Cramer's rule to solve x_1+2x_2=5, -x_1+x_2=1
Use cramers
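A quick sketch of Cramer's rule for this 2x2 system:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# System: x1 + 2*x2 = 5,  -x1 + x2 = 1
D = det2(1, 2, -1, 1)    # coefficient determinant = 3
D1 = det2(5, 2, 1, 1)    # first column replaced by the constants = 3
D2 = det2(1, 5, -1, 1)   # second column replaced by the constants = 6
x1, x2 = D1 / D, D2 / D
print(x1, x2)  # 1.0 2.0
```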
f\left(x\right)=\sqrt{4-x}
a=0
\sqrt{3.9}
\sqrt{3.99}
The table shows the number y of muffins baked in x pans. What is the missing y-value that makes the table represent a linear function?
For the given a system of linear equations
-2x+3y+z=12
Use matrix inversion to solve simultaneous equations.
What I wanted to ask was , given a homogeneous system of n variables , like this having 4 variables:
{a}_{1}x+{a}_{2}y+{a}_{3}z+{a}_{4}w=0
{a}_{2}x+{a}_{3}y+{a}_{4}z+{a}_{1}w=0
{a}_{3}x+{a}_{4}y+{a}_{1}z+{a}_{2}w=0
{a}_{4}x+{a}_{1}y+{a}_{2}z+{a}_{3}w=0
Here , as we know that this system has zero solution and we can see that the coefficients are rotating in each of the linear equation .
So , is there any general form for solution of this system other than the zero solution ?
Also , what if we are given the non-zero solution then can we find the values of
\left({a}_{1},{a}_{2},{a}_{3},{a}_{4}\right)
How do I craft a linear equation so that it is in the form of
ax+by+c=0
{a}^{2}+{b}^{2}=1
if I have two points? I know how to get it into the form ax+by+c=0 but I can't figure out the algorithm for satisfying the second condition.
The reduced row echelon form of a system of linear equations is given.Write the system of equations corresponding to the given matrix. Use
x,y
x,y,z;
{x}_{1},{x}_{2},{x}_{3},{x}_{4}
as variables. Determine whether the system is consistent or inconsistent. If it is consistent, give the solution.
\left[1,0;1,0;5,-1\right]
|
Finite Element Method Basics - MATLAB & Simulink - MathWorks 한국
The core Partial Differential Equation Toolbox™ algorithm uses the Finite Element Method (FEM) for problems defined on bounded domains in 2-D or 3-D space. In most cases, elementary functions cannot express the solutions of even simple PDEs on complicated geometries. The finite element method describes a complicated geometry as a collection of subdomains by generating a mesh on the geometry. For example, you can approximate the computational domain Ω with a union of triangles (2-D geometry) or tetrahedra (3-D geometry). The subdomains form a mesh, and each vertex is called a node. The next step is to approximate the original PDE problem on each subdomain by using simpler equations.
-\nabla \cdot \left(c\nabla u\right)+au=f\text{ on domain }\mathrm{Ω}
with the Dirichlet condition
u=r
on
\partial {\mathrm{Ω}}_{D}
, the generalized Neumann condition
\stackrel{→}{n}\cdot \left(c\nabla u\right)+qu=g
on
\partial {\mathrm{Ω}}_{N}
, where
\partial \mathrm{Ω}=\partial {\mathrm{Ω}}_{D}\cup \partial {\mathrm{Ω}}_{N}
is the boundary of Ω.
v
and integrating over the domain Ω.
\underset{\mathrm{Ω}}{\int }\left(-\nabla \cdot \left(c\nabla u\right)+au-f\right)v\,d\mathrm{Ω}=0\quad \forall v
where the test functions v are chosen to vanish on the Dirichlet boundary, that is, v = 0 on
\partial {\mathrm{Ω}}_{D}
Integrating by parts (Green’s formula) the second-order term results in:
\underset{\mathrm{Ω}}{\int }\left(c\nabla u\cdot \nabla v+auv\right)d\mathrm{Ω}-\underset{\partial {\mathrm{Ω}}_{N}}{\int }\stackrel{→}{n}\cdot \left(c\nabla u\right)v\,d\partial {\mathrm{Ω}}_{N}-\underset{\partial {\mathrm{Ω}}_{D}}{\int }\stackrel{→}{n}\cdot \left(c\nabla u\right)v\,d\partial {\mathrm{Ω}}_{D}=\underset{\mathrm{Ω}}{\int }fv\,d\mathrm{Ω}\quad \forall v
Since v = 0 on
\partial {\mathrm{Ω}}_{D}
, the Dirichlet boundary term vanishes; substituting the generalized Neumann condition
\stackrel{→}{n}\cdot \left(c\nabla u\right)=g-qu
on
\partial {\mathrm{Ω}}_{N}
gives:
\underset{\mathrm{Ω}}{\int }\left(c\nabla u\cdot \nabla v+auv\right)d\mathrm{Ω}+\underset{\partial {\mathrm{Ω}}_{N}}{\int }quv\,d\partial {\mathrm{Ω}}_{N}=\underset{\partial {\mathrm{Ω}}_{N}}{\int }gv\,d\partial {\mathrm{Ω}}_{N}+\underset{\mathrm{Ω}}{\int }fv\,d\mathrm{Ω}\quad \forall v
Note that all manipulations up to this stage are performed on the continuum Ω, the global domain of the problem. Therefore, the collections of admissible and trial functions span infinite-dimensional function spaces. The next step is to discretize the weak form by subdividing Ω into smaller subdomains or elements
{\mathrm{Ω}}^{e}
, where
\mathrm{Ω}=\cup {\mathrm{Ω}}^{e}
. Restricting the approximate solution
{u}_{h}
and the test functions
{v}_{h}
to each element gives the discretized weak form on
{\mathrm{Ω}}^{e}
:
\underset{{\mathrm{Ω}}^{e}}{\int }\left(c\nabla {u}_{h}\cdot \nabla {v}_{h}+a{u}_{h}{v}_{h}\right)\,d{\mathrm{Ω}}^{e}+\underset{\partial {\mathrm{Ω}}_{N}^{e}}{\int }q{u}_{h}{v}_{h}\,d\partial {\mathrm{Ω}}_{N}^{e}=\underset{\partial {\mathrm{Ω}}_{N}^{e}}{\int }g{v}_{h}\,d\partial {\mathrm{Ω}}_{N}^{e}+\underset{{\mathrm{Ω}}^{e}}{\int }f{v}_{h}\,d{\mathrm{Ω}}^{e}\quad \forall {v}_{h}
Next, let ϕi, with i = 1, 2, ... , Np, be the piecewise polynomial basis functions for the subspace containing the collections
{u}_{h}
and
{v}_{h}
. Any
{u}_{h}
in this subspace can be expanded as
{u}_{h}=\sum _{i=1}^{{N}_{p}}{U}_{i}{\phi }_{i}
Substituting this expansion for
{u}_{h}
and choosing
{v}_{h}={\phi }_{i}
for each i,
FEM yields a system KU = F where the matrix K and the right side F contain integrals in terms of the test functions ϕi, ϕj, and the coefficients c, a, f, q, and g defining the problem. The solution vector U contains the expansion coefficients of uh, which are also the values of uh at each node xk (k = 1,2 for a 2-D problem or k = 1,2,3 for a 3-D problem) since uh(xk) = Ui.
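As a concrete (and purely illustrative, pure-Python) 1-D sketch of assembling and solving KU = F, consider the model problem -u'' = 1 on (0, 1) with u(0) = u(1) = 0 and linear elements; for this particular problem the nodal FEM values coincide with the exact solution u(x) = x(1 - x)/2:

```python
# Minimal 1-D FEM sketch (illustrative, not part of the toolbox):
# solve -u'' = 1 on (0,1), u(0) = u(1) = 0, with linear elements.
n = 8                        # number of elements
h = 1.0 / n
nodes = [i * h for i in range(n + 1)]
m = n - 1                    # number of interior unknowns

# Stiffness matrix K is tridiagonal: 2/h on the diagonal, -1/h off it;
# the load vector entries are F_i = integral of f * phi_i = h for f = 1.
sub = [-1.0 / h] * m
diag = [2.0 / h] * m
sup = [-1.0 / h] * m
F = [h] * m

# Thomas algorithm for the tridiagonal system K U = F.
for i in range(1, m):
    w = sub[i] / diag[i - 1]
    diag[i] -= w * sup[i - 1]
    F[i] -= w * F[i - 1]
U = [0.0] * m
U[-1] = F[-1] / diag[-1]
for i in range(m - 2, -1, -1):
    U[i] = (F[i] - sup[i] * U[i + 1]) / diag[i]

# For this problem the nodal values match the exact u(x) = x(1 - x)/2.
for x, u in zip(nodes[1:-1], U):
    print(f"x = {x:.3f}  u_h = {u:.6f}  exact = {x * (1 - x) / 2:.6f}")
```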
For time-dependent problems,
d\frac{\partial u}{\partial t}-\nabla \cdot \left(c\nabla u\right)+au=f
the same spatial expansion with time-dependent coefficients,
{u}_{h}\left(x,t\right)=\sum _{i=1}^{N}{U}_{i}\left(t\right){\phi }_{i}\left(x\right)
leads to systems of ordinary differential equations, such as
M\frac{dU}{dt}+KU=F
M\frac{{d}^{2}U}{d{t}^{2}}+KU=F
FEM can also solve eigenvalue problems of the form
-\nabla \cdot \left(c\nabla u\right)+au=\mathrm{λ}du
for the unknowns u and λ, where λ is a complex number. Using the FEM discretization, you solve the algebraic eigenvalue problem KU = λMU to find uh as an approximation to u. To solve eigenvalue problems, use solvepdeeig.
Nonlinear problems. If the coefficients c, a, f, q, or g are functions of u or ∇u, the PDE is called nonlinear and FEM yields a nonlinear system K(U)U = F(U).
|
invmellin - Maple Help
invmellin(expr, t, s)
expression, equation, or set of expressions or equations to be transformed
range for Re(t) (optional)
option to run transform under (optional)
The invmellin function computes the inverse Mellin transform (F(s)) of expr (f(t)), a linear transformation from
C\left(C\right)\to C[0,\infty )
defined by the contour integral:
F\left(s\right)=\frac{-\frac{I}{2}\left({\int }_{c-\mathrm{\infty }I}^{c+\mathrm{\infty }I}f\left(t\right){s}^{-t}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}ⅆt\right)}{\mathrm{\pi }}
In this integral, c is assumed to be real. Also note that Maple currently does not handle general contour integrals. The above contour integral definition is only used to provide the information below on properties of the inverse Mellin transform.
F\left(s\right)
returned is defined only on the positive real axis.
There are multiple transforms
F\left(s\right)
f\left(t\right)
, corresponding to the cases where
\mathrm{c1}<\mathrm{ℜ}\left(t\right)
<\mathrm{c2}
for various boundaries
\mathrm{c1}
\mathrm{c2}
. The range is specified by the parameter ran. This parameter is optional. If the range parameter is not given, it is assumed to be
-\mathrm{\infty }..\mathrm{\infty }
All constants are assumed to be complex unless otherwise specified.
The invmellin function attempts to simplify an expression according to a set of heuristics, and then to match the result against internal lookup tables of patterns. These tables are of expressions containing algebraic, Bessel, exponential, GAMMA, trigonometric, as well as other functions. The user can add their own functions to invmellin's lookup tables with the function addtable.
Other functions that can be transformed are linear combinations of products of integer powers of t; rational polynomials; terms of the form
{a}^{-t}
0<a
; some definite integrals of functions whose transforms are known; derivatives of functions whose transforms are known; convolutions of two functions
f\left(t\right)
g\left(t\right)
whose transforms are known; and functions of the form
f\left(at+b\right)
f\left(a{t}^{n}\right)
0<a,b
complex, and
n
a positive integer where the transforms of
f\left(t\right)
f\left({t}^{n}\right)
invmellin recognizes the Dirac-delta (or unit-impulse) function as Dirac(t) and Heaviside's unit step function as Heaviside(t).
The command with(inttrans,invmellin) allows the use of the abbreviated form of this command.
\mathrm{with}\left(\mathrm{inttrans}\right):
\mathrm{assume}\left(0<a\right)
\mathrm{assume}\left(b,\mathrm{complex}\right):
\mathrm{assume}\left(c,\mathrm{complex}\right):
\mathrm{assume}\left(n,\mathrm{posint}\right):
Inversion of mellin
G≔\mathrm{mellin}\left(g\left(x\right),x,y\right)
\textcolor[rgb]{0,0,1}{G}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{mellin}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\right)
\mathrm{invmellin}\left(G,y,z\right)
\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\right)
\mathrm{with}\left(\mathrm{inttrans}\right):
\mathrm{addtable}\left(\mathrm{invmellin},F\left(t\right),f\left(s\right),t,s,\mathrm{invmellin}=-\mathrm{\infty }..\mathrm{\infty }\right):
\mathrm{addtable}\left(\mathrm{invmellin},\mathrm{F1}\left(t\right),\mathrm{f1}\left(s\right),t,s,\mathrm{invmellin}=-\mathrm{\infty }..\mathrm{\infty }\right):
\mathrm{addtable}\left(\mathrm{invmellin},\mathrm{F2}\left(t\right),\mathrm{f2}\left(s\right),t,s,\mathrm{invmellin}=-\mathrm{\infty }..\mathrm{\infty }\right):
\mathrm{invmellin}\left(F\left(x\right),x,y\right)
\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{y}\right)
\mathrm{invmellin}\left(\mathrm{F1}\left(x\right),x,y\right)
\mathrm{f1}\left(y\right)
\mathrm{invmellin}\left(\mathrm{F2}\left(x\right),x,y\right)
\mathrm{f2}\left(y\right)
\mathrm{invmellin}\left(b\,\mathrm{F1}\left(z\right)+c\,\mathrm{F2}\left(z\right),z,x\right)
b\,\mathrm{f1}\left(x\right)+c\,\mathrm{f2}\left(x\right)
\mathrm{invmellin}\left(F\left(az+b\right),z,x\right)
\frac{{x}^{b/a}\,f\left({x}^{1/a}\right)}{a}
\mathrm{invmellin}\left(F\left(a{z}^{n}\right),z,x\right)
\frac{\mathrm{invmellin}\left(F\left({z}^{n}\right),z,{x}^{1/{a}^{1/n}}\right)}{{a}^{1/n}}
\mathrm{invmellin}\left(\mathrm{D}\left(F\right)\left(z\right),z,x\right)
\mathrm{ln}\left(x\right)\,f\left(x\right)
\mathrm{invmellin}\left(zF\left(z\right),z,x\right)
-x\,\frac{ⅆ}{ⅆx}f\left(x\right)
\mathrm{invmellin}\left({a}^{-z}F\left(z\right),z,x\right)
\mathrm{invmellin}\left({ⅇ}^{-z\,\mathrm{ln}\left(a\right)}\,F\left(z\right),z,x\right)
\mathrm{invmellin}\left(\frac{\mathrm{\Gamma }\left(2-z\right)}{\mathrm{\Gamma }\left(1-z\right)}F\left(z-1\right),z,x\right)
\frac{ⅆ}{ⅆx}f\left(x\right)
\mathrm{invmellin}\left(\mathrm{F1}\left(z\right)\mathrm{F2}\left(1-z\right),z,x\right)
{\int }_{0}^{\mathrm{\infty }}\mathrm{f1}\left(x\,\mathrm{\_U}\right)\,\mathrm{f2}\left(\mathrm{\_U}\right)\,ⅆ\mathrm{\_U}
Some simple functions
\mathrm{invmellin}\left(1,z,x\right)
\mathrm{Dirac}\left(x-1\right)
\mathrm{invmellin}\left(z,z,x\right)
-x\,\mathrm{Dirac}\left(1,x-1\right)
\mathrm{invmellin}\left(\mathrm{exp}\left(a{z}^{2}\right),z,x\right)
\frac{{ⅇ}^{-{\mathrm{ln}\left(1/x\right)}^{2}/\left(4a\right)}}{2\sqrt{\mathrm{\pi }a}}
\mathrm{invmellin}\left(\mathrm{\Gamma }\left(z\right),z,x\right)
\mathrm{invmellin}\left(\mathrm{\Gamma }\left(z\right),z,x\right)
\mathrm{invmellin}\left(\mathrm{\Gamma }\left(z\right),z,x,0..\mathrm{\infty }\right)
{ⅇ}^{-x}
\mathrm{invmellin}\left(\mathrm{\Gamma }\left(z\right),z,x,-1..0\right)
{ⅇ}^{-x}-1
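For reference, the rules above all follow from the standard Mellin transform pair; this reminder is added here and is not part of the original help-page output:

```latex
% Mellin transform and its inverse (c taken inside the fundamental strip):
F(z) = \int_{0}^{\infty} f(x)\, x^{z-1}\, \mathrm{d}x ,
\qquad
f(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} F(z)\, x^{-z}\, \mathrm{d}z .
```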
|
Solve system of nonlinear equations - MATLAB fsolve - MathWorks Nordic
Solution of 2-D Nonlinear System
Solution with Nondefault Options
Solve Parameterized Equation
Solve a Problem Structure
Solution Process of Nonlinear System
Examine Matrix Equation Solution
Solve system of nonlinear equations
Solves a problem specified by
for x, where F(x) is a function that returns a vector value.
x = fsolve(fun,x0) starts at x0 and tries to solve the equations fun(x) = 0, an array of zeros.
Passing Extra Parameters explains how to pass extra parameters to the vector function fun(x), if necessary. See Solve Parameterized Equation.
x = fsolve(fun,x0,options) solves the equations with the optimization options specified in options. Use optimoptions to set these options.
x = fsolve(problem) solves problem, a structure described in problem.
[x,fval] = fsolve(___), for any syntax, returns the value of the objective function fun at the solution x.
[x,fval,exitflag,output] = fsolve(___) additionally returns a value exitflag that describes the exit condition of fsolve, and a structure output with information about the optimization process.
[x,fval,exitflag,output,jacobian] = fsolve(___) returns the Jacobian of fun at the solution x.
This example shows how to solve two nonlinear equations in two variables. The equations are
Convert the equations to the form F(x) = 0.
Write a function that computes the left-hand side of these two equations.
Save this code as a file named root2d.m on your MATLAB® path.
Solve the system of equations starting at the point [0,0].
Examine the solution process for a nonlinear system.
Set options to have no display and a plot function that displays the first-order optimality, which should converge to 0 as the algorithm iterates.
The equations in the nonlinear system are
Solve the nonlinear system starting from the point [0,0] and observe the solution process.
You can parameterize equations as described in the topic Passing Extra Parameters. For example, the paramfun helper function at the end of this example creates the following equation system parameterized by c:
\begin{array}{c}2{x}_{1}+{x}_{2}=\mathrm{exp}\left(c{x}_{1}\right)\\ -{x}_{1}+2{x}_{2}=\mathrm{exp}\left(c{x}_{2}\right).\end{array}
To solve the system for a particular value, in this case c = -1, set c in the workspace and create an anonymous function in x from paramfun.
Solve the system starting from the point x0 = [0 1].
To solve for a different value of c, set c in the workspace and create the fun function again, so it has the new value of c.
This code creates the paramfun helper function.
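The same parameterization pattern can be sketched outside MATLAB; below is a hypothetical Python analog using scipy.optimize.fsolve (SciPy, and the names paramfun and fun, are assumptions of this sketch, not part of the MATLAB documentation):

```python
import numpy as np
from scipy.optimize import fsolve

def paramfun(x, c):
    # Equation system parameterized by c, written in F(x) = 0 form:
    #   2*x1 + x2 - exp(c*x1) = 0
    #  -x1 + 2*x2 - exp(c*x2) = 0
    return [2*x[0] + x[1] - np.exp(c * x[0]),
            -x[0] + 2*x[1] - np.exp(c * x[1])]

c = -1
fun = lambda x: paramfun(x, c)  # bind the current value of c
x = fsolve(fun, [0, 1])         # start from x0 = [0, 1]
```

To solve for a different value of c, rebind fun after changing c, mirroring the workflow described in the text.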
Create a problem structure for fsolve and solve the problem.
Solve the same problem as in Solution with Nondefault Options, but formulate the problem using a problem structure.
Set options for the problem to have no display and a plot function that displays the first-order optimality, which should converge to 0 as the algorithm iterates.
Create the remaining fields in the problem structure.
This example returns the iterative display showing the solution process for the system of two equations and two unknowns
\begin{array}{c}2{x}_{1}-{x}_{2}={e}^{-{x}_{1}}\\ -{x}_{1}+2{x}_{2}={e}^{-{x}_{2}}.\end{array}
Rewrite the equations in the form F(x) = 0:
\begin{array}{c}2{x}_{1}-{x}_{2}-{e}^{-{x}_{1}}=0\\ -{x}_{1}+2{x}_{2}-{e}^{-{x}_{2}}=0.\end{array}
Start your search for a solution at x0 = [-5 -5].
First, write a function that computes F, the values of the equations at x.
Create the initial point x0.
The iterative display shows f(x), which is the square of the norm of the function F(x). This value decreases to near zero as the iterations proceed. The first-order optimality measure likewise decreases to near zero as the iterations proceed. These entries show the convergence of the iterations to a solution. For the meanings of the other entries, see Iterative Display.
The fval output gives the function value F(x), which should be zero at a solution (to within the FunctionTolerance tolerance).
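For readers without MATLAB, the same system can be sketched with scipy.optimize.fsolve (an analogy; SciPy is an assumption here, not part of this documentation):

```python
import numpy as np
from scipy.optimize import fsolve

def F(x):
    # F(x) = 0 form of the system:
    #   2*x1 - x2 - exp(-x1) = 0
    #  -x1 + 2*x2 - exp(-x2) = 0
    return [2*x[0] - x[1] - np.exp(-x[0]),
            -x[0] + 2*x[1] - np.exp(-x[1])]

x = fsolve(F, [-5, -5])  # start the search at x0 = [-5, -5]
```

By symmetry the two components converge to the same value, the root of x = exp(-x), approximately 0.5671.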
Find a matrix X that satisfies
X*X*X=\left[\begin{array}{cc}1& 2\\ 3& 4\end{array}\right]
starting at the point x0 = [1,1;1,1]. Create an anonymous function that calculates the matrix equation and create the point x0.
Set options to have no display.
Examine the fsolve outputs to see the solution quality and process.
message: 'Equation solved....'
The exit flag value 1 indicates that the solution is reliable. To verify this manually, calculate the residual (sum of squares of fval) to see how close it is to zero.
This small residual confirms that x is a solution.
You can see in the output structure how many iterations and function evaluations fsolve performed to find the solution.
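A rough Python sketch of the same matrix equation flattens X into a vector for scipy.optimize.fsolve (SciPy is an assumption, not part of fsolve's documentation; note the starting point differs from MATLAB's example):

```python
import numpy as np
from scipy.optimize import fsolve

target = np.array([[1.0, 2.0], [3.0, 4.0]])

def matrix_eq(xflat):
    X = xflat.reshape(2, 2)
    return (X @ X @ X - target).ravel()  # residual of X*X*X - [1 2; 3 4]

# Start from the identity rather than MATLAB's all-ones matrix: at the
# all-ones matrix the Jacobian of X -> X^3 is singular, an awkward start
# for this solver.
X = fsolve(matrix_eq, np.eye(2).ravel()).reshape(2, 2)
residual = float(np.sum((X @ X @ X - target) ** 2))
```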
fun — Nonlinear equations to solve
Nonlinear equations to solve, specified as a function handle or function name. fun is a function that accepts a vector x and returns a vector F, the nonlinear equations evaluated at x. The equations to solve are F = 0 for all components of F. The function fun can be specified as a function handle for a file
fsolve passes x to your objective function in the shape of the x0 argument. For example, if x0 is a 5-by-3 array, then fsolve passes x to fun as a 5-by-3 array.
the function fun must return, in a second output argument, the Jacobian value J, a matrix, at x.
If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The Jacobian J is the transpose of the gradient of F.)
Example: fun = @(x)x*x*x-[1,2;3,4]
Initial point, specified as a real vector or real array. fsolve uses the number of elements in and size of x0 to determine the number and size of variables that fun accepts.
Choose between 'trust-region-dogleg' (default), 'trust-region', and 'levenberg-marquardt'.
The Algorithm option specifies a preference for which algorithm to use. It is only a preference because for the trust-region algorithm, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as many as the length of x. Similarly, for the trust-region-dogleg algorithm, the number of equations must be the same as the length of x. fsolve uses the Levenberg-Marquardt algorithm when the selected algorithm is unavailable. For more information on choosing the algorithm, see Choosing the Algorithm.
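SciPy's root function exposes an analogous algorithm choice; the following is a minimal sketch (the tiny system and the variable names are invented for illustration, and SciPy's 'hybr' and 'lm' methods only roughly correspond to MATLAB's trust-region-dogleg and Levenberg-Marquardt algorithms):

```python
from scipy.optimize import root

def F(x):
    # A small square system: roots satisfy x0 + x1 = 3 and x0 * x1 = 2.
    return [x[0] + x[1] - 3.0, x[0] * x[1] - 2.0]

sol_hybr = root(F, [0.5, 1.5], method='hybr')  # MINPACK hybrid (dogleg-style)
sol_lm   = root(F, [0.5, 1.5], method='lm')    # Levenberg-Marquardt
```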
To set some algorithm options using optimset instead of optimoptions:
Algorithm — Set the algorithm to 'trust-region-reflective' instead of 'trust-region'.
InitDamping — Set the initial Levenberg-Marquardt parameter λ by setting Algorithm to a cell array such as {'levenberg-marquardt',.005}.
Check whether objective function values are valid. 'on' displays an error when the objective function returns a value that is complex, Inf, or NaN. The default, 'off', displays no error.
Maximum number of function evaluations allowed, a positive integer. The default is 100*numberOfVariables for the 'trust-region-dogleg' and 'trust-region' algorithms, and 200*numberOfVariables for the 'levenberg-marquardt' algorithm. See Tolerances and Stopping Criteria and Iterations and Function Counts.
If true, fsolve uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobianMultiplyFcn), for the objective function. If false (default), fsolve approximates the Jacobian using finite differences.
For optimset, the name is Jacobian and the values are 'on' or 'off'. See Current and Legacy Option Names.
Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1). fsolve uses TypicalX for scaling finite differences for gradient estimation.
The trust-region-dogleg algorithm uses TypicalX as the diagonal terms of a scaling matrix.
When true, fsolve estimates gradients in parallel. Disable by setting to the default, false. See Parallel Computing.
where Jinfo contains a matrix used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the objective function fun, for example, in
If flag == 0, W = J'*(J*Y).
If flag > 0, W = J*Y.
If flag < 0, W = J'*Y.
In each case, J is not formed explicitly. fsolve uses Jinfo to compute the preconditioner. See Passing Extra Parameters for information on how to supply values for any additional parameters jmfun needs.
'SpecifyObjectiveGradient' must be set to true for fsolve to pass Jinfo from fun to jmfun.
See Minimization with Dense Structured Hessian, Linear Equalities for a similar example.
Use JacobPattern when it is inconvenient to compute the Jacobian matrix J in fun, though you can determine (say, by inspection) when fun(i) depends on x(j). fsolve can approximate J via sparse finite differences when you give JacobPattern.
In the worst case, if the structure is unknown, do not set JacobPattern. The default behavior is as if JacobPattern is a dense matrix of ones. Then fsolve computes a full finite-difference approximation in each iteration. This can be very expensive for large problems, so it is usually better to determine the sparsity structure.
Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)). For more information, see Equation Solving Algorithms.
Determines how the iteration step is calculated. The default, 'factorization', takes a slower but more accurate step than 'cg'. See Trust-Region Algorithm.
'jacobian' can sometimes improve the convergence of a poorly scaled problem. The default is 'none'.
Example: options = optimoptions('fsolve','FiniteDifferenceType','central')
Objective function value at the solution, returned as a real vector. Generally, fval = fun(x).
exitflag — Reason fsolve stopped
Reason fsolve stopped, returned as an integer.
1: Equation solved. First-order optimality is small.
2: Equation solved. Change in x smaller than the specified tolerance, or Jacobian at x is undefined.
3: Equation solved. Change in residual smaller than the specified tolerance.
4: Equation solved. Magnitude of search direction smaller than specified tolerance.
-1: Output function or plot function stopped the algorithm.
-2: Equation not solved. The exit message can have more information.
Final displacement in x (not in 'trust-region-dogleg')
The function to be solved must be continuous.
When successful, fsolve only gives one root.
The default trust-region dogleg method can only be used when the system of equations is square, i.e., the number of equations equals the number of unknowns. For the Levenberg-Marquardt method, the system of equations need not be square.
For large problems, meaning those with thousands of variables or more, save memory (and possibly save time) by setting the Algorithm option to 'trust-region' and the SubproblemAlgorithm option to 'cg'.
The Levenberg-Marquardt and trust-region methods are based on the nonlinear least-squares algorithms also used in lsqnonlin. Use one of these methods if the system may not have a zero. The algorithm still returns a point where the residual is small. However, if the Jacobian of the system is singular, the algorithm might converge to a point that is not a solution of the system of equations (see Limitations).
By default fsolve chooses the trust-region dogleg algorithm. The algorithm is a variant of the Powell dogleg method described in [8]. It is similar in nature to the algorithm implemented in [7]. See Trust-Region-Dogleg Algorithm.
The trust-region algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Algorithm.
The Optimize Live Editor task provides a visual interface for fsolve.
fsolve supports code generation using either the codegen (MATLAB Coder) function or the MATLAB Coder™ app. You must have a MATLAB Coder license to generate code.
For an example, see Generate Code for fsolve.
|
If handler is a procedure, then a type call of the form
\mathrm{type}\left(\mathrm{expr},\mathrm{typename}\right)
results in a call to
\mathrm{handler}\left(\mathrm{expr}\right)
. Additional arguments can be passed to handler from a type call of the form
\mathrm{type}\left(\mathrm{expr},\mathrm{typename}\left(\mathrm{arg1},\mathrm{arg2},...\right)\right)
, which results in the call
\mathrm{handler}\left(\mathrm{expr},\mathrm{arg1},\mathrm{arg2},...\right)
. The handler argument, if it is a procedure, must return either true or false; no other return value is acceptable. It must also be prepared to handle any argument sequence that is passed to it.
\mathrm{TypeTools}[\mathrm{AddType}]\left(\mathrm{tff},'{\mathrm{identical}\left(\mathrm{FAIL}\right),\mathrm{identical}\left(\mathrm{false}\right),\mathrm{identical}\left(\mathrm{true}\right)}'\right)
\mathrm{type}\left(\mathrm{FAIL},'\mathrm{tff}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\mathrm{type}\left([\mathrm{true},\mathrm{false}],'\mathrm{list}\left(\mathrm{tff}\right)'\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\mathrm{TypeTools}[\mathrm{AddType}]\left(\mathrm{integer7},t↦\mathrm{evalb}\left(t::'\mathrm{integer}'\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{and}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathrm{irem}\left(t,7\right)=0\right)\right)
\mathrm{type}\left(4,'\mathrm{integer7}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{type}\left(28,'\mathrm{integer7}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
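The handler idea is easy to mimic in other languages; here is a rough Python sketch of a type-predicate registry (the names add_type and is_type are invented for illustration and are not TypeTools APIs):

```python
_types = {}

def add_type(name, handler):
    """Register a named type; handler(expr, *args) must return True or False."""
    _types[name] = handler

def is_type(expr, name, *args):
    # Mirrors type(expr, typename(arg1, ...)) -> handler(expr, arg1, ...)
    return _types[name](expr, *args)

# Analog of the integer7 example: integers divisible by 7.
add_type("integer7", lambda t: isinstance(t, int) and t % 7 == 0)
print(is_type(4, "integer7"))   # False
print(is_type(28, "integer7"))  # True
```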
\mathrm{type}\left([2,x],'\mathrm{pair}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\mathrm{type}\left([2,x],'\mathrm{pair}\left(\mathrm{integer}\right)'\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{type}\left([2,3],'\mathrm{pair}\left(\mathrm{integer}\right)'\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
|
Which of the following statements regarding glucagon is false?
It is secreted by α-cells of Langerhans
It acts antagonistically to insulin
It decreases blood sugar level
The gland responsible for its secretion is a heterocrine gland.
Glucagon is a hormone, secreted by the α-cells of the islets of Langerhans in the pancreas, that increases the concentration of glucose in the blood by stimulating the metabolic breakdown of glycogen. It thus antagonizes the effects of insulin.
Scapula (shoulder blade) is the largest of the bones that make up each half of the pectoral (shoulder) girdle. It is a flat triangular bone, providing anchorage for the muscles of the forelimb and an articulation for the humerus at the glenoid cavity. It is joined to the clavicle (collar bone) in front.
Clavicle is a bone that forms part of the pectoral (shoulder) girdle, linking the scapula (shoulder blade) to the sternum (breast bone). In humans it forms the collar bone and serves as a brace for the shoulders.
Humerus is the long bone of the upper arm which articulates with the scapula (shoulder blade) at the glenoid cavity and with the ulna and radius (via a condyle) at the elbow.
Ilium is the largest of the three bones that make up each half of the pelvic girdle. The ilium bears a flattened wing of bone that is attached by ligaments to the sacrum.
The black pigment in the eye which reduces the internal reflection is located in
The black pigment in the eye which reduces internal reflection is located in the retina. The retina is the innermost coat of the eyeball; it is a thin, light-sensitive nervous layer. The external coat of the eyeball is the sclerotic, but in front of the sclerotic there is a transparent connective tissue called the cornea. The iris is the pigmented part present in front of the choroid.
Hearing impairment affects which part of brain?
Temporal lobe is one of the main divisions of the cerebral cortex in each hemisphere of the brain, lying at the side within the temple of the skull and separated from the frontal lobe by a cleft, the lateral sulcus. Areas of the cortex in this lobe are concerned with the appreciation of sound and spoken language.
Which of the following does not come under the Class Mammals?
Lamprey (or Petromyzon) belongs to the class Cyclostomata. The lamprey has an about 1 m long, greenish brown, cylindrical body with smooth, scaleless, slimy skin; an anterior circular, jawless mouth; a single dorsal naris; seven pairs of circular gill slits; two dorsal fins and a tail fin. Its life cycle includes two quite different phases. The larval phase (called ammocoete) is a freshwater sedentary, filter-feeding and microphagous creature reminiscent of the lancelet. The fish-like adult lives in the sea and is parasitic on fishes.
Which of the following is an eye disease?
Glaucoma is a condition in which loss of vision occurs because of an abnormally high pressure in the eye. This is also known as primary glaucoma and is of two types: acute and chronic simple.
Acute or Angle-closure Glaucoma: there is an abrupt rise in pressure due to sudden closure of the angle between the cornea and iris.
Chronic Simple or Open-angle Glaucoma: pressure increases gradually, without producing pain, and the visual loss is insidious.
Hepatitis is an inflammation of the liver caused by viruses, toxic substance or immunological abnormalities.
Measles or Rubeola disease is an acute infectious eruptive viral disease of childhood caused by specific virus of the group myxoviruses. It is the infection of respiratory tract and conjunctiva which is transmitted by contact, fomite and droplet methods.
Bronchitis is the inflammation of the bronchi.
The component of blood which prevents its coagulation in the blood vessels is
Heparin prevents blood coagulation in the blood vessels. It is secreted by mast cells. It is an anticoagulant, blocking conversion of prothrombin to thrombin.
Haemoglobin is the blood pigment necessary for oxygen transport.
Plasma is the fluid component of blood.
Thrombin is the product of blood clotting.
Which match is true?
Vitamin deficiency disease | Vitamin | Source
Severe bleeding | Tocopherol | Milk, egg
Anaemia | Ascorbic acid | Lemon, orange
Night blindness | Retinol | Carrot, milk
Sterility | Calciferol | Milk, butter
Night blindness is the inability to see in dim light or at night. It is due to disorder of the cells in the retina that are responsible for vision in dim light and can result from dietary deficiency of vitamin A (retinol).
Name | Sources | Effect of deficiency
Vitamin C / Ascorbic acid | Citrus fruits such as lemon and orange; green vegetables; potatoes | Scurvy, characterised by impaired wound healing and growth retardation
Vitamin D / Ergocalciferol and Cholecalciferol | Synthesized in skin cells in sunlight; also found in butter, liver, kidneys, egg yolk | Rickets, a disorder of children of 6 months to 2 years
Vitamin E / Tocopherol | Green vegetables, oils, egg yolk, wheat, animal tissues | Reversible sterility in females
"Omnis-cellula-e-cellula" was given by
Rudolf Virchow was the first to suggest that new cells are formed from the division of the pre-existing cells - "omnis-cellula-e-cellula" i.e. every cell is derived from a previous cell.
Robert Hooke was the first to coin the term "cell" for small structures in a piece of cork under a microscope. His observations were published in a book named Micrographia.
Leeuwenhoek was the first person to observe and describe microscopic organisms and living cells. He observed the nucleus in the RBCs of salmon fish, and, using simple lenses, observed nuclei and unicellular organisms including bacteria. In 1676, he described bacteria and coined the term animalcules. His observations laid the foundations for the sciences of bacteriology and microbiology. Robert Brown (1831) described and named the nucleus.
Which of the following match is correct?
Oxytocin Milk ejection hormone
Glucagon Decreases blood sugar level
Adrenaline Decreases heart rate
Thyroxine Decreases BMR
Oxytocin is a hormone that causes both contraction of smooth muscle in the uterus during birth and expulsion of milk from the mammary glands during suckling. It is produced in the neurosecretory cells of the hypothalamus but is stored and secreted by the posterior pituitary gland.
Glucagon is a hormone, secreted by the
\mathrm{\alpha }
(or A) cells of the islets of Langerhans in the pancreas, that increases the concentration of glucose in the blood by stimulating the metabolic breakdown of glycogen. It thus antagonizes the effects of insulin.
Adrenaline (epinephrine) is a hormone produced by the medulla of the adrenal glands, that increases heart activity, improves the power and prolongs the action of muscles, and increases the rate and depth of breathing to prepare the body for 'fright, flight, or fight'. At the same time it inhibits digestion and excretion.
Thyroxine is secreted by thyroid gland. It controls the rate of all metabolic processes in the body and influence physical development and activity of the nervous system.
|
Initial Conditions for Defining an Arrow of Time at the Start of Inflation?
This investigation sets forth initial conditions for the start of the arrow of time in cosmology, based upon the idea of having the initial number of degrees of freedom set as
{g}_{\ast }~1000
initially, instead of the maximum value of
{g}_{\ast }~100\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}120
degrees of freedom quoted for the electroweak era.
Time's Arrow, Degrees of Freedom
S\equiv \left[E-\mu N\right]/T\to S\propto {T}^{3}
\mu \to 0
S~{10}^{5}
{g}_{\ast }~100\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}120
{g}_{\ast }~1000
{S}_{\text{initial}}\propto {T}^{3}
S\propto {T}^{3}
S\propto {T}^{3}
S\equiv \left[E-\mu N\right]/T\to S\propto {T}^{3}
\mu \to 0
S~{10}^{5}
{g}_{\ast }~100\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}120
{g}_{\ast }~1000
S\equiv \left[E-\mu N\right]/T\underset{\mu \to 0}{\to }S\propto {T}^{3}\approx n
S~{10}^{5}
S\equiv E/T
\mu =0
{l}_{\text{Planck}}~{10}^{-35}
\begin{array}{l}{{m}_{\text{graviton}}|}_{\text{RELATIVISTIC}}<4.4\times {10}^{-22}{h}^{-1}\text{\hspace{0.17em}}\text{eV}/{\text{c}}^{2}\\ \Leftrightarrow {\lambda }_{\text{graviton}}\equiv \frac{\hslash }{{m}_{\text{graviton}}\cdot c}<2.8\times {10}^{-8}\text{\hspace{0.17em}}\text{meters}\end{array}
{\rho }_{\mathrm{max}}\propto 2.07\cdot {\rho }_{\text{planck}}
{\rho }_{\text{planck}}\approx 5.1\times {10}^{99}
{E}_{eff}\propto 2.07\cdot {l}_{\text{Planck}}^{3}\cdot {\rho }_{\text{planck}}~5\times {10}^{24}\text{GeV}
S\equiv E/T~{10}^{5}
T\approx {T}_{\text{Planck}}~{10}^{19}\text{GeV}
\nu \propto {10}^{10}\text{Hz}
{E}_{\text{graviton-effective}}\propto 2\cdot hv\approx 5\times {10}^{-5}\text{eV}
S\equiv {E}_{eff}/T~\left[{10}^{38}\times {E}_{\text{graviton-effective}}\left(v\approx {10}^{10}\text{Hz}\right)\right]/\left[T~{10}^{19}\text{GeV}\right]\approx {10}^{5}
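As a quick arithmetic cross-check of the entropy estimate above (a sketch using only the numbers quoted in the surrounding formulas, i.e. the assumed graviton count of 10^38 and the effective graviton energy of 5 × 10^-5 eV):

```python
# Order-of-magnitude check of S = E_eff / T ~ 10^5.
eV_per_GeV = 1.0e9
E_graviton = 5.0e-5               # eV, effective energy at nu ~ 1e10 Hz
n_gravitons = 1.0e38              # assumed initial graviton count
E_eff = n_gravitons * E_graviton  # total effective energy, in eV
T = 1.0e19 * eV_per_GeV           # T ~ T_Planck ~ 1e19 GeV, in eV
S = E_eff / T                     # dimensionless entropy estimate
print(S)                          # ~5e5, i.e. of order 1e5
```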
\left[{E}_{\text{graviton-effective}}\propto 2\cdot hv\approx 5\times {10}^{-5}\text{eV}\right]
E~{m}_{\text{graviton}}\left[\text{red-shift}~0.55\right]~\left({10}^{-27}\text{eV}\right)
V\left[\varphi \right]
{H}^{2}~V\left[\varphi \right]/{m}_{\text{Planck}}^{2}
{T}^{\ast }
T\le {T}^{\ast }
{T}^{\ast }={T}_{c}
v\left({T}_{c}\right)/{T}_{c}>1
v\left({T}_{c}\right)/{T}_{c}>1
H~1.66\cdot \left[\sqrt{{\stackrel{˜}{g}}_{\ast }}\right]\cdot \left[{T}^{2}/{m}_{\text{Planck}}\right]
{\stackrel{˜}{g}}_{\ast }
{\stackrel{˜}{g}}_{\ast }\approx 100\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}120
{\stackrel{˜}{g}}_{\ast }~1000
T\approx {T}_{\text{Planck}}~{10}^{19}\text{GeV}
V\left[\varphi \right]\approx {E}_{net}
S~3\frac{{m}_{\text{Planck}}^{2}{\left[H=1.66\cdot \sqrt{{\stackrel{˜}{g}}_{\ast }}\cdot {T}^{2}/{m}_{\text{Planck}}\right]}^{2}}{T}~3\cdot {\left[1.66\cdot \sqrt{{\stackrel{˜}{g}}_{\ast }}\right]}^{2}{T}^{3}
{T}^{\ast }
{\stackrel{˜}{g}}_{\ast }\approx 1000
S\propto {T}^{3}
{\stackrel{˜}{g}}_{\ast }\approx 1000
T~{10}^{19}\text{GeV}\gg {T}^{\ast }
S\propto {T}^{3}
{\stackrel{˜}{g}}_{\ast }\approx 1000
0<t<{t}_{\text{Planck}}~{10}^{-44}
{\stackrel{˜}{g}}_{\ast }\approx 1000
{\stackrel{˜}{g}}_{\ast }\approx 1000
S\propto {T}^{3}~n
{\stackrel{˜}{g}}_{\ast }\approx 1000
{\eta }_{1}
|{\eta }_{1}|~O\left(\beta \cdot {t}_{\text{Planck}}\right)
{t}_{\text{Planck}}
n~{\mathrm{sinh}}^{2}\left[{m}_{0}{\eta }_{1}\right]
\Phi
V={\stackrel{↔}{m}}^{2}{\Phi }^{\ast }\Phi
\stackrel{↔}{m}\approx \sqrt{\frac{3}{8}}\cdot \left[{\sqrt{\frac{3{H}^{2}}{4\pi G}}|}_{\text{time}~{10}^{-35}\mathrm{sec}}+{\sqrt{\frac{3{H}^{2}}{4\text{π}G}}|}_{\text{time}~{10}^{-44}\mathrm{sec}}\right]
|\stackrel{↔}{m}|\le \left[\frac{{l}^{2}}{4}\right]
{\text{d}{S}^{2}|}_{5\text{-}\mathrm{dim}}=\frac{{l}^{2}}{{z}^{2}}\cdot \left[{\eta }_{uv}\text{d}{x}^{\mu }\text{d}{x}^{v}+\text{d}{z}^{2}\right]
{\phi }_{0,-}=\sqrt{2/3}\cdot \stackrel{↔}{m}\cdot \left[{t}_{1\text{st-EXIT}}~{10}^{-35}\mathrm{sec}\right]
{\phi }_{+}={\left[{\phi }_{0,+}^{3}-\sqrt{3/2}\cdot \frac{3{M}^{2}t}{\stackrel{↔}{m}}\right]}^{1/3}
{n}_{f}\approx {10}^{6}\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}{10}^{7}
{H}^{2}=\frac{1}{6}\cdot \left[{\stackrel{˙}{\phi }}^{2}+{\stackrel{↔}{m}}^{2}{\phi }^{2}+\frac{{M}^{2}}{{\phi }^{2}}\right]↔\left(\frac{{\stackrel{˜}{\kappa }}^{2}}{3}\left[\rho +\frac{{\rho }^{2}}{2\lambda }\right]\right)+\frac{m}{{a}^{4}}
\stackrel{˙}{H}=V-3H↔\stackrel{˙}{H}\cong \frac{2m}{{a}^{4}}
{\frac{3{H}^{2}}{4\text{π}G}\gg V\left(t\right)|}_{\text{time}~{10}^{-44}\mathrm{sec}}
I={S}_{\text{total}}/{k}_{B}\mathrm{ln}2={\left[#\text{operations}\right]}^{3/4}={\left[\rho \cdot {c}^{5}\cdot {t}^{4}/\hslash \right]}^{3/4}
0<t<{t}_{\text{Planck}}
{n}_{f}=\left[1/4\right]\cdot \left[\sqrt{\frac{v\left({a}_{\text{initial}}\right)}{v\left(a\right)}}-\sqrt{\frac{v\left(a\right)}{v\left({a}_{\text{final}}\right)}}\right]
{h}_{0}~0.75
{\Omega }_{gw}\left(v\right)\cong \frac{3.6}{{h}_{0}^{2}}\cdot \left[\frac{{n}_{f}}{{10}^{37}}\right]\cdot {\left(\frac{v}{1\text{\hspace{0.17em}}\text{kHz}}\right)}^{4}
a~{a}_{\text{final}}
{n}_{f}=\left[1/4\right]\cdot \left[\sqrt{\frac{v\left({a}_{\text{initial}}\right)}{v\left(a\right)}}-1\right]~\left[1/4\right]\cdot \left[\sqrt{\frac{v\left({a}_{\text{initial}}\right)}{v\left(a\right)}}\right]
{\Omega }_{g}\approx {10}^{-5}\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}{10}^{-14}
{\Omega }_{g}\approx {10}^{-5}
v\left({a}_{\text{initial}}\right)\approx {10}^{8}\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}{10}^{10}
v\left({a}_{\text{final}}\right)\approx {10}^{0}\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}{10}^{2}
a~\left[{a}_{\text{final}}=1\right]-{\delta }^{+}
S\approx n
S\ne n
S\propto {T}^{3}
{T}_{\text{Planck}}\approx {10}^{19}
{\stackrel{˜}{g}}_{\ast }\approx 1000
S\propto {T}^{3}
T\approx {10}^{32}\text{Kelvin}
z\approx {10}^{25}
T\approx \sqrt{{\epsilon }_{V}}\times {10}^{28}\text{Kelvin}~{T}_{\text{Hawkings}}\cong \frac{\hslash \cdot {H}_{\text{initial}}}{2\text{π}\cdot {k}_{B}}
Beckwith, A.W. (2018) Initial Conditions for Defining an Arrow of Time at the Start of Inflation? Journal of High Energy Physics, Gravitation and Cosmology, 4, 787-795. https://doi.org/10.4236/jhepgc.2018.44044
|
Find the equations of the circles satisfying the given conditions - Maths - Conic Sections | Meritnation.com
Find the equations of the circles satisfying the given conditions :
1) touches the y-axis at (0, √3) and passes through (-1, 0)
2) touches the line 2x - 3y - 7 = 0 at (2, -1) and passes through (4, 1)
3) touches the line 3x + y + 3 = 0 at (-3, 6) and has x + 3y - 7 = 0 as a tangent.
Assuming in part 1) the point is (0, √3).
Let (h, k) be the centre of the circle. Since the circle touches the y-axis, the radius is |h|.
Equation of the circle: (x − h)² + (y − k)² = h²
It touches the y-axis at (0, √3):
(0 − h)² + (√3 − k)² = h²
⇒ h² + (√3 − k)² = h²
⇒ (√3 − k)² = 0
⇒ k = √3
It passes through the point (−1, 0):
(−1 − h)² + (0 − √3)² = h²
⇒ 1 + h² + 2h + 3 = h²
⇒ 4 + 2h = 0
⇒ h = −2
So the equation of the circle is
(x + 2)² + (y − √3)² = (−2)²
⇒ (x + 2)² + (y − √3)² = 4
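The derived circle (centre (−2, √3), radius 2) can be sanity-checked with a short script:

```python
from math import sqrt, isclose

# Centre (-2, sqrt(3)), radius 2, as derived above
h, k, r = -2.0, sqrt(3), 2.0

def on_circle(x, y):
    return isclose((x - h) ** 2 + (y - k) ** 2, r ** 2)

assert on_circle(0, sqrt(3))   # point of tangency on the y-axis
assert on_circle(-1, 0)        # the given point
assert isclose(abs(h), r)      # distance from centre to y-axis equals radius
```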
|
High School Mathematics Extensions/Supplementary/Basic Counting - Wikibooks, open books for an open world
High School Mathematics Extensions/Supplementary/Basic Counting
1 Ordered selection (Permutation)
1.1 Permutation with repetition
2 Unordered selection (Combination)
2.1 Combination with repetition
3 Binomial coefficient
Ordered selection (Permutation)
Suppose there are 10 songs in your music collection and you want to shuffle play them all (i.e. the 10 songs would play in a random order). In how many different ways can the collection be played?
This type of problem is called ordered selection (permutation), as we are selecting the songs from the collection in a certain order. For instance, these selections are considered different:
{\displaystyle {\begin{array}{l}1,2,3,4,5,6,7,8,9,10\\7,3,10,4,9,5,1,8,2,6\\2,1,3,4,5,6,7,8,9,10\end{array}}}
These clearly define different orders or in effect, playlists.
Let's think through this step by step. We can pick any of the 10 songs for the first position (the first song to be played), 9 for the second position (as 1 song is already picked), 8 for the third (as 2 songs are fixed), and so on and so forth. The total number of ways can be calculated as follows:
{\displaystyle 10\times 9\times 8\times 7\times 6\times 5\times 4\times 3\times 2\times 1}
Almost 4 million different playlists!
Now we shall introduce the factorial function, which is a compact way of expressing:
{\displaystyle n!=n\times (n-1)\times ...\times 3\times 2\times 1}
Formally defined by:
{\displaystyle {\begin{array}{l}0!=1\\n!=n\times (n-1)!\end{array}}}
{\displaystyle {\begin{array}{l}10!=10\times 9!=10\times 9\times 8!=\cdots \\=10\times 9\times 8\times 7\times 6\times 5\times 4\times 3\times 2\times 1\end{array}}}
What if we had 30 songs, but still wanted to shuffle just 10 of them? Similar reasoning leads to
{\displaystyle 30\times 29\times 28\times 27\times 26\times 25\times 24\times 23\times 22\times 21}
different ways. This is equivalent to:
{\displaystyle {\frac {30!}{20!}}={\frac {30!}{(30-10)!}}}
Be cautious! The factorial function is not distributive over addition or multiplication: in general, (a + b)! ≠ a! + b!.
Generally speaking, the number of permutations of m items out of n (for instance, 10 songs out of 30) is:
{\displaystyle {\frac {n!}{(n-m)!}}}
The idea behind it is that we cancel out all but the first m factors of the n! product:
{\displaystyle 30!=\underbrace {30\times 29\times ...\times 22\times 21} _{\text{First 10 factors}}\times \underbrace {20\times ...\times 2\times 1} _{(30-10)!=20!}}
When m equals n we say we are counting the number of permutations of n items:
{\displaystyle {\frac {n!}{(n-n)!}}={\frac {n!}{0!}}=n!}
While solving counting problems, we'll generally use the words items or elements (in this case, our songs) and sets (in this case, the collection or playlists).
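These counts can be verified with Python's standard library (`math.perm` requires Python 3.8+):

```python
from math import factorial, perm

# All orderings of a 10-song collection: 10!
assert factorial(10) == 3_628_800

# Shuffling 10 songs out of 30, order mattering: 30!/(30 - 10)!
assert perm(30, 10) == factorial(30) // factorial(20)
```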
1. In how many ways can I arrange my bookshelf (7 books) if I own a trilogy and the 3 books have to be always together, in the same order?
2. In how many ways can I arrange my family (6 members) in a line, with the only rule that my parents have to be in adjacent positions?
Permutation with repetition
How many different two-digit numbers can be formed with only odd prime digits (i.e. 3, 5, or 7)? The numbers 75, 37, 77, and 33 are possible solutions.
This is a special case of permutations in which items may be selected more than once. Counting, we see there are 3 possibilities for every position, since digits may be repeated:
{\displaystyle \underbrace {3\times 3} _{\text{2 positions}}=3^{2}}
9 different numbers.
If we were to select m items out of n, with possible repetition, there would be
{\displaystyle \underbrace {n\times n\times n...\times n} _{\text{m times}}=n^{m}}
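A brute-force enumeration of the two-digit example above confirms the count:

```python
from itertools import product

digits = [3, 5, 7]
# Every ordered pair of digits, repetition allowed, gives one two-digit number
numbers = {10 * a + b for a, b in product(digits, repeat=2)}
assert len(numbers) == 3 ** 2 == 9
```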
1. How many three-digit numbers can be formed with only prime digits, and odd digits may only be repeated twice?
Circular permutation
In how many ways can we sit 4 people (Alice, Bob, Charles, and Donald) at a round table?
This is another special case of permutations: the 4 elements are arranged in circular order, in which the start and end of an arrangement are undefined.
If we simply count
{\displaystyle 4!}
there will be some repetitions. We'll be counting each unique arrangement more than once since layouts like
{\displaystyle {\begin{array}{l}{\text{A, B, C, D}}\\{\text{D, A, B, C}}\\{\text{C, D, A, B}}\\{\text{B, C, D, A}}\end{array}}}
are considered equivalent. Remember, there is no start in a round table.
It is no coincidence that there are 4 people, 4 chairs, and groups of 4 equivalent arrangements.
If we arbitrarily numbered the chairs, we could obtain all equivalent arrangements by cycling the whole layout. How many times would we cycle? As many times as there are chairs/positions: 4.
Dividing our first count by 4 (i.e. splitting it into groups of 4 equivalent arrangements):
{\displaystyle {\frac {4!}{4}}=3!}
6 unique arrangements.
Another way of approaching the problem is to forcibly define an arbitrary starting point for the layout; in other words, to fix one element and permute the remaining 3:
{\displaystyle (4-1)!=3!=6}
ways to sit four people at a round table.
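The fix-one-element argument can be sketched directly:

```python
from itertools import permutations

def circular_arrangements(people):
    # Fix the first person to remove rotational duplicates,
    # then permute the remaining n - 1.
    first, rest = people[0], people[1:]
    return [(first, *p) for p in permutations(rest)]

arrangements = circular_arrangements(["Alice", "Bob", "Charles", "Donald"])
assert len(arrangements) == 6   # (4 - 1)! = 3! = 6
```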
1. What if there are 7 people, but the table only has 5 chairs?
Unordered selection (Combination)
Out of the 15 people in the math class, 5 will be chosen to represent the class in a school-wide mathematics competition. How many ways are there to choose the 5 students?
This type of problem is called an unordered selection (combination), as the order in which you select the students is not important. For example, these selections are considered equivalent:
{\displaystyle {\begin{array}{l}{\text{Joe, Lee, Sue, Violet, Justin}}\\{\text{Lee, Joe, Sue, Justin, Violet}}\end{array}}}
As seen before, there are
{\displaystyle {\frac {15!}{10!}}}
ways to choose the 5 candidates in ordered selection, but there are 5! ordered selections for each different combination (i.e. the same 5 students). This means we are counting each combination 5! times. Consequently, there are
{\displaystyle {\frac {15!}{10!\times 5!}}}
ways of choosing 5 students to represent the class.
In general, the number of ways to choose m items out of n items is:
{\displaystyle {\frac {n!}{(n-m)!\times m!}}}
We took the formula for permutations of m items out of n, and then divided by m! because each combination was counted as m! ordered selections (m! times).
This formula is denoted by:
{\displaystyle {n \choose m}={\frac {n!}{(n-m)!\times m!}}}
{\displaystyle {n \choose m}}
is often read as n choose m.
1. Try not to use a calculator and think of the problem with groups of 1 student.
2. Again, no calculator and this time there are 1 million students in the class and we want to form groups of 999,999 students.
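The count from the classroom example can be checked with `math.comb`:

```python
from math import comb, factorial

# 5 students out of 15: 15!/(10! * 5!)
assert comb(15, 5) == factorial(15) // (factorial(10) * factorial(5)) == 3003
```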
Combination with repetition
In this particular case of combination problems, elements may be selected more than once. That is, groups can include many samples of the same element.
Knowing which elements are picked is not enough to define a selection; we also need to know how many times each element is selected (i.e. the number of samples of it). Both statements are essentially the same: saying an element was picked 0 times is equivalent to saying it was not picked at all.
Now suppose we were to select m elements out of n. Instead of thinking of the problem as "choosing", we should think of it as "placing" the m elements into the n distinct sets. Again, both interpretations are equivalent.
We place in every set as many objects as samples we selected of the element that set represents. The final step is to build a string identifying the selection, for instance, representing each object (sample) with a square and set (element which the sample refers to) boundaries with dashes:
{\displaystyle -\underbrace {{\blacksquare }{\blacksquare }{\blacksquare }} _{\text{First set/element}}-\underbrace {{\blacksquare }{\blacksquare }} _{\text{Second set/element}}-\underbrace {{\blacksquare }{\blacksquare }} _{\text{Third set/element}}-}
This means we chose three samples from the first set (first element), two from the second set (second element), and two from the third set (third element). In other words, the selection consists of three elements of the first kind, two of the second kind, and two of the third kind. The string could represent a random group of seven letters formed solely with the letters A, B, and C:
{\displaystyle {\text{A, A, A, B, B, C, C}}}
The concatenated strings consist of m squares, and n + 1 dashes. For example, the following represents three elements selected out of five:
{\displaystyle -\underbrace {{\blacksquare }{\blacksquare }} _{\text{First set/element}}-\underbrace {} _{\text{Second set/element}}-\underbrace {\blacksquare } _{\text{Third set/element}}-\underbrace {} _{\text{Fourth set/element}}-\underbrace {} _{\text{Fifth set/element}}-}
Note that two dashes are fixed, and the other n - 1 are positioned along the string forming n sets of samples. There are n - 1 + m movable characters among n - 1 + m positions. The number of possible strings equals the number of ways to position m squares among n - 1 + m, in other words, the number of ways to choose m positions out of n - 1 + m:
{\displaystyle {{n-1+m} \choose {m}}}
Analogously, we could talk about positioning the n - 1 dashes among the n - 1 + m:
{\displaystyle {{n-1+m} \choose {n-1}}}
Since every string uniquely determines one selection or group, the number of strings equals the number of possible groups or selections.
If we were to pack three fruits for a long journey, and we only had bananas, apples, oranges, peaches, and kiwis, there would be
{\displaystyle {{5-1+3} \choose {3}}={{5-1+3} \choose {5-1}}=35}
possible groups to pack.
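The formula can be verified against brute-force enumeration:

```python
from math import comb
from itertools import combinations_with_replacement

def multichoose(n, m):
    # Combinations with repetition ("stars and bars"): C(n - 1 + m, m)
    return comb(n - 1 + m, m)

# 3 fruits chosen from 5 kinds, repetition allowed
assert multichoose(5, 3) == 35
# Brute-force confirmation
assert len(list(combinations_with_replacement(range(5), 3))) == 35
```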
Binomial coefficient
{\displaystyle {n \choose m}}
expresses the number of unordered groups that can be formed by choosing m elements from a set containing n.
These are some basic and useful properties:
The number of ways of choosing m elements out of n equals the number of ways of (not) choosing the remaining n - m of the n elements. A group of selected elements defines another group of non-selected elements.
{\displaystyle {n \choose m}={n \choose {n-m}}}
There is only one way to choose no elements, or to (not) choose them all.
{\displaystyle {n \choose 0}={n \choose n}=1}
There are n ways of choosing one element, or (not) choosing n - 1.
{\displaystyle {n \choose 1}={n \choose {n-1}}=n}
The difference between
{\displaystyle {n \choose m}}
and
{\displaystyle {{n+1} \choose m}}
is that in the first formula we are not counting the groups which include the element added in the second formula. The number of groups of size m which include a particular element from a set with n + 1 elements is
{\displaystyle {{(n+1)-1} \choose {m-1}}={n \choose {m-1}}}
{\displaystyle {{n+1} \choose m}={n \choose m}+{n \choose {m-1}}}
Similarly, the difference between
{\displaystyle {n \choose {m+1}}}
and
{\displaystyle {n \choose m}}
is that in the first formula we are counting groups with one more element than those in the second formula. Every group of size m needs one "missing" element of the n - m remaining in order to grow to size m + 1.
{\displaystyle {n \choose m}\times (n-m)}
However, we must observe that we are counting every group of size m + 1 several times: m + 1 times, in fact. This is because the "missing" element can be any of the m + 1 elements in the resulting group; in other words, m + 1 groups of size m (when paired with the right element) will form the same group of size m + 1. How many groups of size m can we form by removing an element from a group of size m + 1?
{\displaystyle {{m+1} \choose 1}=m+1}
{\displaystyle {n \choose {m+1}}={n \choose m}\times {\frac {n-m}{m+1}}}
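The identities above can be spot-checked for particular values of n and m:

```python
from math import comb

n, m = 9, 4
assert comb(n, m) == comb(n, n - m)                       # symmetry
assert comb(n, 0) == comb(n, n) == 1
assert comb(n, 1) == comb(n, n - 1) == n
assert comb(n + 1, m) == comb(n, m) + comb(n, m - 1)      # Pascal's rule
assert comb(n, m + 1) * (m + 1) == comb(n, m) * (n - m)   # growing a group
```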
Binomial expansion
The binomial expansion describes the algebraic expansion of following expression:
{\displaystyle (a+b)^{n}}
Take n = 2, for example; expanding the expression manually, we get
{\displaystyle (a+b)^{2}=(a+b)(a+b)=aa+ab+ba+bb}
Take n = 3:
{\displaystyle (a+b)^{3}=(a+b)(a+b)(a+b)=aaa+aab+aba+abb+baa+bab+bba+bbb}
We deliberately did not simplify the expression at any point during the expansion. As you can see, the final expanded form has four and eight terms, respectively. These terms are all the possible combinations of a and b in products of two and three factors, respectively.
The number of factors equals the number of binomials (a + b) in the first product, which in turn equals n.
{\displaystyle (a+b)^{n}}
there are n factors in each term. How many terms are there with only one b? In other words, in how many ways can we place one b among n different positions? Or, equivalently, in how many ways can we pick one position (to place a b) out of n?
{\displaystyle {n \choose 1}}
Similarly we can work out all the coefficients of the other terms:
{\displaystyle (a+b)^{3}={3 \choose 0}a^{3}+{3 \choose 1}a^{2}b+{3 \choose 2}ab^{2}+{3 \choose 3}b^{3}}
{\displaystyle (a+b)^{n}={n \choose 0}a^{n}+{n \choose 1}a^{n-1}b+{n \choose 2}a^{n-2}b^{2}+\cdots +{n \choose {n-1}}ab^{n-1}+{n \choose n}b^{n}}
Or more compactly using the summation operator:
{\displaystyle (a+b)^{n}=\sum _{i=0}^{n}{n \choose i}a^{n-i}b^{i}}
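The expansion can be verified numerically for specific values of a, b, and n:

```python
from math import comb

def binomial_expand(a, b, n):
    # Sum of the n + 1 terms C(n, i) * a^(n-i) * b^i
    return sum(comb(n, i) * a ** (n - i) * b ** i for i in range(n + 1))

assert binomial_expand(2, 3, 5) == (2 + 3) ** 5 == 3125
assert binomial_expand(1, 1, 10) == 2 ** 10   # row sums of Pascal's triangle
```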
How many different ways can the letters of the word BOOK be arranged?
How many ways are there to choose five diamonds from a deck of cards?
Joey wants to make himself a sandwich. He will use ham, cheese, salami, tomato, and lettuce but does not like the ham and the salami to be touching each other. He always puts the lettuce on top. In how many ways can Joey arrange the ingredients of his sandwich?
Rachel wants to buy an ice-cream cup. She picks three different flavours from a total of ten but does not like mixing chocolate with vanilla. How many different ice-cream cups can Rachel order?
Joey is still hungry and wants to eat another sandwich. He can use ham, cheese, salami, tomato, and lettuce (there is no need to use them all). He does not want the ham and the salami to be in the same sandwich. How many different sandwiches can Joey make (the order of the ingredients inside the sandwich does not matter)?
Retrieved from "https://en.wikibooks.org/w/index.php?title=High_School_Mathematics_Extensions/Supplementary/Basic_Counting&oldid=3044161"
|
Normal mapping - Wikipedia
Texture mapping technique
Normal mapping used to re-detail simplified meshes. This normal map is encoded in object space.
In 3D computer graphics, normal mapping, or Dot3 bump mapping, is a texture mapping technique used for faking the lighting of bumps and dents – an implementation of bump mapping. It is used to add details without using more polygons. A common use of this technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model or height map.
Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal.
In 1978 Jim Blinn described how the normals of a surface could be perturbed to make geometrically flat faces have a detailed appearance.[1] The idea of taking geometric details from a high polygon model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996,[2] where this approach was used for creating displacement maps over nurbs. In 1998, two papers were presented with key ideas for transferring details with normal maps from high to low polygon meshes: "Appearance Preserving Simplification", by Cohen et al. SIGGRAPH 1998,[3] and "A general method for preserving attribute values on simplified meshes" by Cignoni et al. IEEE Visualization '98.[4] The former introduced the idea of storing surface normals directly in a texture, rather than displacements, though it required the low-detail model to be generated by a particular constrained simplification algorithm. The latter presented a simpler approach that decouples the high and low polygonal mesh and allows the recreation of any attributes of the high-detail model (color, texture coordinates, displacements, etc.) in a way that is not dependent on how the low-detail model was created. The combination of storing normals in a texture, with the more general creation process is still used by most currently available tools.
The orientation of coordinate axes differs depending on the space in which the normal map was encoded. A straightforward implementation encodes normals in object-space, so that red, green, and blue components correspond directly with X, Y, and Z coordinates. In object-space the coordinate system is constant.
However object-space normal maps cannot be easily reused on multiple models, as the orientation of the surfaces differ. Since color texture maps can be reused freely, and normal maps tend to correspond with a particular texture map, it is desirable for artists that normal maps have the same property.
A texture map (left). The corresponding normal map in tangent space (center). The normal map applied to a sphere in object space (right).
Normal map reuse is made possible by encoding maps in tangent space. The tangent space is a vector space which is tangent to the model's surface. The coordinate system varies smoothly (based on the derivatives of position with respect to texture coordinates) across the surface.
A pictorial representation of the tangent space of a single point
{\displaystyle x}
on a sphere.
Tangent space normal maps can be identified by their dominant purple color, corresponding to a vector facing directly out from the surface. See below.
Calculating tangent space
In order to find the perturbation in the normal the tangent space must be correctly calculated.[5] Most often the normal is perturbed in a fragment shader after applying the model and view matrices. Typically the geometry provides a normal and tangent. The tangent is part of the tangent plane and can be transformed simply with the linear part of the matrix (the upper 3x3). However, the normal needs to be transformed by the inverse transpose. Most applications will want bitangent to match the transformed geometry (and associated UVs). So instead of enforcing the bitangent to be perpendicular to the tangent, it is generally preferable to transform the bitangent just like the tangent. Let t be tangent, b be bitangent, n be normal, M3x3 be the linear part of model matrix, and V3x3 be the linear part of the view matrix.
{\displaystyle t'=t\times M_{3x3}\times V_{3x3}}
{\displaystyle b'=b\times M_{3x3}\times V_{3x3}}
{\displaystyle n'=n\times (M_{3x3}\times V_{3x3})^{-1T}=n\times M_{3x3}^{-1T}\times V_{3x3}^{-1T}}
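Under the row-vector convention of the formulas above, the transform can be sketched in pure Python. This is an illustrative sketch, not shader code: the shear matrix M below is an arbitrary stand-in for the combined linear part M3x3 × V3x3.

```python
def vecmat(v, A):
    """Row vector times 3x3 matrix, matching the convention above."""
    return [sum(v[i] * A[i][j] for i in range(3)) for j in range(3)]

def inv_transpose(A):
    """Inverse-transpose of a 3x3 matrix: A^{-T} = cof(A) / det(A)."""
    cof = [[A[(i + 1) % 3][(j + 1) % 3] * A[(i + 2) % 3][(j + 2) % 3]
            - A[(i + 1) % 3][(j + 2) % 3] * A[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    det = sum(A[0][j] * cof[0][j] for j in range(3))
    return [[cof[i][j] / det for j in range(3)] for i in range(3)]

M = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]  # a shear, standing in for M3x3 * V3x3
t = [1, 0, 0]   # tangent
n = [0, 0, 1]   # normal
t2 = vecmat(t, M)                 # tangent: transformed by the linear part directly
n2 = vecmat(n, inv_transpose(M))  # normal: inverse-transpose keeps it perpendicular
assert abs(sum(a * b for a, b in zip(t2, n2))) < 1e-12
```

Transforming the normal naively by M would break perpendicularity under the shear; the inverse-transpose preserves it.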
Rendering using the normal mapping technique. On the left, several solid meshes. On the right, a plane surface with the normal map computed from the meshes on the left.
Example of a normal map (center) with the scene it was calculated from (left) and the result when applied to a flat surface (right). This map is encoded in tangent space.
To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the result is the intensity of the light on that surface. Imagine a polygonal model of a sphere - you can only approximate the shape of the surface. By using a 3-channel bitmap textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (X, Y and Z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.
Unit normal vectors corresponding to the u,v texture coordinates are mapped onto normal maps. Only vectors pointing towards the viewer (z: 0 to -1 for left-handed orientation) are present, since the vectors on geometry pointing away from the viewer are never shown. The mapping is as follows:
X: -1 to +1 : Red: 0 to 255
Y: -1 to +1 : Green: 0 to 255
Z: 0 to -1 : Blue: 128 to 255
A normal pointing directly towards the viewer (0,0,-1) is mapped to (128,128,255). Hence the parts of the object directly facing the viewer are light blue, the most common color in a normal map.
Since a normal will be used in the dot product calculation for the diffuse lighting computation, we can see that {0, 0, –1} would be remapped to the {128, 128, 255} values, giving the kind of sky-blue color seen in normal maps (the blue (z) coordinate is the perspective (depth) coordinate, while R and G carry the flat xy coordinates on screen). Similarly, {0.3, 0.4, –0.866} would be remapped to ({0.3, 0.4, –0.866}/2 + {0.5, 0.5, 0.5}) × 255 = {0.65, 0.7, 0.067} × 255 = {166, 179, 17} (
{\displaystyle 0.3^{2}+0.4^{2}+(-0.866)^{2}=1}
). The sign of the z-coordinate (blue channel) must be flipped to match the normal map's normal vector with that of the eye (the viewpoint or camera) or the light vector. Since negative z values mean that the vertex is in front of the camera (rather than behind the camera) this convention guarantees that the surface shines with maximum strength precisely when the light vector and normal vector are coincident.[6]
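Encoding a unit normal into an 8-bit RGB triple can be sketched as follows. This is a sketch under the convention described above (z flipped so blue spans 128-255); the exact rounding rule may differ between tools.

```python
def encode_normal(x, y, z):
    # Map each component from [-1, 1] to [0, 255]; z in [0, -1] is
    # flipped so that blue lands in [128, 255], as described above.
    to_byte = lambda v: round((v * 0.5 + 0.5) * 255)
    return (to_byte(x), to_byte(y), to_byte(-z))

# A normal pointing directly at the viewer maps to the typical light blue
assert encode_normal(0, 0, -1) == (128, 128, 255)
```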
Normal mapping in video games
Interactive normal map rendering was originally only possible on PixelFlow, a parallel rendering machine built at the University of North Carolina at Chapel Hill.[citation needed] It was later possible to perform normal mapping on high-end SGI workstations using multi-pass rendering and framebuffer operations[7] or on low end PC hardware with some tricks using paletted textures. However, with the advent of shaders in personal computers and game consoles, normal mapping became widely used in commercial video games starting in late 2003. Normal mapping's popularity for real-time rendering is due to its good quality to processing requirements ratio versus other methods of producing similar effects. Much of this efficiency is made possible by distance-indexed detail scaling, a technique which selectively decreases the detail of the normal map of a given texture (cf. mipmapping), meaning that more distant surfaces require less complex lighting simulation. Many authoring pipelines use high resolution models baked into low/medium resolution in-game models augmented with normal maps.
Basic normal mapping can be implemented in any hardware that supports palettized textures. The first game console to have specialized normal mapping hardware was the Sega Dreamcast. However, Microsoft's Xbox was the first console to widely use the effect in retail games. Out of the sixth generation consoles[citation needed], only the PlayStation 2's GPU lacks built-in normal mapping support, though it can be simulated using the PlayStation 2 hardware's vector units. Games for the Xbox 360 and the PlayStation 3 rely heavily on normal mapping and were the first game console generation to make use of parallax mapping. The Nintendo 3DS has been shown to support normal mapping, as demonstrated by Resident Evil: Revelations and Metal Gear Solid: Snake Eater.
Baking (computer graphics)
Tessellation (computer graphics)
^ Blinn. Simulation of Wrinkled Surfaces, Siggraph 1978
^ Krishnamurthy and Levoy, Fitting Smooth Surfaces to Dense Polygon Meshes, SIGGRAPH 1996
^ Cohen et al., Appearance-Preserving Simplification, SIGGRAPH 1998 (PDF)
^ Cignoni et al., A general method for preserving attribute values on simplified meshes, IEEE Visualization 1998 (PDF)
^ Mikkelsen, Simulation of Wrinkled Surfaces Revisited, 2008 (PDF)
^ "LearnOpenGL - Normal Mapping". learnopengl.com. Retrieved 2021-10-19.
^ Heidrich and Seidel, Realistic, Hardware-accelerated Shading and Lighting Archived 2005-01-29 at the Wayback Machine, SIGGRAPH 1999 (PDF)
Normal Map Tutorial Per-pixel logic behind Dot3 Normal Mapping
NormalMap-Online Free Generator inside Browser
Normal Mapping on sunandblackcat.com
Blender Normal Mapping
Normal Mapping with paletted textures using old OpenGL extensions.
Normal Map Photography Creating normal maps manually by layering digital photographs
Normal Mapping Explained
Simple Normal Mapper Open Source normal map generator
Retrieved from "https://en.wikipedia.org/w/index.php?title=Normal_mapping&oldid=1082762403"
|
Indeterminate Forms | Brilliant Math & Science Wiki
Patrick Corn, Andrew Ellinor, Swagat Panda, and others contributed.
The limit of an expression involving multiple functions can often be evaluated by taking the limits of these functions separately. For instance, if \(\lim\limits_{x\to 2} f(x) = 1\) and \(\lim\limits_{x\to 2} g(x) = 3\), then \(\lim\limits_{x\to 2} \big(f(x)+g(x)\big) = 1+3 = 4\). An indeterminate form is an expression involving two functions whose limit cannot be determined solely from the limits of the individual functions. These forms are common in calculus; indeed, the limit definition of the derivative is the limit of an indeterminate form.
If \(f(x) = \sin(2x),\) compute \(f'(0).\)
By the limit definition of the derivative,
f'(0) = \lim_{h\to 0} \frac{f(0+h)-f(0)}{h} = \lim_{h\to 0} \frac{\sin(2h)-\sin(0)}{h} = \lim_{h \to 0} \frac{\sin(2h)}{h}.
The limit of a quotient of functions can often be computed by taking the quotient of the limits, but in this case the limits of the top and bottom functions are both \(0\). Since \(\frac00\) is not meaningful, computing the limit requires another technique. (In fact the answer is 2; see the wiki on derivatives of trigonometric functions.) \(_\square\)
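The claimed value can be checked numerically (a sketch, not a proof; the limit itself needs L'Hopital's rule or the squeeze theorem):

```python
import math

# sin(2h)/h approaches 2 as h -> 0
h = 1e-6
assert abs(math.sin(2 * h) / h - 2) < 1e-9
```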
Quotient Indeterminate Forms
Forms that are not Indeterminate
The most common indeterminate forms stem from evaluating limits of a ratio of functions \(\frac{f(x)}{g(x)}\): namely \(\frac{0}{0}\) and \(\frac{\infty}{\infty}\). The notation is shorthand for a limit of \(\frac{f(x)}{g(x)}\) where the limits of \(f(x)\) and \(g(x)\) are both \(0\) or both \(\infty\).
An indeterminate form \(\frac{0}{0}\) or \(\frac{\infty}{\infty}\) can have limit equal to any real number, or the limit may not exist. "Canceling" or other improper manipulations can lead to incorrect answers; see the Common Misconceptions wiki for examples.
For example, these limits are both of the form \(\frac{0}{0}\):
\begin{aligned} \lim_{x\to 0} \dfrac{ax}{x} &= a \ \ (a\in {\mathbb R})\\\\ \lim_{x\to 0} \dfrac{x\sin\left(\frac1x\right)}{x} &= \text{DNE}. \end{aligned}
Computing these limits, in general, is the fundamental problem of differential calculus since, as noted above, the derivative
f'(x) = \lim_{h\to 0} \dfrac{f(x+h)-f(x)}{h}
is the limit of an indeterminate form \(\frac{0}{0}\). If \(f(x)\) and \(g(x)\) are differentiable and \(\frac{f(x)}{g(x)}\) is one of these indeterminate forms, its limit can often be simplified using L'Hopital's rule.
Product: The form \(0 \cdot \infty\) is indeterminate. (So "0 times anything is 0" does not apply!) It can be converted to the quotient form by changing \(f(x)\) to \(\frac{1}{1/f(x)}\).
Evaluate \(\lim\limits_{x\to 0^+} x\ln(x).\)
This is an indeterminate form \(0 \cdot (-\infty)\). (Some lists classify this as a different form from \(0 \cdot \infty\), but there is no difference in the techniques used to evaluate the limit.)
\begin{aligned} \lim_{x\to 0^+} x\ln(x) &= \lim_{x\to 0^+} \dfrac{\ln(x)}{\frac1x} &&\left(\text{of the form} \dfrac{-\infty}{\infty}\right) \\ &= \lim_{x\to 0^+} \dfrac{\frac1x}{\hspace{1mm} -\frac1{x^2}\hspace{1mm} } &&\text{(L'Hopital)} \\ &= \lim_{x\to 0^+} (-x) \\&= 0.\ _\square \end{aligned}
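A numerical check of this limit:

```python
import math

# x * ln(x) -> 0 as x -> 0+
values = [x * math.log(x) for x in (1e-3, 1e-6, 1e-9)]
assert all(abs(v) < 0.01 for v in values)
assert abs(values[-1]) < 1e-6   # shrinking toward 0
```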
Subtraction: The form \(\infty - \infty\) is indeterminate. Again, the general strategy for computing limits of this form is to convert to an indeterminate quotient.
Evaluate \(\lim\limits_{x\to\infty} \left(\sqrt{x^2+3x+7}-x\right).\)
Multiply by the "conjugate" to obtain
\begin{aligned} \lim_{x\to\infty} \left(\sqrt{x^2+3x+7}-x\right)\left( \dfrac{\sqrt{x^2+3x+7}+x}{\sqrt{x^2+3x+7}+x} \right)&= \lim_{x\to\infty} \dfrac{x^2+3x+7-x^2}{\sqrt{x^2+3x+7}+x} \\ &= \lim_{x\to\infty} \dfrac{3x+7}{\sqrt{x^2+3x+7}+x} \\ &= \lim_{x\to\infty} \dfrac{3+\dfrac7x}{\sqrt{1+\dfrac3x+\dfrac{7}{x^2}}+1} \\ &= \dfrac3{\sqrt{1}+1} \\ &= \dfrac32.\ _\square \end{aligned}
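A numerical check of the conjugate computation (my sketch, not part of the original article): evaluating the difference at large x should approach the limit \frac32.

```python
import math

# Evaluate sqrt(x^2 + 3x + 7) - x for growing x; the infinity - infinity
# form resolves to 3/2 after multiplying by the conjugate.
def f(x):
    return math.sqrt(x * x + 3 * x + 7) - x

for x in (1e2, 1e4, 1e6):
    print(x, f(x))
# The values approach 1.5.
```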
Exponential: There are three of these: 0^0, \infty^0, 1^\infty. That 0^0 is indeterminate is a common point of confusion, since 0^0 = 1 in many contexts. But, for instance, \lim\limits_{x\to 0^+} 0^x = 0, or more exotically, \lim\limits_{x\to 0^+} (a^{1/x})^x = a; the latter is of the form 0^0 when 0 \le a < 1.
The strategy for evaluating exponential limits of the above types is to let y be the function which gives the indeterminate form and then to find the limit of \ln(y). This turns the problem into the limit of a function with indeterminate product form 0 \cdot \infty.
Evaluate \lim\limits_{x\to \infty} \left( 1+\dfrac4{x} \right)^x.
This is an indeterminate form of type 1^\infty. Let y = \left( 1+\frac4{x} \right)^x. Then
\begin{aligned} \lim_{x\to\infty} \ln(y) &= \lim_{x\to\infty} \ln\left( 1+\frac4{x} \right)^x \\ &= \lim_{x\to\infty} x \ln\left( 1+\frac4{x}\right) \\ &= \lim_{x\to\infty} \frac{\ln\left( 1+\frac4{x}\right)}{\frac1x} \\ &= \lim_{x\to\infty} \frac{\frac1{1+\frac4{x}} \left(\frac{-4}{x^2} \right)}{-\frac{1}{x^2}} &&(\text{L'Hopital}) \\ &= \lim_{x\to\infty} \frac4{1+\frac4{x}} \\ &= 4. \end{aligned}
So the original limit is e^4. _\square
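The result can also be confirmed numerically (my sketch, not part of the original article): evaluating the expression at large x should approach e^4 \approx 54.6.

```python
import math

# Evaluate (1 + 4/x)^x for growing x; the 1^infinity form resolves to e^4.
def g(x):
    return (1 + 4 / x) ** x

for x in (1e2, 1e4, 1e6):
    print(x, g(x), math.exp(4))
# The values approach exp(4).
```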
Other combinations of functions lead to limits that can be determined (possibly with some information about signs—see below) just from the value of the component limits.
Quotient: The fractions \frac0{\infty} and \frac1{\infty} are not indeterminate; the limit is 0. The fractions \frac10 and \frac{\infty}0 are not indeterminate. If the denominator is positive, the limit is \infty. If the denominator is negative, the limit is -\infty. If the denominator takes both positive and negative values in any neighborhood of the point where the limit is being taken, the limit does not exist.
\begin{aligned} \lim_{x\to 0^+} \dfrac1{x} &= \infty \\ \lim_{x\to 0^-} \dfrac1{x} &= -\infty \\ \lim_{x\to 0} \dfrac1{x} &= \text{DNE} \end{aligned}
Product: The form \infty \cdot \infty is not indeterminate; the limit is \infty.
Exponential: The forms 0^\infty and \infty^\infty are not indeterminate; the limits are 0 and \infty, respectively. Similarly, 0^{-\infty} and \infty^{-\infty} are not indeterminate; the limits are \infty and 0, respectively. (In all cases the assumption is that the base of the exponential is a nonnegative function; otherwise the exponential itself is undefined in general.)
Cite as: Indeterminate Forms. Brilliant.org. Retrieved from https://brilliant.org/wiki/indeterminate-forms/
|
On Ehrenfest's Paradox - Wikisource, the free online library
Translation:On Ehrenfest's Paradox
On Ehrenfest's Paradox (1911)
by Vladimir Varićak, translated from German by Wikisource
In German: Zum Ehrenfestschen Paradoxon, Physikalische Zeitschrift 12, 169-170
1166332On Ehrenfest's ParadoxVladimir Varićak1911
On Ehrenfest's paradox.
By V. Varićak.
That Ehrenfest took the Lorentzian standpoint in his argumentation is concluded by me from the questions directed by him to v. Ignatowsky[1], and mainly from his expectation of finding this contradiction in the tracing images Π and Π₁ as well. It seems to me that those tracing images must be identical; they will have the same radius and the same periphery.
To justify this, it shall be allowed for me to take recourse to the uniform translation of a rigid body, at which that contraction is ordinarily demonstrated as a concomitant of that translation. A mirror shall be fixed at the front end B, and a light source at the back end A. The doubled length of the rod is measured by the time required by a light signal to go from A to B and back to A.
In order not to become too expansive, I allude e.g. to the work of Lewis and Tolman[2], who especially emphasized the radical difference between the views of Lorentz and Einstein. There one can also see by which considerations the stationary observer is forced to assume the contraction of the moving rod. But he remains conscious that this contraction is, so to speak, only a psychological, not a physical fact, i.e., that the body experienced no change in reality.
Now, the stationary observer shall execute with this rod the same experiment that, according to Ehrenfest, shall be executed by him with the rotating disc.[3] There are marks at both ends of the rod. While the rod is at rest, the stationary observer holds a tracing paper P above it and traces the marks upon the resting paper.
While the rod is uniformly moving forwards in a straight line, the resting observer holds a tracing paper P₁ above it and, in the moment when his clock indicates t, traces at one stroke both marks upon the resting paper.
The distance of those marks is then measured on the resting tracing images Π and Π₁. I believe that the same distance is found in both cases, because the rod hasn't become shorter in reality.
The mentioned procedure of the resting observer is surely identical with the mechanical adjustment of the measuring rod to the object to be measured; yet this is not the same operation as measuring the length by the aid of optical signals.
I still want to mention briefly that, as is known, the clocks at the points A and B of the moving rod, although they have the same rate, indicate different times when the clock of the resting observer indicates t.
Only a historical remark shall still be given. After Lorentz stated his hypotheses, that all bodies suffer a contraction of their dimensions in the direction of Earth's motion, the question was near at hand, whether this deformation or compression shall not be accompanied by double refraction. The corresponding experiments of Rayleigh and Brace gave a negative result.
According to Einstein's relativity principle, one wouldn't come to this question at all.
In the mentioned report of Laub, one can also read about other similar experiments, to which one is quite consequently led by Lorentz's standpoint, yet which can have no meaning in Einstein's theory.
Eventually I would like to remark, that it is not allowed without further ado, to transfer that contraction arising at a translation, to the case of rotation. The occurrence of that contraction is in connection with the change of clock rate due to motion. In rotation, however, the time parameter changes in another way, and changes not at all when the observer is in the center, as it is easy to show.
Agram, February 5, 1911.
↑ This journal, 11, 1129, 1910.
↑ G. N. Lewis and R. C. Tolman, The Principle of Relativity, and Non-Newtonian Mechanics, Proceedings of the American Academy of Arts and Sciences 44, 711, 1909. Recently, J. Laub emphatically alluded to the fact, that in the theory of Lorentz the interpretation of this phenomenon is essentially different than that of Einstein. See Jahrb. der Radioaktivität und Elektronik 7, 430, 1910.
↑ I leave open the question, whether this experiment is to be considered as possible from the relativistic standpoint.
Retrieved from "https://en.wikisource.org/w/index.php?title=Translation:On_Ehrenfest%27s_Paradox&oldid=10821753"
|
Bitcoin: A Peer-to-Peer Electronic Cash System | Bitcoin Paper
The solution we propose begins with a timestamp server. A timestamp server works by taking a hash of a block of items to be timestamped and widely publishing the hash, such as in a newspaper or Usenet post [2–5]. The timestamp proves that the data must have existed at the time, obviously, in order to get into the hash. Each timestamp includes the previous timestamp in its hash, forming a chain, with each additional timestamp reinforcing the ones before it.
To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system similar to Adam Back's Hashcash [6], rather than newspaper or Usenet posts. The proof-of-work involves scanning for a value that when hashed, such as with SHA-256, the hash begins with a number of zero bits. The average work required is exponential in the number of zero bits required and can be verified by executing a single hash.
The race between the honest chain and an attacker chain can be characterized as a Binomial Random Walk. The success event is the honest chain being extended by one block, increasing its lead by +1, and the failure event is the attacker's chain being extended by one block, reducing the gap by −1.
Let
p = probability an honest node finds the next block
q = probability the attacker finds the next block
q_z = probability the attacker will ever catch up from z blocks behind
q_z = \begin{cases} 1 & \text{if } p \le q \\ (q/p)^z & \text{if } p > q \end{cases}
Given our assumption that p > q, the probability drops exponentially as the number of blocks the attacker has to catch up with increases. With the odds against him, if he doesn't make a lucky lunge forward early on, his chances become vanishingly small as he falls further behind.
The attacker's potential progress while the recipient waits for z confirmations will be a Poisson distribution with expected value
\lambda = z\frac{q}{p}
To get the probability the attacker could still catch up now, we multiply the Poisson density for each amount of progress he could have made by the probability he could catch up from that point:
\sum_{k=0}^{\infty} \frac{\lambda^k e^{-\lambda}}{k!} \cdot \begin{cases} (q/p)^{z-k} & \text{if } k \le z \\ 1 & \text{if } k > z \end{cases}
Rearranging to avoid summing the infinite tail of the distribution:
1 - \sum_{k=0}^{z} \frac{\lambda^k e^{-\lambda}}{k!} \left(1 - (q/p)^{z-k}\right)
Running some results, we can see the probability drop off exponentially with z. For q=0.3:
z=0 P=1.0000000
z=10 P=0.0416605
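The rearranged formula above translates directly into code. This is a Python sketch of the attacker-success calculation (the original paper gives an equivalent C listing):

```python
import math

def attacker_success_probability(q, z):
    """Probability an attacker with hash-power share q ever catches up
    from z blocks behind, per the rearranged formula above."""
    p = 1.0 - q
    lam = z * (q / p)            # expected attacker progress (Poisson mean)
    total = 1.0
    poisson = math.exp(-lam)     # Poisson density at k = 0
    for k in range(z + 1):
        if k > 0:
            poisson *= lam / k   # advance the density to k
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

print(attacker_success_probability(0.3, 10))  # matches the P=0.0416605 row
```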
[2] H. Massias, X.S. Avila, and J.-J. Quisquater, "Design of a secure timestamping service with minimal trust requirements," In 20th Symposium on Information Theory in the Benelux, May 1999.
[3] S. Haber, W.S. Stornetta, "How to time-stamp a digital document," In Journal of Cryptology, vol 3, no 2, pages 99-111, 1991.
|
networks(deprecated)/shortpathtree - Maple Help
Home : Support : Online Help : networks(deprecated)/shortpathtree
construct a shortest path spanning tree
shortpathtree(G, v)
Important: The networks package has been deprecated. Use the superseding command GraphTheory[DijkstrasAlgorithm] instead.
This is an implementation of Dijkstra's algorithm for shortest path spanning tree. A priority queue is used for storing the edges.
The final distances are recorded as the vertex weights in the resulting graph (a spanning tree). The tree is rooted at v and the ancestors and daughters of each node are computed relative to this tree.
Edge weights are assumed to be lengths or distances so that edge weights are required to be non-negative. Undirected edges are assumed to be bidirectional.
This routine is normally loaded via the command with(networks) but may also be referenced using the full name networks[shortpathtree](...).
\mathrm{with}\left(\mathrm{networks}\right):
G≔\mathrm{petersen}\left(\right):
T≔\mathrm{shortpathtree}\left(G,1\right):
\mathrm{ancestor}\left(T\right)
table([2 = 1, 3 = 2, 4 = 5, 5 = 1, 6 = 1, 7 = 6, 9 = 5, 8 = 2, 10 = 6])
\mathrm{daughter}\left(T\right)
table([1 = {2, 5, 6}, 2 = {3, 8}, 3 = ∅, 4 = ∅, 5 = {4, 9}, 6 = {7, 10}, 7 = ∅, 9 = ∅, 8 = ∅, 10 = ∅])
GraphTheory[DijkstrasAlgorithm]
networks(deprecated)[diameter]
|
Piece Tables, Splay Trees, and "Trables" (Oh My!) – averylaird.com
“...premature optimization is the root of all evil.”
When thinking about balanced binary trees, there are always three main structures to consider: AVL trees, splay trees, and normal (unbalanced) BSTs. It may seem like a contradiction to include unbalanced BSTs in a list of balanced trees; however, UBSTs can remain balanced as long as the keys being inserted are sufficiently unsorted. A text editor doesn’t usually meet this requirement.
AVL Trees usually perform very well, almost as well as red-black trees. If you’re unsure about the nature of the data being stored in the tree, the AVL tree works nicely. We can work a bit smarter though – there’s a simpler and faster solution to consider: Splay trees. This structure redistributes the nodes of a tree similar to AVL trees. The main difference is that we do not aim to balance the tree at all – we keep shifting around (“splaying”) nodes until the most recently inserted node is the root of the tree. And as I’ll explain below, we can “encode” the positions of pieces in the structure of the tree, eliminating the need to keep track of indices.
At first glance, it might seem that splay trees should not perform well at all, perhaps even worse than the UBST, because the tree is not kept particularly balanced. Surprisingly, the amortized time complexity of operations on a splay tree is O(\log N). This is because splay trees rely on the assumption that recently accessed data is the most likely to be required soon in the future. This is generally true, but especially relevant in the case of text editors. Most of the time, we are working on only a small section of the document.
Parts of Atom were recently rewritten to use a splay tree, with apparently pretty good results. I know that splay trees will perform very well with the common use case – I’m unsure how they will fare with large buffers and multiple cursors/concurrent editors. There’s only one way to find out.
When designing the buffer interface, two basic functions must be considered: insertion and deletion. A span is an ordered set of two indices, (i_s, i_e), which represents all characters within that range of indices (inclusive). In this post, I will handle only the insertion operation, and address deletion in a later post.
The index is known to us through the cursor position, and the string is given by user input along with its span. All other values must be handled internally. We are storing the information about edits in a piece table, abstractly; but actually, since we keep each piece in a tree, it’s more of a “piece tree” (or a “trable?”). There are some conditions that we must place on this tree, mainly that an inorder traversal starting from the root of the tree results in a proper evaluation of the buffer. Because a node is equivalent to a piece, and a tree is equivalent to a table, I will use these terms interchangeably (depending on what is most appropriate in the context).
Although we can properly evaluate the entire table by starting at the root, and any section of the buffer by traversing a child node and its subtrees, we do not know where the section belongs in the buffer. As I’ll explain, there’s no way to know a node’s position in the buffer without any information about the rest of the tree. This is because we will use a very clever suggestion from Joaquin Abela, which allows us to store nodes based on their relative index — their index relative to other pieces in the table.
To implement this method, there are two changes we need to make to the typical splay tree: how we insert nodes, and how we store data. The procedure for inserting nodes is affected by the key we choose to sort the data. Intuitively, this should be something related to the position of a piece within the document. At first, we might consider using the desired insertion index, i_d.
The issue with using i_d is that it must change for any insertion at index i \leq i_d. After such an insertion, we would have to update every node after i. In the case of several insertions at the beginning of an existing buffer, this causes O(n) time complexity for each insertion — no better than an array.
Joaquin Abela suggests a solution to this problem: store the offset information in the nodes themselves. He does this by storing subtree sizes, rather than indices:
Explicit Indexing Relative Indexing
n1 n1
index: 14 size_right: 1
length: 10 size_left: 14
/ \ => length: 10
n2 n3 / \
index: 0 index: 23 n2 n3
length: 14 length: 1 size_right: 0 size_right: 0
/ \ / \ size_left: 0 size_left: 0
A B C D length: 14 length: 1
/ \ / \
A B C D
It’s worth spending a bit of time talking about the example above, because there are two very important differences to consider. Firstly, note that n2 has an index of 0 and a length of 14, while n1 picks up at index 14. This is because size is 1-indexed (I do not consider strings of length 0 to have meaning) whereas index is 0-indexed. So, n2 actually spans indices 0 -> 13. I will also define an insertion at index i to have the following behaviour:
^ insert 'i' @ index D
This is a senitence
such that the index i refers to the desired final position.
Secondly, and most importantly, note how the structure of the tree does not change in either case. This illustrates how the structure of the tree itself stores the correct order of the pieces relative to each other, and implies that any resources spent storing and updating the index is a waste. Unlike indices, this order is also constant even after splaying — once the node is inserted properly, everything works fine.
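To make the relative-indexing idea concrete, here is a small Python sketch (class and function names are my own, not from the post) of how size_left alone lets us find the piece containing an absolute index:

```python
class Node:
    def __init__(self, length, left=None, right=None):
        self.length = length   # length of this piece
        self.left = left       # subtree of pieces that come before this one
        self.right = right     # subtree of pieces that come after
        self.size_left = left.total() if left else 0

    def total(self):
        right = self.right.total() if self.right else 0
        return self.size_left + self.length + right

def find(node, i):
    """Return (node, offset within that piece) for absolute index i."""
    while node is not None:
        if i < node.size_left:                  # index is in the left subtree
            node = node.left
        elif i < node.size_left + node.length:  # index falls inside this piece
            return node, i - node.size_left
        else:                                   # skip left subtree + this piece
            i -= node.size_left + node.length
            node = node.right
    raise IndexError(i)

# Reproduce the "Relative Indexing" tree above: n1 (length 10) with
# n2 (length 14) on the left and n3 (length 1) on the right.
n2, n3 = Node(14), Node(1)
n1 = Node(10, left=n2, right=n3)
```

Note that no node stores an absolute index; the descent accumulates it from the subtree sizes, which is exactly why splaying never invalidates positions.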
Finally, we need to consider how an inserted piece may require the “split” of an existing piece. This involves devising a test to detect when a split should occur, and performing the split such that all properties of a piece table and a splay tree are preserved (I will not give a particularly in-depth explanation of splay trees here, but the Wikipedia page is a good place to start).
To make an insertion, we will still need to consider the desired index, but we can forget it afterwards. I will go through an example below. We will use the same example as above, assuming the existing buffer was inserted as a single piece.
(1) Original Tree (2) Insert 'i' @ index 13
size_right: 0 size_right: 0 Because:
size_left: 0 size_left: 0 offset <index 13 < offset+length
length: 18 --> length: 18 we must split!
text: 0x0 text: 0x0
(2.1) View of subtree during split
size_right: 0
size_left: 13
text: 0x1
size_left: 0
(2.2) Join split subtree with parent node
size_left: 0 <-- must update size_left
length: 18 <-- must update length
text: 0x0 <-- must update text pointer
(2.3) Update parent node
size_left: 19 <-- 13+1+5
length: 5 <-- 18-13
text: 0x3 <-- explained below
Of course, there are actually (up to) 3 different ways to perform such a split. I choose to form a linked list on whatever side the node is supposed to be inserted, because we know there aren’t any children there. The other thing to consider is how memory is managed throughout. If we examine just the memory addresses mentioned above, we might see something like this:
(1) 0x0: This is a sentence
(2.1) 0x0: This is a sentence
0x2: This is a sen
(2.3) 0x0: This is a sentence <-- here we can free(0x0)
0x3: tence
This method can be done without copying if we also store a start value in the node. We still need to allocate memory for the newly inserted character, but we would never have to change a pointer to memory once it is allocated. We can just increment the start value to be whatever the length of the first node in the split is. For example, the following memory contents and pointer/start pairs:
0x0 This is a sentence
0x0, start = 0 , length = 13
0x1, start = 1 , length = 1
0x0, start = 13, length = 5
The start value would be relative to the text pointer. We can get the section of text we want by reading from text+start by sizeof(text_type)*length bytes (the data we’re storing may or may not be a char).
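A Python sketch of this zero-copy evaluation (my own illustration: the dictionary keys stand in for the memory addresses above, and I give the one-character buffer a start of 0; exact offsets depend on how allocation is done):

```python
# Each piece references (buffer address, start, length) and never copies;
# evaluating the table just slices each backing buffer.
buffers = {0x0: "This is a sentence", 0x1: "i"}

pieces = [(0x0, 0, 13),   # "This is a sen"
          (0x1, 0, 1),    # "i"
          (0x0, 13, 5)]   # "tence"

def evaluate(pieces):
    return "".join(buffers[buf][start:start + length]
                   for buf, start, length in pieces)

print(evaluate(pieces))  # "This is a senitence"
```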
Finally, if we do an inorder traversal of the example tree, we get:
|-----------|||---|
piece1 | |
piece2----| |
piece3-------|
An insertion without a split would form a subtree of a single node, and perform the same join operation. However, only the size_left of the parent node (in this case) would be changed — the length and start values would not change. Actually, if we follow the process described above, a split will always join on the left side of a parent, because the desired insertion index is less than the span of that piece.
First we will need a piece, which is just a collection of three values:
typedef struct Piece {
    char *text;              /* pointer into a backing buffer       */
    unsigned long start;     /* offset of this piece within text    */
    unsigned long length;    /* number of characters in this piece  */
} Piece;
I’m assuming we’re storing chars here, but we can replace this with a different type later if we have to. We will be storing these pieces in a tree structure:
typedef struct Tree {
    struct Piece *piece;
    struct Tree *left, *right, *parent;
    unsigned long size_left, size_right;
} Tree;
In terms of types, that’s about it. I won’t post a bunch of code here, it’s all available on github. However, I will talk about the functions involved in insertions, and how I handle that process.
First, we follow the typical BST insertion algorithm. However, before performing the insertion, we perform a split test. If a split must be performed, we do it, otherwise, just insert the node normally. We record the address of the newly inserted node, and pass it to the splay function.
The splay function takes the address of the new node, and splays it up the tree until it is the root of the tree. That’s it! Sounded pretty easy to me the first time I tried to write it. And, maybe if I had better coding chops, it would’ve been. However, after a while, I did manage to get a testable base implementation.
It’s hard to do useful, comprehensive tests in this case, since there are so many situations which may occur. However, I did some very rudimentary testing and benchmarking of the insert operation, with interesting results.
First, I inserted 1,000,000 characters into a piece table, which is equivalent to a dense document with 30,000 pages,1 giving the following graph:
This is certainly a strange looking graph indeed, and that is mostly related to the nature of a splay tree. My guess is that the straight lines represent operations close to each other, and therefore very fast. The much slower insertion times represent operations very far from previous ones, which then become fast due to the splay operation. This is not bad for a worst case, but not particularly impressive either. Things get interesting if we take a look at a different quantity.
Inserting characters randomly doesn’t give a fair representation of the common use case for most editors. To get a more balanced perspective, I also recorded the average time for each insertion into a table of a certain size. I kept a running total of insertion time, and after each insertion, I divided by the current table size. This reflects the average time for each insertion up to that point. I got the following result:
This suggests that, no matter the table size, it should always take about the same time to perform an insertion, and that is a result I’m happy with. As some simple, preliminary tests, there is not much that can be inferred — perhaps a comparison between other implementations (array, linked list, etc) would be interesting. My inner skeptic also feels that running times this quick do seem a bit too good to be true, and there’s always the possibility of a bug that I haven’t noticed. I plan to design some better tests to rule out this possibility.
You can check out all the code on github. My next rough steps are supporting deletions, undo, and then working on an API. After that, multiple cursors, users — and beyond!
This article is also available in Russian thanks to Stanislav
I’m using Joaquin’s estimate here ↩
|
Given a random event, one might consider the sample space of possible events or outcomes. For instance, the sample space of a fair six-sided die has six possible events, one for each side. One could denote the sample space by the set
\{1, 2, 3, 4, 5, 6 \}
. A random variable can then be thought of as a function on the sample space, where the probability of taking on any given outcome is weighted accordingly. In the case of the fair die, each outcome is equally likely, although one could certainly come up with a loaded die in which all six events are not equally likely.
To describe a random variable completely, one must specify both the sample space (usually as a set) and the probability of all events in the sample space. For a discrete random variable X, the latter is often accomplished by writing P(X = A), where A is some event in the sample space. For instance, for a fair six-sided die whose roll is the random variable Y, one could write
P(Y = j) = \frac{1}{6} \quad \text{for } j = 1, 2, 3, 4, 5, 6.
For a continuous random variable X whose sample space is a subset of the real numbers, generally one gives the probability density function p, where
P(a \leq X \leq b) = \int_a^b p(x) \, dx,
and P(a \leq X \leq b) denotes the probability that X falls between a and b, inclusive. One can view p(x) \, dx as the infinitesimal probability of obtaining a value within a small interval dx around x; integrating p(x) across a finite interval [a, b] thus provides the probability that the outcome will lie in that region.
The normal distribution has density
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(x - \mu)^2}{2\sigma^2}\right],
where \mu and \sigma are (fixed) parameters of the distribution.
The chi-squared distribution with k degrees of freedom is the sum of k squared standard normal random variables:
X = \sum_{i=1}^k Z_i^2,
where each Z_i is an independent normal random variable with \mu = 0 and \sigma = 1.
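This definition can be simulated directly (a sketch of my own, not part of the original article): draw k standard normals, square, and sum. A useful sanity check is that the mean of the chi-squared distribution equals its degrees of freedom k.

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def chi_squared_sample(k):
    """One draw from the chi-squared distribution with k degrees of
    freedom: the sum of k squared standard normal draws."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

k = 3
samples = [chi_squared_sample(k) for _ in range(100_000)]
print(sum(samples) / len(samples))  # sample mean, close to k = 3
```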
One fundamental property of a random variable is that the total probability over all outcomes must be equal to 1: if S denotes the sample space, then P(X \in S) = 1.
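For a continuous random variable this normalization can be checked numerically. Here is a sketch (my own, not part of the original article) that integrates the normal density above over a wide interval with the trapezoid rule; the tail mass beyond the interval is negligible.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # density of the normal distribution given above
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def trapezoid(f, a, b, n=200_000):
    # crude composite trapezoid rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

total = trapezoid(normal_pdf, -10.0, 10.0)
print(total)  # very close to 1
```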
Suppose one is given a set of numerical data X_1, X_2, \ldots, X_n, all of which are real-numbered values. How might one describe the data? Perhaps of interest is the (arithmetic) mean, denoted by \mu, the sum of all values divided by the number of values:
\mu = \frac{X_1 + X_2 + \cdots + X_n}n.
Another useful summary is the variance \sigma^2, which measures how far the data spread about the mean:
\sigma^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2.
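These two formulas translate into a few lines of code (a sketch of my own, using the population form of the variance shown above):

```python
def mean(xs):
    # sum of all values divided by the number of values
    return sum(xs) / len(xs)

def variance(xs):
    # population variance: average squared deviation from the mean
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

data = [1.0, 2.0, 3.0, 4.0]
print(mean(data), variance(data))  # 2.5 1.25
```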
In any case, given a set of random variables X_1, X_2, \ldots, X_n, one can represent the data using one or more statistics. A statistic, such as the mean or variance, is simply a function of the random variables, f(X_1, X_2, \ldots, X_n).
An estimator is some function \delta(X_1, X_2, \ldots, X_n) of the sample random variables. It is itself a random variable. The estimator evaluated for given values of the sample random variables is called an estimate. When a particular
|
A note on proofs of falsehood.
Jan Krajicek — 1987
Jan Krajíček — 2004
We study diagonalization in the context of implicit proofs of [10]. We prove that at least one of the following three conjectures is true: ∙ There is a function f: {0,1}* → {0,1} computable in that has circuit complexity 2^{\Omega(n)}. ∙ ≠ co. ∙ There is no p-optimal propositional proof system. We note that a variant of the statement (either ≠ co, or ∩ co contains a function 2^{\Omega(n)} hard on average) seems to have a bearing on the existence of good proof complexity generators. In particular, we prove that if a minor variant...
We investigate the proof complexity, in (extensions of) resolution and in bounded arithmetic, of the weak pigeonhole principle and of the Ramsey theorem. In particular, we link the proof complexities of these two principles. Further we give lower bounds to the width of resolution proofs and to the size of (extensions of) tree-like resolution proofs of the Ramsey theorem. We establish a connection between provability of WPHP in fragments of bounded arithmetic and cryptographic assumptions (the existence...
Some theorems on the lattice of local interpretability types
A possible modal reformulation of comprehension scheme
Speed-up for propositional Frege systems via generalizations of proofs
|
the equation of the tangent at the point (0,0) to the circle , making intercepts of length 2a and 2b - Maths - Conic Sections - 11701549 | Meritnation.com
the equation of the tangent at the point (0,0) to the circle , making intercepts of length 2a and 2b units on the coordinate axes are
ans is ax+ by = 0 and ax - by = 0
The equation of a circle passing through the origin and cutting off intercepts of 2a and 2b units on the coordinate axes is
x^2 + y^2 \pm 2ax \pm 2by = 0,
and the equation of the tangent at (0, 0) is ax \pm by = 0.
|
DISCRETE MATHEMATICS - Encyclopedia Information
Discrete mathematics Information
https://en.wikipedia.org/wiki/Discrete_mathematics
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. [1] [2] [3] [4] By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets [5] (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics". [6]
In university curricula, "Discrete Mathematics" appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. [7] [8] Some high-school-level discrete mathematics textbooks have appeared as well. [9] At this level, discrete mathematics is sometimes seen as a preparatory course, not unlike precalculus in this respect. [10]
Much research in graph theory was motivated by attempts to prove that all maps, like this one, can be colored using only four colors so that no areas of the same color share an edge. Kenneth Appel and Wolfgang Haken proved this in 1976. [11]
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance). [11]
The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. [12] At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. Operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life. [13]
Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems. [14]
Logical formulas are discrete structures, as are proofs, which form finite trees [15] or, more generally, directed acyclic graph structures [16] [17] (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied, [18] e.g. infinitary logic.
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. [19] Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
^ Franklin, James (2017). "Discrete and continuous: a fundamental dichotomy in mathematics". Journal of Humanistic Mathematics. 7 (2): 355–378. doi: 10.5642/jhummath.201702.18. Retrieved 30 June 2021.
^ Brotherston, J.; Bornat, R.; Calcagno, C. (January 2008). "Cyclic proofs of program termination in separation logic". ACM SIGPLAN Notices. 43 (1). CiteSeerX 10.1.1.111.1105. doi: 10.1145/1328897.1328453.
Retrieved from " https://en.wikipedia.org/?title=Discrete_mathematics&oldid=1080932814"
|
Engineering Acoustics/Electro-Mechanical Analogies - Wikibooks, open books for an open world
Engineering Acoustics/Electro-Mechanical Analogies
1 Why Circuit Analogs?
2 How Electro-Mechanical Analogies Work
3 The Basic Elements of an Oscillating Mechanical System
4 The Impedance Analog
5 The Mobility Analog
Why Circuit Analogs?
Acoustic devices are often combinations of mechanical and electrical elements. A common example of this would be a loudspeaker connected to a power source. It is useful in engineering applications to model the entire system with one method. This is the reason for using a circuit analogy in a vibrating mechanical system. The same analytic method can be applied to Electro-Acoustic Analogies.
How Electro-Mechanical Analogies Work
An electrical circuit is described in terms of its potential (voltage) and flux (current). To construct a circuit analog of a mechanical system, we must define flux and potential for that system, which can be done in two ways, giving two separate analog systems. The impedance analog takes the force acting on an element as the potential and the velocity of the element as the flux. The mobility analog does the reverse: velocity is the potential and force is the flux.
Impedance Analog
Potential: force ↔ voltage
Flux: velocity ↔ current
Mobility Analog
Potential: velocity ↔ voltage
Flux: force ↔ current
Many find the mobility analog easier to use for a mechanical system: it is more intuitive for force to flow as a current, and for elements moving with the same velocity to be wired in parallel. Either method yields equivalent results, however, and one can be translated into the other using the dual (dot) method.
The Basic Elements of an Oscillating Mechanical System
The Mechanical Spring:
The ideal spring is considered to be operating within its elastic limit, so the behavior can be modeled with Hooke's Law. It is also assumed to be massless and have no damping effects.
{\displaystyle F=-cx,\ }
The Mechanical Mass
In a vibrating system, a mass element opposes acceleration. From Newton's Second Law:
{\displaystyle F=mx^{\prime \prime }=ma=m{\frac {du}{dt}}}
For the spring, the same law can also be written in terms of the velocity u (K denoting the stiffness): {\displaystyle F=K\int \,udt}
The dashpot is an ideal viscous damper which opposes velocity.
{\displaystyle F=Ru}
Ideal Generators
The two ideal generators which can drive any system are an ideal velocity and ideal force generator. The ideal velocity generator can be denoted by a drawing of a crank or simply by declaring
{\displaystyle u(t)=f(t)}
, and the ideal force generator can be drawn with an arrow or by declaring
{\displaystyle F(t)=f(t)}
Simple Damped Mechanical Oscillators
In the following sections we will consider this simple mechanical system as a mobility and impedance analog. It can be driven either by an ideal force or an ideal velocity generator, and we will consider simple harmonic motion. The m in the subscript denotes a mechanical system, which is currently redundant, but can be useful when combining mechanical and acoustic systems.
The Impedance Analog
The Mechanical Spring
In a spring, force is related to the displacement from equilibrium. By Hooke's Law,
{\displaystyle F(t)=c_{m}\Delta x=c_{m}\int _{0}^{t}u(\tau )d\tau }
The equivalent behaviour in a circuit is a capacitor:
{\displaystyle V(t)={\frac {1}{C}}\int _{0}^{t}\,i(\tau )d\tau }
The force on a mass is related to the acceleration (change in velocity). The behaviour, by Newton's Second Law, is:
{\displaystyle F(t)=m_{m}a=m_{m}{\frac {d}{dt}}u(t)}
The equivalent behaviour in a circuit is an inductor:
{\displaystyle V(t)=L{\frac {d}{dt}}i(t)}
For a viscous damper, the force is directly related to the velocity
{\displaystyle F=R_{m}u}
The equivalent is a simple resistor of value
{\displaystyle R_{m}}
{\displaystyle V=Ri}
Thus the simple mechanical oscillator in the previous section becomes a series RCL Circuit:
The current through all three elements is equal (they move at the same velocity), and the sum of the potential drops across the elements equals the potential at the generator (the driving force). The ideal voltage generator depicted here would be equivalent to an ideal force generator.
IMPORTANT NOTE: The velocity measured for the spring and dashpot is the relative velocity (velocity of one end minus the velocity of the other end). The velocity of the mass, however, is the absolute velocity.
Spring Capacitor
{\displaystyle Z_{c}={\frac {V_{c}}{I_{c}}}={\frac {c_{m}}{j\omega }}}
Mass Inductor
{\displaystyle Z_{m}={\frac {V_{m}}{I_{m}}}=j\omega m_{m}}
Dashpot Resistor
{\displaystyle Z_{d}={\frac {V_{m}}{I_{m}}}=R_{m}}
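To make the correspondence concrete, the series RLC equivalent can be summed numerically. This is a minimal sketch with invented element values (not from the text); at the resonance frequency the mass and spring reactances cancel and only the damping term remains:

```python
# Impedance-analog sketch: the mass-spring-damper maps to a series RLC
# circuit, so the mechanical input impedance is the sum of the three
# element impedances listed above.
def mechanical_impedance(omega, m_m, c_m, R_m):
    """Z(jw) = R_m + jw*m_m + c_m/(jw)  (series combination)."""
    jw = 1j * omega
    return R_m + jw * m_m + c_m / jw

# Illustrative values: 1 kg mass, stiffness 1000 N/m, damping 10 N*s/m.
m, c, R = 1.0, 1000.0, 10.0
w0 = (c / m) ** 0.5                      # resonance: reactances cancel
Z_res = mechanical_impedance(w0, m, c, R)
print(abs(Z_res))                        # at resonance |Z| reduces to R_m
```

At any other frequency the reactive terms contribute a nonzero imaginary part, just as in the electrical circuit.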
The Mobility Analog
Like the Impedance Analog above, the equivalent elements can be found by comparing their fundamental equations with the equations of circuit elements. However, since circuit equations usually define voltage in terms of current, in this case the analogy would be an expression of velocity in terms of force, which is the opposite of convention. However, this can be solved with simple algebraic manipulation.
{\displaystyle F(t)=c_{m}\int u(t)dt}
The equivalent behavior for this circuit is the behavior of an inductor.
{\displaystyle \int Vdt=\int L{\frac {d}{dt}}i(t)dt}
{\displaystyle i={\frac {1}{L}}\int \,Vdt}
{\displaystyle F=m_{m}a=m_{m}{\frac {d}{dt}}u(t)}
Similar to the spring element, if we take the general equation for a capacitor and differentiate,
{\displaystyle {\frac {d}{dt}}V(t)={\frac {d}{dt}}{\frac {1}{C}}\int \,i(t)dt}
{\displaystyle i(t)=C{\frac {d}{dt}}V(t)}
Since force and velocity are directly proportional, the only difference is that the mechanical resistance appears inverted:
{\displaystyle F={\frac {1}{r_{m}}}u=R_{m}u}
{\displaystyle i={\frac {1}{R}}V}
The simple mechanical oscillator drawn above would become a parallel RLC Circuit. The potential across each element is the same because they are each operating at the same velocity. This is often the more intuitive of the two analogy methods to use, because you can visualize force "flowing" like a flux through your system. The ideal voltage generator in this drawing would correspond to an ideal velocity generator.
IMPORTANT NOTE: Since the measure of the velocity of a mass is absolute, a capacitor in this analogy must always have one terminal grounded. A capacitor with both terminals at a potential other than ground may be realized physically as an inverter, which completes all elements of this analogy.
Spring Inductor
{\displaystyle Z_{c}={\frac {V_{m}}{I_{m}}}={\frac {j\omega }{c_{m}}}}
Mass Capacitor
{\displaystyle Z_{m}={\frac {V_{c}}{I_{c}}}={\frac {1}{j\omega m_{m}}}}
{\displaystyle Z_{d}={\frac {V_{m}}{I_{m}}}=r_{m}={\frac {1}{R_{m}}}}
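A quick way to check the duality between the two analogs is to verify numerically that the parallel combination of the mobility-analog elements is the reciprocal of the impedance-analog series impedance (i.e., the mechanical mobility). This is a sketch with invented values:

```python
# Duality check for the simple oscillator (illustrative values).
def series_impedance(omega, m_m, c_m, R_m):      # impedance analog
    jw = 1j * omega
    return R_m + jw * m_m + c_m / jw

def parallel_mobility(omega, m_m, c_m, R_m):     # mobility analog
    jw = 1j * omega
    Z_spring = jw / c_m            # inductor, L = 1/c_m
    Z_mass = 1.0 / (jw * m_m)      # capacitor, C = m_m
    Z_dash = 1.0 / R_m             # resistor, r_m = 1/R_m
    return 1.0 / (1.0 / Z_spring + 1.0 / Z_mass + 1.0 / Z_dash)

w, m, c, R = 50.0, 1.0, 1000.0, 10.0
print(parallel_mobility(w, m, c, R) * series_impedance(w, m, c, R))  # ~1
```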
Retrieved from "https://en.wikibooks.org/w/index.php?title=Engineering_Acoustics/Electro-Mechanical_Analogies&oldid=3232717"
|
Phosphate - New World Encyclopedia
Previous (Phosgene)
Next (Phosphorescence)
Some samples of phosphate rock, placed alongside a United States one-cent coin (for scale).
A phosphate, in inorganic chemistry, is a salt of phosphoric acid. In organic chemistry, a phosphate, or organophosphate, is an ester of phosphoric acid. Phosphates are important in biochemistry and biogeochemistry.
4 Phosphate species at different pH values
The largest rock phosphate deposits in North America lie in the Bone Valley region of central Florida, United States, the Soda Springs region of Idaho, and the coast of North Carolina. Smaller deposits are located in Montana, Tennessee, Georgia, and South Carolina near Charleston along Ashley Phosphate road. The small island nation of Nauru and its neighbor Banaba Island, which used to have massive phosphate deposits of the best quality, have been mined excessively. Rock phosphate can also be found on Navassa Island. Morocco, Tunisia, Israel, Togo, and Jordan have large phosphate mining industries as well.
In biological systems, phosphorus is found as a free phosphate ion in solution and is called inorganic phosphate, to distinguish it from phosphates bound in various phosphate esters. Inorganic phosphate is generally denoted Pi and can be created by the hydrolysis of pyrophosphate, which is denoted PPi:
P2O74− + H2O → 2HPO42−
However, phosphates are most commonly found in the form of adenosine phosphates (AMP, ADP, and ATP) and in DNA and RNA, and can be released by the hydrolysis of ATP or ADP. Similar reactions exist for the other nucleoside diphosphates and triphosphates. Phosphoanhydride bonds in ADP and ATP, or other nucleoside diphosphates and triphosphates, contain high amounts of energy, which gives them their vital role in all living organisms. They are generally referred to as high-energy phosphates, as are the phosphagens in muscle tissue. Compounds such as substituted phosphines have uses in organic chemistry but do not seem to have any natural counterparts.
In ecological terms, because of its important role in biological systems, phosphate is a highly sought after resource. Consequently, it is often a limiting reagent in environments, and its availability may govern the rate of growth of organisms. Addition of high levels of phosphate to environments and to micro-environments in which it is typically rare can have significant ecological consequences; for example, booms in the populations of some organisms at the expense of others, and the collapse of populations deprived of resources such as oxygen (see eutrophication). In the context of pollution, phosphates are a principal component of total dissolved solids, a major indicator of water quality.
The general chemical structure of a phosphate
This is the structural formula of the phosphoric acid functional group as found in a weakly acidic aqueous solution. In more basic aqueous solutions, the group will donate the two hydrogen atoms and ionize as a phosphate group with a negative charge of 2. [1]
The phosphate ion is a polyatomic ion with the empirical formula PO43− and a molar mass of 94.973 g/mol; it consists of one central phosphorus atom surrounded by four identical oxygen atoms in a tetrahedral arrangement. The phosphate ion carries a negative three formal charge and is the conjugate base of the hydrogenphosphate ion, HPO42−, which is the conjugate base of H2PO4−, the dihydrogen phosphate ion, which in turn is the conjugate base of H3PO4, phosphoric acid. It is a hypervalent molecule (the phosphorus atom has 10 electrons in its valence shell). An organophosphate, with the formula OP(OR)3, is the corresponding ester of phosphoric acid.
A phosphate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many phosphates are insoluble in water at standard temperature and pressure, except for the alkali metal salts.
In a dilute aqueous solution, phosphate exists in four forms. In strongly basic conditions, the phosphate ion (PO43−) predominates, while in weakly basic conditions, the hydrogen phosphate ion (HPO42−) is prevalent. In weakly acid conditions, the dihydrogen phosphate ion (H2PO4−) is most common. In strongly acid conditions, aqueous phosphoric acid (H3PO4) is the main form.
Phosphate can form many polymeric ions, diphosphate (also pyrophosphate), P2O74−, triphosphate, P3O105−, and so forth. The various metaphosphate ions have an empirical formula of PO3− and are found in many compounds.
Phosphate deposits can contain significant amounts of naturally occurring uranium. Uptake of such soil amendments by plants can lead to crops with elevated uranium concentrations.
The image above shows the annual mean sea surface phosphate concentrations for the World Ocean. Data from the World Ocean Atlas 2001.[2]
Phosphates were once commonly used in laundry detergent in the form of trisodium phosphate (TSP), but because of algal boom-bust cycles tied to the emission of phosphates into watersheds, the sale or use of phosphate detergents is restricted in some areas.
In agriculture phosphate refers to one of the three primary plant nutrients, and it is a component of fertilizers. Rock phosphate is quarried from phosphate beds in sedimentary rocks. In former times, it was simply crushed and used as is, but the crude form is now used only in organic farming. Normally, it is chemically treated to make superphosphate, triple superphosphate, or ammonium phosphates, which have higher concentration of phosphate and are also more soluble, therefore more quickly usable by plants.
Fertilizer grades normally have three numbers; the first is the available nitrogen, the second is the available phosphate (expressed on a P2O5 basis), and the third is the available potash (expressed on a K2O basis). Thus, a 10-10-10 fertilizer would contain ten percent of each, with the remainder being filler.
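The grade arithmetic can be sketched as follows (a hypothetical helper written for illustration, not part of any fertilizer standard):

```python
# Sketch: nutrient content of a fertilizer bag from its N-P-K grade.
# Grades give available N, P2O5, and K2O as percentages by weight.
def nutrient_mass(bag_kg, grade):
    n_pct, p_pct, k_pct = grade
    return {
        "N (kg)": bag_kg * n_pct / 100,
        "P2O5 (kg)": bag_kg * p_pct / 100,
        "K2O (kg)": bag_kg * k_pct / 100,
        "filler (kg)": bag_kg * (100 - n_pct - p_pct - k_pct) / 100,
    }

# A 50 kg bag of 10-10-10: 5 kg each of N, P2O5, and K2O; 35 kg filler.
print(nutrient_mass(50, (10, 10, 10)))
```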
Surface runoff of phosphates from excessively fertilized farmland can be a cause of phosphate pollution leading to eutrophication (nutrient enrichment), algal bloom and consequent oxygen deficit. This can lead to anoxia for fish and other aquatic organisms in the same manner as phosphate-based detergents.
Phosphate compounds are occasionally added to the public drinking water supply to counter plumbosolvency.
Phosphate species at different pH values
The dissociation of phosphoric acid takes place in stages, generating various phosphate species. As the pH of the solution is changed, different phosphate species become dominant in the solution. Consider the following three equilibrium reactions:
H3PO4 ⇌ H+ + H2PO4−
{\displaystyle K_{a1}={\frac {[{\mbox{H}}^{+}][{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}{[{\mbox{H}}_{3}{\mbox{PO}}_{4}]}}\simeq 7.5\times 10^{-3}}
{\displaystyle K_{a2}={\frac {[{\mbox{H}}^{+}][{\mbox{HPO}}_{4}^{2-}]}{[{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}}\simeq 6.2\times 10^{-8}}
{\displaystyle K_{a3}={\frac {[{\mbox{H}}^{+}][{\mbox{PO}}_{4}^{3-}]}{[{\mbox{HPO}}_{4}^{2-}]}}\simeq 2.14\times 10^{-13}}
In a strongly basic solution (pH=13):
{\displaystyle {\frac {[{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}{[{\mbox{H}}_{3}{\mbox{PO}}_{4}]}}\simeq 7.5\times 10^{10}{\mbox{ , }}{\frac {[{\mbox{HPO}}_{4}^{2-}]}{[{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}}\simeq 6.2\times 10^{5}{\mbox{ , }}{\frac {[{\mbox{PO}}_{4}^{3-}]}{[{\mbox{HPO}}_{4}^{2-}]}}\simeq 2.14}
These ratios show that only PO43− and HPO42− are in significant amounts at high pH.
In a solution at neutral pH (pH=7.0, such as in the cytosol):
{\displaystyle {\frac {[{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}{[{\mbox{H}}_{3}{\mbox{PO}}_{4}]}}\simeq 7.5\times 10^{4}{\mbox{ , }}{\frac {[{\mbox{HPO}}_{4}^{2-}]}{[{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}}\simeq 0.62{\mbox{ , }}{\frac {[{\mbox{PO}}_{4}^{3-}]}{[{\mbox{HPO}}_{4}^{2-}]}}\simeq 2.14\times 10^{-6}}
The above ratios indicate that only H2PO4− and HPO42− ions are in significant amounts (62% H2PO4−, 38% HPO42−) at neutral pH. Note that in the extracellular fluid (pH=7.4), this proportion is inverted: 61% HPO42−, 39% H2PO4−.
In a strongly acidic solution (pH=1):
{\displaystyle {\frac {[{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}{[{\mbox{H}}_{3}{\mbox{PO}}_{4}]}}\simeq 0.075{\mbox{ , }}{\frac {[{\mbox{HPO}}_{4}^{2-}]}{[{\mbox{H}}_{2}{\mbox{PO}}_{4}^{-}]}}\simeq 6.2\times 10^{-7}{\mbox{ , }}{\frac {[{\mbox{PO}}_{4}^{3-}]}{[{\mbox{HPO}}_{4}^{2-}]}}\simeq 2.14\times 10^{-12}}
These ratios show that H3PO4 is dominant with respect to H2PO4− in a highly acidic solution. HPO42− and PO43− are practically absent.
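The three equilibrium ratios above can be turned into species fractions at any pH. This sketch uses the same Ka values and reproduces the 62%/38% split quoted for neutral pH:

```python
# Phosphate speciation from the three acid dissociation constants.
KA1, KA2, KA3 = 7.5e-3, 6.2e-8, 2.14e-13

def phosphate_fractions(pH):
    h = 10.0 ** (-pH)
    # Relative amounts, taking [H3PO4] as the reference species:
    h3 = 1.0
    h2 = KA1 / h * h3          # [H2PO4-]/[H3PO4] = Ka1/[H+]
    h1 = KA2 / h * h2          # [HPO4 2-]/[H2PO4-] = Ka2/[H+]
    p = KA3 / h * h1           # [PO4 3-]/[HPO4 2-] = Ka3/[H+]
    total = h3 + h2 + h1 + p
    return {s: v / total for s, v in
            [("H3PO4", h3), ("H2PO4-", h2), ("HPO4 2-", h1), ("PO4 3-", p)]}

f = phosphate_fractions(7.0)
print(round(f["H2PO4-"], 2), round(f["HPO4 2-"], 2))  # 0.62 0.38
```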
↑ Campbell, Neil A. and Reece, Jane B. (2005). Biology, Seventh Edition, San Francisco, California: Benjamin Cummings, 65. ISBN 0-8053-7171-0.
↑ On-line Objective Analyses and Statistics (HTML/ASCII). World Ocean Atlas 2001. National Oceanographic Data Center, National Oceanographic and Atmospheric Administration (2003).
Retrieved from https://www.newworldencyclopedia.org/p/index.php?title=Phosphate&oldid=682286
|
The net pressure gradient that causes the fluid to filter out of the glomeruli into the capsule is
Kidneys form urine from the blood flowing through the glomerular capillaries. About 20% of the plasma fluid filters out into the Bowman's capsule through the thin glomerular-capsular membrane, driven by a net (effective) filtration pressure of about 10 to 15 mm Hg.
Proteins are the polymers of amino acids in which amino acids are joined by peptide bonds. Glycine has the simplest structure.
Nucleotides are the building blocks of nucleic acids; a nucleotide is a composite molecule formed by
(base-sugar-phosphate)n
base-sugar-OH
base-sugar-phosphate
Nucleotides are the building blocks of nucleic acids (DNA/RNA). A single nucleotide comprises:
(i) phosphate molecule
(ii) a five carbon sugar (either ribose or deoxyribose)
(iii) a purine (adenine or guanine) or a pyrimidine (thymine or cytosine or uracil) nitrogenous base.
Nucleoside = Base + Sugar
Nucleotide = Base + Sugar + Phosphate
In a man, the abducens nerve is injured. Which one of the following functions will be affected?
Movement of the eye ball
The abducens (abducent) nerve is a cranial nerve which originates from the ventral surface of the medulla oblongata. It innervates the lateral rectus muscle of the eyeball. It is a motor nerve and controls the movements of the eyeball. Hence, if this nerve is injured in a human, movement of the eyeball will be affected.
Damage to thymus in a child may lead to
loss of antibody mediated immunity
loss of cell mediated immunity
Thymus gland is located in the upper part of thorax near the heart. It is a bilobed, pinkish gland. It secretes thymosin hormone, thymic humoral factor and thymopoietin.
Proliferation of lymphocytes and their differentiation into a variety of clones are induced by these factors. These clones are differentially specialized to destroy different specific categories of antigens and pathogens. The thymus gland thus brings forth T-lymphocytes for cell-mediated immunity.
A student wishes to study the cell structure under a light microscope having 10X eyepiece and 45X objective. He should illuminate the object by which one of the following colours of light so as to get the best possible resolution?
Resolving power, or resolution, is the ability of a lens to distinguish fine detail and structure, that is, to differentiate between two points a specified distance apart.
Resolving Power =
{\displaystyle {\frac {\text{wavelength of light}}{2\times {\text{NA}}}}}
It depends upon 2 factors:
(i) Wavelength of light used for illumination
(ii) Power of objective lenses
Among yellow, green, red, and blue light, blue has the shortest wavelength, so it will give the best resolution.
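As a rough check, the limit-of-resolution formula can be evaluated for the four colours. The wavelengths and the numerical aperture below are assumed typical values, not figures from the text:

```python
# Sketch: limit of resolution d = wavelength / (2 * NA).
# Approximate wavelengths in nm (assumptions for illustration):
wavelengths_nm = {"red": 680, "yellow": 580, "green": 540, "blue": 470}
NA = 0.65  # assumed numerical aperture of a typical 45X dry objective

resolution = {c: wl / (2 * NA) for c, wl in wavelengths_nm.items()}
best = min(resolution, key=resolution.get)
print(best)  # blue: smallest d, hence the finest resolvable detail
```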
Chromosomes are responsible for the transmission of hereditary information from one generation to the next. The arms of a chromosome are known as chromatids. They are joined together at the centre, known as the centromere or primary constriction. During cell division, spindle fibres attach to the centromere and help move the chromosome towards the poles.
Enzymes, vitamins and hormones can be classified into a single category of biological chemicals, because all of these
are exclusively synthesized in the body of a living organism as at present
Enzymes, vitamins and hormones are classified into a single category of biological chemicals because all of them help in the regulation of metabolism.
Enzymes are proteinaceous catalysts produced by cells and are responsible for the high rate and specificity of one or more inter-/intracellular biochemical reactions.
A vitamin is an organic substance synthesized by plants (except vitamin D).
Hormones are chemical messengers which on secretion bring about a specific and adaptive physiological response.
Which of the following substances, if introduced into the blood stream, would cause coagulation at the site of its introduction?
Thromboplastin, a lipoprotein, is released by injured tissue. It reacts with the Ca2+ ions present in blood to form the enzyme prothrombinase, which, again in the presence of Ca2+ ions, inactivates heparin (an anticoagulant) and catalyses the conversion of prothrombin (an inactive plasma protein) into active thrombin.
Thrombin acts as an enzyme and catalyses the conversion of fibrinogen (a soluble plasma protein) into an insoluble fibre-like polymer, fibrin. The fibrin strands form a dense network upon the wound and trap blood corpuscles, forming a clot that seals the wound and stops the bleeding.
Within blood vessels, thromboplastin is not released, so the blood does not clot. External thromboplastin, however, causes blood clotting at the site of its introduction through the formation of prothrombinase.
G-6-P dehydrogenase deficiency is associated with haemolysis of
G-6-P dehydrogenase deficiency is associated with haemolysis of RBCs.
|
Engineering Acoustics/Microphone Design and Operation - Wikibooks, open books for an open world
Engineering Acoustics/Microphone Design and Operation
3.1 48V Phantom Powering
3.2 12V T-Powering
3.3 Electret Condenser Microphones
6 Microphone Manufacturers Links
Microphones are devices which convert pressure fluctuations into electrical signals. Two main methods of achieving this are used in the mainstream entertainment industry today - dynamic microphones and condenser microphones. Piezoelectric transducers can also be used as microphones but they are not commonly used in the entertainment industry.
Dynamic microphones
Dynamic microphones utilise 'Faraday’s Law'. The principle states that when an electrical conductor is moved through a magnetic field, an electrical current is induced within the conductor. In these microphones the magnetic field comes from permanent magnets. There are two common arrangements for the conductor.
Figure 1: Sectional View of Moving-Coil Dynamic Microphone
The first conductor arrangement has a moving coil of wire. The wire is typically copper and is attached to a circular membrane or piston usually made from lightweight plastic or occasionally aluminum. The impinging pressure fluctuation on the piston causes it to move in the magnetic field and thus creates the desired electrical current.
Figure 2: Dynamic Ribbon Microphone
The second conductor arrangement is a ribbon of metallic foil suspended between magnets. The metallic ribbon moves in response to a pressure fluctuation and an electrical current is produced. In both configurations, dynamic microphones follow the same principles as acoustical transducers.
Condenser Microphones
Condenser microphones convert pressure fluctuations into electrical potentials by changes in electrical capacitance, hence they are also known as capacitor microphones. An electrical capacitor consists of two charged electrical conductors placed at some relatively small distance to each other. The basic relation that describes capacitors is:
{\displaystyle Q=C\times V}
where Q is the electrical charge of the capacitor’s conductors, C is the capacitance, and V is the electric potential between the capacitor’s conductors. If the electrical charge of the conductors is held at a constant value, then the voltage between the conductors is inversely proportional to the capacitance; since the capacitance itself falls as the conductors are moved apart, the voltage is directly proportional to the distance between the conductors.
Figure 3: Sectional View of Condenser Microphone
The capacitor in a condenser microphone consists of two parts: the diaphragm and the backplate. The diaphragm moves due to impinging pressure fluctuations and the backplate is held in a stationary position. When the diaphragm moves closer to the backplate, the capacitance increases and a change in electric potential is produced. The diaphragm is typically made of metallic coated Mylar. The assembly that houses both the backplate and the diaphragm is commonly referred to as a capsule.
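Under a parallel-plate approximation (an assumption for illustration; real capsules are more complex), the capsule's behaviour at constant charge can be sketched as follows, with invented dimensions and charge:

```python
# Sketch of the capsule relations: C = eps0*A/d for a parallel-plate
# approximation, and V = Q/C at constant charge, so the potential rises
# as the diaphragm moves away from the backplate.
EPS0 = 8.854e-12              # vacuum permittivity, F/m

def capsule_voltage(Q, area_m2, gap_m):
    C = EPS0 * area_m2 / gap_m    # capacitance of the capsule
    return Q / C                  # potential at constant charge

A = 1e-4                      # 1 cm^2 diaphragm (illustrative)
Q = 1e-10                     # constant charge held by the polarizing supply
v_rest = capsule_voltage(Q, A, 25e-6)   # 25 um gap at rest
v_far = capsule_voltage(Q, A, 26e-6)    # diaphragm pushed 1 um away
print(v_far > v_rest)         # True: larger gap -> smaller C -> higher V
```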
To keep the diaphragm and backplate at a constant charge, an electric potential must be presented to the capsule. There are various ways this can be achieved. The first uses a battery to supply the needed DC potential to the capsule (figure 4). The resistor across the leads of the capsule is very high, in the range of 10 mega ohms, to keep the charge on the capsule close to constant.
Figure 4: Internal Battery Powered Condenser Microphone
An alternative technique for providing a constant charge on the capacitor is to supply a DC electric potential through the microphone cable that carries the microphones output signal. Standard microphone cable is known as XLR cable and is terminated by three pin connectors. Pin one connects to the shield around the cable. The microphone signal is transmitted between pins two and three.
Figure 5: Dynamic Microphone Connected to a Mixing Console via XLR Cable
48V Phantom Powering
The most popular method of providing a DC potential through a microphone cable is to supply +48 V to both of the microphone output leads, pins 2 and 3, and use the shield of the cable, pin 1, as the ground to the circuit. Because pins 2 and 3 see the same potential, any fluctuation of the microphone powering potential will not affect the microphone signal seen by the attached audio equipment. The +48 V will be stepped down at the microphone using a transformer and provide the potential to the backplate and diaphragm in a similar fashion as the battery solution.
Figure 6: Condenser Microphone Powering Techniques
12V T-Powering
A less popular method of running the potential through the cable is to supply 12 V between pins 2 and 3. This method is referred to as T-powering. The main problem with T-powering is that potential fluctuation in the powering of the capsule will be transmitted into an audio signal because the audio equipment analyzing the microphone signal will not see a difference between a potential change across pins 2 and 3 due to a pressure fluctuation and one due to the power source electric potential fluctuation.
Electret Condenser Microphones
Finally, the diaphragm and backplate can be manufactured from a material that maintains a fixed charge, known as 'electret' (from electric+magnet, because these materials can be seen as the electric equivalents of permanent magnets). As a result, these microphones are termed electret condenser microphones (ECM). In early electret designs, the charge on the material tended to become unstable over time. Advances in science and manufacturing have effectively eliminated this problem in present designs.
Two types of microphones are used in the entertainment industry.
Dynamic microphones, which are found in the moving-coil and ribbon configurations. The movement of the conductor in dynamic microphones induces an electric current which is then transformed into the reproduction of sound.
Condenser microphones which utilize the properties of capacitors. The charge on the capsule of condenser microphones can be accomplished by battery, by phantom powering, by T-powering, and by using 'electrets' - materials with a fixed charge.
Microphone Manufacturers Links
Retrieved from "https://en.wikibooks.org/w/index.php?title=Engineering_Acoustics/Microphone_Design_and_Operation&oldid=3561587"
|
Model Gain-Scheduled Control Systems in Simulink - MATLAB & Simulink - MathWorks Italia
Model Scheduled Gains
Scheduled Gain in Controller
Gain-Scheduled Equivalents for Commonly Used Control Elements
Gain-Scheduled Notch Filter
Gain-Scheduled PI Controller
Matrix-Valued Gain Schedules
Custom Gain-Scheduled Control Structures
Tunability of Gain Schedules
In Simulink®, you can model gain-scheduled control systems in which controller gains or coefficients depend on scheduling variables such as time, operating conditions, or model parameters. The library of linear parameter-varying blocks in Control System Toolbox™ lets you implement common control-system elements with variable gains. Use blocks such as lookup tables or MATLAB Function blocks to implement the gain schedule, which gives the dependence of these gains on the scheduling variables.
To model a gain-scheduled control system in Simulink:
Identify the scheduling variables and the signals that represent them in your model. For instance, if your system is a cruising aircraft, then the scheduling variables might be the incidence angle and the airspeed of the aircraft.
Use a lookup table block or a MATLAB Function block to implement a gain or coefficient that depends on the scheduling variables. If you do not have lookup table values or MATLAB® expressions for gain schedules that meet your performance requirements, you can use systune to tune them. See Tune Gain Schedules in Simulink.
Replace ordinary control elements with gain-scheduled elements. For instance, instead of a fixed-coefficient PID controller, use a Varying PID Controller block, in which the gain schedules determine the PID gains.
Add scheduling logic and safeguards to your model as needed.
A gain schedule converts the current values of the scheduling variables into controller gains. There are several ways to implement a gain schedule in Simulink.
Available blocks for implementing lookup tables include:
Lookup tables — A lookup table is a list of breakpoints and corresponding gain values. When the scheduling variables fall between breakpoints, the lookup table interpolates between the corresponding gains. Use the following blocks to implement gain schedules as lookup tables.
1-D Lookup Table (Simulink), 2-D Lookup Table (Simulink), n-D Lookup Table (Simulink) — For a scalar gain that depends on one, two, or more scheduling variables.
Matrix Interpolation (Simulink) — For a matrix-valued gain that depends on one, two, or three scheduling variables. (This block is in the Simulink Extras library.)
MATLAB Function (Simulink) block — When you have a functional expression relating the gains to the scheduling variables, use a MATLAB Function block. If the expression is a smooth function, using a MATLAB function can result in smoother gain variations than a lookup table. Also, if you use a code-generation product such as Simulink Coder™ to implement the controller in hardware, a MATLAB function can result in a more memory-efficient implementation than a lookup table.
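As a rough illustration of what a 1-D lookup table computes, here is a Python sketch of breakpoint-based linear interpolation with clipping outside the breakpoint range (clipping is one common end-of-range choice, and the gain values are invented for the example):

```python
from bisect import bisect_right

def lookup_1d(breakpoints, gains, alpha):
    # Clip outside the table; linearly interpolate between breakpoints.
    if alpha <= breakpoints[0]:
        return gains[0]
    if alpha >= breakpoints[-1]:
        return gains[-1]
    i = bisect_right(breakpoints, alpha) - 1
    t = (alpha - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return gains[i] + t * (gains[i + 1] - gains[i])

# Invented schedule: a proportional gain as a function of one scheduling variable.
bp = [0.0, 2.0, 4.0, 8.0]
kp = [1.0, 1.5, 3.0, 2.0]
print(lookup_1d(bp, kp, 3.0))   # halfway between 1.5 and 3.0 -> 2.25
```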
If you have Simulink Control Design™, you can use systune to tune gain schedules implemented as either lookup tables or MATLAB functions. See Tune Gain Schedules in Simulink.
As an example, the model rct_CSTR includes a PI controller and a lead compensator in which the controller gains are implemented as lookup tables using 1-D Lookup Table (Simulink) blocks. Open that model and examine the controllers.
open_system(fullfile(matlabroot,'examples','controls_id','main','rct_CSTR.slx'))
Both the Concentration controller and Temperature controller blocks take the CSTR plant output, Cr, as an input. This value is both the controlled variable of the system and the scheduling variable on which the controller action depends. Double-click the Concentration controller block.
This block is a PI controller in which the proportional gain Kp and integrator gain Ki are determined by feeding the scheduling parameter Cr into a 1-D Lookup Table block. Similarly, the Temperature controller block contains three gains implemented as lookup tables.
Use the Linear Parameter Varying block library of Control System Toolbox to implement common control elements with variable parameters or coefficients. These blocks provide common elements in which the gains or parameters are available as external inputs. The following table lists some applications of these blocks.
Use these blocks to implement a Butterworth lowpass filter in which the cutoff frequency varies with scheduling variables.
Use these blocks to implement a notch filter in which the notch frequency, width, and depth vary with scheduling variables.
Varying PID Controller
Discrete Varying PID
Varying 2DOF PID
Discrete Varying 2DOF PID
These blocks are preconfigured versions of the PID Controller and PID Controller (2DOF) blocks. Use them to implement PID controllers in which the PID gains vary with scheduling variables.
Use these blocks to implement a transfer function of any order in which the polynomial coefficients of the numerator and denominator vary with scheduling variables.
Use these blocks to implement a state-space controller in which the A, B, C, and D matrices vary with the scheduling variables.
Use these blocks to implement a gain-scheduled observer-form state-space controller, such as an LQG controller. In such a controller, the A, B, C, D matrices and the state-feedback and state-observer gain matrices vary with the scheduling variables.
For example, the subsystem in the following illustration uses a Varying Notch Filter block to implement a filter whose notch frequency varies as a function of two scheduling variables. The relationship between the notch frequency and the scheduling variables is implemented in a MATLAB function.
As another example, the following subsystem is a gain-scheduled discrete-time PI controller in which both the proportional and integral gains depend on the same scheduling variable. This controller uses 1-D Lookup Table blocks to implement the gain schedules.
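The same idea can be sketched outside Simulink. The following Python class is a minimal stand-in for a discrete gain-scheduled PI controller: the gain schedules are passed in as callables (playing the role of the lookup tables), and a forward-Euler integrator is assumed for this sketch.

```python
class GainScheduledPI:
    """Discrete PI controller whose gains come from an external schedule."""

    def __init__(self, kp_schedule, ki_schedule, dt):
        self.kp_schedule = kp_schedule   # callable: scheduling var -> Kp
        self.ki_schedule = ki_schedule   # callable: scheduling var -> Ki
        self.dt = dt
        self.integral = 0.0

    def step(self, error, alpha):
        # Look up both gains from the current scheduling variable, then
        # apply the PI law with a forward-Euler integrator (an assumption
        # of this sketch, not a statement about the Simulink block).
        kp = self.kp_schedule(alpha)
        ki = self.ki_schedule(alpha)
        self.integral += error * self.dt
        return kp * error + ki * self.integral

# Made-up schedules standing in for the two lookup tables.
ctrl = GainScheduledPI(kp_schedule=lambda a: 1.0 + a,
                       ki_schedule=lambda a: 0.5, dt=0.1)
u = ctrl.step(error=2.0, alpha=1.0)   # Kp = 2.0 at this operating point
```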
You can also implement matrix-valued gain schedules in Simulink. A matrix-valued gain schedule takes one or more scheduling variables and returns a matrix rather than a scalar value. For instance, suppose that you want to implement a time-varying LQG controller of the form:
\begin{array}{c}{\dot{x}}_{e}=A{x}_{e}+Bu+L\left(y-C{x}_{e}-Du\right)\\ u=-K{x}_{e},\end{array}
where, in general, the state-space matrices A, B, C, and D, the state-feedback matrix K, and the observer-gain matrix L all vary with time. In this case, time is the scheduling variable, and the gain schedule determines the values of the matrices at a given time.
In your Simulink model, you can implement matrix-valued gain schedules using:
MATLAB Function (Simulink) block — Specify a MATLAB function that takes scheduling variables and returns matrix values.
Matrix Interpolation (Simulink) block — Specify a lookup table to associate a matrix value with each scheduling-variable breakpoint. Between breakpoints, the block interpolates the matrix elements. (This block is in the Simulink Extras library.)
For the LQG controller, use either MATLAB Function blocks or Matrix Interpolation blocks to implement the time-varying matrices as inputs to a Varying Observer Form block. For example:
In this implementation, the time-varying matrices are each implemented as a MATLAB Function block in which the associated function takes the simulation time and returns a matrix of appropriate dimensions.
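A minimal Python sketch of such a matrix-valued schedule, mimicking the elementwise interpolation that the Matrix Interpolation block performs between breakpoints (the breakpoints and gain values here are invented):

```python
def interp_matrix(times, matrices, t):
    # Elementwise linear interpolation of a matrix-valued schedule,
    # clipping outside the breakpoint range.
    if t <= times[0]:
        return [row[:] for row in matrices[0]]
    if t >= times[-1]:
        return [row[:] for row in matrices[-1]]
    i = next(k for k in range(len(times) - 1) if times[k + 1] > t)
    w = (t - times[i]) / (times[i + 1] - times[i])
    return [[(1 - w) * a + w * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(matrices[i], matrices[i + 1])]

# Invented example: a 1x2 state-feedback gain K(t) given at two instants.
times = [0.0, 10.0]
K_vals = [[[1.0, 0.0]], [[3.0, 2.0]]]
print(interp_matrix(times, K_vals, 5.0))   # [[2.0, 1.0]]
```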
If you have Simulink Control Design, you can tune matrix-valued gain schedules implemented as either MATLAB Function blocks or as Matrix Interpolation blocks. However, to tune a Matrix Interpolation block, you must set Simulate using to Interpreted execution. See the Matrix Interpolation (Simulink) block reference page for information about simulation modes.
You can also use the scheduled gains to build your own control elements. For example, the model rct_CSTR includes a gain-scheduled lead compensator with three coefficients that depend on the scheduling variable, CR. To see how this compensator is implemented, open the model and examine the Temperature controller subsystem.
Here, the overall gain Kt, the zero location a, and the pole location b are each implemented as a 1-D lookup table that takes the scheduling variable as input. The lookup tables feed directly into product blocks.
For a lookup table or MATLAB Function block that implements a gain schedule to be tunable with systune, it must ultimately feed into either:
A block in the Linear Parameter Varying block library.
A Product block that applies the gain to a given signal. For instance, if the Product block takes as inputs a scheduled gain g(α) and a signal u(t), then the output signal of the block is y(t) = g(α)u(t).
There can be one or more of the following blocks between the lookup table or MATLAB Function block and the Product block or parameter-varying block:
Blocks that are equivalent to a unit gain in the linear domain, including:
Transport Delay, Variable Transport Delay
Saturate, Deadzone
Rate Limiter, Rate Transition
Quantizer, Memory, Zero-Order Hold
Switch blocks, including:
Inserting such blocks can be useful, for example, to constrain the gain value to a certain range, or to specify how often the gain schedule is updated.
|
LMIs in Control/pages/LMI for H2/Hinf Polytopic Controller for Robot Arm. - Wikibooks, open books for an open world
LMIs in Control/pages/LMI for H2/Hinf Polytopic Controller for Robot Arm.
Given a state space system of
{\displaystyle {\begin{aligned}{\dot {x}}(t)&=Ax(t)+B_{1}w(t)+B_{2}u(t)\\z(t)&=C_{1}x(t)+D_{11}w(t)+D_{12}u(t)\\y(t)&=C_{2}x(t)+D_{21}w(t)+D_{22}u(t)\\{\dot {x}}_{K}(t)&=A_{K}x_{K}(t)+B_{K}y(t)\\u(t)&=C_{K}x_{K}(t)+D_{K}y(t)\end{aligned}}}
{\displaystyle A_{K},}
{\displaystyle B_{K},}
{\displaystyle C_{K},}
{\displaystyle D_{K},}
form the K matrix as defined below. This, therefore, means that the Regulator system can be re-written as:
{\displaystyle {\begin{bmatrix}{\dot {x}}(t)\\z_{1}(t)\\z_{2}(t)\\y(t)\end{bmatrix}}={\begin{bmatrix}{\begin{array}{c|c c|c}A&{B}&0&{B}\\C&D&0&D\\0&0&0&I\\C&D&I&D\end{array}}\end{bmatrix}}{\begin{bmatrix}x(t)\\w_{1}(t)\\w_{2}(t)\\u(t)\end{bmatrix}}}
With the above 9-matrix representation in mind, we can now derive the controller needed to solve the problem, which will be accomplished through the use of LMIs. First, we take our
{\displaystyle H_{2}}
{\displaystyle H_{\infty }}
state-feedback controller and make some modifications to it. More specifically, since the focus is modeling the worst-case scenario of a given parameter, we modify the LMIs so that the mixed
{\displaystyle H_{2}}
{\displaystyle H_{\infty }}
controller is polytopic.
{\displaystyle H_{2}}
{\displaystyle H_{\infty }}
Polytopic Controller for Quadrotor with Robotic Arm.
Recall that from the 9-matrix framework ,
{\displaystyle w_{1}(t)}
{\displaystyle {w_{2}}(t)}
represent our process and sensor noises respectively and
{\displaystyle u(t)}
represents our input channel. Suppose we were interested in modeling noise across all three of these channels. Then the best way to model uncertainty across all three cases would be modifying the
{\displaystyle D}
matrix to
{\displaystyle D_{i}}
{\displaystyle i=1,..,k}
{\displaystyle D_{i}=nI}
{\displaystyle n}
is a constant noise value). This, in turn, results in our
{\displaystyle D_{11}}
{\displaystyle D_{22}}
matrices being modified to
{\displaystyle D_{11,i}}
{\displaystyle D_{22,i}}
Using the LMIs given for the mixed
{\displaystyle H_{2}}
{\displaystyle H_{\infty }}
-optimal state-feedback controller from Peet Lecture 11 as reference, our resulting polytopic LMI becomes:
{\displaystyle \min \limits _{\gamma _{1},\gamma _{2},X_{1},Y_{1},Z,A_{n},B_{n},C_{n},D_{n}}}
{\displaystyle \gamma _{1}^{2}}
{\displaystyle \gamma _{2}^{2}}
{\displaystyle {\begin{aligned}{\begin{bmatrix}AA_{i}&AB_{i}^{T}&AC_{i}^{T}\\AB_{i}&BB_{i}&BC_{i}^{T}\\AC_{i}&BC_{i}&-I\end{bmatrix}}&<0\\{\begin{bmatrix}AA_{i}&AB_{i}^{T}&AC_{i}^{T}&AD_{i}^{T}\\AB_{i}&BB_{i}&BC_{i}^{T}&BD_{i}^{T}\\AC_{i}&BC_{i}&-I&CD_{i}^{T}\\AD_{i}&BD_{i}&CD_{i}&{-\gamma _{2}^{2}}I\end{bmatrix}}&<0\\{\begin{bmatrix}Y_{1}&I&AD_{i}^{T}\\I&X_{1}&BD_{i}^{T}\\AD_{i}&BD_{i}&Z\end{bmatrix}}&>0\\\end{aligned}}}
{\displaystyle trace(Z)<\gamma _{1}^{2}}
where i = 1, ..., k,
{\displaystyle ||S(K,P)||_{H_{2}}}
{\displaystyle <\gamma _{1}}
{\displaystyle ||S(K,P)||_{H_{\infty }}<\gamma _{2}}
{\displaystyle {\begin{aligned}AA_{i}=AY_{1}+Y_{1}A^{T}+B_{2}C_{n}+C_{n}^{T}B_{2}^{T}\\AB_{i}=A^{T}+A_{n}+[B_{2}D_{n}C_{2}]^{T}\\AC_{i}=[B_{1}+B_{2}D_{n}D_{21,i}]^{T}\\AD_{i}=C_{1}Y_{1}+D_{12,i}C_{n}\\BB_{i}=X_{1}A+A^{T}X_{1}+B_{n}C_{2}+C_{2}^{T}B_{n}^{T}\\BC_{i}=[X_{1}B_{1}+B_{n}D_{21,i}]^{T}\\BD_{i}=C_{1}+D_{12,i}D_{n}C_{2}\\CD_{i}=D_{11,i}+D_{12,i}D_{n}D_{21,i}\end{aligned}}}
After solving for both the optimal
{\displaystyle H_{2}}
{\displaystyle H_{\infty }}
gain ratios as well as
{\displaystyle {X_{1}},{Y_{1}},Z,{A_{n}},{B_{n}},{C_{n}},{D_{n}}}
, we can then construct our worst-case scenario controller by setting our
{\displaystyle D}
matrix (and consequently our
{\displaystyle {D_{11}},{D_{12}},{D_{21}},{D_{22}}}
matrices) to the highest
{\displaystyle n}
value. This results in the controller:
{\displaystyle {\begin{aligned}K={\begin{bmatrix}{\begin{array}{c|c}{A_{K}}&{B_{K}}\\\hline {C_{K}}&{D_{K}}\\\end{array}}\end{bmatrix}}\end{aligned}}}
which is constructed by setting:
{\displaystyle {\begin{aligned}&{D_{K}}=(I+D_{K_{2}}D_{22})^{-1}{D_{K_{2}}}\\&{B_{K}}={B_{K_{2}}}(I+D_{22}D_{K})\\&{C_{K}}=(I-D_{K}D_{22}){C_{K_{2}}}\\&{A_{K}}={A_{K_{2}}}-{B_{K}}(I-D_{22}D_{K})^{-1}D_{22}{C_{K}}\end{aligned}}}
{\displaystyle {\begin{aligned}&{X_{2}}{Y_{2}^{T}}=I-{X_{1}}{Y_{1}}\\&{\begin{bmatrix}{\begin{array}{c|c}{A_{K_{2}}}&{B_{K_{2}}}\\\hline {C_{K_{2}}}&{D_{K_{2}}}\end{array}}\end{bmatrix}}={\begin{bmatrix}{X_{2}}&{X_{1}}{B_{2}}\\0&I\end{bmatrix}}^{-1}{\begin{bmatrix}{A_{n}}-{X_{1}}A{Y_{1}}&{B_{n}}\\{C_{n}}&{D_{n}}\end{bmatrix}}{\begin{bmatrix}Y_{2}^{T}&0\\{C_{2}}{Y_{1}}&I\end{bmatrix}}\end{aligned}}}
The LMI is feasible and the resulting controller is found to be stable under normal noise disturbances for all states.
1. An LMI-Based Approach for Altitude and Attitude Mixed H2/Hinf-Polytopic Regulator Control of a Quadrotor Manipulator by Aditya Ramani and Sudhanshu Katarey.
Retrieved from "https://en.wikibooks.org/w/index.php?title=LMIs_in_Control/pages/LMI_for_H2/Hinf_Polytopic_Controller_for_Robot_Arm.&oldid=3621676"
|
Home : Support : Online Help : Mathematics : Factorization and Solving Equations : solve : float
expressions involving floating-point numbers
equations (as for solve), but with floating-point values
The solve function with floating-point numbers works by converting the floating-point numbers to approximate rationals, calling solve with these converted arguments, and converting the results back to floating-point numbers using evalf.
This can be convenient for solving equations with a combination of floating-point numbers and parameters (since fsolve will not solve equations with unassigned parameters). In most cases, it is a better idea to convert the input into exact values manually since this will generally give more meaningful answers.
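The same rationalize-solve-evalf workflow can be sketched in Python for the quadratic example below, using `fractions.Fraction` for the exact-rational step; this illustrates the idea, not Maple's internals.

```python
from fractions import Fraction
import math

def solve_quadratic_via_rationals(a, b, c):
    # Convert the float coefficients to exact rationals first, do the
    # exact algebra, and only return to floating point at the end.
    a, b, c = (Fraction(x).limit_denominator() for x in (a, b, c))
    disc = b * b - 4 * a * c          # exact rational discriminant
    root = math.sqrt(disc)            # the one numeric step, done last
    return float((-b + root) / (2 * a)), float((-b - root) / (2 * a))

# x^2 - 3x + 0.01: the coefficient 0.01 becomes exactly 1/100 before solving.
r1, r2 = solve_quadratic_via_rationals(1.0, -3.0, 0.01)
print(r1, r2)   # approximately 2.996662955 and 0.003337045
```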
\mathrm{eq}≔{x}^{2}-3x+0.01:
\mathrm{solve}\left(\mathrm{eq},x\right)
\textcolor[rgb]{0,0,1}{2.996662955}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{0.00333704529}
\mathrm{eqe}≔\mathrm{convert}\left(\mathrm{eq},'\mathrm{rational}','\mathrm{exact}'\right)
\textcolor[rgb]{0,0,1}{\mathrm{eqe}}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{100}}
\mathrm{sol}≔\mathrm{solve}\left(\mathrm{eqe},x\right)
\textcolor[rgb]{0,0,1}{\mathrm{sol}}\textcolor[rgb]{0,0,1}{≔}\frac{\textcolor[rgb]{0,0,1}{3}}{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\sqrt{\textcolor[rgb]{0,0,1}{14}}}{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{,}\frac{\textcolor[rgb]{0,0,1}{3}}{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\sqrt{\textcolor[rgb]{0,0,1}{14}}}{\textcolor[rgb]{0,0,1}{5}}
\mathrm{evalf}\left(\mathrm{sol}\right)
\textcolor[rgb]{0,0,1}{2.996662955}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{0.003337045}
The variable x is a parameter in the following example.
\mathrm{solve}\left({3.7y+z=\mathrm{sin}\left(x\right),{x}^{2}-y=z},{y,z}\right)
{\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{0.3703703704}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{0.3703703704}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{sin}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{1.370370370}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{0.3703703704}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{sin}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)}
\mathrm{fsolve}\left({3.7y+z=\mathrm{sin}\left(x\right),{x}^{2}-y=z},{y,z}\right)
Error, (in fsolve) x is in the equation, and is not solved for
|
Graph each polynomial function. f(x)=x^{4} - x^{3} - 6x^{2} + 4x + 8
f\left(x\right)={x}^{4}\text{ }-\text{ }{x}^{3}\text{ }-\text{ }6{x}^{2}\text{ }+\text{ }4x\text{ }+\text{ }8
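For graphing, the x-intercepts can be found by testing the divisors of the constant term (the rational root theorem). A small Python sketch:

```python
def horner(coeffs, x):
    # Evaluate a polynomial given by coefficients [a_n, ..., a_0] at x.
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# f(x) = x^4 - x^3 - 6x^2 + 4x + 8; any integer zero must divide 8.
coeffs = [1, -1, -6, 4, 8]
candidates = [d for d in range(-8, 9) if d != 0 and 8 % abs(d) == 0]
zeros = sorted(x for x in candidates if horner(coeffs, x) == 0)
print(zeros)   # [-2, -1, 2]
```

Dividing out the factors (x + 2)(x + 1)(x − 2) leaves another factor of (x − 2), so x = 2 is a double root where the graph touches the x-axis without crossing it.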
For the following exercises, use the given information about the polynomial graph to write the equation. Degree 5. Roots of multiplicity 2 at x = 3 and x = 1, and a root of multiplicity 1 at x = −3. y-intercept at (0, 9)
For the following exercise, for each polynomial, a. find the degree. b. find the zeros, if any. c. find the y-intercept(s), if any. d. use the leading coefficient to determine the graph’s end behavior. and e. determine algebraically whether the polynomial is even, odd, or neither.
f\left(x\right)=-3{x}^{2}+6x
f\left(x\right)=4{x}^{5}-8{x}^{4}-x+2
The graph of a polynomial function has the following characteristics: a) Its domain and range are the set of all real numbers. b) There are turning points at x = -2, 0, and 3. a) Draw the graphs of two different polynomial functions that have these three characteristics. b) What additional characteristics would ensure that only one graph could be drawn?
Sketch the graphs of two functions that are not polynomial functions.
f\left(x\right)=\frac{1}{x}
f\left(x\right)=\sqrt{x}
f\left(x\right)=4{x}^{4}+7{x}^{2}-2
For the following exercises, use the given information about the polynomial graph to write the equation. Degree 4. Root of multiplicity 2 at x = 4, and roots of multiplicity 1 at x = 1 and x = −2. y-intercept at (0, −3).
|
How I Built My Blog - tunkshif.one
blog-howto
#blog#nextjs
I've been thinking of a way to easily manage my blog posts, images, and comments. With traditional static site generators like Hexo or Hugo, all the content lives in a local filesystem. Every time you add a new post, you have to regenerate the site manually. And if you use Netlify or Vercel to host your blog, you still have to push new posts to the repository to trigger a build. You also need separate tools to manage images and comments.
The other day, I came across the sairin project, which inspired me to use GitHub Issues as a CMS, combined with the incremental static regeneration feature from Next.js.
Write posts as GitHub Issues; each issue is a blog post
Use GitHub API to fetch issue data, then use remarkjs to process it, and generate blog pages at build time
When new posts are added, you don't have to rebuild the whole site thanks to Next.js ISR
In this way, you can easily add a new post, upload images, and write comments in an issue.
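As a rough sketch of the fetch-and-map step, the following Python function converts a GitHub issue object into the fields a blog page needs. The keys read from the issue (`title`, `body`, `created_at`, `labels`) are real GitHub REST API fields; the output field names and the slug rule are this sketch's own choices, not sairin's (the actual site uses Next.js and remark).

```python
import re

def issue_to_post(issue):
    # Derive a URL slug from the issue title (a choice made for this
    # sketch), and pass the Markdown body through untouched so a
    # Markdown processor such as remark can render it later.
    slug = re.sub(r"[^a-z0-9]+", "-", issue["title"].lower()).strip("-")
    return {
        "slug": slug,
        "title": issue["title"],
        "body": issue["body"],
        "created_at": issue["created_at"],
        "labels": [label["name"] for label in issue["labels"]],
    }

sample = {
    "title": "How I Built My Blog",
    "body": "I've been thinking of a way...",
    "created_at": "2022-01-01T00:00:00Z",
    "labels": [{"name": "blog"}, {"name": "nextjs"}],
}
post = issue_to_post(sample)
print(post["slug"])   # how-i-built-my-blog
```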
Thanks to sairin for the great idea and ecklf.co for design inspiration.
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis ultrices ornare euismod. Nam ut imperdiet tortor. Sed tempus placerat lacus ac auctor. Duis euismod tristique vulputate. Fusce blandit a massa eu dignissim. Nulla laoreet felis libero, ac scelerisque velit facilisis vel. Pellentesque ornare pharetra mollis. Pellentesque at imperdiet nisl, sit amet hendrerit nunc. Nunc tincidunt lectus accumsan dui condimentum, vel suscipit est consectetur. Phasellus scelerisque posuere sapien, placerat volutpat lorem varius a. Proin fringilla in sem molestie tincidunt. Proin porttitor nunc et diam faucibus consectetur. Nullam pharetra eleifend elit, non luctus lectus accumsan et. Praesent egestas varius augue, sed euismod urna ultricies vitae.
send self, :hello
5000 -> IO.puts :timeout
E=mc^2
|
At one time, the hockey teams received two points when they won a game and one point when they tied. One season, a team won a championship with 60 points. They won 9 more games than they tied. How many wins and how many ties did the team have?
Let x be the number of ties, so the number of wins is x + 9. Each win earns 2 points and each tie earns 1 point. Since the team won with 60 points, we can write:
2(x + 9) + 1(x) = 60
Solve for x:
2x + 18 + x = 60
3x + 18 = 60
3x = 42
x = 14
So the team had 14 + 9 = 23 wins and 14 ties.
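A quick brute-force check of the answer:

```python
# Wins earn 2 points, ties earn 1; the total is 60 and wins = ties + 9.
wins = ties = None
for t in range(61):
    if 2 * (t + 9) + t == 60:
        ties, wins = t, t + 9
        break
print(wins, ties)   # 23 14
```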
What does algebra include?
Solve the equations and inequalities:
\frac{{2}^{x}}{3}\le \frac{{5}^{x}}{4}
Please, solve the system of equations:
\left\{\begin{array}{ccccc}5x& +& y& =& 14\\ 2x& +& y& =& 5\end{array}
A parallelogram
XYZW,\stackrel{―}{YZ}=2b,\stackrel{―}{ZW}=a+2,\stackrel{―}{WX}=b+1,\stackrel{―}{XY}=3a-4
{\mathrm{log}}_{2}\left(3x-1\right)={\mathrm{log}}_{2}\left(x+1\right)+3
Solve, please, the equation where v is a real number
{\left(v-8\right)}^{3}-64=0
Tell the definition of algebra
|
To explain: the shape of the new distribution, and how the measures of center and variation are affected in the given data.
A data set is a symmetric distribution.
Every value in the data set is doubled.
Bivariate joint frequency distributions are presented in contingency tables.
The total row and total column give the marginal distributions, while the body of the table gives the joint frequencies.
The distribution is stretched, but its shape remains symmetric.
When each value in a numerical data set is multiplied by a real number k, where k > 0, the measures of center and variation are found by multiplying the original measures by k.
Here k = 2, so the measures of center and variation are also doubled.
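A quick numeric check with Python's statistics module (the data set is invented):

```python
import statistics

data = [2, 4, 5, 6, 8]              # a small symmetric data set
doubled = [2 * x for x in data]

# Multiplying every value by k = 2 doubles the measures of center...
assert statistics.mean(doubled) == 2 * statistics.mean(data)
assert statistics.median(doubled) == 2 * statistics.median(data)
# ...doubles the measures of variation...
assert statistics.pstdev(doubled) == 2 * statistics.pstdev(data)
# ...and preserves symmetry about the center.
m = statistics.mean(doubled)
assert sorted(x - m for x in doubled) == sorted(-(x - m) for x in doubled)
```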
The pathogen Phytophthora capsici causes bell pepper plants to wilt and die. A research project was designed to study the effect of soil water content and the spread of the disease in fields of bell peppers. It is thought that too much water helps spread the disease. The fields were divided into rows and quadrants. The soil water content (percent of water by volume of soil) was determined for each plot. An important first step in such a research project is to give a statistical description of the data. Soil Water Content for Bell Pepper Study: 15 9 15 10 14 15 12 10 14 12 9 10 14 9 10 11 13 10 9 9 12 7 9 11 14 16 11 13 16 11 14 12 11 8 10 10 9 11 11 8 11 13 11 12 16 13 15 10 13 6. (a) Make a box-and-whisker plot of the data. Find the interquartile range.
Suppose that, for a sample of pairs of observations from two variables, the linear correlation coefficient, r , is negative. Does this result necessarily imply that the variables are negatively linearly correlated?
Construct 90% and 95% confidence intervals for the population proportion. Interpret the results and compare the widths of the confidence intervals. In a survey of 2241 U.S. adults in a recent year, 650 made a New Year's resolution to eat healthier.
\text{Lost weight? by Diet}
\begin{array}{lcccc}& \mathrm{A}& \mathrm{B}& \mathrm{C}& \text{ Total }\\ \text{ Yes }& & 60& & 180\\ \text{ No }& & 40& & 120\\ \text{ Total }& 90& 100& 110& 300\end{array}
The following advanced exercises use a generalized ratio test to determine convergence of some series that arise in particular applications and for which the standard tests, including the ratio and root tests, are not powerful enough to determine convergence. The test states that if
\underset{n\to \mathrm{\infty }}{\mathrm{lim}}\frac{{a}_{2n}}{{a}_{n}}<\frac{1}{2}
then
\sum {a}_{n}
converges, while if
\underset{n\to \mathrm{\infty }}{\mathrm{lim}}\frac{{a}_{2n+1}}{{a}_{n}}>\frac{1}{2}
then
\sum {a}_{n}
diverges.
{a}_{n}=\frac{1}{1+x}\frac{2}{2+x}\dots \frac{n}{n+x}\frac{1}{n}=\frac{\left(n-1\right)!}{\left(1+x\right)\left(2+x\right)\dots \left(n+x\right)}
\frac{{a}_{2n}}{{a}_{n}}\le \frac{{e}^{-x/2}}{2}
. For which x > 0 does the generalized ratio test imply convergence of
\sum _{n=1}^{\mathrm{\infty }}{a}_{n}
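Telescoping the quotient gives a_{2n}/a_n = ∏_{j=1}^{n} (n+j−1)/(n+j+x), which can be checked numerically. For x = 2 the ratio settles near 2^−(x+1) = 1/8, below the threshold e^(−x/2)/2, so the generalized ratio test gives convergence for that x; the sketch below verifies this for one value of n and x:

```python
import math

def ratio_a2n_over_an(n, x):
    # a_n = (n-1)! / ((1+x)(2+x)...(n+x)); in the quotient a_{2n}/a_n
    # the factorials and the first n denominator factors cancel, leaving
    # prod_{j=1..n} (n+j-1)/(n+j+x), which avoids factorial overflow.
    r = 1.0
    for j in range(1, n + 1):
        r *= (n + j - 1) / (n + j + x)
    return r

x = 2.0
r = ratio_a2n_over_an(2000, x)
# r is close to 2^-(x+1) = 0.125, below e^(-x/2)/2 ~= 0.184.
```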
f\left(t\right)=-0.00447{t}^{3}+0.09864{t}^{2}+0.05192t+0.8\left(0\le t\le 15\right)
gives the number of surgeries (in millions) performed in physicians' offices in year t, with
t=0
corresponding to the beginning of 1986.
a. Plot the graph of f in the viewing window
\left[0,15\right]×\left[0,10\right]
|
Consider the solid that is bounded below by the cone
z=\sqrt{3{x}^{2}+3{y}^{2}}
and above by the sphere
{x}^{2}+{y}^{2}+{z}^{2}=16.
Set up only the appropriate triple integrals in cylindrical and spherical coordinates needed to find the volume of the solid.
To set the triple integral in cylindrical coordinates
By using relation,
x=r\mathrm{cos}\theta
y=r\mathrm{sin}\theta
z=z
The cone
z=\sqrt{3{x}^{2}+3{y}^{2}}
in cylindrical coordinate becomes,
z=\sqrt{3{r}^{2}}=\sqrt{3}\,r
And the sphere become
{r}^{2}+{z}^{2}=16
To find the limit of r,
⇒\sqrt{3{r}^{2}}=\sqrt{16-{r}^{2}}
⇒3{r}^{2}=16-{r}^{2}
⇒{r}^{2}=4
⇒r=2
E=\left\{\left(r,\theta ,z\right)|0\le \theta \le 2\pi ,0\le r\le 2,\sqrt{3}\,r\le z\le \sqrt{16-{r}^{2}}\right\}
Hence, the triple integral for the volume by cylindrical coordinates is
V={\int }_{0}^{2\pi }{\int }_{0}^{2}{\int }_{\sqrt{3}\,r}^{\sqrt{16-{r}^{2}}}r\,dz\,dr\,d\theta
Now, to set the triple integral in spherical coordinates
{p}^{2}={x}^{2}+{y}^{2}+{z}^{2}
\mathrm{tan}\theta =\frac{y}{x}
\phi =\mathrm{arccos}\left(\frac{z}{\sqrt{{x}^{2}+{y}^{2}+{z}^{2}}}\right)
The sphere
{x}^{2}+{y}^{2}+{z}^{2}=16\text{ }gives\text{ }{p}^{2}=16=>p=4
From the cone
z=\sqrt{3{x}^{2}+3{y}^{2}}=\sqrt{3}\,r
⇒p\mathrm{cos}\left(\phi \right)=\sqrt{3}p\mathrm{sin}\left(\phi \right)
⇒\mathrm{tan}\left(\phi \right)=\frac{1}{\sqrt{3}}
\phi =\frac{\pi }{6}
E=\left\{\left(p,\phi ,\theta \right)|0\le p\le 4,0\le \phi \le \frac{\pi }{6},0\le \theta \le 2\pi \right\}
Hence, the triple integral for the volume of the solid by spherical coordinate is
V={\int }_{0}^{2\pi }{\int }_{0}^{\frac{\pi }{6}}{\int }_{0}^{4}{p}^{2}\mathrm{sin}\left(\phi \right)\,dp\,d\phi \,d\theta
Now, evaluating the integral of cylindrical coordinate we get.
⇒V={\int }_{0}^{2\pi }{\int }_{0}^{2}{\int }_{\sqrt{3}\,r}^{\sqrt{16-{r}^{2}}}r\,dz\,dr\,d\theta
⇒V=17.9582
And evaluating the integral of spherical coordinate
⇒V={\int }_{0}^{2\pi }{\int }_{0}^{\frac{\pi }{6}}{\int }_{0}^{4}{p}^{2}\mathrm{sin}\left(\phi \right)\,dp\,d\phi \,d\theta
⇒V=17.9582
Thus, by both coordinate systems, we get the same volume.
Therefore, both triple integrals are appropriate.
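The value 17.9582 can be double-checked another way: the solid is a spherical sector of radius R = 4 and half-angle φ₀ = π/6, whose closed-form volume is (2π/3)R³(1 − cos φ₀), and the cylindrical integral can also be evaluated numerically:

```python
import math

# Closed-form check: spherical sector with R = 4 and phi0 = pi/6.
R, phi0 = 4.0, math.pi / 6
v_exact = (2 * math.pi / 3) * R**3 * (1 - math.cos(phi0))   # ~17.958

# Midpoint-rule check of the cylindrical integral
# V = int_0^{2pi} int_0^2 int_{sqrt(3) r}^{sqrt(16 - r^2)} r dz dr dtheta,
# with the theta and z integrations done analytically first.
n = 20000
h = 2.0 / n
v_cyl = sum(
    2 * math.pi * r * (math.sqrt(16 - r * r) - math.sqrt(3) * r) * h
    for r in ((i + 0.5) * h for i in range(n))
)
```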
a\right)\stackrel{\to }{A}\left(x,y,z\right)={\stackrel{\to }{e}}_{x}
d\right)\stackrel{\to }{A}\left(\rho ,\varphi ,z\right)={\stackrel{\to }{e}}_{\rho }
g\right)\stackrel{\to }{A}\left(r,\theta ,\varphi \right)={\stackrel{\to }{e}}_{\theta }
j\right)\stackrel{\to }{A}\left(x,y,z\right)=\frac{-y{\stackrel{\to }{e}}_{x}+x{\stackrel{\to }{e}}_{y}}{{x}^{2}+{y}^{2}}
Find Maximum Volume of a rectangular box that is inscribed in a sphere of radius r.
The part i do not get the solutions manual got
2x,\text{ }2y
2z
as the volume. Can you explain how the volume of the cube for this problem is
v=8xyz?
Which of the following coordinate systems is most common?
\mathcal{B}=\left\{\left[\begin{array}{c}3\\ -1\\ 4\end{array}\right]\left[\begin{array}{c}2\\ 0\\ -5\end{array}\right]\left[\begin{array}{c}8\\ -2\\ 7\end{array}\right]\right\}
R{R}^{n}.
{\mathbb{R}}^{\mathbb{3}}
y=-2
\left(2,-2\right)
\left(-1,\sqrt{3}\right)
\left(r,\theta \right)
\theta
2\pi
\left(r,\theta \right)
\theta
2\pi
\left(-4,0\right)
|
(8st^{2}u^{8}v)(2st^{5}uv)
\left(8s{t}^{2}{u}^{8}v\right)\left(2s{t}^{5}uv\right)
=\left(8×2\right)\left(s×s\right)\left({t}^{2}×{t}^{5}\right)\left({u}^{8}×u\right)\left(v×v\right)=16\,{s}^{1+1}\,{t}^{2+5}\,{u}^{8+1}\,{v}^{1+1}=16{s}^{2}{t}^{7}{u}^{9}{v}^{2}
h\left(t\right)=-16{t}^{2}+24t
Find all the zeros, real and nonreal, of the polynomial and use that information to express p(x) as a product of linear factors.
p\left(x\right)={x}^{3}+11x
-\frac{1}{6}×2\frac{3}{4}=
Thirteen people on a softball team show up for a game. How many ways are there to assign the 10 positions by selecting players from the 13 people who show up?
{a}^{4}{b}^{2}+a{b}^{5}
Write the following expression as a sum and/or difference of logarithms. Express powers as factors:
\mathrm{ln}\left(\frac{e}{x}\right)
Are ' numbers infinite?
|
Using the existence and uniqueness theorem for second order linear ordinary differential equations, find the largest interval in which the solution to the initial value is certain to exist.
t\left({t}^{2}-4\right){y}^{″}-t{y}^{\prime }+3{t}^{2}y=0,\quad y\left(1\right)=1,\ {y}^{\prime }\left(1\right)=3
The given initial value problem is:
t\left({t}^{2}-4\right){y}^{″}-t{y}^{\prime }+3{t}^{2}y=0,\quad y\left(1\right)=1,\ {y}^{\prime }\left(1\right)=3
If p(x), q(x) and g(x) are continuous on the interval [a,b], then the second order differential equation
{y}^{″}+p\left(x\right){y}^{\prime }+q\left(x\right)y=g\left(x\right),y\left({x}_{0}\right)={y}_{0},{y}^{\prime }\left({x}_{0}\right)={y}_{0}^{\prime }
has a unique solution defined for all x in [a,b].
Redefine the differential equation as follows.
{y}^{″}-\frac{t}{t\left({t}^{2}-4\right)}{y}^{\prime }+\frac{3{t}^{2}}{t\left({t}^{2}-4\right)}y=0
{y}^{″}-\frac{1}{{t}^{2}-4}{y}^{\prime }+\frac{3t}{{t}^{2}-4}y=0
Compare the above equation with standard equation of initial value problem.
p\left(t\right)=-\frac{1}{{t}^{2}-4},q\left(t\right)=\frac{3t}{{t}^{2}-4}
r\left(t\right)=0,{t}_{0}=1
{y}_{0}=1
The function p(t) is continuous for all values of t except t=2 and t=-2.
The function q(t) is continuous for all values of t except t=2 and t=-2.
The domain of the both the function is
\left(-\mathrm{\infty },-2\right)\cup \left(-2,2\right)\cup \left(2,\mathrm{\infty }\right)
Thus, the initial value problem has a unique solution on the largest open interval containing t=1 on which both coefficients are continuous, which is (-2,2).
The solution to the initial value problem is certain to exist on the largest interval (−2, 2).
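As a quick numerical cross-check (not part of the textbook solution), SymPy can locate the discontinuities of the normalized coefficients and recover the same interval:

```python
import sympy as sp

t = sp.symbols('t')
# Coefficients of the normalized equation y'' + p(t) y' + q(t) y = 0
p = sp.simplify(-t / (t * (t**2 - 4)))        # simplifies to -1/(t**2 - 4)
q = sp.simplify(3 * t**2 / (t * (t**2 - 4)))  # simplifies to 3*t/(t**2 - 4)

# Points where either coefficient fails to be continuous
sing = sorted(sp.singularities(p, t) | sp.singularities(q, t))
print(sing)  # [-2, 2]

# Largest open interval of continuity containing t0 = 1
t0 = 1
left = max(s for s in sing if s < t0)
right = min(s for s in sing if s > t0)
print(left, right)  # -2 2
```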
{y}^{″}+6{y}^{\prime }+12y=0
Find the differential of the function.
T=\frac{v}{1+uvw}
Find the differential dy for the given values of x and dx.
y=\frac{{e}^{x}}{10},x=0,dx=0.1
Find the general solution of
y{}^{″}+9y=4\mathrm{cos}2x
Find the general solution, by the semi-homogeneous method, of
\frac{dy}{dx}=\frac{x-y+1}{x+y-1}
y{}^{″}+2{y}^{\prime }+2y=0,\text{ }y\left(\frac{\pi }{4}\right)=2,\text{ }{y}^{\prime }\left(\frac{\pi }{4}\right)=-2
\frac{{d}^{2}y}{d{t}^{2}}\text{ }-\text{ }8\frac{dy}{dt}\text{ }+\text{ }15y=9t{e}^{3t}\text{ }\mathrm{with}\text{ }y\left(0\right)=5,\text{ }{y}^{\prime }\left(0\right)=10
|
PersistentTable/Has - Maple Help
query a PersistentTable connection for the existence of a row
Has(connection, keys)
The Has command returns true if the given connection contains a row with the given values in the primary key columns, and false otherwise.
\mathrm{with}\left(\mathrm{PersistentTable}\right)
[\textcolor[rgb]{0,0,1}{\mathrm{Close}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Count}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Get}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{GetAll}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{GetKeys}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Has}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{MaybeGet}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Open}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{RawCommand}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Set}}]
\mathrm{connection}≔\mathrm{Open}\left(":memory:",\mathrm{style}=[\mathrm{k1}::\mathrm{anything},\mathrm{k2}::'\mathrm{integer}',v::\mathrm{anything},\mathrm{primarykey}=2]\right)
\textcolor[rgb]{0,0,1}{\mathrm{connection}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{<< 3-column persistent table at :memory: >>}}
\mathrm{connection}[x,3]≔{y}^{2}
{\textcolor[rgb]{0,0,1}{\mathrm{connection}}}_{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{2}}
\mathrm{connection}[z,5]≔{y}^{3}
{\textcolor[rgb]{0,0,1}{\mathrm{connection}}}_{\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{3}}
\mathrm{Has}\left(\mathrm{connection},x,3\right)
\textcolor[rgb]{0,0,1}{\mathrm{true}}
\mathrm{Has}\left(\mathrm{connection},a,4\right)
\textcolor[rgb]{0,0,1}{\mathrm{false}}
\mathrm{Close}\left(\mathrm{connection}\right)
The PersistentTable[Has] command was introduced in Maple 2021.
|
For adiabatic process, which is correct?
An adiabatic process is a process that occurs without transfer of heat or mass between a thermodynamic system and its surroundings. Therefore, q = 0.
In the van der Waals equation, 'a' signifies
intramolecular attraction
attraction between molecules and wall of container
volume of molecules
In the van der Waals equation, 'a' signifies the intermolecular force of attraction.
Smallest wavelength occurs for
Smallest wavelength occurs in Lyman series.
Wavelength increases in the order Lyman < Balmer < Paschen < Brackett < Pfund.
Which of the following is not hygroscopic?
CsCl is not hygroscopic in nature while MgCl2, CaCl2 and LiCl are hygroscopic in nature.
Ksp of CaSO4.5H2O is 9 × 10⁻⁶; find the volume of solution that contains 1 g of CaSO4 (M.wt. = 136).
CaSO4 (s)
\rightleftharpoons \quad \underset{\mathrm{S}}{{\mathrm{Ca}}^{2+}}\quad \left(\mathrm{aq}\right)\quad +\quad \underset{\mathrm{S}}{{\mathrm{SO}}_{4}^{2-}\quad }\quad \left(\mathrm{aq}\right)
Ksp = S² = 9 × 10⁻⁶
S = 3 × 10⁻³ mol L⁻¹
Solubility in g litre⁻¹ = molecular mass × S = 136 × 3 × 10⁻³ = 408 × 10⁻³ g L⁻¹
That is, 408 × 10⁻³ g of CaSO4 is present in 1 litre, so 1 g of CaSO4 is present in
\frac{1}{408\quad \times \quad {10}^{-3}}\quad =\quad 2.45\quad \mathrm{litres}
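The arithmetic above can be sketched in a few lines (input values taken from the problem statement):

```python
import math

ksp = 9e-6          # solubility product of CaSO4
molar_mass = 136.0  # g/mol of CaSO4

s = math.sqrt(ksp)                 # molar solubility (mol/L)
grams_per_litre = molar_mass * s   # solubility in g/L
volume_for_one_gram = 1 / grams_per_litre

print(round(volume_for_one_gram, 2))  # 2.45
```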
Rate is equal in both directions
Measurable quantities are constant at equilibrium.
Equilibrium occurs in reversible condition.
Equilibrium occurs only in open vessel at constant temperature.
Equilibrium state can only be achieved if a reversible reaction is carried out in a closed space.
Which of the following is wrong for Bohr model?
It establishes stability of atom
It is inconsistent with Heisenberg uncertainty principle.
It explains the concept of spectral lines for hydrogen like species.
Electrons behave as particle and wave
Among all the given options, option (d) is wrong: the Bohr model treats the electron purely as a particle and does not incorporate de Broglie's concept of the dual (particle and wave) character of matter.
Predict the product of reaction of I2 with H2O2 in basic medium
I- is the product of reaction in basic medium.
I2 (s) + H2O2 (aq) + 2OH- (aq) → 2I- (aq) + 2H2O (l) + O2 (g)
Decreasing order of bond angle is
BeCl2 > NO2 > SO2
BeCl2 > SO2 > NO2
SO2 > BeCl2 > NO2
SO2 > NO2 > BeCl2
Decreasing order of bond angle is BeCl2 > NO2 > SO2.
The bond angles of BeCl2, NO2 and SO2 are 180°, 134° and 119.5° respectively.
The enthalpy of formation of CO(g), CO2(g), N2O(g), and N2O4(g) is -110, -393, +811 and 10 kJ/mol respectively. For the reaction N2O4(g) + 3CO(g) → N2O(g) + 3CO2(g), ∆Hr (kJ/mol) is
N2O4(g) + 3CO(g) → N2O(g) + 3CO2(g)
∆{H}_{\mathrm{reaction}}\quad =\quad \sum ∆{H}_{f}\left(\mathrm{products}\right)\quad -\quad \sum ∆{H}_{f}\left(\mathrm{reactants}\right)
{\text{∆H}}_{\mathrm{reaction}\quad }=\quad [∆{\mathrm{H}}_{\mathrm{f}}{\mathrm{N}}_{2}\mathrm{O}\quad +\quad 3\quad \times \quad ∆{\mathrm{H}}_{\mathrm{f}}{\mathrm{CO}}_{2}]\quad -\quad [∆{\mathrm{H}}_{\mathrm{f}}{\mathrm{N}}_{2}{\mathrm{O}}_{4}\quad +\quad 3\quad \times \quad ∆{\mathrm{H}}_{\mathrm{f}}\mathrm{CO}]\phantom{\rule{0ex}{0ex}}∆{\mathrm{H}}_{\mathrm{r}}\quad =\quad \left[+811\quad +\quad 3(-393)\right]\quad -\quad \left[10\quad +\quad 3(-110)\right]
= [811 - 1179] - [-320] = -368 + 320 = -48 kJ/mol
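The Hess's-law bookkeeping can be checked mechanically, using the enthalpies of formation as given in the problem:

```python
# Standard enthalpies of formation (kJ/mol), as given in the problem
dHf = {"CO": -110, "CO2": -393, "N2O": 811, "N2O4": 10}

# Reaction: N2O4 + 3 CO -> N2O + 3 CO2
products = {"N2O": 1, "CO2": 3}
reactants = {"N2O4": 1, "CO": 3}

dH = sum(n * dHf[s] for s, n in products.items()) \
   - sum(n * dHf[s] for s, n in reactants.items())
print(dH)  # -48
```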
|
Grid Computing - Maple Help
Grid Computing (Parallel Distributed Computing) in Maple 16
Distributed systems offer fantastic gains when it comes to solving large-scale problems. By sharing the computational load, you can solve problems too large for a single computer to handle, or solve problems in a fraction of the time it would take with a single computer. The Grid Computing Toolbox platform support has been extended for Maple 16, making it easy to create and test parallel distributed programs on some of the world's largest clusters.
MPICH2 is a high performance implementation of the Message Passing Interface (MPI) standard, distributed by Argonne National Laboratory (http://www.mcs.anl.gov/research/projects/mpich2/). The stated goals of MPICH2 are to:
Provide an MPI implementation that efficiently supports different computation and communication platforms including commodity clusters (desktop systems, shared-memory systems, multicore architectures), high-speed networks (10 Gigabit Ethernet, InfiniBand, Myrinet, Quadrics) and proprietary high-end computing systems (Blue Gene, Cray, SiCortex) and
Enable cutting-edge research in MPI through an easy-to-extend modular framework for other derived implementations.
The Grid Computing Toolbox for Maple 16 includes the ability to easily set up multi-process computations that interface with MPICH2 to deploy multi-machine or cluster parallel computations.
\mathrm{Area} = {∫}_{a}^{b}f\left(x\right) ⅆx = \underset{N→\mathrm{\infty }}{\mathrm{lim}}\frac{1}{N}\underset{i=1}{\overset{N}{∑}}f\left(r\right)\left(b-a\right)
This procedure efficiently calculates a one-variable integral using the above formula, where r is a random input to f.
# simple monte-carlo integrator
approxint := proc( expr, r::name=numeric..numeric,
                   { numsamples::integer := 1000 } )
    local f, randvals;
    f := `if`( type(expr, procedure), expr, unapply(expr, lhs(r)) );
    randomize();
    randvals := LinearAlgebra:-RandomVector( numsamples,
        generator = evalf(rhs(r)), datatype = float[8] );
    ( add( f(randvals[i]), i = 1..numsamples ) / numsamples )
        * ( rhs(rhs(r)) - lhs(rhs(r)) );
end proc:
\mathrm{approxint}\left( {x}^{2}, x=1..3\right);
\textcolor[rgb]{0,0,1}{8.63278051029565}
{∫}_{1}^{3}{x}^{2} ⅆx
\frac{\textcolor[rgb]{0,0,1}{26}}{\textcolor[rgb]{0,0,1}{3}}
\stackrel{\text{at 10 digits}}{\to }
\textcolor[rgb]{0,0,1}{8.666666667}
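For readers without Maple, the same estimator can be sketched in Python (the helper name `approx_int` is mine, not part of the toolbox):

```python
import random

def approx_int(f, a, b, num_samples=1000):
    """Monte Carlo estimate of the integral of f over [a, b]."""
    total = sum(f(random.uniform(a, b)) for _ in range(num_samples))
    return total / num_samples * (b - a)

random.seed(0)
est = approx_int(lambda x: x**2, 1.0, 3.0, num_samples=100_000)
print(est)  # close to the exact value 26/3 = 8.666...
```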
A parallel implementation adds the following code to split the problem over all available nodes and send the partial results back to node 0. Note that here the head node, 0, performs the calculation and then accumulates the results from the other nodes.
parallelApproxint := proc( expr, lim::name=numeric..numeric,
                           { numSamples::integer := 1000 } )
    uses Grid;
    local me, numNodes, r, n;
    me := MyNode();
    numNodes := NumNodes();
    n := trunc(numSamples/numNodes);
    r := approxint( expr, lim, numsamples = n );
    printf("Node %d computed result %a using %d samples\n", me, r, n);
    if me = 0 then
        r := ( r + add( Receive(), i = 1..numNodes-1 ) ) / numNodes;
    else
        Send(0, r);
    end if;
end proc:
Integrate over the range, lim, using N samples. Use as many nodes as are available in your cluster.
Note: In the following command, replace "MyGridServer" with the name of the head node of your Grid Cluster.
\mathrm{Grid}:-\mathrm{Setup}\left("hpc",'\mathrm{host}'="MyGridServer"\right);
\mathrm{Grid}:-\mathrm{Launch}\left(\mathrm{parallelApproxint},{x}^{2}, x = 1..3, \mathrm{numSamples} = {10}^{7}, \mathrm{imports}=\left['\mathrm{approxint}'\right],\mathrm{numnodes}=16\right);
\textcolor[rgb]{0,0,1}{8.66806767157001}
Execution times are summarized as follows. Computations were executed on a 3-blade cluster with 6 quad-core AMD Opteron 2378/2.4GHz processors and 8GB of memory per pair of CPUs, running Windows HPC Server 2008.
Real Time to Compute Solution (seconds), by number of nodes:
1 (using serialized code): 369.18
2: 194.63
3: 123.12
4: 93.57
5: 72.67
6: 57.22
7: 48.80
8: 43.15
9: 37.96
10: 34.60
11: 31.58
12: 28.83
13: 26.66
18: 19.41
22: 16.02
23: 15.30
The speedup is a measure of
\frac{{T}_{1}}{{T}_{p}}
where
{T}_{1}
is the execution time of the sequential algorithm and
{T}_{p}
is the execution time of the parallel algorithm using p processes.
The compute time in Maple without using MapleGrid is the first number in the table -- ~6 minutes. The rest of the times were using MapleGrid with a varying number of cores. The graph shows that adding cores scales linearly. When 23 cores are dedicated to the same example, it takes only 15.3 seconds to complete.
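The speedup and parallel-efficiency figures implied by the table can be computed directly; the rows selected below are from the timings above:

```python
# Speedup T1/Tp and efficiency (speedup / p) for selected rows of the table
t1 = 369.18  # serialized time, seconds
timings = {2: 194.63, 4: 93.57, 8: 43.15, 23: 15.30}

speedups = {p: t1 / tp for p, tp in timings.items()}
for p in sorted(speedups):
    print(f"{p:2d} nodes: speedup {speedups[p]:5.2f}, "
          f"efficiency {speedups[p] / p:.2f}")
```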
Grid package documentation
|
Permeable pavements: Sizing - LID SWM Planning and Design Guide
The following calculation is used to size the stone storage bed (reservoir) used as a base course. It is assumed that the footprint of the stone bed will be equal to the footprint of the pavement. The following equations are derived from the Interlocking Concrete Pavement Institute (ICPI) manual [1]
For full infiltration design, to calculate the total depth of clear stone aggregate layers needed for the water storage reservoir
The equation for the maximum depth of the stone reservoir (dr, max, m) is as follows:
{\displaystyle d_{r,max}={\frac {(RVC_{T}\times R)+RVC_{T}-(f'\times D)}{n}}}
RVCT = Runoff volume control target (m)
{\displaystyle RVC_{T}=D\times i}
D = Duration of the design storm event (hr)
i = Intensity of the design storm event (m/hr)
R = the ratio of impervious contributing drainage area to permeable pavement area; Ai/Ap
Ai = Impervious contributing drainage area (m2)
Ap = Permeable pavement area (m2)
f' = Design infiltration rate of underlying native soil (m/hr)
n = Porosity of the stone bed aggregate material (typically 0.4 for 50 mm dia. clear stone)
It is important to note that R should not exceed 2 to limit hydraulic loading and help avoid premature clogging. Also important to note is that the contributing drainage area should not contain pervious areas that are sources of sediment that can lead to premature clogging.
On highly permeable soils (e.g., infiltration rate of 45 mm/hr or greater), a maximum stone reservoir depth of 2 metres is recommended to prevent soil compaction and loss of permeability from the mass of overlying stone and stored water.
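The full-infiltration sizing equation above can be sketched as a small calculation (all numeric inputs below are hypothetical, not from the guide):

```python
def max_reservoir_depth(i, D, R, f_prime, n=0.4):
    """Maximum stone reservoir depth d_r,max (m), full infiltration design.

    i: design storm intensity (m/hr); D: storm duration (hr);
    R: ratio of impervious drainage area to pavement area (keep <= 2);
    f_prime: design infiltration rate of native soil (m/hr);
    n: porosity of the stone bed aggregate (0.4 typical for clear stone).
    """
    rvc_t = D * i  # runoff volume control target (m)
    return (rvc_t * R + rvc_t - f_prime * D) / n

# Hypothetical storm: 4 h at 25 mm/h, soil infiltrating 5 mm/h, R = 1
d = max_reservoir_depth(i=0.025, D=4, R=1.0, f_prime=0.005)
print(round(d, 3))  # 0.45
```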
For partial infiltration design, to calculate the depth of the storage reservoir needed below the invert of the underdrain pipe
For designs that include an underdrain, the depth of the storage reservoir below the invert of the underdrain pipe (dr) can be calculated as follows:
{\displaystyle d_{r}={\frac {f'\times t}{n}}}
f' = Design infiltration rate (mm/hr), and
t = Drainage time (hrs), e.g. 72 hours, check local regulations for drainage time requirements.
Where the total contributing drainage area (Ac) and total depth of clear stone aggregate needed for load bearing capacity of the pavement are known (i.e., storage reservoir depth is fixed) or if available space is constrained in the vertical dimension due to water table or bedrock elevation, the minimum footprint area of the water storage reservoir, Ar can be calculated as follows:
{\displaystyle A_{r}={\frac {D(i-f')\times A_{c}}{d_{r}\times n}}}
Ac = Ai + Ap, and Ar = Ap (i.e., assumed that the water storage reservoir area and permeable pavement area are the same)
Then increase Ar accordingly to keep R between 0 and 2, which reduces hydraulic loading and helps avoid premature clogging.
↑ Smith, D. 2017. Permeable Interlocking Concrete Pavements; Selection, Design, Specifications, Construction, Maintenance. 5th Edition. Interlocking Concrete Pavement Institute. Chantilly VA
|
The following are the dimensions of a few rectangles. Find the area of the two right triangles that are cut from each rectangle, using the formula for the area of a triangle. A. Length = 13.5 m, Breadth = 10.5 m B. Length = 18 m, Breadth = 8 m C. Length = 1.5 m, Breadth = 30 cm
(A )Given length=13.5 m, Breadth=10.5 m
Every rectangle has two equal right triangles.
Area of each right triangle
=\frac{1}{2}×13.5×10.5=70.875{m}^{2}
(B) Given length=18 m , Breadth=8 m
=\frac{1}{2}×18×8=72{m}^{2}
(C) Given length=1.5 m , Breadth
=30cm=\frac{30}{100}=0.3m
=\frac{1}{2}×1.5×0.3=0.225{m}^{2}
y=5x+3
\mathrm{tan}\left(\mathrm{arcsin}\left(\frac{x}{8}\right)\right)
\mathrm{cos}\left(\mathrm{arcsin}\left(\frac{x}{8}\right)\right)
\left(\frac{1}{2}\right)\mathrm{sin}\left(2\mathrm{arcsin}\left(\frac{x}{8}\right)\right)
\mathrm{sin}\left(\mathrm{arctan}\left(\frac{x}{8}\right)\right)
\mathrm{cos}\left(\mathrm{arctan}\left(\frac{x}{8}\right)\right)
A plane, diving with constant speed at an angle of
{53.0}^{\circ }
with the vertical, releases a projectile at an altitude of 730 m. The projectile hits the ground 5.00 s after release.
a) What is the speed of the plane?
b) How far does the projectile travel horizontally during its flight? What are the (c) horizontal and (d) vertical components of its velocity just before striking the ground?
|
Find sets of parametric equations and symmetric equations of the line that passes through the given point and is parallel to the given vector or line. (For each line, write the direction numbers as integers.) Point:
\left(-1,\text{ }0,\text{ }8\right)
v=3i\text{ }+\text{ }4j\text{ }-\text{ }8k
The given point is
\left(-1,\text{ }0,\text{ }8\right)\text{ }\text{and the vector or line is}\text{ }v=3i\text{ }+\text{ }4j\text{ }-\text{ }8k.
(a) parametric equations (b) symmetric equations
(a) The parametric equations for a line passing through
\left({x}_{0},\text{ }{y}_{0},\text{ }{z}_{0}\right)
v=ai\text{ }+\text{ }bj\text{ }+\text{ }ck
x={x}_{0}\text{ }+\text{ }at,\text{ }y={y}_{0}\text{ }+\text{ }bt,\text{ }z={z}_{0}\text{ }+\text{ }ct
The required parametric equations are
x=\text{ }-1\text{ }+\text{ }3t,\text{ }y=4t,\text{ }z=8\text{ }-\text{ }8t
(b) The parametric equations are
x=\text{ }-1\text{ }+\text{ }3t,\text{ }y=4t,\text{ }z=8\text{ }-\text{ }8t
Solving for t we have,
⇒\text{ }t=\text{ }\frac{x\text{ }+\text{ }1}{3},\text{ }t=\text{ }\frac{y}{4},\text{ }t=\text{ }\frac{z\text{ }-\text{ }8}{-8}
⇒\text{ }\frac{x\text{ }+\text{ }1}{3}=\text{ }\frac{y}{4}=\text{ }\frac{z\text{ }-\text{ }8}{-8}
⇒\text{ }\frac{x\text{ }+\text{ }1}{3}=\text{ }\frac{y}{4}=\text{ }\frac{8\text{ }-\text{ }z}{8}
Symmetric equations
\frac{x\text{ }+\text{ }1}{3}=\frac{y}{4}=\frac{8\text{ }-\text{ }z}{8}
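A quick symbolic check of the parametric and symmetric forms above (SymPy used purely for verification):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
point = (-1, 0, 8)  # given point
v = (3, 4, -8)      # direction vector

# Parametric equations: x = x0 + a t, y = y0 + b t, z = z0 + c t
param = [p0 + d * t for p0, d in zip(point, v)]

# Symmetric equations: each coordinate solved for t must agree
sym = [(w - p0) / d for w, p0, d in zip((x, y, z), point, v)]

# Substituting the parametric point into each symmetric expression gives t
subs = {x: param[0], y: param[1], z: param[2]}
assert all(sp.simplify(e.subs(subs) - t) == 0 for e in sym)
print(param)
```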
Find a set of parametric equations for the rectangular equation.
y=\left(x+3{\right)}^{2}\mathrm{\setminus }5
The position of a particle is given by the parametric equations
x=\mathrm{sin}t,y=\mathrm{cos}t
where t represents time. We know that the shape of the path of the particle is a circle. Does the particle travel clockwise or anticlockwise around the circle? Find parametric equations if the particle moves in the opposite direction around the circle. Write a short paragraph explaining this statement, using the example and your answers.
\mathrm{cos}2\mathrm{sin}\left(-9\right)-\mathrm{cos}9\mathrm{sin}2
\mathrm{sin}\left(\mathrm{arcsin}\frac{\sqrt{3}}{2}+\mathrm{arccos}0\right)
P\left(-1,\text{ }-3\right)\text{ }\text{and ending at}\text{ }Q\left(6,\text{ }-16\right)
r\ge 1
\left(1,\frac{3\pi }{4}\right),\left(5,\frac{5\pi }{8}\right)
|
One Dimensional Kinematics | Brilliant Math & Science Wiki
Madison Jones and Sravanth C. contributed
The First Step: Choosing Coordinates
Before beginning a problem in kinematics, you must set up your coordinate system. In one-dimensional kinematics, this is simply an x-axis and the direction of the motion is usually the positive-x direction. Though displacement, velocity, and acceleration are all vector quantities, in the one-dimensional case they can all be treated as scalar quantities with positive or negative values to indicate their direction.
The positive and negative values of these quantities are determined by the choice of how you align the coordinate system.
Velocity represents the rate of change of displacement over a given amount of time. The displacement in one dimension is generally represented with respect to a starting point x_1 and an ending point x_2. The times at which the object in question is at each point are denoted t_1 and t_2 (always assuming that t_2 > t_1, since time only proceeds one way). The change in a quantity from one point to another is generally indicated with the Greek letter delta, \Delta.
Using these notations, it is possible to determine the average velocity
v_{av} = \dfrac{x_2 - x_1}{t_2 - t_1} = \dfrac{\Delta x}{\Delta t}.
If you apply a limit as \Delta t approaches 0, you obtain an instantaneous velocity at a specific point in the path. Such a limit in calculus is the derivative of x with respect to t, or \dfrac{dx}{dt}.
Acceleration
Acceleration represents the rate of change in velocity over time. Using the terminology introduced earlier, we see that the average acceleration a_{av} is
a_{av} = \dfrac{v_2 - v_1}{t_2 - t_1} = \dfrac{\Delta v}{\Delta t}.
Again, we can apply a limit as \Delta t approaches 0 to obtain an instantaneous acceleration at a specific point in the path. The calculus representation is the derivative of v with respect to t, or \dfrac{dv}{dt}.
Similarly, since v is the derivative of x, the instantaneous acceleration is the second derivative of x with respect to t, or \dfrac{d^2x}{dt^2}.
Constant Acceleration
In several cases, such as the Earth's gravitational field, the acceleration may be constant - in other words the velocity changes at the same rate throughout the motion. Using our earlier work, set the time at 0 and the end time as t (picture starting a stopwatch at 0 and ending it at the time of interest). The velocity at time 0 is
v_0
and at time t is v, yielding the following two equations:
a = \dfrac{(v - v_0)}{(t - 0)};\\ v = v_0 + at
Applying the earlier equations for v_{av}, with position x_0 at time 0 and x at time t, and applying some manipulations (which I will not prove here), we get:
x = x_0 + v_0t + \tfrac{1}{2}at^2;\\ v^2 = v_0^2 + 2a(x - x_0);\\ x - x_0 = \dfrac{(v_0 + v)t}{2}
The above equations of motion with constant acceleration can be used to solve any kinematic problem involving motion of a particle on a straight line with constant acceleration.
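The constant-acceleration relations above can be cross-checked numerically; the values below are illustrative, not from the text:

```python
def kinematics(x0, v0, a, t):
    """Position and velocity under constant acceleration."""
    x = x0 + v0 * t + 0.5 * a * t**2
    v = v0 + a * t
    return x, v

x0, v0, a, t = 0.0, 3.0, 2.0, 4.0
x, v = kinematics(x0, v0, a, t)

# The time-free relation v^2 = v0^2 + 2a(x - x0) must agree
assert abs(v**2 - (v0**2 + 2 * a * (x - x0))) < 1e-9
# And so must x - x0 = (v0 + v) t / 2
assert abs((x - x0) - (v0 + v) * t / 2) < 1e-9
print(x, v)  # 28.0 11.0
```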
Cite as: One Dimensional Kinematics . Brilliant.org. Retrieved from https://brilliant.org/wiki/one-dimensional-kinematics/
|
Slope | Brilliant Math & Science Wiki
The slope of a line characterizes both the steepness and direction of the line.
Positive, Negative, Zero, and Undefined Slope
Slope is sometimes expressed as rise over run. You can determine slope by visualizing walking up a flight of stairs, dividing the vertical change, which comes first, by the horizontal change, which comes second.
The slope, m, of a line is defined to be
m = \frac{\text{rise}}{\text{run}} = \frac{\text{change in } y}{\text{change in }x}.
Ramps are carefully designed to provide easy access for wheelchairs, carts, dollies, etc. Which ramp has a greater slope?
Ramp A / Ramp B / Ramps A and B have the same slope
When driving uphill, you see a sign that looks like this one.
The road rises 1.2 units for every 100 units of horizontal distance
The road rises 12 units for every 100 units of horizontal distance
The road makes an angle of 12^\circ with the horizontal
The road rises 1 unit for every 12 units of horizontal distance
Lines that rise from left to right have a positive rise and a positive run, yielding a positive slope.
Lines that fall from left to right have a negative rise and a positive run, yielding a negative slope.
Horizontal lines have zero rise and a positive run, yielding a zero slope.
Vertical lines can have any amount of rise and zero run, yielding an undefined slope.
Line A travels through the points (-3,4) and (3,-5). Line B travels through the points (-3, 4) and (3, 5).
Which line has a negative slope?
Line A has a negative slope because, moving from the lower x-value of -3 to the higher x-value of 3, it has a horizontal change of positive 6 but a vertical change of -9. Therefore, it has a negative slope.
A B Neither A nor B Both A and B
Which line has a slope of 0?
The slope of a line is the same everywhere on the line. Therefore, when finding the slope of a line from a graph, we can pick any two points on the line to use to find the slope. For the graph below, moving from the point on the left to the point on the right, we move right 2 units and up 3 units. So the ratio of the change in y to the change in x is \frac{3}{2}.
What is the slope of the blue line below?
Choosing two points on the line that travel through lattice points on the grid, we see that moving from left to right the line travels right 2 units and down 7 units. So the ratio of the change in y to the change in x is \frac{-7}{2}.
For two different points on a line, (x_1, y_1) and (x_2, y_2), the slope is \frac{ y_2 - y_1 } { x_2 - x_1}. It is also known as the rate of change of y with respect to x.
What is the slope of the line through the points (4,7) and (8,10)?
\frac{ y_2 - y_1 } { x_2 - x_1} =\frac{10-7}{8-4} = \frac{3}{4}.
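The two-point slope formula is easy to express as a small helper (the vertical-line case is reported as undefined):

```python
def slope(p1, p2):
    """Slope of the line through two points; None for a vertical line."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        return None  # undefined slope
    return (y2 - y1) / (x2 - x1)

print(slope((4, 7), (8, 10)))   # 0.75
print(slope((-3, 4), (3, -5)))  # -1.5
```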
Straight line A of infinite length contains the points (-4,3) and (2,1). Straight line B of infinite length contains the points (2,1) and (11,-2). Are A and B the same line?
If a line has slope 10 and passes through (3, 39), what value of a makes the point (4, a) lie on the line?
Cite as: Slope. Brilliant.org. Retrieved from https://brilliant.org/wiki/slope/
|
Estimate frequency response with fixed frequency resolution using spectral analysis - MATLAB spa - MathWorks India
y\left(t\right)=G\left(q\right)u\left(t\right)+v\left(t\right)
G\left(q\right)u\left(t\right)=\sum _{k=1}^{\infty }g\left(k\right)u\left(t-k\right)
G\left(q\right)=\sum _{k=1}^{\infty }g\left(k\right){q}^{-k}\text{ }{q}^{-1}u\left(t\right)=u\left(t-1\right)
{\stackrel{^}{\Phi }}_{v}\left(\omega \right)
{\stackrel{^}{G}}_{N}\left({e}^{i\omega }\right)=\frac{{\stackrel{^}{\Phi }}_{yu}\left(\omega \right)}{{\stackrel{^}{\Phi }}_{u}\left(\omega \right)}
{\stackrel{^}{\Phi }}_{v}\left(\omega \right)={\stackrel{^}{\Phi }}_{y}\left(\omega \right)-\frac{{|{\stackrel{^}{\Phi }}_{yu}\left(\omega \right)|}^{2}}{{\stackrel{^}{\Phi }}_{u}\left(\omega \right)}
y\left(t\right)=G\left(q\right)u\left(t\right)+v\left(t\right)
{\Phi }_{y}\left(\omega \right)={|G\left({e}^{i\omega }\right)|}^{2}{\Phi }_{u}\left(\omega \right)+{\Phi }_{v}\left(\omega \right)
{\Phi }_{yu}\left(\omega \right)=G\left({e}^{i\omega }\right){\Phi }_{u}\left(\omega \right)
{\Phi }_{v}\left(\omega \right)\equiv \sum _{\tau =-\infty }^{\infty }{R}_{v}\left(\tau \right){e}^{-i\omega \tau }
{\stackrel{^}{\Phi }}_{yu}\left(\omega \right)
{\stackrel{^}{\Phi }}_{u}\left(\omega \right)
v\left(t\right)=H\left(q\right)e\left(t\right)
\lambda
{\Phi }_{v}\left(\omega \right)=\lambda {|H\left({e}^{i\omega }\right)|}^{2}
\begin{array}{l}{\stackrel{^}{R}}_{y}\left(\tau \right)=\frac{1}{N}\sum _{t=1}^{N}y\left(t+\tau \right)y\left(t\right)\\ {\stackrel{^}{R}}_{u}\left(\tau \right)=\frac{1}{N}\sum _{t=1}^{N}u\left(t+\tau \right)u\left(t\right)\\ {\stackrel{^}{R}}_{yu}\left(\tau \right)=\frac{1}{N}\sum _{t=1}^{N}y\left(t+\tau \right)u\left(t\right)\end{array}
\begin{array}{l}{\stackrel{^}{\Phi }}_{y}\left(\omega \right)=\sum _{\tau =-M}^{M}{\stackrel{^}{R}}_{y}\left(\tau \right){W}_{M}\left(\tau \right){e}^{-i\omega \tau }\\ {\stackrel{^}{\Phi }}_{u}\left(\omega \right)=\sum _{\tau =-M}^{M}{\stackrel{^}{R}}_{u}\left(\tau \right){W}_{M}\left(\tau \right){e}^{-i\omega \tau }\\ {\stackrel{^}{\Phi }}_{yu}\left(\omega \right)=\sum _{\tau =-M}^{M}{\stackrel{^}{R}}_{yu}\left(\tau \right){W}_{M}\left(\tau \right){e}^{-i\omega \tau }\end{array}
{W}_{M}\left(\tau \right)
{\stackrel{^}{G}}_{N}\left({e}^{i\omega }\right)
{\stackrel{^}{\Phi }}_{v}\left(\omega \right)
{\stackrel{^}{G}}_{N}\left({e}^{i\omega }\right)=\frac{{\stackrel{^}{\Phi }}_{yu}\left(\omega \right)}{{\stackrel{^}{\Phi }}_{u}\left(\omega \right)}
{\Phi }_{v}\left(\omega \right)\equiv \sum _{\tau =-\infty }^{\infty }{R}_{v}\left(\tau \right){e}^{-i\omega \tau }
S=\sum _{m=-M}^{M}Ez\left(t+m\right)z{\left(t\right)}^{\prime }{W}_{M}\left({T}_{s}\right)\mathrm{exp}\left(-i\omega m\right)
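The windowed-correlation estimates above are the Blackman–Tukey procedure; the following NumPy sketch (function and variable names are my own, not from the MATLAB documentation) mirrors it for a single-input, single-output record:

```python
import numpy as np

def bt_spectra(y, u, M=30, nfreq=128):
    """Blackman-Tukey estimates of Phi_u, Phi_yu and G = Phi_yu / Phi_u."""
    N = len(u)
    taus = np.arange(-M, M + 1)

    def R(a, b, tau):  # biased covariance (1/N) sum a(t+tau) b(t)
        if tau >= 0:
            return np.dot(a[tau:], b[:N - tau]) / N
        return np.dot(a[:N + tau], b[-tau:]) / N

    Ru = np.array([R(u, u, k) for k in taus])
    Ryu = np.array([R(y, u, k) for k in taus])
    w = np.hamming(2 * M + 1)                  # lag window W_M(tau)
    freqs = np.linspace(0, np.pi, nfreq)
    E = np.exp(-1j * np.outer(freqs, taus))    # (nfreq, 2M+1) Fourier kernel
    phi_u = E @ (w * Ru)
    phi_yu = E @ (w * Ryu)
    return freqs, phi_u, phi_yu, phi_yu / phi_u

# Toy system y(t) = u(t) + 0.5 u(t-1) driven by white noise
rng = np.random.default_rng(0)
u = rng.standard_normal(4000)
y = u + 0.5 * np.roll(u, 1)
freqs, phi_u, phi_yu, G = bt_spectra(y, u)
print(abs(G[0]))  # close to |1 + 0.5| = 1.5 at omega = 0
```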
|
Transfinite arithmetic
Transfinite arithmetic --- Introduction ---
This is an exercise as well as a game. Let E be a set of k elements in the finite ring \mathbb{Z}/n\mathbb{Z}, that is, k integers between 0 and n-1. For any element t\in \mathbb{Z}/n\mathbb{Z}, addition of t or multiplication by t transforms E into another set {E}_{t}\subset \mathbb{Z}/n\mathbb{Z}.
Thus the exercise generates two sets E and F\subset \mathbb{Z}/n\mathbb{Z}, of the same number of elements, and asks you to transform E into F by successive additions and multiplications by elements of \mathbb{Z}/n\mathbb{Z}. You will be scored according to the number of steps you make.
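A small sketch of the underlying operation (names and example values are illustrative):

```python
def transform(E, t, op, n):
    """Apply x -> x + t or x -> x * t (mod n) to every element of E."""
    if op == "add":
        return {(x + t) % n for x in E}
    return {(x * t) % n for x in E}

n = 11
E = {1, 2, 4}
steps = [("mul", 3), ("add", 5)]  # successive transformations
for op, t in steps:
    E = transform(E, t, op, n)
print(sorted(E))  # {1,2,4} -> {3,6,1} -> {8,0,6}, i.e. [0, 6, 8]
```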
You can also configure the exercise by a detailed menu.
Description: transformation in Z/nZ by addition and multiplication.
|
Octahedron - Wikipedia
Polyhedron with 8 triangular faces
For the album, see Octahedron (album).
r{3,3} or
{\displaystyle {\begin{Bmatrix}3\\3\end{Bmatrix}}}
Dihedral angle 109.47122° = arccos(−1⁄3)
3D model of regular octahedron.
In geometry, an octahedron (plural: octahedra, octahedrons) is a polyhedron with eight faces. The term is most commonly used to refer to the regular octahedron, a Platonic solid composed of eight equilateral triangles, four of which meet at each vertex.
Regular octahedron
{\displaystyle r_{u}={\frac {\sqrt {2}}{2}}a\approx 0.707\cdot a}
{\displaystyle r_{i}={\frac {\sqrt {6}}{6}}a\approx 0.408\cdot a}
{\displaystyle r_{m}={\tfrac {1}{2}}a=0.5\cdot a}
{\displaystyle \left|x-a\right|+\left|y-b\right|+\left|z-c\right|=r.}
{\displaystyle A=2{\sqrt {3}}a^{2}\approx 3.464a^{2}}
{\displaystyle V={\frac {1}{3}}{\sqrt {2}}a^{3}\approx 0.471a^{3}}
{\displaystyle \left|{\frac {x}{x_{m}}}\right|+\left|{\frac {y}{y_{m}}}\right|+\left|{\frac {z}{z_{m}}}\right|=1,}
{\displaystyle A=4\,x_{m}\,y_{m}\,z_{m}\times {\sqrt {{\frac {1}{x_{m}^{2}}}+{\frac {1}{y_{m}^{2}}}+{\frac {1}{z_{m}^{2}}}}},}
{\displaystyle V={\frac {4}{3}}\,x_{m}\,y_{m}\,z_{m}.}
{\displaystyle I={\begin{bmatrix}{\frac {1}{10}}m(y_{m}^{2}+z_{m}^{2})&0&0\\0&{\frac {1}{10}}m(x_{m}^{2}+z_{m}^{2})&0\\0&0&{\frac {1}{10}}m(x_{m}^{2}+y_{m}^{2})\end{bmatrix}}.}
{\displaystyle x_{m}=y_{m}=z_{m}=a\,{\frac {\sqrt {2}}{2}}.}
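The radius, area, and volume formulas above are easy to check numerically for a sample edge length:

```python
import math

a = 2.0  # edge length of a regular octahedron

r_u = math.sqrt(2) / 2 * a   # circumradius
r_i = math.sqrt(6) / 6 * a   # inradius
r_m = a / 2                  # midradius
A = 2 * math.sqrt(3) * a**2  # surface area
V = math.sqrt(2) / 3 * a**3  # volume

print(round(r_u, 3), round(r_i, 3), r_m, round(A, 3), round(V, 3))
# 1.414 0.816 1.0 13.856 3.771
```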
Dual
The octahedron is the dual polyhedron of the cube.
If the length of an edge of an octahedron inscribed in a cube is {\displaystyle a}, then the length of an edge of the dual cube is {\displaystyle {\sqrt {2}}a}.
Stellation
The interior of the compound of two dual tetrahedra is an octahedron, and this compound, called the stella octangula, is its first and only stellation. Correspondingly, a regular octahedron is the result of cutting off from a regular tetrahedron, four regular tetrahedra of half the linear size (i.e. rectifying the tetrahedron). The vertices of the octahedron lie at the midpoints of the edges of the tetrahedron, and in this sense it relates to the tetrahedron in the same way that the cuboctahedron and icosidodecahedron relate to the other Platonic solids.
Snub octahedron
One can also divide the edges of an octahedron in the ratio of the golden mean to define the vertices of an icosahedron. This is done by first placing vectors along the octahedron's edges such that each face is bounded by a cycle, then similarly partitioning each edge into the golden mean along the direction of its vector. There are five octahedra that define any given icosahedron in this fashion, and together they define a regular compound. An icosahedron produced this way is called a snub octahedron.
Octahedra and tetrahedra can be alternated to form a vertex, edge, and face-uniform tessellation of space. This and the regular tessellation of cubes are the only such uniform honeycombs in 3-dimensional space.
Characteristic orthoscheme
Like all regular convex polytopes, the octahedron can be dissected into an integral number of disjoint orthoschemes, all of the same shape characteristic of the polytope. A polytope's characteristic orthoscheme is a fundamental property because the polytope is generated by reflections in the facets of its orthoscheme.
The faces of the octahedron's characteristic orthoscheme lie in the octahedron's mirror planes of symmetry. The octahedron's symmetry group is denoted B3. The octahedron and its dual polytope, the cube, have the same symmetry group and mirror planes, but different characteristic orthoschemes.
The octahedron's characteristic orthoscheme, the characteristic tetrahedron of the regular octahedron, is a quadrirectangular irregular tetrahedron. One edge of the tetrahedron is also an edge of the octahedron. Each equilateral triangle face of the octahedron is divided into two 30°-60°-90° right triangle faces that belong to adjacent orthoschemes. The regular octahedron can be dissected (3 different ways) into 16 such characteristic tetrahedra, which all surround the same axis of the octahedron and meet at the octahedron's center. If the octahedron has edge length √2, its orthoscheme's six edges have lengths √2, √2/2, √3/2 (the exterior right triangle face), plus 1, 1, √2/2 (edges that are radii of the octahedron). The 3-edge path along orthogonal edges of the orthoscheme is √2/2, √2/2, 1: first along an octahedron edge to its midpoint, then turning 90° to the octahedron center, and finally turning 90° again to the fourth orthoscheme vertex.
The characteristic orthoscheme of the regular octahedron occurs in two chiral forms which are mirror images of each other. A left-handed orthoscheme and a right-handed orthoscheme meet at each of the octahedron's eight faces, forming a trirectangular tetrahedron: a triangular pyramid with the octahedron face as its equilateral base, and its cube-cornered apex at the center of the octahedron.
The octahedron is unique among the Platonic solids in having an even number of faces meeting at each vertex. Consequently, it is the only member of that group to possess, among its mirror planes, some that do not pass through any of its faces.
The regular octahedron has eleven arrangements of nets.
Faceting[edit]
Uniform colorings and symmetry[edit]
Irregular octahedra[edit]
Schönhardt polyhedron, a non-convex polyhedron that cannot be partitioned into tetrahedra without introducing new vertices.
Bricard octahedron, a non-convex self-crossing flexible polyhedron
Octagonal hosohedron: degenerate in Euclidean space, but can be realized spherically.
Octahedra in the physical world[edit]
Octahedra in nature[edit]
Octahedra in art and culture[edit]
Tetrahedral octet truss[edit]
A space frame of alternating tetrahedra and half-octahedra derived from the Tetrahedral-octahedral honeycomb was invented by Buckminster Fuller in the 1950s. It is commonly regarded as the strongest building structure for resisting cantilever stresses.
Tetratetrahedron[edit]
Trigonal antiprism[edit]
Square bipyramid[edit]
Other related polyhedra[edit]
Truncation of two opposite vertices results in a square bifrustum.
The octahedron can be generated as the case of a 3D superellipsoid with all exponent values set to 1.
Octahedral sphere
^ Finbow, Arthur S.; Hartnell, Bert L.; Nowakowski, Richard J.; Plummer, Michael D. (2010). "On well-covered triangulations. III". Discrete Applied Mathematics. 158 (8): 894–912. doi:10.1016/j.dam.2009.08.002. MR 2602814.
^ "Archived copy". Archived from the original on 10 October 2011. Retrieved 2 May 2006. {{cite web}}: CS1 maint: archived copy as title (link)
^ "Counting polyhedra".
^ Klein, Douglas J. (2002). "Resistance-Distance Sum Rules" (PDF). Croatica Chemica Acta. 75 (2): 633–649. Archived from the original (PDF) on 10 June 2007. Retrieved 30 September 2006.
^ Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (Chapter V: The Kaleidoscope, Section: 5.7 Wythoff's construction)
^ "Two Dimensional symmetry Mutations by Daniel Huson".
"Octahedron" . Encyclopædia Britannica. Vol. 19 (11th ed.). 1911.
Klitzing, Richard. "3D convex uniform polyhedra x3o4o – oct".
|
{\displaystyle d_{r,max}={\frac {\left[\left(RVC_{T}\times R\right)+RVC_{T}-\left(f'\times D\right)\right]}{n}}}
{\displaystyle RVC_{T}=D\times i}
{\displaystyle d_{r}={\frac {f'\times t}{n}}}
Where the total contributing drainage area of the pavement (Ac) and total depth of clear stone aggregate needed for load bearing capacity are known (i.e., storage reservoir depth is fixed) or if available space is constrained in the vertical dimension due to water table or bedrock elevation, the footprint area of the water storage reservoir, Ar can be calculated as follows:
{\displaystyle A_{r}={\frac {D(i-f')\times A_{c}}{d_{r}\times n}}}
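The sizing relations above can be sketched in code. This is a minimal illustration only: the function and variable names mirror the symbols in the formulas (D, i, f′, n, t, A_c), and the numeric inputs below are invented for demonstration, not design values.

```python
# Sketch of the reservoir sizing formulas above. All inputs are
# illustrative assumptions, not design guidance.

def reservoir_depth(f_prime, t, n):
    """d_r = f' * t / n  (depth of the storage reservoir)."""
    return f_prime * t / n

def reservoir_footprint(D, i, f_prime, A_c, d_r, n):
    """A_r = D * (i - f') * A_c / (d_r * n)  (footprint when depth is fixed)."""
    return D * (i - f_prime) * A_c / (d_r * n)

# Made-up example: storm duration D = 2 h, intensity i = 25 mm/h,
# infiltration rate f' = 5 mm/h, porosity n = 0.4, drain time t = 2 h,
# contributing drainage area A_c = 500 m^2.
d_r = reservoir_depth(5.0, 2.0, 0.4)                        # 25.0
A_r = reservoir_footprint(2.0, 25.0, 5.0, 500.0, d_r, 0.4)  # 2000.0
print(d_r, A_r)
```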
|
Which statement best characterizes the definitions of categorical and quantitative data?
Quantitative data consist of numbers, whereas categorical data consist of names and labels that are not numeric.
Quantitative data consist of numbers representing measurements or counts, whereas categorical data consist of names or labels
Quantitative data consist of values that can be arranged in order, whereas categorical data consist of values that cannot be arranged in order.
Quantitative data have an uncountable number of possible values, whereas categorical data have a countable number of possible values.
Data are said to be quantitative if they can be measured on a numeric or quantitative scale; mathematical operations can be performed on quantitative variables.
Data are said to be categorical if they classify the observations into names or labels; mathematical operations cannot be performed on categorical variables.
Reason for correct answer:
According to the definitions, the quantitative data consist of numbers that represent measurements or counts and categorical data consist of names and labels.
Thus, the second option is correct.
Reason for incorrect answer:
Categorical data consist of names and labels, and those labels can be numeric. For example, the age groups 20-30, 30-40, 40-50, etc. categorize the variable age, and a football player's jersey number is numeric but merely labels the player.
Hence, the first option is not correct.
Quantitative data can be arranged in numerical order, and some categorical data can be arranged in a logical order. Thus, the third option is not correct.
Both quantitative and categorical data can have a countable number of possible values. Thus, the fourth option is not correct.
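The distinction drawn in this answer can be made concrete with a short snippet: arithmetic is meaningful on quantitative values, while categorical labels (even numeric-looking ones such as jersey numbers) are only counted or grouped. The data values below are invented for illustration.

```python
heights_cm = [172, 165, 180, 158]          # quantitative: measurements
jersey_numbers = ["10", "7", "23", "10"]   # categorical: numeric-looking labels

# A meaningful operation on quantitative data: the mean.
mean_height = sum(heights_cm) / len(heights_cm)

# The meaningful summary of categorical data is a frequency count.
counts = {}
for label in jersey_numbers:
    counts[label] = counts.get(label, 0) + 1

print(mean_height)   # 168.75
print(counts)        # {'10': 2, '7': 1, '23': 1}
```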
A student's score on a biology test is an example of which scale of measurement?
What is cross tabulation?
The school cafeteria collects data on students’ juice box selections. What type of data are these?
1.categorical, nominal data
2.numerical, discrete data
3.categorical, ordinal data
4.numerical, continuous data
Let N(x) be the statement “x has visited North India,” where the domain consists of the students in your section. Express each of these quantifications in English.
a) ∃x N(x),  b) ∀x N(x),  c) ¬∃x N(x),  d) ∃x ¬N(x),  e) ¬∀x N(x),  f) ∀x ¬N(x)
In each of the following, identify the variables are categorical nominal, or categorical ordinal, or numerical discrete or numerical continuous.
2.Pant size of a randomly selected male as S, M, L, XL, XXL.
Identify the type of data that would be used to describe a response.
Most Watched Television Show
Quantitative - Continuous
Quantitative - Discrete
|
Asymptotically optimal algorithm - Wikipedia
In computer science, an algorithm is said to be asymptotically optimal if, roughly speaking, for large inputs it performs at worst a constant factor (independent of the input size) worse than the best possible algorithm. It is a term commonly encountered in computer science research as a result of widespread use of big-O notation.
More formally, an algorithm is asymptotically optimal with respect to a particular resource if the problem has been proven to require Ω(f(n)) of that resource, and the algorithm has been proven to use only O(f(n)).
These proofs require an assumption of a particular model of computation, i.e., certain restrictions on operations allowable with the input data.
As a simple example, it's known that all comparison sorts require at least Ω(n log n) comparisons in the average and worst cases. Mergesort and heapsort are comparison sorts which perform O(n log n) comparisons, so they are asymptotically optimal in this sense.
If the input data have some a priori properties that can be exploited in the construction of algorithms, in addition to comparisons, then asymptotically faster algorithms may be possible. For example, if it is known that the N objects are integers from the range [1, N], then they may be sorted in O(N) time, e.g., by bucket sort.
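As a sketch of that point (a toy under the stated assumption that the inputs are integers in [1, N], not a production sort), counting values into buckets sorts in linear time, sidestepping the comparison-only lower bound:

```python
def bucket_sort(values, n):
    """Sort integers from the range [1, n] in O(len(values) + n) time."""
    counts = [0] * (n + 1)           # one bucket per possible value 1..n
    for v in values:
        counts[v] += 1
    out = []
    for v in range(1, n + 1):
        out.extend([v] * counts[v])  # emit each value as often as it was seen
    return out

print(bucket_sort([3, 1, 4, 1, 5, 2], 6))   # [1, 1, 2, 3, 4, 5]
```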
A consequence of an algorithm being asymptotically optimal is that, for large enough inputs, no algorithm can outperform it by more than a constant factor. For this reason, asymptotically optimal algorithms are often seen as the "end of the line" in research, the attaining of a result that cannot be dramatically improved upon. Conversely, if an algorithm is not asymptotically optimal, this implies that as the input grows in size, the algorithm performs increasingly worse than the best possible algorithm.
In practice it's useful to find algorithms that perform better, even if they do not enjoy any asymptotic advantage. New algorithms may also present advantages such as better performance on specific inputs, decreased use of resources, or being simpler to describe and implement. Thus asymptotically optimal algorithms are not always the "end of the line".
Although asymptotically optimal algorithms are important theoretical results, an asymptotically optimal algorithm might not be used in a number of practical situations:
It only outperforms more commonly used methods for n beyond the range of practical input sizes, such as inputs with more bits than could fit in any computer storage system.
It is too complex, so that the difficulty of comprehending and implementing it correctly outweighs its potential benefit in the range of input sizes under consideration.
The inputs encountered in practice fall into special cases that have more efficient algorithms or that heuristic algorithms with bad worst-case times can nevertheless solve efficiently.
On modern computers, hardware optimizations such as memory cache and parallel processing may be "broken" by an asymptotically optimal algorithm (assuming the analysis did not take these hardware optimizations into account). In this case, there could be sub-optimal algorithms that make better use of these features and outperform an optimal algorithm on realistic data.
An example of an asymptotically optimal algorithm not used in practice is Bernard Chazelle's linear-time algorithm for triangulation of a simple polygon. Another is the resizable array data structure published in "Resizable Arrays in Optimal Time and Space",[1] which can index in constant time but on many machines carries a heavy practical penalty compared to ordinary array indexing.
Formally, suppose that we have a lower-bound theorem showing that a problem requires Ω(f(n)) time to solve for an instance (input) of size n (see big-O notation for the definition of Ω). Then, an algorithm which solves the problem in O(f(n)) time is said to be asymptotically optimal. This can also be expressed using limits: suppose that b(n) is a lower bound on the running time, and a given algorithm takes time t(n). Then the algorithm is asymptotically optimal if:
{\displaystyle \lim _{n\rightarrow \infty }{\frac {t(n)}{b(n)}}<\infty .}
Note that this limit, if it exists, is always at least 1, as t(n) ≥ b(n).
Although usually applied to time efficiency, an algorithm can be said to use asymptotically optimal space, random bits, number of processors, or any other resource commonly measured using big-O notation.
Sometimes vague or implicit assumptions can make it unclear whether an algorithm is asymptotically optimal. For example, a lower bound theorem might assume a particular abstract machine model, as in the case of comparison sorts, or a particular organization of memory. By violating these assumptions, a new algorithm could potentially asymptotically outperform the lower bound and the "asymptotically optimal" algorithms.
Speedup
The nonexistence of an asymptotically optimal algorithm is called speedup. Blum's speedup theorem shows that there exist artificially constructed problems with speedup. However, it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(nα(n)) algorithm for finding minimum spanning trees, where α(n) is the very slowly growing inverse of the Ackermann function, but the best known lower bound is the trivial Ω(n). Whether this algorithm is asymptotically optimal is unknown, and would be likely to be hailed as a significant result if it were resolved either way. Coppersmith and Winograd (1982) proved that matrix multiplication has a weak form of speed-up among a restricted class of algorithms (Strassen-type bilinear identities with lambda-computation).
Element uniqueness problem
^ Brodnik, Andrej; Carlsson, Svante; Sedgewick, Robert; Munro, JI; Demaine, ED (1999), Resizable Arrays in Optimal Time and Space (PDF), Department of Computer Science, University of Waterloo
|
Conjugate Equation - Maple Help
ConjugateEquation
compute a determining equation for conjugate points from a two-parameter family of extremals
ConjugateEquation(extremals, parms, parm_values, t, t0)
list of extremals, or algebraic expression for extremals
list of parameter boundary values
one end point for the independent variable
The ConjugateEquation(extremals, parms, parm_values, t, t0) command computes an algebraic equation. Its roots are the conjugate points.
The extremals option can be specified as a list, for example, [a*sin(t) + b*cos(t), c*sinh(t/c)+d*cosh(t/d)].
The extremals option can also be expressed algebraically, for example, A*cosh((t-a)/A).
For n extremals, there must be 2n parameters.
The parm_values option specifies the list of values of the parameters parms at the extremal points matching the boundary conditions of the problem.
The t0 option is an end point for the independent variable, for example, the left one.
To find any conjugate points, set the returned equation to 0 (zero) and solve.
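In the worked example below, the returned equation is sin(t), so the conjugate points are its zeros. As an independent numeric sketch (not part of Maple), the "set to zero and solve" step can be done by bisection; the bracket [3, 4] is an assumption chosen by inspection to contain the first positive zero:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi], assuming f changes sign there."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid                 # sign change is in the left half
        else:
            lo = mid                 # sign change is in the right half
    return (lo + hi) / 2.0

root = bisect(math.sin, 3.0, 4.0)    # first positive conjugate point
print(root)                          # approximately pi
```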
with(VariationalCalculus)
[ConjugateEquation, Convex, EulerLagrange, Jacobi, Weierstrass]
ConjugateEquation(A*sin(t) + B*cos(t), [A, B], [1, 0], t, 0)
sin(t)
The conjugate points to 0 are the zeros of this.
VariationalCalculus[Jacobi]
|
AreCoplanar - Maple Help
test if the given objects are on the same plane
AreCoplanar(A, B, C, D)
AreCoplanar(l1, l2)
The routine returns true if the four given points or two given lines are coplanar (i.e., they are on the same plane); false if they are not; and FAIL if it is unable to reach a conclusion.
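The geometric test behind this routine can be sketched in a few lines (an independent illustration, not Maple's implementation): four points are coplanar exactly when the scalar triple product of the three edge vectors from one of them vanishes.

```python
def are_coplanar(a, b, c, d, eps=1e-9):
    """True if the four 3-D points lie on one plane (within tolerance eps)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    # Scalar triple product u . (v x w) = signed volume of the parallelepiped.
    triple = (u[0] * (v[1] * w[2] - v[2] * w[1])
              - u[1] * (v[0] * w[2] - v[2] * w[0])
              + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(triple) < eps

print(are_coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))   # True
print(are_coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))   # False
```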
The command with(geom3d,AreCoplanar) allows the use of the abbreviated form of this command.
with(geom3d):
point(A, 0, 0, 0), point(B, 1, 1, 11), point(C, 3, 7, 5):
plane(p, [A, B, C])
p
Generate a random point on p
randpoint(E, p):
Hence, A, B, C and E must be on the same plane
AreCoplanar(A, B, C, E)
true
point(o, 0, 0, 0):
line(l1, [o, [1, 1/2, 1/3]]):
plane(p1, 3*x - 4*y + 5*z = 10, [x, y, z]):
plane(p2, 2*x + 2*y - 3*z = 4, [x, y, z]):
line(l2, [p1, p2]):
AreCoplanar(l1, l2)
false
AreSkewLines(l1, l2)
true
geom3d[AreSkewLines]
|
The following describes the most significant changes which occurred in the NDF_ system between versions V1.2 and V1.3 (not the current version):
New facilities have been added for handling NDF history information (see §22).
New facilities have been added to allow the automatic reading and writing of data files written in a variety of “foreign” (i.e. non-NDF) formats. These are described in a separate document (SSN/20).
A new routine NDF_OPEN has been added to provide a general means of accessing NDF datasets by name, locator, or a combination of both. It is modelled on the Fortran OPEN statement (see §20.10) and provides flexible NDF access for programmers who do not wish to use the ADAM parameter system.
The symbolic constant DAT__ROOT provided by HDS is now supported by all NDF_ routines which accept HDS locators (see §20). Use of this constant in place of an HDS locator indicates that the associated component name is in fact the full name of the HDS object (or NDF). This allows access to HDS objects by name as an alternative to using locators. The name of a foreign format data file may also be supplied using this mechanism (SSN/20).
All routines that accept the names of pre-existing NDF datasets now support subscripting, and will return an appropriate NDF section.
A new selective copy routine NDF_SCOPY has been added (see §20.9) which performs component propagation in a similar manner to NDF_PROP but does not depend on the ADAM parameter system.
The two sets of routines NDF_XGT0x and NDF_XPT0x (where “x” is I, R, D, L or C) will now accept compound component names when reading or writing the values of objects in NDF extensions. This allows direct access to values stored within nested structures or arrays in extensions (see §§11.4 & 11.5). The routine NDF_XIARY has also been similarly enhanced.
The routine NDF_TUNE has been extended to support new tuning parameters, most of which are associated with the facilities for accessing foreign data formats (see above).
Tuning parameters now acquire their default values from environment variables (see §23.3).
Due to changes in the underlying data system (HDS), locators to data objects may now be annulled freely without risk of affecting the operation of the NDF_ library.
There is no longer any need to call the routine HDS_START in standalone programs which use the NDF_ library (previously this was required), although doing so will do no harm.
Instructions for compiling and linking NDF applications on UNIX have been added to the documentation.
On UNIX systems where shareable libraries are supported, these are now installed in a separate .../share directory (rather than alongside the non-shareable libraries in the .../lib directory). You should include the appropriate .../share directory (normally /star/share on Starlink systems) in your library search path if you wish to use shareable libraries on UNIX.
The routine NDF_XNUMB now returns a guaranteed value of zero if it is called with STATUS set, or if it should fail for any reason.
A number of new error codes associated with the NDF history component and with access to foreign data formats have been added to the include file NDF_ERR.
The routine NDF_IMPRT has been documented as obsolete. Its function is now performed by NDF_FIND by specifying a blank second argument.
The routine NDF_TRACE has been documented as obsolete. Its function is now performed by NDF_TUNE via the tuning parameter ‘TRACE’.
The routines NDF_TUNE and NDF_GTUNE have been extended to support the new DOCVT tuning parameter. This allows automatic conversion of foreign format data files (SSN/20) to be disabled when not required.
The maximum number of foreign data formats that can be recognised by the NDF_ library (SSN/20) has been increased from 20 to 50.
Two new routines NDF_GTWCS and NDF_PTWCS have been provided to read and write World Coordinate System (WCS) information to an NDF. These WCS facilities are implemented using the new AST library (see SUN/210). NDF_GTWCS returns an AST pointer to a FrameSet and NDF_PTWCS expects a similar pointer to be supplied.
Note that this constitutes only a preliminary introduction of WCS facilities to the NDF library, mainly to permit the writing of data format conversion applications that support WCS information. Descriptions of the new routines are included in this document, but the main text does not yet contain an overview of the WCS facilities, for which you should consult SUN/210 at present. Further recommendations on the use of AST with the NDF_ library will be given in future, once experience with the new facilities has been gained.
The new “WCS” component is now supported by other NDF_ routines, where appropriate (e.g. NDF_PROP, NDF_RESET, NDF_SCOPY, NDF_SECT and NDF_STATE).
A bug has been fixed in the NDF_SBB routine which could occasionally cause it to access the bad-bits value for the wrong NDF.
A bug has been fixed which could result in failure to access a named NDF data structure comprising one of the AXIS components of another NDF (for example, “NDF.AXIS(2)”).
The documentation (SUN/33 and SSN/20) has been updated to reflect these changes.
The following describes the most significant changes which occurred in the NDF_ system between versions V1.4 and V1.5:
An interface to the library has been added which is callable from C (see Appendices E and F).
A new include file “ndf.h” has been added to support the C interface.
Several new error codes have been introduced to support the C interface.
References to the VMS operating system have been removed from the documentation (VMS is no longer supported by the current version of the NDF_ library).
This document (SUN/33) has been updated to reflect recent changes to the library.
Limited support for NDF array components stored in “scaled” form has been introduced. The new routine NDF_PTSZx will associate scale and zero values with an existing array component, thus converting it into a scaled array. See §12.4.
NDF section expressions can now include WCS axis values. See §16.3.
New tuning parameters (“PXT...”) can be used to suppress the default propagation of named NDF extensions by NDF_PROP and NDF_SCOPY. See NDF_TUNE.
The routine NDF_HSDAT has been added. This allows history records to be created with a specified date.
A new standard WCS Frame called FRACTION has been introduced. This Frame is created automatically by the NDF library in the same way that the GRID, PIXEL and AXIS Frames are created. The FRACTION Frame represents normalised pixel coordinates within the array: each pixel axis spans the range 0.0 to 1.0 in the FRACTION Frame.
NDF section specifiers can now use the “%” character to indicate that a value is a percentage of the entire pixel axis. Thus “m31(~50%,)” will create a section covering the central 50 percent of the NDF on the first pixel axis.
A history component will now be added automatically to the NDFs created by NDF_CREAT and NDF_NEW if the “NDF_AUTO_HISTORY” environment variable, or “AUTO_HISTORY” tuning parameter, is set to a non-zero integer.
The following describes the most significant changes which occurred in the NDF_ system between versions V1.9 and V1.10:
A new function NDF_ISIN has been added, which determines if one NDF is contained within another NDF.
The NDF_SCOPY and NDF_PROP functions now allow an asterisk to be used within the CLIST argument as a wild card to match all extension names.
The following describes the most significant changes which occurred in the NDF_ system between versions V1.10 and V1.11:
Limited support for NDF array components stored in “delta” compressed form has been introduced. The new routine NDF_ZDELT will create a delta compressed copy of an input NDF, and the new routine NDF_GTDLT will return details of the compression of a delta compressed NDF. See §12.5.
A new function NDF_ZSCAL has been added, which creates a SCALED copy of an input NDF with SIMPLE or PRIMITIVE array components.
A new function NDF_CREPL has been added, which allows an NDF placeholder to be created via a specified environment parameter.
Support for 64-bit integer data values has been added.
A new tuning parameter SECMAX has been added, which allows the maximum number of pixels within an NDF section to be specified. See §23.3.
A new routine NDF_CANCL can be used to cancel the association between an environment parameter and an NDF. This is identical to calling PAR_CANCL, except that NDF_CANCL provides an option to cancel all NDF parameters in a single call, without needing to know their names.
The following describes the most significant changes that occurred in the NDF_ system between versions V1.12 and V1.13 (the current version):
A new routine NDF_HCOPY has been added to copy history information from one NDF to another.
No changes to existing applications should be required, nor is any re-compilation or re-linking essential.
The following describes the most significant changes that occurred in the NDF_ system between versions V1.13 and V2.0:
The NDF library is now written entirely in C. However, the Fortran interface has not changed and is provided by a thin layer on top of the C library.
The only change to the C interface is that there is now no need to call ndfInit to initialise the library. This is done automatically when an application first calls an NDF function.
The C interface is now thread safe.
New functions are provided in the C interface to allow each NDF to be locked for exclusive access by a single thread.
The following describes the most significant changes that occurred in the NDF_ system between versions V2.0 and V2.1:
A new tuning parameter called ROUND has been added that controls how floating-point values are converted to integer during automatic type conversion.
The behaviour of ndfUnlock has changed. Previously, calling ndfUnlock would automatically annul all other identifiers associated with the same base NDF. This no longer happens. Such identifiers can now be used safely once the original thread has regained the lock on the base NDF.
The following describes the most significant changes that occurred in the NDF_ system between versions V2.1 and V2.2 (the current version):
It is now possible to specify the extent of an NDF section using arc-distance values (see §??).
Existing applications should be re-compiled and re-linked.
|
Charge (physics)
In physics, a charge is any of many different quantities, such as the electric charge in electromagnetism or the color charge in quantum chromodynamics. Charges correspond to the time-invariant generators of a symmetry group, and specifically, to the generators that commute with the Hamiltonian. Charges are often denoted by the letter Q, and so the invariance of the charge corresponds to the vanishing commutator
{\displaystyle [Q,H]=0}
, where H is the Hamiltonian. Thus, charges are associated with conserved quantum numbers; these are the eigenvalues q of the generator Q.
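A toy numerical illustration of the vanishing commutator (assumed 2×2 matrices, not any particular physical system): a diagonal charge operator Q commutes with a Hamiltonian H that is diagonal in the same basis, so Q's eigenvalues q = ±1 are conserved.

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(a, b):
    """[A, B] = AB - BA."""
    ab, ba = matmul(a, b), matmul(b, a)
    n = len(a)
    return [[ab[i][j] - ba[i][j] for j in range(n)] for i in range(n)]

Q = [[1, 0], [0, -1]]    # charge operator with eigenvalues q = +1, -1
H = [[2, 0], [0, 5]]     # Hamiltonian diagonal in the same basis

print(commutator(Q, H))  # [[0, 0], [0, 0]]: the charge is conserved
```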
Abstract definition
Abstractly, a charge is any generator of a continuous symmetry of the physical system under study. When a physical system has a symmetry of some sort, Noether's theorem implies the existence of a conserved current. The thing that "flows" in the current is the "charge", and the charge is the generator of the (local) symmetry group. This charge is sometimes called the Noether charge.
Thus, for example, the electric charge is the generator of the U(1) symmetry of electromagnetism. The conserved current is the electric current.
In the case of local, dynamical symmetries, associated with every charge is a gauge field; when quantized, the gauge field becomes a gauge boson. The charges of the theory "radiate" the gauge field. Thus, for example, the gauge field of electromagnetism is the electromagnetic field; and the gauge boson is the photon.
The word "charge" is often used as a synonym for both the generator of a symmetry, and the conserved quantum number (eigenvalue) of the generator. Thus, letting the upper-case letter Q refer to the generator, one has that the generator commutes with the Hamiltonian [Q, H] = 0. Commutation implies that the eigenvalues (lower-case) q are time-invariant: dq/dt = 0.
So, for example, when the symmetry group is a Lie group, then the charge operators correspond to the simple roots of the root system of the Lie algebra; the discreteness of the root system accounting for the quantization of the charge. The simple roots are used, as all the other roots can be obtained as linear combinations of these. The general roots are often called raising and lowering operators, or ladder operators.
The charge quantum numbers then correspond to the weights of the highest-weight modules of a given representation of the Lie algebra. So, for example, when a particle in a quantum field theory belongs to a symmetry, then it transforms according to a particular representation of that symmetry; the charge quantum number is then the weight of the representation.
Various charge quantum numbers have been introduced by theories of particle physics. These include the charges of the Standard Model:
The electric charge for electromagnetic interactions. In mathematics texts, this is sometimes referred to as the
{\displaystyle u_{1}}
-charge of a Lie algebra module.
Other quark-flavor charges, such as strangeness or charm. Together with the
isospin mentioned above, these generate the global SU(6) flavor symmetry of the fundamental particles; this symmetry is badly broken by the masses of the heavy quarks. Charges include the hypercharge, the X-charge and the weak hypercharge.
The hypothetical magnetic charge is another charge in the theory of electromagnetism. Magnetic charges are not seen experimentally in laboratory experiments, but would be present for theories including magnetic monopoles.
In supersymmetry:
The supercharge refers to the generator that rotates the fermions into bosons, and vice versa, in the supersymmetry.
In conformal field theory:
The central charge of the Virasoro algebra, sometimes referred to as the conformal central charge or the conformal anomaly. Here, the term 'central' is used in the sense of the center in group theory: it is an operator that commutes with all the other operators in the algebra. The central charge is the eigenvalue of the central generator of the algebra; here, it is the energy–momentum tensor of the two-dimensional conformal field theory.[1]
In gravitation:
Eigenvalues of the energy–momentum tensor correspond to physical mass.
Charge conjugation
In the formalism of particle theories, charge-like quantum numbers can sometimes be inverted by means of a charge conjugation operator called C. Charge conjugation simply means that a given symmetry group occurs in two inequivalent (but still isomorphic) group representations. It is usually the case that the two charge-conjugate representations are complex conjugate fundamental representations of the Lie group. Their product then forms the adjoint representation of the group.
Thus, a common example is that the product of two charge-conjugate fundamental representations of SL(2,C) (the spinors) forms the adjoint rep of the Lorentz group SO(3,1); abstractly, one writes
{\displaystyle 2\otimes {\overline {2}}=3\oplus 1.\ }
That is, the product of two (Lorentz) spinors is a (Lorentz) vector and a (Lorentz) scalar. Note that the complex Lie algebra sl(2,C) has a compact real form su(2) (in fact, all Lie algebras have a unique compact real form). The same decomposition holds for the compact form as well: the product of two spinors in su(2) is a vector in the rotation group O(3) plus a singlet. The decomposition is given by the Clebsch–Gordan coefficients.
A similar phenomenon occurs in the compact group SU(3), where there are two charge-conjugate but inequivalent fundamental representations, dubbed
{\displaystyle 3}
{\displaystyle {\overline {3}}}
, the number 3 denoting the dimension of the representation, and with the quarks transforming under
{\displaystyle 3}
and the antiquarks transforming under
{\displaystyle {\overline {3}}}
. The Kronecker product of the two gives
{\displaystyle 3\otimes {\overline {3}}=8\oplus 1.\ }
That is, an eight-dimensional representation, the octet of the eight-fold way, and a singlet. The decomposition of such products of representations into direct sums of irreducible representations can in general be written as
{\displaystyle \Lambda \otimes \Lambda '=\bigoplus _{i}{\mathcal {L}}_{i}\Lambda _{i}}
for representations {\displaystyle \Lambda _{i}}. The dimensions of the representations obey the "dimension sum rule":
{\displaystyle d_{\Lambda }\cdot d_{\Lambda '}=\sum _{i}{\mathcal {L}}_{i}d_{\Lambda _{i}},}
where {\displaystyle d_{\Lambda }} is the dimension of the representation {\displaystyle \Lambda }, and the integers {\displaystyle {\mathcal {L}}_{i}} are the Littlewood–Richardson coefficients. The decomposition of the representations is again given by the Clebsch–Gordan coefficients, this time in the general Lie-algebra setting.
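The dimension sum rule is easy to check numerically for the two decompositions quoted above. A minimal Python sketch (the helper name is ours, for illustration only):

```python
def dimension_sum_rule_holds(d_left, d_right, terms):
    """Check d_Lambda * d_Lambda' == sum_i L_i * d_i, where the
    decomposition is given as (multiplicity, dimension) pairs."""
    return d_left * d_right == sum(mult * dim for mult, dim in terms)

# 2 (x) 2bar = 3 (+) 1: the product of two Lorentz spinors
print(dimension_sum_rule_holds(2, 2, [(1, 3), (1, 1)]))  # True
# 3 (x) 3bar = 8 (+) 1: the quark-antiquark product in SU(3)
print(dimension_sum_rule_holds(3, 3, [(1, 8), (1, 1)]))  # True
```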
^ Fuchs, Jurgen (1992), Affine Lie Algebras and Quantum Groups, Cambridge University Press, ISBN 0-521-48412-X
|
Simplify 11/24 + 12/48 + 2/12 - 5/12. Answers with solution please - Maths - Decimals - Meritnation.com
11/24 + 12/48 + 2/12 - 5/12
= 11/24 + 12/48 + (2 - 5)/12
= 11/24 + 12/48 - 3/12

Now, the LCM of 24, 48 and 12 is 48, so we convert each fraction into an equivalent fraction with denominator 48:

11/24 + 12/48 - 3/12 = (11×2)/(24×2) + (12×1)/(48×1) - (3×4)/(12×4)
= 22/48 + 12/48 - 12/48
= (22 + 12 - 12)/48
= 22/48
= 11/24

So, 11/24 + 12/48 + 2/12 - 5/12 = 11/24.
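The arithmetic above can be verified with Python's fractions module, which keeps every intermediate value exact:

```python
from fractions import Fraction

# 11/24 + 12/48 + 2/12 - 5/12, computed exactly
result = (Fraction(11, 24) + Fraction(12, 48)
          + Fraction(2, 12) - Fraction(5, 12))
print(result)  # 11/24
```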
Aman Pratap Singh answered this
0.45833333 is the answer.
Pradyumna Chintamani answered this
12/48 + 2/12 = 3/12 + 2/12 = 5/12, and 5/12 - 5/12 = 0.
Hence 11/24 remains.
|
Finding Missing Values in Ratios | Brilliant Math & Science Wiki
A ratio is a comparison between two or more quantities. For example, for most mammals, the ratio of legs to noses is 4:1, but for humans, the ratio of legs to noses is 2:1.

Ratios can be written in fractional form, so comparing three boys with five girls could be written as 3:5 or \frac{3}{5}.
Ratios are equivalent when they simplify to the same ratio. For example, the ratios 4:10 and 14:35 are equivalent because they both simplify to 2:5.
Are 4:6 and 6:9 equivalent?

The simplified ratio of 4:6 is \frac{4}{2} : \frac{6}{2} = 2 : 3, and the simplified ratio of 6:9 is \frac{6}{3} : \frac{9}{3} = 2 : 3. Since they have the same simplified ratio, these ratios are equivalent. _\square
To find the unknown term in a ratio, we can write the ratios as fractions, and then use some fraction sense or cross-multiply to find the unknown value.
Given the equivalent ratios below, what is the value of x\,?

6: 15 = 10 : x

Expressing them as fractions, we get \frac{6}{15} = \frac{10}{x}, or in simplified form \frac{2}{5} = \frac{10}{x}. The values in the right fraction are five times the corresponding values in the left fraction, so x= 5\times 5 = 25.

Solving instead by cross-multiplication, we get

\begin{aligned} 2x &= (10)(5) \\ 2x &= 50 \\ x &= 25.\end{aligned}
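The cross-multiplication recipe can be sketched in a few lines of Python (the function name is ours, for illustration):

```python
from fractions import Fraction

def missing_ratio_term(a, b, c):
    """Solve a : b = c : x by cross-multiplication: a*x = b*c."""
    return Fraction(b * c, a)

# 6 : 15 = 10 : x
x = missing_ratio_term(6, 15, 10)
print(x)  # 25
```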
Every 3 shelves require 18 screws. How many screws are needed for 4 shelves? How many shelves can we build with 42 screws?
We can use equivalent ratios to find the missing values. From the first two rows of the table, we know that 18:3 = \,? : 4. Since 18:3 simplifies to 6:1, every shelf requires 6 screws. Therefore, four shelves require 4 \times 6 = 24 screws.

Since every shelf requires 6 screws, we can build 42 \div 6 = 7 shelves with 42 screws.
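The screws-and-shelves reasoning reduces to unit-rate arithmetic, which can be checked directly:

```python
screws_per_shelf = 18 // 3                 # simplify 18:3 to 6:1
screws_for_four = 4 * screws_per_shelf     # screws needed for 4 shelves
shelves_from_42 = 42 // screws_per_shelf   # shelves buildable from 42 screws
print(screws_for_four, shelves_from_42)    # 24 7
```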
Find N given that 4 : N = N : 9.

Writing the ratios as fractions gives

\frac{4}{N} = \frac{N}{9}.

Cross-multiplying, we get 4 \times 9 = N \times N, so 36 = N^2. Hence, this has solutions N = \pm 6.
The ratio 2 to 3 is equivalent to the ratio 6 : N. Find N.
The ratio of girls to boys in a class is 4 to 5. If there are a total of 27 students in a class, how many boys are there in the class?
Cite as: Finding Missing Values in Ratios. Brilliant.org. Retrieved from https://brilliant.org/wiki/finding-missing-values-in-ratios/
|
Performance Considerations - MATLAB & Simulink - MathWorks Australia
Managing Memory with Outputs
Managing Memory Using End-of-Period Processing Functions
Optimizing Accuracy: About Solution Precision and Error
Example: Improving Solution Accuracy
There are two general approaches for managing memory when solving most problems supported by the SDE engine:
Perform a traditional simulation to simulate the underlying variables of interest, specifically requesting and then manipulating the output arrays.
This approach is straightforward and the best choice for small or medium-sized problems. Since its outputs are arrays, it is convenient to manipulate simulated results in the MATLAB® matrix-based language. However, as the scale of the problem increases, the benefit of this approach decreases, because the output arrays must store large quantities of possibly extraneous information.
For example, consider pricing a European option in which the terminal price of the underlying asset is the only value of interest. To ease the memory burden of the traditional approach, reduce the number of simulated periods specified by the required input NPeriods and specify the optional input NSteps. This enables you to manage memory without sacrificing accuracy (see Optimizing Accuracy: About Solution Precision and Error).
In addition, simulation methods can determine the number of output arguments and allocate memory accordingly. Specifically, all simulation methods support the same output argument list:
[Paths,Times,Z]
where Paths and Z can be large, three-dimensional time series arrays. However, the underlying noise array is typically unnecessary, and is only stored if requested as an output. In other words, Z is stored only at your request; do not request it if you do not need it.
If you need the output noise array Z, but do not need the Paths time series array, then you can avoid storing Paths two ways:
It is best practice to use the ~ output argument placeholder. For example, use the following output argument list to store Z and Times, but not Paths:
[~,Times,Z]
Use the optional input flag StorePaths, which all simulation methods support. By default, Paths is stored (StorePaths = true). However, setting StorePaths to false returns Paths as an empty matrix.
Specify one or more end-of-period processing functions to manage and store only the information of interest, avoiding simulation outputs altogether.
This approach requires you to specify one or more end-of-period processing functions, and is often the preferred approach for large-scale problems. This approach allows you to avoid simulation outputs altogether. Since no outputs are requested, the three-dimensional time series arrays Paths and Z are not stored.
This approach often requires more effort, but is far more elegant and allows you to customize tasks and dramatically reduce memory usage. See Pricing Equity Options.
The following approaches improve performance when solving SDE problems:
Specifying model parameters as traditional MATLAB arrays and functions, in various combinations. This provides a flexible interface that can support virtually any general nonlinear relationship. However, while functions offer a convenient and elegant solution for many problems, simulations typically run faster when you specify parameters as double-precision vectors or matrices. Thus, it is a good practice to specify model parameters as arrays when possible.
Use models that have overloaded Euler simulation methods, when possible. Brownian motion (BM) and geometric Brownian motion (GBM) models provide overloaded Euler simulation methods that take advantage of separable, constant-parameter models. These specialized methods are exceptionally fast, but are only available to models with constant parameters that are simulated without specifying end-of-period processing and noise generation functions.
Replace the simulation of a constant-parameter, univariate model derived from the SDEDDO class with that of a diagonal multivariate model. Treat the multivariate model as a portfolio of univariate models. This increases the dimensionality of the model and enhances performance by decreasing the effective number of simulation trials.
This technique is applicable only to constant-parameter univariate models without specifying end-of-period processing and noise generation functions.
Take advantage of the fact that simulation methods are designed to detect the presence of NaN (not a number) conditions returned from end-of-period processing functions. A NaN represents the result of an undefined numerical calculation, and any subsequent calculation based on a NaN produces another NaN. This helps improve performance in certain situations. For example, consider simulating paths of the underlier of a knock-out barrier option (that is, an option that becomes worthless when the price of the underlying asset crosses some prescribed barrier). Your end-of-period function could detect a barrier crossing and return a NaN to signal early termination of the current trial.
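The NaN early-termination idea is not MATLAB-specific. A minimal Python sketch (our own function name and a simple scalar Euler recursion, not the toolbox's implementation) of a knock-out trial might look like:

```python
import numpy as np

def euler_gbm_knockout(mu, sigma, x0, dt, n_periods, barrier, rng):
    """Euler recursion for GBM; as soon as the barrier is crossed, the
    rest of the path is filled with NaN to signal early termination."""
    x = np.empty(n_periods + 1)
    x[0] = x0
    for t in range(n_periods):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[t + 1] = x[t] * (1.0 + mu * dt) + sigma * x[t] * dw
        if x[t + 1] >= barrier:                    # knock-out event
            x[t + 1:] = np.nan                     # abandon this trial
            break
    return x

rng = np.random.default_rng(0)
path = euler_gbm_knockout(0.1, 0.4, 100.0, 1/250, 1000, 150.0, rng)
```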
The simulation architecture does not, in general, simulate exact solutions to any SDE. Instead, the simulation architecture provides a discrete-time approximation of the underlying continuous-time process, a simulation technique often known as an Euler approximation.
In the most general case, a given simulation derives directly from an SDE. Therefore, the simulated discrete-time process approaches the underlying continuous-time process only in the limit as the time increment dt approaches zero. In other words, the simulation architecture places more importance on ensuring that the probability distributions of the discrete-time and continuous-time processes are close, than on the pathwise proximity of the processes.
Before illustrating techniques to improve the approximation of solutions, it is helpful to understand the source of error. Throughout this architecture, all simulation methods assume that model parameters are piecewise constant over any time interval of length dt. In fact, the methods even evaluate dynamic parameters at the beginning of each time interval and hold them fixed for the duration of the interval. This sampling approach introduces discretization error.
However, there are certain models for which the piecewise constant approach provides exact solutions:
Creating Brownian Motion (BM) Models with constant parameters, simulated by Euler approximation (simByEuler).
Creating Geometric Brownian Motion (GBM) Models with constant parameters, simulated by closed-form solution (simBySolution).
Creating Hull-White/Vasicek (HWV) Gaussian Diffusion Models with constant parameters, simulated by closed-form solution (simBySolution)
More generally, you can simulate the exact solutions for these models even if the parameters vary with time, if they vary in a piecewise constant way such that parameter changes coincide with the specified sampling times. However, such exact coincidence is unlikely; therefore, the previously discussed constant parameter condition is commonly used in practice.
One obvious way to improve accuracy is to sample the discrete-time process more frequently. This decreases the time increment (dt), causing the sampled process to more closely approximate the underlying continuous-time process. Although decreasing the time increment is universally applicable, there is a tradeoff among accuracy, run-time performance, and memory usage.
To manage this tradeoff, specify an optional input argument, NSteps, for all simulation methods. NSteps indicates the number of intermediate time steps within each time increment dt, at which the process is sampled but not reported.
It is important and convenient at this point to emphasize the relationship of the inputs NSteps, NPeriods, and DeltaTime to the output vector Times, which represents the actual observation times at which the simulated paths are reported.
NPeriods, a required input, indicates the number of simulation periods of length DeltaTime, and determines the number of rows in the simulated three-dimensional Paths time series array (if an output is requested).
DeltaTime is optional, and indicates the corresponding NPeriods-length vector of positive time increments between successive samples. It represents the familiar dt found in stochastic differential equations. If DeltaTime is unspecified, the default value of 1 is used.
NSteps is also optional, and is only loosely related to NPeriods and DeltaTime. NSteps specifies the number of intermediate time steps within each time increment DeltaTime.
Specifically, each time increment DeltaTime is partitioned into NSteps subintervals of length DeltaTime/NSteps each, and refines the simulation by evaluating the simulated state vector at (NSteps - 1) intermediate times. Although the output state vector (if requested) is not reported at these intermediate times, this refinement improves accuracy by causing the simulation to more closely approximate the underlying continuous-time process. If NSteps is unspecified, the default is 1 (to indicate no intermediate evaluation).
The output Times is an NPeriods + 1-length column vector of observation times associated with the simulated paths. Each element of Times is associated with a corresponding row of Paths.
The following example illustrates this intermediate sampling by comparing the difference between a closed-form solution and a sequence of Euler approximations derived from various values of NSteps.
Consider a univariate geometric Brownian motion (GBM) model using gbm with constant parameters:
d{X}_{t}=0.1{X}_{t}dt+0.4{X}_{t}d{W}_{t}.
Assume that the expected rate of return and volatility parameters are annualized, and that a calendar year comprises 250 trading days.
Simulate approximately four years of univariate prices for both the exact solution and the Euler approximation for various values of NSteps:
nPeriods = 1000;   % about 4 years of daily data (250 trading days/year)
dt       = 1/250;  % time increment, in years
obj      = gbm(0.1,0.4,'StartState',100);

% Reset the generator before each call so that every simulation uses the
% same sequence of Gaussian draws, allowing pathwise comparison:
rng(0); [X1,T1] = simBySolution(obj,nPeriods,'DeltaTime',dt);
rng(0); [Y1,T1] = simByEuler(obj,nPeriods,'DeltaTime',dt);
rng(0); [X2,T2] = simBySolution(obj,nPeriods,'DeltaTime',dt,'nSteps',2);
rng(0); [Y2,T2] = simByEuler(obj,nPeriods,'DeltaTime',dt,'nSteps',2);
rng(0); [X3,T3] = simBySolution(obj,nPeriods,'DeltaTime',dt,'nSteps',10);
rng(0); [Y3,T3] = simByEuler(obj,nPeriods,'DeltaTime',dt,'nSteps',10);
rng(0); [X4,T4] = simBySolution(obj,nPeriods,'DeltaTime',dt,'nSteps',100);
rng(0); [Y4,T4] = simByEuler(obj,nPeriods,'DeltaTime',dt,'nSteps',100);
Compare the error (the difference between the exact solution and the Euler approximation) graphically:
plot(T1,X1 - Y1,'red'), hold('on')
plot(T2,X2 - Y2,'blue')
plot(T3,X3 - Y3,'green')
plot(T4,X4 - Y4,'black')
xlabel('Observation Time')
ylabel('Price Difference')
title('Exact Solution Minus Euler Approximation')
legend({'# of Steps = 1' '# of Steps = 2' ...
'# of Steps = 10' '# of Steps = 100'},...
'Location','Best')
hold('off')
whos T X Y
As expected, the simulation error decreases as the number of intermediate time steps increases. Because the intermediate states are not reported, all simulated time series have the same number of observations regardless of the actual value of NSteps.
Furthermore, since the exact solutions are correct for any number of intermediate time steps, you might conclude that the additional intermediate computations are unnecessary for them. In fact, this assessment is correct: the exact solutions are sampled at intermediate times only to ensure that each simulation uses the same sequence of Gaussian random variates in the same order. Without this assurance, there is no way to compare simulated prices on a pathwise basis. However, there might be valid reasons for sampling exact solutions at closely spaced intervals, such as pricing path-dependent options.
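The role of intermediate substeps can also be sketched outside MATLAB. The following Python sketch (our own helper, assuming a simple scalar Euler recursion) refines each period into substeps while reporting only period endpoints, so the output length is independent of the refinement, just as described above:

```python
import numpy as np

def euler_endpoints(mu, sigma, x0, delta_t, n_periods, n_steps, z):
    """Euler recursion for dX = mu*X dt + sigma*X dW, refined with
    n_steps substeps per period; only period endpoints are reported."""
    h = delta_t / n_steps                # substep length DeltaTime/NSteps
    x = x0
    out = [x0]
    for t in range(n_periods):
        for s in range(n_steps):
            x = x * (1.0 + mu * h + sigma * np.sqrt(h) * z[t, s])
        out.append(x)                    # intermediate states are dropped
    return np.array(out)

z = np.random.default_rng(1).standard_normal((1000, 10))
coarse = euler_endpoints(0.1, 0.4, 100.0, 1/250, 1000, 1, z[:, :1])
fine = euler_endpoints(0.1, 0.4, 100.0, 1/250, 1000, 10, z)
# Both outputs have n_periods + 1 observations, regardless of n_steps.
```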
sde | bm | gbm | merton | bates | drift | diffusion | sdeddo | sdeld | cev | cir | heston | hwv | sdemrd | ts2func | simulate | simByEuler | interpolate | simByQuadExp | simBySolution
|
De Moivre's Theorem | Brilliant Math & Science Wiki
De Moivre's theorem gives a formula for computing powers of complex numbers. We first gain some intuition for de Moivre's theorem by considering what happens when we multiply a complex number by itself.
Recall that using the polar form, any complex number z=a+ib can be written as

z = r ( \cos \theta + i \sin \theta ),

where

\begin{array}{rl} \mbox{Absolute value: } & r = \sqrt{ a^2 + b^2 } \\ \mbox{Argument } \theta \text{ subject to: } & \cos{\theta} = \frac{a}{r},\ \sin{\theta}=\frac{b}{r}. \end{array}

Then squaring the complex number z, we have
\begin{aligned} z^2 &= \big( r ( \cos \theta + i \sin \theta )\big) ^2\\ &= r^2 \left( \cos \theta + i \sin \theta \right)^2\\ &= r^2 \left( \cos \theta \cos \theta + i \sin \theta \cos \theta + i \sin \theta \cos \theta + i^2 \sin \theta \sin \theta \right) \\ &= r^2 \big( ( \cos \theta \cos \theta - \sin \theta \sin \theta ) + i ( \sin \theta \cos \theta + \sin \theta \cos \theta )\big) \\ &= r^2 \left( \cos 2\theta + i \sin 2\theta \right). \end{aligned}
This shows that by squaring a complex number, the absolute value is squared and the argument is multiplied by 2. For n \geq 3, de Moivre's theorem generalizes this to show that to raise a complex number to the n^\text{th} power, the absolute value is raised to the n^\text{th} power and the argument is multiplied by n.

De Moivre's Theorem: For any real number \theta and integer n,

\big( r ( \cos \theta + i \sin \theta )\big)^n = r^n \big( \cos ( n \theta) + i \sin (n \theta) \big).
We will prove this by induction on n.
\begin{aligned} z^{n} &= \big(r(\cos(\theta) + i\sin(\theta)\big)^{n}\\ &= r^{n}\big(\cos(\theta) + i\sin(\theta)\big)^{n}. \end{aligned}
Let's focus on the second part: \big(\cos(\theta) + i\sin(\theta)\big)^{n}. For n = 1, we have

\big(\cos(\theta) + i\sin(\theta)\big)^{1} = \cos(1\cdot \theta) + i\sin(1\cdot \theta),

which is true. We can assume the same formula is true for n = k, i.e.

\big(\cos(\theta) + i\sin(\theta)\big)^{k} = \cos(k\theta) + i\sin(k\theta).

Then for n = k + 1, we expect to have

\big(\cos(\theta) + i\sin(\theta)\big)^{k + 1} = \cos\big((k + 1)\theta\big) + i\sin\big((k + 1)\theta\big).
\begin{aligned} \big(\cos(\theta) + i\sin(\theta)\big)^{k + 1} &= \big(\cos(\theta) + i\sin(\theta)\big)^{k}\big(\cos(\theta) + i\sin(\theta)\big)^{1}\\ &= \big(\cos(k\theta) + i\sin(k\theta)\big)\big(\cos(1\cdot \theta) + i\sin(1\cdot \theta)\big) && (\text{We assume this to be true for } n = k.)\\ &= \cos(k\theta)\cos(\theta) + \cos(k\theta)i\sin(\theta) + i\sin(k\theta)\cos(\theta) + i^{2}\sin(k\theta)\sin(\theta) && (\text{We have } i^{2} = -1.)\\ &= \cos(k\theta)\cos(\theta) - \sin(k\theta)\sin(\theta) + i\big(\cos(k\theta)\sin(\theta) + \sin(k\theta)\cos(\theta)\big)\\ &= \cos(k\theta + \theta) + i\sin(k\theta + \theta) && (\text{by the angle addition formulas})\\ &= \cos\big((k+1)\theta\big) + i\sin\big((k+1)\theta\big). \end{aligned}
So for n = k + 1, we indeed have

\big(\cos(\theta) + i\sin(\theta)\big)^{k + 1} = \cos\big((k+1)\theta\big) + i\sin\big((k+1)\theta\big).

As the theorem is true for n = 1, and its truth for n = k implies its truth for n = k + 1, it is true for all n \geq 1. _\square
Note that in de Moivre's theorem, the complex number is in the form z = r ( \cos \theta + i \sin \theta ). For complex numbers in the general form z = a + bi, it may be necessary to first compute the absolute value and argument to convert z to the form r ( \cos \theta + i \sin \theta ) before applying de Moivre's theorem.
Raising to a Power - Basic
Raising to a Power - Intermediate
If \alpha and \beta are the roots of x^2 + x + 1 = 0, then the product of the roots of the equation whose roots are \alpha^{19} and \beta ^7 is \text{\_\_\_\_\_\_\_\_\_\_}.
What is ( 1 - i )^{6}?

In order to express z = 1 - i in the form r (\cos \theta + i \sin \theta), we calculate the absolute value r and argument \theta as follows:

\begin{aligned} \mbox{Absolute value}: & r = \sqrt{ 1^2 + (-1) ^2 } = \sqrt{2} \\ \mbox{Argument}: & \theta = \arctan \frac{-1 }{1} = -\frac{\pi}{4}. \end{aligned}
Now, applying de Moivre's theorem, we obtain
\begin{aligned} z^{6} &= \left[ \sqrt{2} \left( \cos \left( -\frac{ \pi}{4} \right) + i \sin \left( -\frac{\pi}{4} \right) \right) \right]^{6} \\ &= \sqrt{2}^{6} \left[ \cos \left(- \frac{ 6\pi } { 4} \right) + i \sin \left(- \frac{6\pi}{4}\right) \right] \\ &= 2^3 \left[ \cos \left(- \frac{ 3\pi }{2} \right) + i \sin \left( - \frac{3\pi}{2} \right) \right] \\ &= 8 ( 0 + 1 i ) \\ &= 8i.\ _\square \end{aligned}
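The result can be double-checked numerically, for instance with Python's cmath module:

```python
import cmath

z = 1 - 1j
r, theta = abs(z), cmath.phase(z)        # r = sqrt(2), theta = -pi/4
# de Moivre: z**6 = r**6 * (cos(6*theta) + i*sin(6*theta))
w = r**6 * (cmath.cos(6 * theta) + 1j * cmath.sin(6 * theta))
# w agrees with direct exponentiation z**6; both are approximately 8i
```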
Evaluate \left( \frac{\sqrt{2}}{2}+ \frac{\sqrt{2}}{2} i \right)^{1000} .

In order to express z = \frac{\sqrt{2}}{2}+ \frac{\sqrt{2}}{2} i in the form r (\cos \theta + i \sin \theta), we calculate the absolute value r and argument \theta as follows:

\begin{aligned} \mbox{Absolute value}: & r = \sqrt{ \left( \frac{\sqrt{2}}{2}\right)^2 + \left( \frac{\sqrt{2}}{2}\right)^2 } = 1 \\ \mbox{Argument}: & \theta = \arctan 1 = \frac{\pi}{4}. \end{aligned}

Then applying de Moivre's theorem, we obtain
\begin{aligned} z^{1000} &= \Bigg( \cos \left( \frac{ \pi}{4} \right) + i \sin \left( \frac{\pi}{4} \right) \Bigg)^{1000} \\ &= \cos \left( \frac{ 1000\pi }{ 4} \right) + i \sin \left( \frac{1000\pi}{4} \right) \\ &= \cos 250\pi + i \sin 250 \pi \\ &= \cos (0 + 125 \times 2\pi) + i \sin (0 + 125 \times 2\pi)\\ & = 1.\ _\square \end{aligned}
Evaluate \big( 1 + \sqrt{3} i \big)^{2013}.

In order to express z = 1 + \sqrt{3} i in the form r (\cos \theta + i \sin \theta), we calculate the absolute value r and argument \theta as follows:

\begin{aligned} \mbox{Absolute value}: & r = \sqrt{ 1^2 + \big(\sqrt{3}\big) ^2 } = \sqrt{4} = 2 \\ \mbox{Argument}: & \theta = \arctan \frac{\sqrt{3} } {1} = \frac{\pi}{3}. \end{aligned}

Then applying de Moivre's theorem, we obtain
\begin{aligned} z^{2013} &= \Bigg( 2 \left( \cos \frac{ \pi}{3} + i \sin \frac{\pi}{3} \right) \Bigg)^{2013} \\ &= 2^{2013} \left( \cos \frac{ 2013 \pi } { 3} + i \sin \frac{2013\pi}{3} \right) \\ &= 2^{2013} ( - 1 + 0 i ) \\&= - 2^{2013}.\ _\square \end{aligned}
De Moivre's Theorem: For any real number x and integer n,

( \cos x + i \sin x )^n = \cos ( nx) + i \sin (nx).

Proof: We prove this formula by induction on n and by applying the trigonometric sum and product formulas. We first consider the non-negative integers. The base case n=0 is clearly true. For the induction step, observe that
\begin{array} { l l } ( \cos x + i \sin x)^{k+1} & = (\cos x + i \sin x )^k \times ( \cos x + i \sin x ) \\ & = \big( \cos (kx) + i \sin (kx) \big) ( \cos x + i \sin x ) \\ & = \cos (kx) \cos x - \sin(kx) \sin x + i\big( \sin (kx) \cos x + \cos(kx) \sin x\big) \\ & = \cos \big[(k+1)x\big] + i \sin \big[(k+1)x\big].\ _\square \end{array}
Note that the proof above is only valid for integers n. There is a more general version, in which n is allowed to be a complex number. In this case, the left-hand side is a multi-valued function, and the right-hand side is one of its possible values.
Euler's formula for complex numbers states that if z is a complex number with absolute value r_z and argument \theta_z, then

z = r_z e^{i \theta_z}.
The proof of this is best approached using the (Maclaurin) power series expansion and is left to the interested reader. With this, we have another proof of De Moivre's theorem that directly follows from the multiplication of complex numbers in polar form.
Show that \cos (5\theta) = \cos^5 \theta - 10 \cos^3 \theta \sin^2 \theta + 5 \cos \theta \sin^4 \theta.

Applying de Moivre's theorem for n= 5, we have

\cos (5 \theta) + i \sin ( 5 \theta) = ( \cos \theta + i \sin \theta) ^ 5 .
Expand the RHS using the binomial theorem and compare real parts to obtain
\cos ( 5 \theta) = \cos^5 \theta - 10 \cos^3 \theta \sin^2 \theta + 5 \cos \theta \sin^4 \theta.\ _\square
Note: For an integer n, we can express \cos ( n \theta) solely in terms of \cos \theta by using the identity \sin^2 \theta = 1 - \cos^2 \theta. The resulting polynomial is known as the Chebyshev polynomial of the first kind.
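A quick numerical spot-check of the identity at an arbitrary angle:

```python
import math

theta = 0.7                                   # an arbitrary test angle
c, s = math.cos(theta), math.sin(theta)
lhs = math.cos(5 * theta)
rhs = c**5 - 10 * c**3 * s**2 + 5 * c * s**4
# lhs and rhs agree to floating-point precision
```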
Evaluate \sin (0\theta) + \sin (1 \theta) + \sin (2 \theta) + \cdots + \sin (n \theta).
Applying De Moivre's formula, this is equivalent to the imaginary part of
( \cos \theta + i \sin \theta)^0 + ( \cos \theta + i \sin \theta)^1 + ( \cos \theta + i \sin \theta) ^2 + \cdots + ( \cos \theta + i \sin \theta)^n.
Interpreting this as a geometric progression, the sum is
\frac{ (\cos \theta + i \sin \theta)^{n+1} -1} {( \cos \theta + i \sin \theta) - 1 },

as long as the ratio is not 1, which means \theta \neq 2k \pi. \big(Note that in this case, each term \sin (k\theta) is 0, and hence the sum is 0.\big)
Converting this to polar form, we obtain
\frac{ e^{i (n+1) \theta} - 1 } { e^{i\theta} -1 } = \frac{ e^{ i \left( \frac{n+1}{2} \right)\theta} } {e^{i \frac{1}{2} \theta} } \times \frac{e^{ i \left( \frac{n+1}{2} \right)\theta} - e^{ - i \left( \frac{n+1}{2} \right)\theta} } { e^{ i \frac{1}{2} \theta} - e^{-i \frac{1}{2} \theta} } = e^{ i\frac{n}{2} \theta} \frac{2i \sin \left[ ( \frac{n+1}{2})\theta \right] } { 2i \sin \left(\frac{1}{2} \theta\right)} .
Taking imaginary parts, we obtain
\frac{ \sin \left( \frac{n}{2} \theta \right) \sin \left( \frac{n+1}{2} \theta \right) } { \sin \left( \frac{1}{2} \theta \right) }.\ _\square
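The closed form just derived can be spot-checked against direct summation, for instance in Python:

```python
import math

def sine_sum_direct(theta, n):
    """sin(0*theta) + sin(1*theta) + ... + sin(n*theta), term by term."""
    return sum(math.sin(k * theta) for k in range(n + 1))

def sine_sum_closed(theta, n):
    """The closed form obtained above from the geometric progression."""
    return (math.sin(n * theta / 2) * math.sin((n + 1) * theta / 2)
            / math.sin(theta / 2))

# The two evaluations agree for any theta not a multiple of 2*pi.
direct = sine_sum_direct(0.9, 12)
closed = sine_sum_closed(0.9, 12)
```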
An n^\text{th} root of unity is a complex solution z of the equation

z^n = 1.

Suppose complex number z = a + bi is a solution to this equation, and consider the polar representation z = r e^{i\theta}, where r = \sqrt{a^2 + b^2} and \tan \theta = \frac{b}{a},\ 0 \leq \theta < 2\pi. Then, by de Moivre's theorem, we have

1 = z^n = \big(r e^{i\theta} \big) ^n = r^n (\cos \theta + i \sin \theta)^n = r^n (\cos n \theta + i \sin n \theta).
This implies r^n = 1, and since r is a real, non-negative number, we have r = 1. Also, n \theta = 2k \pi, so \theta = \frac{2k \pi}{n} for some integer k. Now, the values k = 0, 1, 2, \ldots, n-1 give distinct values of \theta and, for any other value of k, we can add or subtract an integer multiple of n to reduce to one of these values of \theta. Hence, the n^\text{th} roots of unity are

e^{\frac{2k\pi }{ n} i} = \cos \left( \frac{2k\pi }{ n } \right) + i \sin \left( \frac{2k\pi }{ n } \right) \text{ for } k = 0, 1, 2, \ldots, n-1.
Observe that this gives n distinct n^\text{th} roots of unity, as we know from the fundamental theorem of algebra. Since all of the complex roots of unity have absolute value 1, these points all lie on the unit circle. Furthermore, since the angle between any two consecutive roots is \frac{2\pi}{n}, the complex roots of unity are evenly spaced around the unit circle.
What are the complex solutions to the equation z = \sqrt[3]{1}?

Cubing both sides gives z^3 = 1, so z is a 3^\text{rd} root of unity. By the above, the 3^\text{rd} roots of unity are

e^{ \frac{2k\pi }{ 3 } i} = \cos \left( \frac{2k\pi }{ 3} \right) + i \sin \left( \frac{2k\pi }{ 3 } \right) \text{ for } k = 0,1,2.

This gives the roots of unity 1, e^{\frac{2\pi}{3} i}, e^{\frac{4\pi}{3} i}, or equivalently

1,\quad -\frac{1}{2} + \frac{\sqrt{3}}{2} i,\quad -\frac{1}{2} - \frac{ \sqrt{3}}{2}i.\ _\square
Note: Another way to solve this equation would be to factorize z^3 -1 = (z-1) (z^2 + z + 1). Then the solutions are z=1 and the solutions to the quadratic equation z^2 + z + 1=0, which can be found using the quadratic formula.
Given positive integer n, let \zeta = e^{\frac{2k\pi }{ n} i } for some k = 1, 2, \ldots, n-1, so that \zeta is one of the n^\text{th} roots of unity that is not equal to 1. Show that

1 + \zeta + \zeta^2 + \cdots + \zeta^{n-1} = 0.

Since \zeta is an n^\text{th} root of unity, we have \zeta^n = 1, so

0 = 1 - \zeta^n = (1- \zeta)\big( 1 + \zeta + \zeta^2 + \cdots + \zeta^{n-1}\big).

Since \zeta \ne 1, it follows that 1 + \zeta + \zeta^2 + \cdots + \zeta^{n-1} = 0.\ _\square
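Both facts, that each power of \zeta is a root of unity and that the roots sum to zero, are easy to confirm numerically:

```python
import cmath

n = 5
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
powers = [z**n for z in roots]   # each is (numerically) equal to 1
total = sum(roots)               # 1 + zeta + ... + zeta^(n-1), close to 0
```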
If 1,\delta_{1},\delta_{2},\delta_{3} are distinct fourth roots of unity, then evaluate

\dfrac{31-2\delta_{1}}{1-2\delta_{1}} +\dfrac{31-2\delta_{2}}{1-2\delta_{2}}+\dfrac{31-2\delta_{3}}{1-2\delta_{3}}.
Cite as: De Moivre's Theorem. Brilliant.org. Retrieved from https://brilliant.org/wiki/de-moivres-theorem/
Learn more in our Complex Numbers course, built by experts for you.
|
Signals and Systems/Engineering Functions - Wikibooks, open books for an open world
Signals and Systems/Engineering Functions
Oftentimes, complex signals can be simplified as linear combinations of certain basic functions (a key concept in Fourier analysis), which are useful to the field of engineering. These functions will be described here, and studied more in the following chapters.
Unit Step Function[edit | edit source]
The unit step function and the impulse function are considered to be fundamental functions in engineering, and it is strongly recommended that the reader becomes very familiar with both of these functions.
Shifted Unit Step function
The unit step function, also known as the Heaviside function, is defined as such:
{\displaystyle u(t)=\left\{{\begin{matrix}0,&{\mbox{if }}t<0\\1,&{\mbox{if }}t\geq 0\end{matrix}}\right.}
Sometimes, u(0) is given other values, usually 0 or 1/2. For many applications, it is irrelevant what the value at zero is, and u(0) is often left undefined.
Derivative[edit | edit source]
The unit step function is level in all places except for a discontinuity at t = 0. For this reason, the derivative of the unit step function is 0 at all points t, except where t = 0. Where t = 0, the derivative of the unit step function is infinite.
The derivative of a unit step function is called an impulse function. The impulse function will be described in more detail next.
The integral of a unit step function is computed as such:
{\displaystyle \int _{-\infty }^{t}u(s)ds=\left\{{\begin{matrix}0,&{\mbox{if }}t<0\\\int _{0}^{t}ds=t,&{\mbox{if }}t\geq 0\end{matrix}}\right\}=tu(t)}
In other words, the integral of a unit step is a "ramp" function. This function is 0 for all values that are less than zero, and becomes a straight line at zero with a slope of +1.
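This can be illustrated numerically: a running (cumulative) sum of step samples, scaled by the sample spacing, approximates the ramp tu(t). A small Python sketch:

```python
import numpy as np

t = np.linspace(-2.0, 2.0, 4001)    # uniform grid, spacing dt = 0.001
dt = t[1] - t[0]
u = (t >= 0.0).astype(float)        # samples of the unit step
ramp = np.cumsum(u) * dt            # running integral: ~0 for t < 0,
                                    # then a straight line of slope +1
```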
Time Inversion[edit | edit source]
If we want to reverse the unit step function, we can flip it around the y-axis as such: u(-t). With a little bit of manipulation, we can come to an important result:
{\displaystyle u(-t)=1-u(t)}
{\displaystyle t\neq 0}
Other Properties[edit | edit source]
Here we will list some other properties of the unit step function:
{\displaystyle u(\infty )=1}
{\displaystyle u(-\infty )=0}
{\displaystyle u(t)+u(-t)=1}
{\displaystyle t\neq 0}
These are all important results, and the reader should be familiar with them.
Impulse Function[edit | edit source]
An impulse function is a special function that is often used by engineers to model certain events. An impulse function is not realizable, in that by definition the output of an impulse function is infinity at certain values. An impulse function is also known as a "delta function", although there are different types of delta functions that each have slightly different properties. Specifically, this unit-impulse function is known as the Dirac delta function. The term "Impulse Function" is unambiguous, because there is only one definition of the term "Impulse".
Let's start by drawing out a rectangle function, D(t), as such:
We can define this rectangle in terms of the unit step function:
{\displaystyle D(t)={\frac {1}{A}}[u(t+A/2)-u(t-A/2)]}
Now, we want to analyze this rectangle, as A becomes infinitesimally small. We can define this new function, the delta function, in terms of this rectangle:
{\displaystyle \delta (t)=\lim _{A\to 0}{\frac {1}{A}}[u(t+A/2)-u(t-A/2)]}
We can similarly define the delta function piecewise, as such:
{\displaystyle \delta (t)=0{\mbox{ for }}t\neq 0}
{\displaystyle \delta (t)=+\infty {\mbox{ for }}t=0}
{\displaystyle \int _{-\infty }^{\infty }\delta (t)dt=1}
This definition, however, is less rigorous than the previous one.
Integration[edit | edit source]
From its definition it follows that the integral of the impulse function is just the step function:
{\displaystyle \int \delta (t)dt=u(t)}
Thus, defining the derivative of the unit step function as the impulse function is justified.
Shifting Property[edit | edit source]
Furthermore, for an integrable function f:
{\displaystyle \int _{-\infty }^{\infty }\delta (t-A)f(t)dt=f(A)}
This is known as the shifting property (also known as the sifting property or the sampling property) of the delta function; it effectively samples the value of the function f, at location A.
The delta function has many uses in engineering, and one of the most important uses is to sample a continuous function into discrete values.
Using this property, we can extract a single value from a continuous function by multiplying with an impulse, and then integrating.
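This sampling behavior can be checked numerically by substituting the narrow unit-area rectangle from the definition above for the impulse (a Python/NumPy sketch; the pulse width and grid spacing are arbitrary illustration choices):

```python
import numpy as np

def delta_approx(t, width=1e-4):
    # Unit-area rectangle (1/A)[u(t + A/2) - u(t - A/2)] with A = width
    return np.where(np.abs(t) < width / 2, 1.0 / width, 0.0)

# Sample f(t) = cos(t) at t = A by integrating f(t) * delta(t - A)
A = 2.0
t = np.linspace(A - 0.01, A + 0.01, 200_001)  # fine grid around the pulse
dt = t[1] - t[0]
sampled = np.sum(np.cos(t) * delta_approx(t - A)) * dt
# sampled is approximately cos(2.0)
```

As the rectangle width shrinks (with a correspondingly finer grid), the integral converges to f(A).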
Types of Delta
There are a number of different functions that are all called "delta functions". These functions generally all look like an impulse, but there are some differences. Generally, this book uses the term "delta function" to refer to the Dirac Delta Function.
w:Dirac delta function
w:Kronecker delta
Sinc Function
There is a particular form that appears so frequently in communications engineering, that we give it its own name. This function is called the "Sinc function" and is discussed below:
The Sinc function is defined in the following manner:
{\displaystyle \operatorname {sinc} (x)={\frac {\sin(\pi x)}{\pi x}}{\mbox{ if }}x\neq 0}
{\displaystyle \operatorname {sinc} (0)=1}
The value of sinc(x) is defined as 1 at x = 0, since
{\displaystyle \lim _{x\rightarrow 0}\operatorname {sinc} (x)=1}
This fact can be proven by noting that for x near 0,
{\displaystyle 1>{\frac {\sin {(x)}}{x}}>\cos {(x)}}
Then, since cos(0) = 1, we can apply the Squeeze Theorem to show that the sinc function approaches one as x goes to zero. Thus, defining sinc(0) to be 1 makes the sinc function continuous.
Also, the Sinc function approaches zero as x goes towards infinity, with the envelope of sinc(x) tapering off as 1/x.
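This definition translates directly into code; a minimal NumPy sketch (note that NumPy's built-in np.sinc uses the same normalized sin(πx)/(πx) convention):

```python
import numpy as np

def sinc(x):
    # Normalized sinc: sin(pi*x)/(pi*x), with the removable
    # singularity at x = 0 filled in by its limit, 1.
    x = np.asarray(x, dtype=float)
    denom = np.where(x == 0, 1.0, np.pi * x)  # dummy denominator at 0
    return np.where(x == 0, 1.0, np.sin(np.pi * x) / denom)

xs = np.linspace(-3, 3, 601)
vals = sinc(xs)  # zero at every nonzero integer, peak of 1 at x = 0
```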
Rect Function
The Rect Function is a function which produces a rectangular-shaped pulse with a width of 1 centered at t = 0. The Rect function pulse also has a height of 1. The Sinc function and the rectangular function form a Fourier transform pair.
A Rect function can be written in the form:
{\displaystyle \operatorname {rect} \left({\frac {t-X}{Y}}\right)}
where the pulse is centered at X and has width Y. We can define the impulse function above in terms of the rectangle function by centering the pulse at zero (X = 0), setting its height to 1/A and setting the pulse width to A, which approaches zero:
{\displaystyle \delta (t)=\lim _{A\to 0}{\frac {1}{A}}\operatorname {rect} \left({\frac {t-0}{A}}\right)}
We can also construct a Rect function out of a pair of unit step functions:
{\displaystyle \operatorname {rect} \left({\frac {t-X}{Y}}\right)=u(t-X+Y/2)-u(t-X-Y/2)}
Here, both unit step functions are set at distance of Y/2 away from the center point of (t - X).
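The same construction in code, as a small NumPy sketch (assuming the u(0) = 1 convention for the unit step):

```python
import numpy as np

def u(t):
    # Unit step function, with u(0) = 1 by convention
    return np.where(t >= 0, 1.0, 0.0)

def rect(t, X=0.0, Y=1.0):
    # Height-1 rectangular pulse centered at X with width Y,
    # built from two unit steps offset Y/2 on either side of X
    return u(t - X + Y / 2) - u(t - X - Y / 2)

inside = rect(np.array(0.5), X=0.5, Y=1.0)   # 1.0: center of the pulse
outside = rect(np.array(2.0), X=0.5, Y=1.0)  # 0.0: beyond the pulse edge
```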
Square Wave
A square wave is a series of rectangular pulses. Here are some examples of square waves:
These two square waves have the same amplitude, but the second has a lower frequency. We can see that the period of the second is approximately twice as large as the first, and therefore that the frequency of the second is about half the frequency of the first.
These two square waves have the same frequency and the same peak-to-peak amplitude, but the second wave has no DC offset. Notice how the second wave is centered on the x axis, while the first wave is completely above the x axis.
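One simple way to generate such waveforms numerically is to take the sign of a sinusoid and add a DC offset (an illustrative sketch; library routines such as scipy.signal.square offer more control over duty cycle):

```python
import numpy as np

def square_wave(t, freq=1.0, amplitude=1.0, offset=0.0):
    # +/-amplitude square wave at freq cycles per unit time, plus a DC offset
    return amplitude * np.sign(np.sin(2 * np.pi * freq * t)) + offset

t = np.linspace(0, 2, 10_000, endpoint=False)
shifted = square_wave(t, freq=2.0, offset=1.0)  # sits entirely at or above the axis
centered = square_wave(t, freq=1.0)             # centered on the axis: mean ~ 0
```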
Retrieved from "https://en.wikibooks.org/w/index.php?title=Signals_and_Systems/Engineering_Functions&oldid=4001377"
|
Write and solve a system of equations for each situation. Check your answers
A shop has one-pound bags of peanuts for $2 and three-pound bags of peanuts for $5.50. If you buy 5 bags and spend $17, how many of each size bag did you buy?
x+y=5\text{ and }2x+5.5y=17
x=5-y\text{ }and\text{ }x=8.5-2.75y
\begin{array}{|rrr|}\hline y& x\left(1\right)& x\left(2\right)\\ 0& 5& 8.5\\ 1& 4& 5.75\\ 2& 3& 3\\ 3& 2& 0.25\\ 4& 1& -2.5\\ \hline\end{array}
You bought 3 one-pound bags and 2 three-pound bags.
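The answer can be double-checked by solving the same two equations with a linear-algebra routine (a sketch using NumPy):

```python
import numpy as np

# x one-pound bags at $2 each, y three-pound bags at $5.50 each:
#   x + y = 5        (five bags total)
#   2x + 5.5y = 17   (seventeen dollars total)
A = np.array([[1.0, 1.0],
              [2.0, 5.5]])
b = np.array([5.0, 17.0])
x, y = np.linalg.solve(A, b)
# x = 3.0 one-pound bags, y = 2.0 three-pound bags
```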
-2y+y=6
4x-2y=5
\left\{\begin{array}{l}-2x+y=5\\ -6x+3y=21\end{array}\right.
Find value(s) of k so that the linear system is consistent. (Enter your answers as a comma-separated list.)
6{x}_{1}-7{x}_{2}=4
9{x}_{1}+k{x}_{2}=-1
x-4y=-8
5y-1=x
Solve for the variables y and z in the system of equations:
\left\{\begin{array}{l}6y+2z=-5\\ 4y+2z=3\end{array}\right.
Solve the following linear system:
x+2y-3z=4
3x-y+5z=2
4x+y+\left({s}^{2}-14\right)z=s+2
2a+5b=18
5a-3b=14
|
Home : Support : Online Help : Mathematics : Factorization and Solving Equations : LinearFunctionalSystems : UniversalDenominator
return a universal denominator of rational solutions for a linear functional system
UniversalDenominator(sys, vars, opts)
UniversalDenominator(A, b, x, case, opts)
UniversalDenominator(A, x, case, opts)
name indicating the case of the system; one of differential, difference, or qdifference
optional arguments of the form 'keyword'='value', where keyword is either hybrid (for differential systems only) or refined
The UniversalDenominator command returns a universal denominator of a given linear functional system of equations with polynomial coefficients (that is, a denominator which is a multiple of the denominator of any component of any rational solution of the given system). If a universal denominator cannot be found then FAIL is returned.
\mathrm{Ly}\left(x\right)=\mathrm{Ay}\left(x\right)+b
, where L is an operator (either differential, difference, or q-difference),
y\left(x\right)
is a vector of the functions to solve for, A is a rational matrix, and b is a rational vector (right-hand side).
The function works differently depending on the case of the given system.
In the differential case, the singularities of the system are computed. If 'hybrid'=false is specified, then for each singularity the function rewrites the given system at the singular point and constructs the corresponding matrix recurrence system. It then bounds the corresponding pole order using LinearFunctionalSystems[MatrixTriangularization] on the leading matrix of this recurrence system, in the same way that LinearFunctionalSystems[PolynomialSolution] bounds the polynomial solution degree using LinearFunctionalSystems[MatrixTriangularization] on the trailing matrix. The universal denominator is the product of the bounded poles that are found. If 'hybrid'=true is specified (the default), a hybrid method is used that combines the above approach with another one, based on an algorithm by Bronstein and Trager, which works on regular singularities. The latter algorithm (with some heuristics added) allows regular singularities to be separated out, though some of them may be missed. For these separated regular singularities a classical method is used, and the remaining singularities are processed by the approach based on LinearFunctionalSystems[MatrixTriangularization].
In the difference case, the function builds polynomials analogous to the leading and trailing coefficients of the recurrence corresponding to the difference equation in the scalar case. It then finds the denominator with S. A. Abramov's dispersion algorithm, computing the dispersion of two polynomials with the algorithm of Man and Wright.
In the q-difference case, a combination of the above approaches is used: at zero the function bounds the order of the pole, and at all other points it applies the q-analog of the dispersion algorithm, using the q-extension of Man and Wright's algorithm for computing the q-dispersion of two polynomials.
Note that this function returns the reciprocal of the universal denominator.
If 'refined'=true is specified then the universal denominator is refined for every component of the solution. In this case, the procedure returns a component-wise sequence
\frac{1}{\mathrm{u1}},\frac{1}{\mathrm{u2}},...,\frac{1}{\mathrm{un}}
where each ui is a multiple of the denominator of the i-th component of any rational solution of the system. Each element of the sequence can be simpler than the common universal denominator. Here the algorithm of S. A. Abramov and S. P. Polyakov is used.
If the number of linearly independent equations in the system is less than the number of function variables, then a universal denominator cannot be found and FAIL is returned. In that case it is possible to extend the system with additional equations, in particular ones that fix the values of some of the function variables (see the example below).
The given system should be of first order. (This is always true if the system is given in the matrix form.)
The error conditions associated with UniversalDenominator are the same as those which are generated by LinearFunctionalSystems[Properties] and LinearFunctionalSystems[CanonicalSystem].
This function is part of the LinearFunctionalSystems package; so it can be used in the form UniversalDenominator(..) only after executing the command with(LinearFunctionalSystems). However, it can always be accessed through the long form of the command by using the form LinearFunctionalSystems[UniversalDenominator](..).
\mathrm{with}\left(\mathrm{LinearFunctionalSystems}\right):
\mathrm{sys}≔[-{x}^{2}\mathrm{y2}\left(x\right)+2{x}^{2}\mathrm{y1}\left(x\right)+4\mathrm{y1}\left(x\right)x-\mathrm{y1}\left(x+1\right){x}^{2}-4\mathrm{y1}\left(x+1\right)x+2\mathrm{y1}\left(x\right)-4\mathrm{y1}\left(x+1\right),\mathrm{y2}\left(x+1\right)-\mathrm{y1}\left(x\right)]:
\mathrm{vars}≔[\mathrm{y1}\left(x\right),\mathrm{y2}\left(x\right)]:
\mathrm{UniversalDenominator}\left(\mathrm{sys},\mathrm{vars}\right)
\frac{\textcolor[rgb]{0,0,1}{1}}{{\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}}
\mathrm{sys}≔[x\mathrm{diff}\left(\mathrm{y1}\left(x\right),x\right)-2\mathrm{y1}\left(x\right)-\mathrm{y2}\left(x\right)]:
\mathrm{vars}≔[\mathrm{y1}\left(x\right),\mathrm{y2}\left(x\right)]:
u≔\mathrm{UniversalDenominator}\left(\mathrm{sys},\mathrm{vars}\right)
\textcolor[rgb]{0,0,1}{u}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{FAIL}}
\mathrm{sys}≔[\mathrm{op}\left(\mathrm{sys}\right),\mathrm{y2}\left(x\right)=\frac{1}{{x}^{3}}]:
\mathrm{UniversalDenominator}\left(\mathrm{sys},\mathrm{vars}\right)
\frac{\textcolor[rgb]{0,0,1}{1}}{{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{3}}}
\mathrm{sys}≔[x\left(x+3\right)\left(2x+3\right)\mathrm{y1}\left(x+1\right)-\left(x+2\right)\left(x-1\right)\left(2x-1\right)\mathrm{y2}\left(x\right),\mathrm{y2}\left(x+1\right)-\mathrm{y1}\left(x\right)]
\textcolor[rgb]{0,0,1}{\mathrm{sys}}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{3}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{3}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{y1}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{-}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{y2}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{y2}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{y1}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)]
\mathrm{UniversalDenominator}\left(\mathrm{sys},[\mathrm{y1}\left(x\right),\mathrm{y2}\left(x\right)]\right)
\frac{\textcolor[rgb]{0,0,1}{1}}{\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\right)}
\mathrm{UniversalDenominator}\left(\mathrm{sys},[\mathrm{y1}\left(x\right),\mathrm{y2}\left(x\right)],'\mathrm{refined}'=\mathrm{true}\right)
\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)}\textcolor[rgb]{0,0,1}{,}\frac{\textcolor[rgb]{0,0,1}{1}}{\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\right)}
Abramov, S. A. "EG-Eliminations." Journal of Difference Equations and Applications. (1999): 393-433.
Abramov, S. A. "Rational Solutions of Linear Difference and Q-difference Equations with Polynomial Coefficients." Programming and Computer Software, Vol. 21(6). (1995): 273-278.
Bronstein, M., and Trager, B. M. "A Reduction for Regular Differential Systems." In Proceedings of MEGA '03, CD-ROM. 2003.
Man, Yiu-Kwong, and Wright, Francis J. "Fast Polynomial Dispersion Computation and its Application to Indefinite Summation." In Proceedings of ISSAC '94, pp. 175-180. Edited by Malcolm MacCallum. New York: ACM Press, 1994.
Abramov, S. A., and Polyakov, S. P. "Refined Universal Denominators." Programming and Computer Software. (2007): to appear.
|
A=\left[\begin{array}{cc}1& 2\\ -1& 1\end{array}\right]\text{ and }C=\left[\begin{array}{cc}-1& 1\\ 2& 1\end{array}\right]
a) Find elementary matrices
{E}_{1}\text{ and }{E}_{2}
C={E}_{2}{E}_{1}A
b) Show that there is no elementary matrix E such that
C=EA
Consider the given information,
A=\left[\begin{array}{cc}1& 2\\ -1& 1\end{array}\right]\text{ and }C=\left[\begin{array}{cc}-1& 1\\ 2& 1\end{array}\right]
Now, calculate the elementary matrices.
The matrix C can be obtained from A by the following row operations.
First, interchange the rows of matrix A. Then,
{A}^{\ast }=\left[\begin{array}{cc}-1& 1\\ 1& 2\end{array}\right]
Now, multiply row one by -1 and add it to row two.
{A}^{\ast }=\left[\begin{array}{cc}-1& 1\\ 2& 1\end{array}\right]
The elementary matrices are then defined as
{E}_{1}=\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]
{E}_{2}=\left[\begin{array}{cc}1& 0\\ -1& 1\end{array}\right]
(b) From part (a), A and C are row equivalent, but two row operations were needed. No single row operation (a row interchange, scaling a row by a nonzero constant, or adding a multiple of one row to another) transforms A into C, and an elementary matrix corresponds to exactly one such operation. Hence there is no elementary matrix E such that
C=EA.
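The factorization from part (a) is easy to verify numerically (an illustrative NumPy check):

```python
import numpy as np

A = np.array([[1, 2], [-1, 1]])
C = np.array([[-1, 1], [2, 1]])
E1 = np.array([[0, 1], [1, 0]])   # elementary matrix: interchange the two rows
E2 = np.array([[1, 0], [-1, 1]])  # elementary matrix: row 2 <- row 2 - row 1
product = E2 @ E1 @ A             # applying E1 first, then E2
# product equals C
```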
2×2
\text{Basis }=\left\{\left[\begin{array}{cc}& \\ & \end{array}\right],\left[\begin{array}{cc}& \\ & \end{array}\right]\right\}
This semester you have learnt how to solve simultaneous equations algebraically. Another method of solving these equations is using Cramers
Given matrix A and matrix B. Find (if possible) the matrices: (a) AB (b) BA.
A=\left[\begin{array}{cccc}1& 2& 3& 4\end{array}\right],B=\left[\begin{array}{c}1\\ 2\\ 3\\ 4\end{array}\right]
Let A and B be
n×n
matrices. Recall that the trace of A, written tr(A), equals
\sum _{i=1}^{n}{A}_{ii}
Prove that tr(AB)=tr(BA) and
tr\left(A\right)=tr\left({A}^{t}\right)
M\in {R}^{10×10},s.t.{M}^{2020}=0
. Prove
{M}^{10}=0
Identify orthogonal matrices. Invert orthogonal matrices.
\left[\begin{array}{ccc}\frac{1}{3}& \frac{2}{3}& \frac{2}{3}\\ \frac{2}{3}& \frac{1}{3}& -\frac{2}{3}\\ \frac{2}{3}& -\frac{2}{3}& \frac{1}{3}\end{array}\right]